# agent-infrastructure

Agent infrastructure that runs itself. Six tools for autonomous agents: free LLM model recommendations across 17 models, real-time provider availability monitoring, an LLM routing proxy with a 3% margin, …

## Quick Start

```bash
# Connect this server (installs CLI if needed)
npx -y smithery mcp add getkin/agent-infrastructure

# Browse available tools
npx -y smithery tool list getkin/agent-infrastructure

# Get full schema for a tool
npx -y smithery tool get getkin/agent-infrastructure recommend_model

# Call a tool
npx -y smithery tool call getkin/agent-infrastructure recommend_model '{}'
```

## Direct MCP Connection

Endpoint: `https://agent-infrastructure--getkin.run.tools`
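
If your MCP client supports remote servers, you can point it at the endpoint directly. A minimal sketch of a client config entry, assuming the client accepts a `url`-style server definition (the exact key names and schema vary by client, so check its documentation):

```json
{
  "mcpServers": {
    "agent-infrastructure": {
      "url": "https://agent-infrastructure--getkin.run.tools"
    }
  }
}
```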

## Tools (6)

- `recommend_model` — Get the optimal LLM for any task based on cost, speed, and quality.
- `check_availability` — Check real-time availability and latency of LLM provider APIs.
- `route_llm_call` — Route an LLM chat completion through GetKin's OpenAI-compatible proxy.
- `compress_memory` — Compress a raw session dump into structured short-term, medium-term, and long-term memory.
- `list_models` — List all models available through GetKin's routing proxy.
- `check_memory_usage` — Check how many free memory compressions remain for an agent.
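
Since `route_llm_call` is OpenAI-compatible, a request body should follow the OpenAI chat-completions shape. A hedged sketch — the model name here is hypothetical, and the exact accepted fields should be confirmed against the tool's schema (`npx -y smithery tool get getkin/agent-infrastructure route_llm_call`):

```shell
# Hypothetical request body following the OpenAI chat-completions shape.
# "gpt-4o-mini" is a placeholder; use `list_models` to see what the proxy offers.
BODY='{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Hello from an agent"}]
}'

# Validate the JSON locally before sending it:
echo "$BODY" | python3 -m json.tool

# Then route it through the proxy:
# npx -y smithery tool call getkin/agent-infrastructure route_llm_call "$BODY"
```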

```bash
# Get full input/output schema for a tool
npx -y smithery tool get getkin/agent-infrastructure <tool-name>
```

## Resources

- `getkin://status` — Current status of all GetKin services.
