# threadline

Persistent memory and context layer for AI agents. Call `inject()` before your LLM call and `update()` after. Relevance-scored injection, grant-based access control, user-owned context. Works with OpenAI, Anthro…
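The inject-before / update-after loop can be sketched locally. This is an illustrative stand-in only: the function names mirror the two tools, but the in-memory store, fact format, and parameters are assumptions, not Threadline's real API (use the MCP tools below for that).

```python
# Sketch of the inject()/update() lifecycle with an in-memory stand-in
# for Threadline's context store. Names and behavior are illustrative.

store: dict[str, list[str]] = {}  # user_id -> stored facts


def inject(user_id: str, base_prompt: str) -> str:
    """Enrich a base system prompt with the user's stored facts."""
    facts = store.get(user_id, [])
    if not facts:
        return base_prompt
    context = "\n".join(f"- {fact}" for fact in facts)
    return f"{base_prompt}\n\nKnown about this user:\n{context}"


def update(user_id: str, new_facts: list[str]) -> None:
    """Persist facts extracted from the latest interaction."""
    store.setdefault(user_id, []).extend(new_facts)


# Typical call order around an LLM request:
prompt = inject("user-123", "You are a helpful assistant.")
# response = call_llm(prompt)  # your OpenAI/Anthropic call goes here
update("user-123", ["prefers concise answers"])
```

On the next turn, `inject()` for the same user returns the prompt enriched with the stored fact.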

## Quick Start

```bash
# Connect this server (installs CLI if needed)
npx -y @smithery/cli@latest mcp add vidursharma202-del/threadline

# Browse available tools
npx -y @smithery/cli@latest tool list vidursharma202-del/threadline

# Get full schema for a tool
npx -y @smithery/cli@latest tool get vidursharma202-del/threadline inject

# Call a tool
npx -y @smithery/cli@latest tool call vidursharma202-del/threadline inject '{}'
```

## Direct MCP Connection

Endpoint: `https://threadline--vidursharma202-del.run.tools`

**Required config:**
- `apiKey` (query) — Your Threadline API key — get one at threadline.to/dashboard

## Tools (2)

- `inject` — Inject user context into a base system prompt before an LLM call. Returns an enriched prompt with relevant facts about …
- `update` — Update a user's context after an LLM interaction. Extracts structured facts from the conversation and stores them for f…

```bash
# Get full input/output schema for a tool
npx -y @smithery/cli@latest tool get vidursharma202-del/threadline <tool-name>
```
