Process textual and multimedia files with various LLM providers using the llm CLI...
This skill enables seamless interaction with multiple LLM providers (OpenAI, Anthropic, Google Gemini, Ollama) through the llm CLI tool. It processes textual and multimedia information with support for both one-off executions and interactive conversation modes.
Trigger this skill when:
Example user requests:
**OpenAI**
- `gpt-5` - Most advanced model
- `gpt-4-1` / `gpt-4.1` - Latest high-performance
- `gpt-4-1-mini` / `gpt-4.1-mini` - Smaller, faster version
- `gpt-4o` - Multimodal omni model
- `gpt-4o-mini` - Lightweight multimodal
- `o3` - Advanced reasoning
- `o3-mini` / `o3-mini-high` - Reasoning variants

Aliases: `openai`, `gpt`
**Anthropic**
- `claude-sonnet-4.5` - Latest flagship model
- `claude-opus-4.1` - Complex task specialist
- `claude-opus-4` - Coding specialist
- `claude-sonnet-4` - Balanced performance
- `claude-3.5-sonnet` - Previous generation
- `claude-3.5-haiku` - Fast & efficient

Aliases: `anthropic`, `claude`
**Google Gemini**
- `gemini-2.5-pro` - Most advanced
- `gemini-2.5-flash` - Default fast model
- `gemini-2.5-flash-lite` - Speed optimized
- `gemini-2.0-flash` - Previous generation
- `gemini-2.5-computer-use` - UI interaction

Aliases: `google`, `gemini`
**Ollama (local)**
- `llama3.1` - Meta's latest (8b, 70b, 405b)
- `llama3.2` - Compact versions (1b, 3b)
- `mistral-large-2` - Mistral flagship
- `deepseek-coder` - Code specialist
- `starcoder2` - Code models

Aliases: `ollama`, `local`
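The provider aliases above can be resolved with a small lookup table. A minimal sketch, assuming each alias maps to a sensible default model (the `resolve_alias` name and the chosen defaults are illustrative, not the skill's actual API):

```python
# Hypothetical alias table; defaults are illustrative picks from the lists above.
ALIASES = {
    "openai": "gpt-4o", "gpt": "gpt-4o",
    "anthropic": "claude-sonnet-4.5", "claude": "claude-sonnet-4.5",
    "google": "gemini-2.5-flash", "gemini": "gemini-2.5-flash",
    "ollama": "llama3.1", "local": "llama3.1",
}

def resolve_alias(name: str) -> str:
    """Return a concrete model name; pass through anything that isn't an alias."""
    return ALIASES.get(name.lower(), name)
```

Full model names pass through unchanged, so users can mix aliases and explicit names freely.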
```
User Input (with optional model)
        ↓
Check Available Providers (env vars)
        ↓
Determine Model to Use:
  - If specified: Use provided model
  - If ambiguous: Show selection menu
  - Otherwise: Use last remembered choice
        ↓
Load/Create Config (~/.claude/llm-skill-config.json)
        ↓
Detect Input Type:
  - stdin/piped
  - file path
  - inline text
        ↓
Execute llm CLI:
  - Non-interactive: Process & return
  - Interactive: Keep conversation loop
        ↓
Save Model Choice to Config
```
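The model-determination step of this flow can be sketched as follows (a sketch only; `choose_model` and its parameters are illustrative, not the skill's actual function names):

```python
def choose_model(requested, available, last_model):
    """Resolve which model to use, following the workflow above.

    requested:  model or alias the user passed explicitly, or None.
    available:  models usable with the detected provider credentials.
    last_model: value remembered in the config file, or None.
    """
    if requested:                # explicitly specified model always wins
        return requested
    if last_model in available:  # otherwise reuse the last remembered choice
        return last_model
    return None                  # ambiguous: caller should show a selection menu
```

Returning `None` signals the ambiguous case, leaving the interactive selection menu to the caller.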
Provider detection checks these environment variables: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `OLLAMA_BASE_URL`. Models can be referenced by full name (e.g., `gpt-4o`, `claude-opus`, `gemini-2.5-pro`) or by provider alias (`openai`, `anthropic`, `google`, `ollama`); the last choice is remembered in `~/.claude/llm-skill-config.json`.

```bash
# One-off prompts
llm "Your prompt here"
llm --model gpt-4o "Process this text"

# File and piped input
llm < file.txt
cat document.md | llm "Summarize"

# Interactive conversation mode
llm --interactive
llm -i
llm --model claude-opus --interactive
```
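The three input sources shown above (piped stdin, file path, inline text) can be distinguished roughly as follows; this is a sketch, and the actual `input_handler.py` logic may differ:

```python
import os
import sys

def detect_input_type(arg=None):
    """Classify input as 'stdin', 'file', or 'inline' (mirrors the detection step above)."""
    if arg is None and not sys.stdin.isatty():
        return "stdin"   # data was piped in and no argument was given
    if arg and os.path.isfile(arg):
        return "file"    # the argument names an existing file
    return "inline"      # otherwise treat the argument as prompt text
```

Checking `isatty()` before falling back to inline text keeps `cat doc.md | llm "Summarize"` and `llm "Summarize"` both working.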
Persistent config location: `~/.claude/llm-skill-config.json`

```json
{
  "last_model": "claude-sonnet-4.5",
  "default_provider": "anthropic",
  "available_providers": ["openai", "anthropic", "google", "ollama"]
}
```
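Loading and persisting this file needs only the standard library. A minimal sketch, assuming the path and keys shown above (`load_config` and `save_model_choice` are illustrative names):

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".claude" / "llm-skill-config.json"

def load_config(path=CONFIG_PATH):
    """Read the config, falling back to empty defaults if it doesn't exist yet."""
    try:
        return json.loads(Path(path).read_text())
    except FileNotFoundError:
        return {"last_model": None, "default_provider": None, "available_providers": []}

def save_model_choice(model, path=CONFIG_PATH):
    """Remember the chosen model for next time (the final step of the workflow)."""
    path = Path(path)
    cfg = load_config(path)
    cfg["last_model"] = model
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(cfg, indent=2))
```

Creating the parent directory on first save means the skill works even before `~/.claude/` exists.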
Skill components:
- `llm_skill.py` - Main skill orchestration
- `providers.py` - Provider detection & config
- `models.py` - Model definitions & aliases
- `executor.py` - Execution logic (interactive/non-interactive)
- `input_handler.py` - Input type detection

Key functions:
- `detect_providers()`
- `get_model_selector(input_text, provider=None)` - respects the `last_model` config preference
- `load_input(input_source)`
- `execute_llm(content, model, interactive=False)` - invokes the `llm` CLI with appropriate parameters

When the user invokes this skill, Claude should:
- Honor an explicitly specified model (e.g., `--model gpt-4o`)
- Ensure the `llm` CLI is installed (`pip install llm`)

Users can pre-configure preferences:
```json
{
  "last_model": "claude-sonnet-4.5",
  "default_provider": "anthropic",
  "interactive_mode": false,
  "available_providers": ["openai", "anthropic"]
}
```
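The `available_providers` field can be populated by checking the environment variables listed earlier. A sketch of what `detect_providers()` might do under that assumption (the real implementation may differ):

```python
import os

# Credential variable → provider name, per the environment variables listed above.
PROVIDER_ENV = {
    "OPENAI_API_KEY": "openai",
    "ANTHROPIC_API_KEY": "anthropic",
    "GOOGLE_API_KEY": "google",
    "OLLAMA_BASE_URL": "ollama",
}

def detect_providers(env=os.environ):
    """Return the providers whose credential variable is set and non-empty."""
    return [name for var, name in PROVIDER_ENV.items() if env.get(var)]
```

Passing `env` explicitly keeps the function testable without touching the real environment.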
Support the `/llm` command:

```
/llm process this text
/llm --interactive
/llm --model gpt-4o analyze this
```