Transform vague prompts into rich, structured prompts using the Anthropic Messages API

Transforms vague, ambiguous prompts into rich, well-structured prompts. Uses Anthropic's prompt improvement capabilities when available, with a graceful fallback to the current LLM.
How it works:

- `$PROMPT` - The prompt to improve (required)
- `$FEEDBACK` - Optional feedback on what to improve (e.g., "Make it more detailed", "Add examples", "Focus on clarity")
- `$TARGET_MODEL` - Optional target model for the improved prompt (defaults to the current model)
- `$SYSTEM` - Optional system prompt to improve alongside the user prompt
Check if $PROMPT starts with a flag:
| Flag | Behavior |
|---|---|
| `-p` | **Prompt only** - Show the improved prompt, don't execute |
| `-v` | **Verbose** - Show the improved prompt, then execute |
| (none) | **Quick** - Execute immediately without showing the full prompt |
Strip the flag from $PROMPT before processing.
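The flag extraction described above can be sketched as follows; `parseFlag` is an illustrative name, not a function from the actual improve-prompt.cjs script.

```javascript
// Sketch: extract an optional leading flag (-p or -v) from the raw prompt
// and strip it before further processing. Name and shape are assumptions.
function parseFlag(raw) {
  const match = raw.match(/^(-p|-v)\s+/);
  if (!match) return { mode: "quick", prompt: raw };
  return {
    mode: match[1] === "-p" ? "prompt-only" : "verbose",
    prompt: raw.slice(match[0].length), // flag stripped from $PROMPT
  };
}
```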
Check if $PROMPT starts with a flag (`-p`, `-v`) and extract the mode: prompt-only, verbose, or quick (default).

Then check in this order:

1. **Script available?** Check if `.scripts/improve-prompt.cjs` exists
2. **API key available?** Check if `ANTHROPIC_API_KEY` is set in the environment
The fallback cascade:

```
Script (.scripts/improve-prompt.cjs)
  ↓ (if not available)
Anthropic Messages API (direct call)
  ↓ (if no API key)
Current LLM (Opus 4.5, Sonnet, etc.)
```
**Method A: Script (preferred)**

```shell
node .scripts/improve-prompt.cjs "$PROMPT" "$FEEDBACK" "$TARGET_MODEL" "$SYSTEM"
```
**Method B: Anthropic Messages API (direct)**

Make the API call with model `claude-sonnet-4-5-20250929` (optimized for prompt engineering).

**Method C: Current LLM fallback**

Use the current session's LLM to improve the prompt inline, with a notice to the user:
"💡 Using inline improvement (no API key configured). For best results, add ANTHROPIC_API_KEY to .env"

Mode: prompt-only (flag: `-p`):

> **Original:** [original_prompt]

Mode: verbose (flag: `-v`):
> **Original:** [original_prompt]

<details>
<summary>📝 Improved Prompt (click to expand)</summary>
[enhanced_prompt]
</details>
--- (separator)

Mode: quick (no flag - DEFAULT):
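The three output modes can be sketched as a single renderer; `renderOutput` is an illustrative name, and the template strings simply mirror the formats shown above.

```javascript
// Sketch: format the skill's output per mode (assumed helper, not the
// actual implementation).
function renderOutput(mode, original, improved) {
  if (mode === "prompt-only") {
    // Show the improved prompt, don't execute.
    return `> **Original:** ${original}\n\n${improved}`;
  }
  if (mode === "verbose") {
    // Show the improved prompt collapsed, then execute it.
    return [
      `> **Original:** ${original}`,
      "<details>",
      "<summary>📝 Improved Prompt (click to expand)</summary>",
      improved,
      "</details>",
      "---",
    ].join("\n");
  }
  return improved; // quick: execute immediately, show nothing extra
}
```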
The following improvement system prompt is used for both the API and fallback methods:
You are an expert prompt engineer trained in Anthropic's best practices. Your job is to transform vague, ambiguous prompts into clear, structured, effective prompts.
Analyze the user's prompt and improve it using these techniques:
1. **Structure**: Add clear sections with XML tags or markdown headers
2. **Clarity**: Be specific about format, length, and success criteria
3. **Context**: Include necessary background and define ambiguous terms
4. **Examples**: Add few-shot examples when helpful
5. **Chain of Thought**: For complex tasks, request step-by-step reasoning
6. **Constraints**: Make implicit constraints explicit
Return ONLY the improved prompt. Do not explain your changes or add meta-commentary.
{if $FEEDBACK exists: "Focus on: {$FEEDBACK}"}
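Method B's direct call, including the feedback interpolation above, might look like the sketch below. The endpoint, headers, and response shape follow Anthropic's public Messages API; `max_tokens`, the truncated system prompt, and the helper names are assumptions.

```javascript
// Sketch of Method B (direct Messages API call). IMPROVER_SYSTEM stands for
// the full system prompt above; buildRequest/improveViaApi are illustrative.
const IMPROVER_SYSTEM =
  "You are an expert prompt engineer trained in Anthropic's best practices. ...";

function buildRequest(prompt, feedback) {
  // Append the optional feedback, mirroring {if $FEEDBACK exists: ...} above.
  const system = feedback ? `${IMPROVER_SYSTEM}\nFocus on: ${feedback}` : IMPROVER_SYSTEM;
  return {
    model: "claude-sonnet-4-5-20250929", // optimized for prompt engineering
    max_tokens: 2048,                    // assumption; not specified by the skill
    system,
    messages: [{ role: "user", content: prompt }],
  };
}

async function improveViaApi(prompt, feedback) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildRequest(prompt, feedback)),
  });
  if (!res.ok) throw new Error(`API error ${res.status}`); // caller falls back to Method C
  const data = await res.json();
  return data.content[0].text; // the improved prompt, with no meta-commentary
}
```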
/prompt-improver -p critique this strategy doc
→ Shows improved prompt only, doesn't execute
/prompt-improver -v critique this strategy doc
→ Shows improved prompt, then executes it
/prompt-improver critique this strategy doc
→ Just executes the improved prompt
/prompt-improver -v "review this code" "Focus on security issues"
→ Shows improved prompt focused on security, then executes
The improved prompt typically follows this structure:

```markdown
# Task
[Clear statement of what to do]

# Context
[Background information needed]

# Instructions
1. [Step 1]
2. [Step 2]
3. [Step 3]

# Constraints
- [Constraint 1]
- [Constraint 2]

# Output Format
[Expected format and structure]

# Examples (if helpful)
[Input/output examples]
```
| Situation | Behavior |
|---|---|
| Script not found | Fall back to API |
| No API key | Fall back to current LLM with notification |
| API rate limit | Retry with exponential backoff, then fall back |
| API error | Fall back to current LLM |
| Network issues | Fall back to current LLM |
Key principle: The skill should NEVER fail completely. It always has the current LLM as ultimate fallback.
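The rate-limit row above (retry with exponential backoff, then fall back) can be sketched as a small wrapper; the retry count and base delay are assumptions, not specified by the skill.

```javascript
// Sketch: retry a failing call with exponential backoff, then rethrow so the
// caller can fall back to the current LLM. Defaults are assumptions.
async function withBackoff(fn, retries = 3, baseMs = 500) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries - 1) throw err; // exhausted: caller falls back
      // Wait 500 ms, 1000 ms, 2000 ms, ... between attempts.
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
}
```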
For best results, add your Anthropic API key: create a `.env` file in the vault root (if it doesn't exist) containing:

```
ANTHROPIC_API_KEY=your-key-here
```

Without the API key, the skill still works using the current LLM session.
Meta-prompting: This skill uses Claude to improve prompts for Claude. It's prompt engineering as a service.
Invisible by default: The best tools disappear. Users ask naturally, get expert results, never see the complexity.
Progressive disclosure: Flags (-v, -p) let power users inspect and learn from the improvements.
Graceful degradation: Works everywhere - with full API access, partial access, or no external access at all.
Update System/usage_log.md to mark prompt improvement as used.
Analytics (Silent):
Call track_event with event_name prompt_improved and properties:
This only fires if the user has opted into analytics. No action needed if it returns "analytics_disabled".