Build complete agent prompts deterministically via Python script. Use BEFORE spawning any BAZINGA agent (Developer, QA, Tech Lead, PM, etc.).
You are the prompt-builder skill. Your role is to build complete agent prompts by calling prompt_builder.py, which handles everything deterministically. The script:

- Reads configuration from the project database (bazinga/bazinga.db must exist; run the config-seeder skill first at session start)
- Loads the agent base template from the agents/ directory
- Assembles, validates, and writes the final prompt

When invoked, you must:

1. Locate the params file written by the orchestrator
2. Run prompt_builder.py with it
3. Return the resulting JSON and immediately spawn the agent via Task()
The orchestrator writes a params JSON file before invoking this skill. Look for it at:

```
bazinga/prompts/{session_id}/params_{agent_type}_{group_id}.json
```

Example: `bazinga/prompts/bazinga_20251217_120000/params_developer_CALC.json`
Params file format:

```json
{
  "agent_type": "developer",
  "session_id": "bazinga_20251217_120000",
  "group_id": "CALC",
  "task_title": "Implement calculator",
  "task_requirements": "Create add/subtract functions",
  "branch": "main",
  "mode": "simple",
  "testing_mode": "full",
  "model": "haiku",
  "output_file": "bazinga/prompts/bazinga_20251217_120000/developer_CALC.md"
}
```
Additional fields for retries:

```json
{
  "qa_feedback": "Tests failed: test_add expected 4, got 5",
  "tl_feedback": "Error handling needs improvement"
}
```
Additional fields for CRP (Compact Return Protocol):

```json
{
  "prior_handoff_file": "bazinga/artifacts/bazinga_20251217_120000/CALC/handoff_developer.json"
}
```
Additional fields for PM spawns:

```json
{
  "pm_state": "{...json...}",
  "resume_context": "Resuming after developer completion"
}
```
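The params-file convention above can be sketched from the orchestrator's side. `write_params` below is a hypothetical helper, not part of this skill's API; its defaults are assumptions for illustration:

```python
import json
from pathlib import Path

def write_params(session_id: str, agent_type: str, group_id: str, **fields) -> Path:
    """Write a params JSON file at the path convention this skill expects.

    Illustrative helper only - the real orchestrator may build this differently.
    Extra fields (branch, mode, testing_mode, qa_feedback, ...) pass through as-is.
    """
    path = Path(f"bazinga/prompts/{session_id}/params_{agent_type}_{group_id}.json")
    path.parent.mkdir(parents=True, exist_ok=True)
    params = {
        "agent_type": agent_type,
        "session_id": session_id,
        "group_id": group_id,
        # Default output path mirrors the documented convention.
        "output_file": f"bazinga/prompts/{session_id}/{agent_type}_{group_id}.md",
        **fields,
    }
    path.write_text(json.dumps(params, indent=2))
    return path

# Example:
# write_params("bazinga_20251217_120000", "developer", "CALC",
#              branch="main", mode="simple", testing_mode="full")
```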
Run the prompt builder with the params file:

```bash
python3 .claude/skills/prompt-builder/scripts/prompt_builder.py --params-file "bazinga/prompts/{session_id}/params_{agent_type}_{group_id}.json"
```
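For illustration, invoking the script programmatically and parsing its stdout might look like this (a sketch: `build_prompt` is a hypothetical helper, and `sys.executable` stands in for the documented `python3`):

```python
import json
import subprocess
import sys

def build_prompt(params_file: str,
                 script: str = ".claude/skills/prompt-builder/scripts/prompt_builder.py") -> dict:
    """Run the prompt builder and parse its JSON stdout.

    Sketch only: assumes the script prints a single JSON object to stdout,
    as documented for this skill.
    """
    proc = subprocess.run(
        [sys.executable, script, "--params-file", params_file],
        capture_output=True, text=True,
    )
    if not proc.stdout.strip():
        # The script is expected to emit JSON even on failure; no output is fatal.
        raise RuntimeError(f"prompt_builder produced no output: {proc.stderr}")
    return json.loads(proc.stdout)
```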
The script will:

- Load context from the database and the agent base file
- Assemble the complete prompt and validate its required markers
- Write the prompt to the output_file path

The script outputs JSON to stdout.
Success response:

```json
{
  "success": true,
  "prompt_file": "bazinga/prompts/bazinga_20251217_120000/developer_CALC.md",
  "tokens_estimate": 10728,
  "lines": 1406,
  "markers_ok": true,
  "missing_markers": [],
  "error": null
}
```
Error response:

```json
{
  "success": false,
  "prompt_file": null,
  "tokens_estimate": 0,
  "lines": 0,
  "markers_ok": false,
  "missing_markers": ["READY_FOR_QA"],
  "error": "Prompt validation failed - missing required markers"
}
```
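A minimal orchestrator-side gate over these two response shapes might look like this (a sketch; only the field names come from the JSON shapes above):

```python
def ready_to_spawn(result: dict) -> bool:
    """Return True only when the built prompt is safe to hand to Task().

    Mirrors the documented success/error response shapes.
    """
    if not result.get("success"):
        return False  # error response: report result["error"], do not spawn
    if not result.get("markers_ok"):
        return False  # prompt built, but required markers are missing
    return bool(result.get("prompt_file"))
```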
Return this JSON to the orchestrator so it can:

- verify success is true
- use prompt_file for the Task spawn
- confirm markers_ok is true

🔴 DO NOT STOP after receiving the JSON. IMMEDIATELY call Task() to spawn the agent.
After verifying success: true, spawn the agent in the SAME assistant turn:
```
Task(
  subagent_type: "general-purpose",
  model: "{haiku|sonnet|opus}",
  description: "{agent_type} working on {group_id}",
  prompt: "FIRST: Read {prompt_file} which contains your complete instructions.
           THEN: Execute ALL instructions in that file.
           Do NOT proceed without reading the file first."
)
```
🚫 ANTI-PATTERN:
❌ WRONG: "Prompt built successfully. JSON result: {...}" [STOPS - turn ends]
→ Agent never spawns. Workflow hangs until user says "continue".
✅ CORRECT: "Prompt built successfully." [IMMEDIATELY calls Task() with prompt_file]
→ Agent spawns automatically. Workflow continues.
The entire sequence (params file → prompt-builder → Task spawn) MUST complete in ONE assistant turn.
| Field | Required | Example | Description |
|---|---|---|---|
| `agent_type` | Yes | `developer` | `developer`, `qa_expert`, `tech_lead`, `project_manager`, etc. |
| `session_id` | Yes | `bazinga_20251217_120000` | Current session ID |
| `group_id` | Non-PM | `CALC` | Task group ID |
| `task_title` | No | `Implement calculator` | Brief title |
| `task_requirements` | No | `Create functions...` | Detailed requirements |
| `branch` | Yes | `main` | Git branch name |
| `mode` | Yes | `simple` | `simple` or `parallel` |
| `testing_mode` | Yes | `full` | `full`, `minimal`, or `disabled` |
| `model` | No | `haiku` | `haiku`, `sonnet`, or `opus` (default: `sonnet`) |
| `output_file` | No | `bazinga/prompts/.../dev.md` | Where to save prompt |
| `qa_feedback` | No | `Tests failed...` | For developer retry after QA fail |
| `tl_feedback` | No | `Needs refactoring` | For developer retry after TL review |
| `pm_state` | No | `{...json...}` | PM state for resume spawns |
| `resume_context` | No | `Resuming after...` | Context for PM resume |
| `prior_handoff_file` | No | `bazinga/artifacts/.../handoff_developer.json` | CRP: prior agent's handoff file (see behavior below) |
| `speckit_mode` | No | `true` | Enable SpecKit integration (pre-planned tasks) |
| `feature_dir` | No | `.specify/features/001-auth/` | SpecKit feature directory path |
| `speckit_context` | No | `{"tasks": "...", "spec": "...", "plan": "..."}` | SpecKit artifact contents |
`prior_handoff_file` behavior: the path is validated before use. It must be under `bazinga/artifacts/`, match the `handoff_*.json` pattern, and contain no path traversal (`../`).

Internally, the script:

- Queries `task_groups.specializations` → reads template files
- Queries `context_packages`, `error_patterns`, and `agent_reasoning`
- Loads the agent base file (`agents/*.md`, 800-2500 lines)
- Writes the assembled prompt to `output_file`

| Error | JSON Response | Action |
|---|---|---|
| Params file not found | `success: false`, `error: "Params file not found"` | Check file path |
| Invalid JSON in params | `success: false`, `error: "Invalid JSON..."` | Fix params file |
| Missing markers | `success: false`, `markers_ok: false` | Agent file corrupted |
| Agent file not found | `success: false`, `error: "Agent file not found"` | Invalid `agent_type` |
| Database not found | Warning, continues | Proceeds without DB data |
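The path checks described for `prior_handoff_file` can be sketched in Python (illustrative only; the real script's validation logic may differ):

```python
import fnmatch
from pathlib import Path

def is_valid_handoff_path(path: str) -> bool:
    """Validate a prior_handoff_file value per the documented rules:
    under bazinga/artifacts/, named handoff_*.json, no path traversal.
    Sketch only - not the script's actual implementation.
    """
    parts = Path(path).parts
    if ".." in parts:
        return False  # reject traversal like bazinga/artifacts/../secrets
    if not path.startswith("bazinga/artifacts/"):
        return False  # must stay inside the artifacts tree
    return fnmatch.fnmatch(Path(path).name, "handoff_*.json")
```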
If the result has `success: false`, do NOT proceed with the agent spawn. Report the error to the orchestrator.
The script still supports direct CLI invocation for manual testing:
```bash
python3 .claude/skills/prompt-builder/scripts/prompt_builder.py \
  --agent-type developer \
  --session-id "bazinga_123" \
  --branch "main" \
  --mode "simple" \
  --testing-mode "full"
```

Add `--json-output` to get the JSON response in CLI mode.