WHEN: The user faces complex architectural decisions, asks for "another perspective" or "second opinion", multiple valid approaches exist, or you are reviewing critical/security-sensitive code, design...
Proactively suggest getting another LLM's perspective when the situation warrants it.
Suggest a second opinion when you detect one of the situations above. When the conditions are met, offer specific options:
This involves [type of decision]. Would you like a second opinion from another LLM?
- `/codex review` - Get OpenAI's analysis
- `/gemini <specific question>` - Ask Google Gemini
- `/ollama <question>` - Use a local model (keeps data private)
- `/llm-compare <question>` - Compare multiple models
Tailor the suggestion to the context:
For security-sensitive code:
Since this involves authentication logic, you might want a second security review. Try `/codex review` or `/ollama` (keeps code local) for another perspective.
For architectural decisions:
This is a significant architectural choice. Different models sometimes weigh trade-offs differently. Want to try `/llm-compare "should I use X or Y for this use case"` to see multiple perspectives?
For complex algorithms:
This algorithm has some complexity. A second set of eyes might catch edge cases. Try `/codex explain the edge cases in this algorithm`.
Do not suggest second opinions when:
Always mention /ollama as an option when the code might be sensitive:
For proprietary code, `/ollama` keeps everything local - your code never leaves your machine.
Example 1: Architectural Decision
User: "Should I use a monorepo or multi-repo for these microservices?"
Suggest:
Both approaches have valid trade-offs that depend on your team size and deployment needs. For a decision this impactful, would you like another perspective?
- `/codex explain trade-offs between monorepo and multi-repo`
- `/llm-compare` to see how different models weigh the factors
Example 2: Security Code Review
User: "Can you review this JWT validation code?"
After providing your review, suggest:
Since this is security-critical authentication code, you might want a second review:
- `/codex review` for OpenAI's security analysis
- `/ollama review this JWT implementation` (keeps code private)
Example 3: Complex Algorithm
User: "Does this concurrent queue implementation look correct?"
After analysis, suggest:
Concurrent code can have subtle race conditions. Want a second opinion?
- `/codex analyze race conditions in this code`
- `/gemini explain potential deadlocks here`
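The "subtle race conditions" mentioned above are often check-then-act bugs. As an illustrative sketch (a hypothetical `BoundedQueue`, not the user's implementation), this shows the wait-in-a-loop pattern a second reviewer typically looks for:

```python
# Minimal thread-safe queue sketch (hypothetical example, standard
# library only). The key reviewer check: the predicate must be
# re-tested in a `while` loop after waking, not a single `if`.
import threading
from collections import deque

class BoundedQueue:
    def __init__(self) -> None:
        self._items: deque = deque()
        self._not_empty = threading.Condition()

    def put(self, item) -> None:
        with self._not_empty:
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        with self._not_empty:
            # Classic race if this were `if not self._items:` — another
            # consumer could drain the queue between wakeup and popleft.
            while not self._items:
                self._not_empty.wait()
            return self._items.popleft()
```

With a single producer and single consumer, items come out in FIFO order; the condition variable handles the case where the consumer arrives before the producer.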
Example 4: Design Pattern Choice
User: "Should I use the factory pattern or builder pattern here?"
Suggest:
Both patterns could work here with different trade-offs. Want to see how other models approach this?
`/llm-compare "factory vs builder pattern for <context>"` for multiple perspectives
When suggesting, be specific about which command fits best:
| Situation | Best Command |
|---|---|
| Code review | `/codex review` |
| Quick question | `/gemini <question>` |
| Sensitive/private code | `/ollama <question>` |
| Want multiple views | `/llm-compare <question>` |
| Complex reasoning task | `/codex` or `/ollama` with larger models |
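The table above is a simple situation-to-command mapping. As a hypothetical sketch (the `suggest_command` helper is illustrative, not part of any of these tools), it could be expressed as a lookup with a sensible fallback:

```python
# Hypothetical helper mirroring the table above. The situation labels
# and commands come from the table; the function itself is illustrative.
BEST_COMMAND = {
    "code review": "/codex review",
    "quick question": "/gemini <question>",
    "sensitive/private code": "/ollama <question>",
    "want multiple views": "/llm-compare <question>",
}

def suggest_command(situation: str) -> str:
    # Fall back to comparing multiple models when the situation is unclear.
    return BEST_COMMAND.get(situation.lower(), "/llm-compare <question>")
```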