Use when agents discover better patterns, find gaps or inaccuracies in existing skills, need to contribute validated improvements to shared knowledge, or have found a unique experience that could be...
This meta-skill teaches agents to recognize valuable learnings during their work and contribute improvements back to the communal skill repository through pull requests.
This repository is a living knowledge base that improves through collective learning. As agents work on tasks, they discover:
These discoveries should flow back into the repository so all agents benefit.
Critical principle: Skills must contain general-purpose knowledge that helps many agents across different contexts. This is not a place for project-specific configurations, personal preferences, or one-off solutions. Focus on patterns, principles, and practices that are broadly applicable.
Use this skill when:
Pay attention to signals that indicate learnable moments:
Time investment signals:
Repetition signals:
Correction signals:
Consult references/recognizing-learnings.md for detailed patterns.
Before proposing changes, validate based on contribution type:
For Tool/SDK Documentation:
For Pattern Contributions:
See references/validation-criteria.md for detailed guidance.
Update existing skill when:
Create new skill when:
Do NOT contribute when:
Note in conversation only when:
Important: All contributions go through pull requests, not direct commits to main.
PR workflow:
Consult references/pr-workflow.md for detailed process.
Clear rationale:
"I encountered rate limiting with multiple API providers 5 times. Added exponential
backoff pattern with jitter which resolved all instances. This pattern
isn't documented anywhere in the skills catalog."
Before/after comparison:
Before: "Use full rewrites for updates"
After: "Use append-only updates for concurrent writes (safer); use full rewrites
only for single-agent exclusive access"
Why: Prevents data loss in multi-agent scenarios
Evidence of validation:
"Tested across 3 different projects, pattern held. Also confirmed in
product docs. Previous approach caused data loss 2/3 times."
Preserved existing knowledge:
❌ Premature optimization - Changing after single instance without validation
❌ Over-generalization - "This worked for me once" → "Always do this"
❌ Opinion as fact - Personal preference without objective improvement
❌ Churn - Changes that are different but not better
❌ Deleting context - Removing information that might help others
Even well-intentioned contributions can miss the mark. Here are patterns to watch for:
Pattern: Documenting your specific solution instead of extracting the general pattern.
Example - TOO SPECIFIC:
Skill: `<example-skill>` (git workflow conventions)
Content: "Always end commits with: Written by <name> ◯ <organization>"
Problem: This is a personal preference, not general knowledge
Example - APPROPRIATELY GENERAL:
Skill: `<example-skill>` (revised concept for discovering repo conventions)
Content: "Check repository for commit conventions in CONTRIBUTING.md or recent commits"
Better: Teaches pattern of discovering conventions, applies to any repository
How to avoid:
Pattern: Creating skills that just reformat well-known documentation.
Example - JUST DOCUMENTATION:
Skill: "How to use git"
Content: Explains git commands, branching, committing, PR creation
Problem: This is standard git knowledge available everywhere
Example - SPECIALIZED KNOWLEDGE:
Skill: `<example-skill>` (protocol server builder in this repository)
Content: Patterns for creating servers - not just "how to use the protocol"
but specific guidance on tool design, error handling patterns, testing strategies
Better: Takes general protocol knowledge and adds specialized patterns for building quality servers
How to avoid:
Pattern: "I made one mistake" → "Let's create 3 new skills to prevent it"
Real example from this repository (November 2025):
Observation: One agent submitted an overly specific `<example-skill>` (git workflow conventions)
Initial reaction: "We need CULTURE.md + 3 new skills (knowledge-curation,
agent-human-collaboration, pattern-recognition) + extended documentation"
Correction: Another agent called this out as the exact over-generalization we warn against
Right-sized solution: Add this "Common Pitfalls" section instead
Learning: The system worked - peer review caught premature abstraction
How to avoid:
Understanding where your learning sits on the abstraction ladder:
Level 1 - Specific Solution (TOO LOW for skills):
"I configured Firebase Auth with Google OAuth by setting these environment
variables and calling these specific API endpoints"
→ Too specific, only helps others using Firebase + Google OAuth
Level 2 - Pattern (GOOD for skills):
"OAuth integration strategies: Provider discovery, token management, callback
handling, session persistence. Applies to Google, GitHub, Microsoft, etc."
→ General enough to help with any OAuth integration
Level 3 - Principle (ALSO GOOD for skills):
"Delegating authentication to specialized providers: Trade-offs between
managed auth services vs. self-hosted, security considerations, user experience"
→ Helps with authentication decisions broadly
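As a concrete instance of the Level 2 pattern, the provider-specific details can sit behind one generic structure so callback handling and token management are written once. This is a minimal sketch; the URLs and scopes below are illustrative placeholders, not real provider endpoints:

```python
from dataclasses import dataclass
from urllib.parse import urlencode

@dataclass(frozen=True)
class OAuthProvider:
    """Provider-agnostic OAuth configuration: the same four fields
    cover Google, GitHub, Microsoft, etc."""
    name: str
    authorize_url: str
    token_url: str
    scopes: tuple

def authorize_redirect(provider: OAuthProvider, client_id: str,
                       redirect_uri: str, state: str) -> str:
    """Build the authorization redirect URL (step 1 of the code flow).
    Works unchanged for any provider described by OAuthProvider."""
    params = urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(provider.scopes),
        "state": state,
    })
    return f"{provider.authorize_url}?{params}"
```

The skill-worthy knowledge is the shape of `OAuthProvider`, not any one provider's endpoints.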
Where to contribute:
How to climb the ladder:
Pattern: "I like doing it this way" → "Everyone should do it this way"
Example:
"Always use arrow functions in JavaScript"
"Always put API calls in src/api/ directory"
"Always use the premium model over the balanced model"
→ These are preferences without objective evidence of superiority
How to avoid:
Pattern: Creating new skills/docs when information should be added to existing ones.
Signs you're fragmenting:
How to avoid:
Ask yourself:
If you can confidently answer yes to all five → Contribute
If you're unsure on any → More validation needed or reconsider contribution type
Use this template for skill contributions:
## What
[Brief description of what's changing]
## Why
[Problem you encountered / gap you found / improvement opportunity]
## Evidence
[How you validated this is better / how you tested / how often you saw this]
## Impact
[Who benefits / what situations this helps / what it prevents]
## Testing
[How you verified the change works / examples you tried]
See references/contribution-examples.md for real examples.
During task: "Following `<example-skill>` (shared state updates), I used full-rewrite updates for
concurrent writes. Result: data loss when two agents wrote simultaneously."
Validation: "Checked references/<example-reference>.md (concurrency guidance) - it says append-only updates are
safer but warning wasn't prominent. Tested append-only updates with concurrent
writes - no data loss."
Action:
1. Create feature branch: fix/<example-skill>-concurrency-warning
2. Update SKILL.md to make warning more prominent
3. Add concrete example of data loss scenario
4. Create PR: "Emphasize append-only updates for concurrent writes"
5. Explain in PR: "Misread the guidance, led to data loss. Making warning
more visible to prevent this for other agents."
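The append-only pattern behind this example can be sketched in Python. The file path and record format are hypothetical; the key point is that append mode (`O_APPEND` semantics) makes each write land at the current end of file, so concurrent writers cannot clobber each other the way full rewrites can:

```python
import json
import time

def append_update(path: str, agent_id: str, payload: dict) -> None:
    """Append one JSON record per line. With 'a' mode each write is
    positioned at end-of-file by the OS, so two agents appending at
    once both survive; a full rewrite would lose one of them.
    (Single small writes are effectively atomic on local filesystems,
    though not guaranteed on NFS.)"""
    record = {"agent": agent_id, "ts": time.time(), "payload": payload}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_updates(path: str) -> list:
    """Replay the log; later records supersede earlier ones."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```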
During task: "Hit API rate limits 5 times across different projects.
Spent 30min each time figuring out exponential backoff."
Validation: "Pattern works consistently. Checked the skills catalog - not documented.
This is generalizable beyond my specific use case."
Action:
1. Create feature branch: add/<example-skill>-rate-limiting
2. Create new skill: .claude/skills/<example-skill>/ (API rate limiting patterns)
3. Document exponential backoff pattern with code examples
4. Create PR: "Add API rate limiting patterns"
5. Explain: "Common pattern that caused repeated debugging time. Validated
across 5 instances with different APIs."
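A minimal sketch of the exponential backoff with full jitter pattern this example documents; `call`, the retry limits, and the delay constants are placeholders to be tuned per API, not part of any real client library:

```python
import random
import time

def with_backoff(call, max_retries=5, base=0.5, cap=30.0,
                 retryable=(Exception,)):
    """Retry `call` with exponential backoff and full jitter.

    Each delay is drawn uniformly from [0, min(cap, base * 2**attempt)],
    which spreads retries out and avoids synchronized retry storms
    when many agents hit the same rate limit at once."""
    for attempt in range(max_retries):
        try:
            return call()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            time.sleep(delay)
```

Usage: wrap the rate-limited call, e.g. `with_backoff(lambda: client.fetch(url))`.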
During task: "Skill said 'use appropriate model' but didn't define criteria.
Tried premium, balanced, and budget model tiers before finding best fit."
Validation: "Through testing, identified that task complexity + budget
constraints should guide model choice. This clarification would have saved
1 hour."
Action:
1. Create feature branch: clarify/<example-skill>-selection-criteria
2. Add decision tree to skill
3. Include examples: "For X task → Y model because Z"
4. Create PR: "Add model selection decision tree"
5. Explain: "Ambiguous guidance led to trial-and-error. Adding decision
criteria to help agents choose upfront."
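A decision tree like the one this example proposes can be captured in a few lines. The tier names and complexity cutoffs below are hypothetical and should be calibrated against your provider's actual lineup; the sketch only shows the shape of the guidance:

```python
def choose_model_tier(complexity: str, budget_sensitive: bool) -> str:
    """Illustrative decision tree mapping task complexity and budget
    constraints to a model tier. Tier names ('premium', 'balanced',
    'budget') and cutoffs are placeholders, not real product names."""
    if complexity == "high":
        # Deep reasoning, multi-step planning, novel code generation.
        return "balanced" if budget_sensitive else "premium"
    if complexity == "medium":
        # Routine edits, summarization with some nuance.
        return "balanced"
    # Simple classification, extraction, formatting.
    return "budget"
```

Encoding the criteria this way makes the "For X task → Y model because Z" examples checkable rather than trial-and-error.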
When you make mistakes:
When you discover better approaches:
When skills lead you astray:
Before submitting PR, ask:
Attribution:
Collaboration:
Continuous improvement:
After contributing:
The goal: A knowledge base that gets smarter with every agent interaction.