Llm Txt Protocol
Skill Profile
(Select at least one profile to enable specific modules)
Overview
LLM Text Protocol provides standardized patterns for text-based interactions with language models. It encompasses prompt engineering, response formatting, and text processing patterns that ensure consistent, reliable, and effective communication with LLMs in production environments.
Why This Matters
- Reduces Processing Time: Standardized protocols streamline text processing and reduce latency
- Increases Consistency: Consistent prompts and formats ensure predictable outputs
- Lowers Development Costs: Reusable patterns reduce development time and maintenance
- Improves Quality: Structured validation and error handling improve output reliability
- Enables Automation: Standardized interfaces enable automated workflows
- Facilitates Integration: Clear protocols simplify integration with other systems
Core Concepts & Rules
1. Core Principles
- Follow established patterns and conventions
- Maintain consistency across codebase
- Document decisions and trade-offs
2. Implementation Guidelines
- Start with the simplest viable solution
- Iterate based on feedback and requirements
- Test thoroughly before deployment
Inputs / Outputs / Contracts
- Inputs:
- Text prompts (instructions, questions, tasks)
- Context information (background, history, preferences)
- Format specifications (JSON, XML, YAML)
- Entry Conditions:
- LLM API configured with valid credentials
- Prompt templates defined and validated
- Response schemas defined
- Outputs:
- Structured responses (JSON, XML, YAML)
- Validated data (Pydantic models)
- Error messages with fallback responses
- Artifacts Required (Deliverables):
- Prompt template implementations
- Response validation schemas
- Error handling and retry logic
- Thai language support
- Acceptance Evidence:
- Prompt template tests passing
- Response validation tests passing
- Error handling tests passing
- Thai language support verified
- Success Criteria:
- Response validation success rate > 95%
- Prompt template reuse rate > 80%
- Error recovery rate > 90%
- Thai response accuracy > 85%
Skill Composition
Quick Start / Implementation Example
- Review requirements and constraints
- Set up development environment
- Implement core functionality following patterns
- Write tests for critical paths
- Run tests and fix issues
- Document any deviations or decisions
# Example implementation following best practices
def example_function():
# Your implementation here
pass
Assumptions / Constraints / Non-goals
- Assumptions:
- Development environment is properly configured
- Required dependencies are available
- Team has basic understanding of domain
- Constraints:
- Must follow existing codebase conventions
- Time and resource limitations
- Compatibility requirements
- Non-goals:
- This skill does not cover edge cases outside scope
- Not a replacement for formal training
Compatibility & Prerequisites
- Supported Versions:
- Python 3.8+
- Node.js 16+
- Modern browsers (Chrome, Firefox, Safari, Edge)
- Required AI Tools:
- Code editor (VS Code recommended)
- Testing framework appropriate for language
- Version control (Git)
- Dependencies:
- Language-specific package manager
- Build tools
- Testing libraries
- Environment Setup:
.env.example keys: API_KEY, DATABASE_URL (no values)
Test Scenario Matrix (QA Strategy)
| Type |
Focus Area |
Required Scenarios / Mocks |
| Unit |
Core Logic |
Must cover primary logic and at least 3 edge/error cases. Target minimum 80% coverage |
| Integration |
DB / API |
All external API calls or database connections must be mocked during unit tests |
| E2E |
User Journey |
Critical user flows to test |
| Performance |
Latency / Load |
Benchmark requirements |
| Security |
Vuln / Auth |
SAST/DAST or dependency audit |
| Frontend |
UX / A11y |
Accessibility checklist (WCAG), Performance Budget (Lighthouse score) |
Technical Guardrails & Security Threat Model
1. Security & Privacy (Threat Model)
- Top Threats: Injection attacks, authentication bypass, data exposure
2. Performance & Resources
3. Architecture & Scalability
4. Observability & Reliability
Agent Directives & Error Recovery
(ข้อกำหนดสำหรับ AI Agent ในการคิดและแก้ปัญหาเมื่อเกิดข้อผิดพลาด)
- Thinking Process: Analyze root cause before fixing. Do not brute-force.
- Fallback Strategy: Stop after 3 failed test attempts. Output root cause and ask for human intervention/clarification.
- Self-Review: Check against Guardrails & Anti-patterns before finalizing.
- Output Constraints: Output ONLY the modified code block. Do not explain unless asked.
Definition of Done (DoD) Checklist
Anti-patterns / Pitfalls
- ⛔ Don't: Log PII, catch-all exception, N+1 queries
- ⚠️ Watch out for: Common symptoms and quick fixes
- 💡 Instead: Use proper error handling, pagination, and logging
Reference Links & Examples
- Internal documentation and examples
- Official documentation and best practices
- Community resources and discussions
Versioning & Changelog
- Version: 1.0.0
- Changelog:
- 2026-02-22: Initial version with complete template structure