
    amnadtaowsoam/llm-text-protocol
    AI & ML


    About

    Standardized protocol for text-based interactions with language models, including prompt engineering, response formatting, and text processing patterns.

    SKILL.md

    LLM Text Protocol

    Skill Profile

    (Select at least one profile to enable specific modules)

    • DevOps
    • Backend
    • Frontend
    • AI-RAG
    • Security Critical

    Overview

    LLM Text Protocol provides standardized patterns for text-based interactions with language models. It encompasses prompt engineering, response formatting, and text processing patterns that ensure consistent, reliable, and effective communication with LLMs in production environments.

    Why This Matters

    • Reduces Processing Time: Standardized protocols streamline text processing and reduce latency
    • Increases Consistency: Consistent prompts and formats ensure predictable outputs
    • Lowers Development Costs: Reusable patterns reduce development time and maintenance
    • Improves Quality: Structured validation and error handling improve output reliability
    • Enables Automation: Standardized interfaces enable automated workflows
    • Facilitates Integration: Clear protocols simplify integration with other systems

    Core Concepts & Rules

    1. Core Principles

    • Follow established patterns and conventions
    • Maintain consistency across codebase
    • Document decisions and trade-offs

    2. Implementation Guidelines

    • Start with the simplest viable solution
    • Iterate based on feedback and requirements
    • Test thoroughly before deployment

    Inputs / Outputs / Contracts

    • Inputs:
      • Text prompts (instructions, questions, tasks)
      • Context information (background, history, preferences)
      • Format specifications (JSON, XML, YAML)
    • Entry Conditions:
      • LLM API configured with valid credentials
      • Prompt templates defined and validated
      • Response schemas defined
    • Outputs:
      • Structured responses (JSON, XML, YAML)
      • Validated data (Pydantic models)
      • Error messages with fallback responses
    • Artifacts Required (Deliverables):
      • Prompt template implementations
      • Response validation schemas
      • Error handling and retry logic
      • Thai language support
    • Acceptance Evidence:
      • Prompt template tests passing
      • Response validation tests passing
      • Error handling tests passing
      • Thai language support verified
    • Success Criteria:
      • Response validation success rate > 95%
      • Prompt template reuse rate > 80%
      • Error recovery rate > 90%
      • Thai response accuracy > 85%
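One way to satisfy the "validated data (Pydantic models)" deliverable above is schema-first parsing with a fallback on failure. A minimal sketch; the `LLMAnswer` model and its fields are illustrative assumptions, not defined by the protocol:

```python
import json
from typing import Optional

from pydantic import BaseModel, ValidationError

class LLMAnswer(BaseModel):
    # Illustrative schema; real field names come from your response spec.
    answer: str
    confidence: float

def parse_response(raw: str) -> Optional[LLMAnswer]:
    """Validate a raw LLM response, returning None so callers can fall back."""
    try:
        return LLMAnswer(**json.loads(raw))
    except (json.JSONDecodeError, ValidationError, TypeError):
        return None
```

Counting `parse_response` successes over total calls yields the >95% validation success metric directly.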

    Skill Composition

    • Depends on: llm-integration, ai-agents
    • Compatible with: chatbot-integration, ai-search, multi-language
    • Conflicts with: None (generic protocol skill)
    • Related Skills: llm-function-calling, langchain-patterns, prompting-patterns

    Quick Start / Implementation Example

    1. Review requirements and constraints
    2. Set up development environment
    3. Implement core functionality following patterns
    4. Write tests for critical paths
    5. Run tests and fix issues
    6. Document any deviations or decisions
    # Example: fill a reusable prompt template (minimal placeholder)
    def render_prompt(template: str, **values) -> str:
        """Render a prompt template with the supplied values."""
        return template.format(**values)
    

    Assumptions / Constraints / Non-goals

    • Assumptions:
      • Development environment is properly configured
      • Required dependencies are available
      • Team has basic understanding of domain
    • Constraints:
      • Must follow existing codebase conventions
      • Time and resource limitations
      • Compatibility requirements
    • Non-goals:
      • This skill does not cover edge cases outside scope
      • Not a replacement for formal training

    Compatibility & Prerequisites

    • Supported Versions:
      • Python 3.8+
      • Node.js 16+
      • Modern browsers (Chrome, Firefox, Safari, Edge)
    • Required AI Tools:
      • Code editor (VS Code recommended)
      • Testing framework appropriate for language
      • Version control (Git)
    • Dependencies:
      • Language-specific package manager
      • Build tools
      • Testing libraries
    • Environment Setup:
      • .env.example keys: API_KEY, DATABASE_URL (no values)
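The `.env.example` referenced above might look like this (keys only, no values, per the setup note):

```
# .env.example — copy to .env and fill in locally; never commit real values
API_KEY=
DATABASE_URL=
```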

    Test Scenario Matrix (QA Strategy)

    Type        | Focus Area     | Required Scenarios / Mocks
    Unit        | Core Logic     | Cover primary logic and at least 3 edge/error cases; target minimum 80% coverage
    Integration | DB / API       | All external API calls and database connections must be mocked during unit tests
    E2E         | User Journey   | Critical user flows to test
    Performance | Latency / Load | Benchmark requirements
    Security    | Vuln / Auth    | SAST/DAST or dependency audit
    Frontend    | UX / A11y      | Accessibility checklist (WCAG), performance budget (Lighthouse score)
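The Integration row's rule (mock all external calls during unit tests) might be applied like this; `call_llm` and `summarize` are hypothetical stand-ins, not part of any real client library:

```python
from unittest import mock

def call_llm(prompt: str) -> str:
    # Stand-in for a real network call; unit tests must never reach it.
    raise RuntimeError("network call attempted in a unit test")

def summarize(text: str) -> str:
    """Unit under test: wraps the LLM call and trims whitespace."""
    return call_llm(f"Summarize: {text}").strip()

def test_summarize_with_mocked_llm() -> str:
    # Patch the module-level function so the test is hermetic.
    with mock.patch(f"{__name__}.call_llm", return_value="  short summary  "):
        return summarize("long text")

assert test_summarize_with_mocked_llm() == "short summary"
```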

    Technical Guardrails & Security Threat Model

    1. Security & Privacy (Threat Model)

    • Top Threats: Injection attacks, authentication bypass, data exposure
    • Data Handling: Sanitize all user inputs to prevent Injection attacks. Never log raw PII
    • Secrets Management: No hardcoded API keys. Use Env Vars/Secrets Manager
    • Authorization: Validate user permissions before state changes

    2. Performance & Resources

    • Execution Efficiency: Consider time complexity for algorithms
    • Memory Management: Use streams/pagination for large data
    • Resource Cleanup: Close DB connections/file handlers in finally blocks
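The resource-cleanup rule above, sketched with an explicit `finally`; sqlite is just a convenient stand-in for any database connection:

```python
import sqlite3

def count_tables(db_path: str) -> int:
    """Query a database and always release the connection, even on error."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute("SELECT COUNT(*) FROM sqlite_master")
        return cur.fetchone()[0]
    finally:
        conn.close()  # runs on both the return path and the exception path
```

A context manager (`contextlib.closing(conn)`) gives the same guarantee more idiomatically.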

    3. Architecture & Scalability

    • Design Pattern: Follow SOLID principles, use Dependency Injection
    • Modularity: Decouple logic from UI/Frameworks

    4. Observability & Reliability

    • Logging Standards: Structured JSON; include a trace/request ID (request_id)
    • Metrics: Track error_rate, latency, queue_depth
    • Error Handling: Standardized error codes, no bare except
    • Observability Artifacts:
      • Log Fields: timestamp, level, message, request_id
      • Metrics: request_count, error_count, response_time
      • Dashboards/Alerts: High Error Rate > 5%
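A minimal sketch of a structured-JSON log line carrying the required fields; the field set comes from the list above, while the emitter itself is an assumption:

```python
import json
import logging
from datetime import datetime, timezone

def log_json(logger: logging.Logger, level: str, message: str, request_id: str) -> str:
    """Emit one structured log line with the required observability fields."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
        "request_id": request_id,
    }
    line = json.dumps(record)
    logger.log(getattr(logging, level), line)
    return line

line = log_json(logging.getLogger("app"), "INFO", "request handled", "req-123")
```

Emitting one JSON object per line keeps the output machine-parseable for the dashboards and alerts listed above.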

    Agent Directives & Error Recovery

    (Requirements for how the AI Agent should reason and recover when errors occur)

    • Thinking Process: Analyze root cause before fixing. Do not brute-force.
    • Fallback Strategy: Stop after 3 failed test attempts. Output root cause and ask for human intervention/clarification.
    • Self-Review: Check against Guardrails & Anti-patterns before finalizing.
    • Output Constraints: Output ONLY the modified code block. Do not explain unless asked.
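The three-attempt fallback above can be enforced with a simple bounded loop; the `run_tests` callable is hypothetical:

```python
from typing import Callable

MAX_ATTEMPTS = 3  # per the fallback strategy above

def fix_until_green(run_tests: Callable[[], bool]) -> str:
    """Retry up to MAX_ATTEMPTS, then stop and escalate to a human."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if run_tests():
            return f"passed on attempt {attempt}"
    return "stopped: 3 failed attempts, escalating root cause to human"

# Simulated run: fails twice, then passes on the third attempt.
results = iter([False, False, True])
status = fix_until_green(lambda: next(results))
```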

    Definition of Done (DoD) Checklist

    • Tests passed + coverage met
    • Lint/Typecheck passed
    • Logging/Metrics/Trace implemented
    • Security checks passed
    • Documentation/Changelog updated
    • Accessibility/Performance requirements met (if frontend)

    Anti-patterns / Pitfalls

    • ⛔ Don't: Log PII, catch-all exception, N+1 queries
    • ⚠️ Watch out for: Common symptoms and quick fixes
    • 💡 Instead: Use proper error handling, pagination, and logging
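For instance, the "no catch-all exception" rule means catching only the error you expect; this helper is illustrative:

```python
import logging

logger = logging.getLogger("app")

def parse_int(value: str, default: int = 0) -> int:
    """Parse an integer with a logged fallback instead of a silent catch-all."""
    try:
        return int(value)
    except ValueError:  # never a bare `except:`, which would swallow real bugs
        logger.warning("invalid integer %r, using default %d", value, default)
        return default
```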

    Reference Links & Examples

    • Internal documentation and examples
    • Official documentation and best practices
    • Community resources and discussions

    Versioning & Changelog

    • Version: 1.0.0
    • Changelog:
      • 2026-02-22: Initial version with complete template structure
    Repository: amnadtaowsoam/cerebraskills