    sickn33/unit-testing-test-generate

    About

    Generate comprehensive, maintainable unit tests across languages with strong coverage and edge case focus.

    SKILL.md

    Automated Unit Test Generation

    You are a test automation expert specializing in generating comprehensive, maintainable unit tests across multiple languages and frameworks. Create tests that maximize coverage, catch edge cases, and follow best practices for assertion quality and test organization.

    Use this skill when

    • You need unit tests for existing code
    • You want consistent test structure and coverage
    • You need mocks, fixtures, and edge-case validation

    Do not use this skill when

    • You only need integration or E2E tests
    • You cannot access the source code under test
    • Tests must be hand-written for compliance reasons

    Context

    The user needs automated test generation that analyzes code structure, identifies test scenarios, and creates high-quality unit tests with proper mocking, assertions, and edge case coverage. Focus on framework-specific patterns and maintainable test suites.

    Requirements

    $ARGUMENTS

    Instructions

    1. Analyze Code for Test Generation

    Scan codebase to identify untested code and generate comprehensive test suites:

    import ast
    from pathlib import Path
    from typing import Dict, List, Any
    
    class TestGenerator:
        def __init__(self, language: str):
            self.language = language
            self.framework_map = {
                'python': 'pytest',
                'javascript': 'jest',
                'typescript': 'jest',
                'java': 'junit',
                'go': 'testing'
            }
    
        def analyze_file(self, file_path: str) -> Dict[str, Any]:
            """Extract testable units from source file"""
            if self.language == 'python':
                return self._analyze_python(file_path)
            elif self.language in ['javascript', 'typescript']:
                return self._analyze_javascript(file_path)
            raise ValueError(f"Unsupported language: {self.language}")
    
        def _analyze_python(self, file_path: str) -> Dict:
            with open(file_path) as f:
                tree = ast.parse(f.read())
    
            functions = []
            classes = []
    
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    functions.append({
                        'name': node.name,
                        'args': [arg.arg for arg in node.args.args],
                        'returns': ast.unparse(node.returns) if node.returns else None,
                        'decorators': [ast.unparse(d) for d in node.decorator_list],
                        'docstring': ast.get_docstring(node),
                        'complexity': self._calculate_complexity(node)
                    })
                elif isinstance(node, ast.ClassDef):
                    methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
                    classes.append({
                        'name': node.name,
                        'methods': methods,
                        'bases': [ast.unparse(base) for base in node.bases]
                    })
    
            return {'functions': functions, 'classes': classes, 'file': file_path}

        def _calculate_complexity(self, node: ast.AST) -> int:
            """Approximate cyclomatic complexity by counting branch points."""
            branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
            return 1 + sum(isinstance(n, branches) for n in ast.walk(node))
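
    A minimal usage sketch of the analyzer above, assuming `_analyze_javascript` is implemented analogously for JS/TS sources; the source path is hypothetical:

    # Hypothetical driver: analyze one Python file and list its testable units.
    generator = TestGenerator(language='python')
    analysis = generator.analyze_file('src/calculator.py')
    for func in analysis['functions']:
        print(f"{func['name']}({', '.join(func['args'])}) complexity={func['complexity']}")
    for cls in analysis['classes']:
        print(f"class {cls['name']}: methods {cls['methods']}")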
    

    2. Generate Python Tests with pytest

    def generate_pytest_tests(self, analysis: Dict) -> str:
        """Generate pytest test file from code analysis"""
        tests = ['import pytest', 'from unittest.mock import Mock, patch', '']
    
        module_name = Path(analysis['file']).stem
        tests.append(f"from {module_name} import *\n")
    
        for func in analysis['functions']:
            if func['name'].startswith('_'):
                continue
    
            test_class = self._generate_function_tests(func)
            tests.append(test_class)
    
        for cls in analysis['classes']:
            test_class = self._generate_class_tests(cls)
            tests.append(test_class)
    
        return '\n'.join(tests)
    
    def _generate_function_tests(self, func: Dict) -> str:
        """Generate test cases for a function"""
        func_name = func['name']
        tests = [f"\n\nclass Test{func_name.title()}:"]
    
        # Happy path test
        tests.append(f"    def test_{func_name}_success(self):")
        tests.append(f"        result = {func_name}({self._generate_mock_args(func['args'])})")
        tests.append(f"        assert result is not None\n")
    
        # Edge case tests
        if len(func['args']) > 0:
            tests.append(f"    def test_{func_name}_with_empty_input(self):")
            tests.append(f"        with pytest.raises((ValueError, TypeError)):")
            tests.append(f"            {func_name}({self._generate_empty_args(func['args'])})\n")
    
        # Exception handling test
        tests.append(f"    def test_{func_name}_handles_errors(self):")
        tests.append(f"        with pytest.raises(Exception):")
        tests.append(f"            {func_name}({self._generate_invalid_args(func['args'])})\n")
    
        return '\n'.join(tests)
    
    def _generate_class_tests(self, cls: Dict) -> str:
        """Generate test cases for a class"""
        tests = [f"\n\nclass Test{cls['name']}:"]
        tests.append(f"    @pytest.fixture")
        tests.append(f"    def instance(self):")
        tests.append(f"        return {cls['name']}()\n")
    
        for method in cls['methods']:
            if method.startswith('_') and method != '__init__':
                continue
    
            tests.append(f"    def test_{method}(self, instance):")
            tests.append(f"        result = instance.{method}()")
            tests.append(f"        assert result is not None\n")
    
        return '\n'.join(tests)
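
    For illustration, running the generator on a hypothetical `calculator.py` module containing `divide(a, b)` would yield a file along these lines (the exact argument placeholders depend on how the `_generate_*_args` helpers are implemented):

    # Illustrative generated output; module, function, and argument values are hypothetical.
    import pytest
    from unittest.mock import Mock, patch

    from calculator import *

    class TestDivide:
        def test_divide_success(self):
            result = divide(1, 2)
            assert result is not None

        def test_divide_with_empty_input(self):
            with pytest.raises((ValueError, TypeError)):
                divide(None, None)

        def test_divide_handles_errors(self):
            with pytest.raises(Exception):
                divide(object(), object())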
    

    3. Generate JavaScript/TypeScript Tests with Jest

    interface TestCase {
      name: string;
      setup?: string;
      execution: string;
      assertions: string[];
    }
    
    class JestTestGenerator {
      generateTests(functionName: string, params: string[]): string {
        const tests: TestCase[] = [
          {
            name: `${functionName} returns expected result with valid input`,
            execution: `const result = ${functionName}(${this.generateMockParams(params)})`,
            assertions: ['expect(result).toBeDefined()', 'expect(result).not.toBeNull()']
          },
          {
            name: `${functionName} handles null input gracefully`,
            execution: `const result = ${functionName}(null)`,
            assertions: ['expect(result).toBeDefined()']
          },
          {
            name: `${functionName} throws error for invalid input`,
        execution: `const execution = () => ${functionName}(undefined)`,
            assertions: ['expect(execution).toThrow()']
          }
        ];
    
        return this.formatJestSuite(functionName, tests);
      }
    
      formatJestSuite(name: string, cases: TestCase[]): string {
        let output = `describe('${name}', () => {\n`;
    
        for (const testCase of cases) {
          output += `  it('${testCase.name}', () => {\n`;
          if (testCase.setup) {
            output += `    ${testCase.setup}\n`;
          }
          output += `    ${testCase.execution};\n`;
          for (const assertion of testCase.assertions) {
            output += `    ${assertion};\n`;
          }
          output += `  });\n\n`;
        }
    
        output += '});\n';
        return output;
      }
    
      generateMockParams(params: string[]): string {
        return params.map(p => `mock${p.charAt(0).toUpperCase() + p.slice(1)}`).join(', ');
      }
    }
    

    4. Generate React Component Tests

    function generateReactComponentTest(componentName: string): string {
      return `
    import { render, screen, fireEvent } from '@testing-library/react';
    import { ${componentName} } from './${componentName}';
    
    describe('${componentName}', () => {
      it('renders without crashing', () => {
        render(<${componentName} />);
        expect(screen.getByRole('main')).toBeInTheDocument();
      });
    
      it('displays correct initial state', () => {
        render(<${componentName} />);
        const element = screen.getByTestId('${componentName.toLowerCase()}');
        expect(element).toBeVisible();
      });
    
      it('handles user interaction', () => {
        render(<${componentName} />);
        const button = screen.getByRole('button');
        fireEvent.click(button);
        expect(screen.getByText(/clicked/i)).toBeInTheDocument();
      });
    
      it('updates props correctly', () => {
        const { rerender } = render(<${componentName} value="initial" />);
        expect(screen.getByText('initial')).toBeInTheDocument();
    
        rerender(<${componentName} value="updated" />);
        expect(screen.getByText('updated')).toBeInTheDocument();
      });
    });
    `;
    }
    

    5. Coverage Analysis and Gap Detection

    import subprocess
    import json
    from typing import Dict, List
    
    class CoverageAnalyzer:
        def analyze_coverage(self, test_command: str) -> Dict:
            """Run tests with coverage and identify gaps"""
            # Coverage flags are framework-specific: Jest understands
            # --coverage --json, while pytest needs e.g. --cov --cov-report=json.
            result = subprocess.run(
                [test_command, '--coverage', '--json'],
                capture_output=True,
                text=True
            )
    
            coverage_data = json.loads(result.stdout)
            gaps = self.identify_coverage_gaps(coverage_data)
    
            return {
                'overall_coverage': coverage_data.get('totals', {}).get('percent_covered', 0),
                'uncovered_lines': gaps,
                'files_below_threshold': self.find_low_coverage_files(coverage_data, 80)
            }
    
        def identify_coverage_gaps(self, coverage: Dict) -> List[Dict]:
            """Find specific lines/functions without test coverage"""
            gaps = []
            for file_path, data in coverage.get('files', {}).items():
                missing_lines = data.get('missing_lines', [])
                if missing_lines:
                    gaps.append({
                        'file': file_path,
                        'lines': missing_lines,
                        'excluded_lines': data.get('excluded_lines', [])
                    })
            return gaps
    
        def generate_tests_for_gaps(self, gaps: List[Dict]) -> str:
            """Generate tests specifically for uncovered code"""
            tests = []
            for gap in gaps:
                test_code = self.create_targeted_test(gap)
                tests.append(test_code)
            return '\n\n'.join(tests)
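
    A sketch of driving the gap analysis from a coverage.py JSON report generated up front, rather than parsing test-runner stdout; the report path is coverage.py's default output name:

    # Assumes a coverage.py report was produced beforehand, e.g.:
    #   coverage run -m pytest && coverage json -o coverage.json
    import json

    with open('coverage.json') as f:
        coverage_data = json.load(f)

    analyzer = CoverageAnalyzer()
    for gap in analyzer.identify_coverage_gaps(coverage_data):
        print(f"{gap['file']}: missing lines {gap['lines']}")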
    

    6. Mock Generation

    def generate_mock_objects(self, dependencies: List[str]) -> str:
        """Generate mock objects for external dependencies"""
        # Import pytest and each mocked dependency in the generated module so
        # the @pytest.fixture decorator and Mock(spec=...) names resolve.
        mocks = ['import pytest', 'from unittest.mock import Mock, MagicMock, patch\n']

        for dep in dependencies:
            mocks.append(f"import {dep}\n")
            mocks.append(f"@pytest.fixture")
            mocks.append(f"def mock_{dep}():")
            mocks.append(f"    mock = Mock(spec={dep})")
            mocks.append(f"    mock.method.return_value = 'mocked_result'")
            mocks.append(f"    return mock\n")
    
        return '\n'.join(mocks)
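
    For example, `generate_mock_objects(['requests'])` would emit roughly the following fixture module (`requests` is just an illustrative dependency name):

    # Illustrative generated output for a single dependency named `requests`.
    import pytest
    from unittest.mock import Mock, MagicMock, patch

    import requests

    @pytest.fixture
    def mock_requests():
        mock = Mock(spec=requests)
        mock.method.return_value = 'mocked_result'
        return mock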
    

    Output Format

    1. Test Files: Complete test suites ready to run
    2. Coverage Report: Current coverage with gaps identified
    3. Mock Objects: Fixtures for external dependencies
    4. Test Documentation: Explanation of test scenarios
    5. CI Integration: Commands to run tests in pipeline

    Focus on generating maintainable, comprehensive tests that catch bugs early and provide confidence in code changes.

    Limitations

    • Use this skill only when the task clearly matches the scope described above.
    • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
    • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.