Generate comprehensive test boilerplate following project TDD conventions. Use when creating new tests for modules, classes, or functions...
BEFORE generating tests, you MUST read and understand the following project documentation:
After reading these files, proceed with your test generation task below.
Automatically generate comprehensive test boilerplate that follows this project's strict TDD and quality standards.
✅ Test files with proper imports and structure
✅ Test classes organized by functionality
✅ Parametrized test templates
✅ AAA pattern (Arrange-Act-Assert) structure
✅ Complete type hints
✅ Google-style docstrings
✅ Fixture templates (when needed)
```
# User provides source file
generate tests for src/python_modern_template/validators.py
```
Output: Creates `tests/test_validators.py` with:
```
# User describes the function
generate tests for validate_email function that checks email format
```
Output: Creates test class with:
```
# User needs test fixtures
generate pytest fixture for database connection
```
Output: Creates a `conftest.py` entry with proper fixture structure
If a source file is provided:
If described by the user:
Select the appropriate template from the `.claude/skills/test-generator/templates/` directory:

- `test_function.py` - Simple function tests
- `test_class.py` - Class with methods tests
- `test_parametrized.py` - Data-driven tests
- `test_async.py` - Async function tests
- `test_exception.py` - Error handling tests
- `conftest_fixture.py` - Pytest fixtures
Template Variables:
- `{module_name}` - Source module name
- `{class_name}` - Class being tested
- `{function_name}` - Function being tested
- `{test_class_name}` - Generated test class name
- `{import_path}` - Full import path
- `{param_names}` - Function parameters
- `{return_type}` - Function return type

Generate test cases for:
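A minimal sketch of how these variables might be substituted (the `render_template` helper here is hypothetical, not the skill's actual implementation):

```python
# Hypothetical sketch: rendering a template by substituting the variables above.
# str.format_map fills {name}-style placeholders from a dict.
def render_template(template_text: str, variables: dict[str, str]) -> str:
    """Fill {name}-style placeholders in a template string."""
    return template_text.format_map(variables)

header = render_template(
    '"""Tests for {module_name}.{function_name}."""',
    {"module_name": "validators", "function_name": "validate_email"},
)
print(header)  # prints """Tests for validators.validate_email."""
```

In practice, `string.Template` with `$`-style placeholders avoids clashes with literal braces in the generated Python code; the real generator may take a different approach entirely.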
File: templates/test_function.py

```python
"""Tests for {module_name}.{function_name}."""

from __future__ import annotations

import pytest

from {import_path} import {function_name}


class Test{FunctionNameCamelCase}:
    """Test cases for {function_name}."""

    def test_{function_name}_basic(self) -> None:
        """Test basic functionality of {function_name}."""
        # Arrange
        # TODO: Set up test data

        # Act
        result = {function_name}()

        # Assert
        # TODO: Add assertions
        assert result is not None

    def test_{function_name}_with_valid_input(self) -> None:
        """Test {function_name} with valid input."""
        # Arrange
        # TODO: Prepare valid input

        # Act
        # TODO: Call function

        # Assert
        # TODO: Verify result
        pass

    def test_{function_name}_with_invalid_input(self) -> None:
        """Test {function_name} handles invalid input."""
        # Arrange
        # TODO: Prepare invalid input

        # Act & Assert
        with pytest.raises(ValueError):
            {function_name}()  # TODO: Add invalid input

    def test_{function_name}_edge_cases(self) -> None:
        """Test {function_name} edge cases."""
        # TODO: Test empty input, None, boundary values
        pass
```
File: templates/test_parametrized.py

```python
"""Tests for {module_name}.{function_name}."""

from __future__ import annotations

import pytest

from {import_path} import {function_name}


class Test{FunctionNameCamelCase}:
    """Parametrized test cases for {function_name}."""

    @pytest.mark.parametrize(
        "input_value,expected",
        [
            # Happy path cases
            ("valid_input_1", "expected_output_1"),
            ("valid_input_2", "expected_output_2"),
            # Edge cases
            ("", "expected_for_empty"),
            (None, "expected_for_none"),
            # TODO: Add more test cases
        ],
    )
    def test_{function_name}_parametrized(
        self,
        input_value: str,  # TODO: Adjust type
        expected: str,  # TODO: Adjust type
    ) -> None:
        """Test {function_name} with various inputs."""
        # Act
        result = {function_name}(input_value)

        # Assert
        assert result == expected

    @pytest.mark.parametrize(
        "invalid_input,expected_error",
        [
            ("invalid_1", ValueError),
            ("invalid_2", TypeError),
            # TODO: Add more error cases
        ],
    )
    def test_{function_name}_error_cases(
        self,
        invalid_input: str,  # TODO: Adjust type
        expected_error: type[Exception],
    ) -> None:
        """Test {function_name} raises appropriate errors."""
        # Act & Assert
        with pytest.raises(expected_error):
            {function_name}(invalid_input)
```
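As a hedged illustration, here is roughly what the parametrized template could look like once filled in for a `validate_email` function. The validator body below is a stand-in written for this example, not the project's real implementation:

```python
"""Illustrative filled-in parametrized tests; validate_email is a stand-in."""

from __future__ import annotations

import re

import pytest


def validate_email(value: str) -> str:
    """Stand-in validator, included only to make the example self-contained."""
    if not isinstance(value, str):
        raise TypeError("email must be a string")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
        raise ValueError(f"invalid email: {value!r}")
    return value.lower()


class TestValidateEmail:
    """Parametrized test cases for validate_email."""

    @pytest.mark.parametrize(
        "input_value,expected",
        [
            # Happy path cases
            ("user@example.com", "user@example.com"),
            ("USER@Example.COM", "user@example.com"),
        ],
    )
    def test_validate_email_parametrized(self, input_value: str, expected: str) -> None:
        """Test validate_email with various inputs."""
        # Act & Assert
        assert validate_email(input_value) == expected

    @pytest.mark.parametrize(
        "invalid_input,expected_error",
        [
            ("not-an-email", ValueError),
            (None, TypeError),
        ],
    )
    def test_validate_email_error_cases(
        self,
        invalid_input: object,
        expected_error: type[Exception],
    ) -> None:
        """Test validate_email raises appropriate errors."""
        # Act & Assert
        with pytest.raises(expected_error):
            validate_email(invalid_input)
```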
File: templates/test_class.py

```python
"""Tests for {module_name}.{class_name}."""

from __future__ import annotations

import pytest

from {import_path} import {class_name}


class Test{ClassName}:
    """Test cases for {class_name}."""

    @pytest.fixture
    def instance(self) -> {class_name}:
        """Create {class_name} instance for testing."""
        return {class_name}()  # TODO: Add initialization parameters

    def test_initialization(self, instance: {class_name}) -> None:
        """Test {class_name} initialization."""
        # Assert
        assert isinstance(instance, {class_name})
        # TODO: Verify initial state

    def test_method_name(self, instance: {class_name}) -> None:
        """Test method_name behavior."""
        # Arrange
        # TODO: Set up test data

        # Act
        result = instance.method_name()  # TODO: Call the actual method

        # Assert
        # TODO: Verify result
        assert result is not None
```
File: templates/test_async.py

```python
"""Tests for async {module_name}.{function_name}."""

from __future__ import annotations

import asyncio

import pytest

from {import_path} import {function_name}


class Test{FunctionNameCamelCase}:
    """Test cases for async {function_name}."""

    @pytest.mark.asyncio
    async def test_{function_name}_basic(self) -> None:
        """Test basic async functionality."""
        # Arrange
        # TODO: Set up test data

        # Act
        result = await {function_name}()

        # Assert
        assert result is not None

    @pytest.mark.asyncio
    async def test_{function_name}_concurrent(self) -> None:
        """Test concurrent execution."""
        # Act
        results = await asyncio.gather(
            {function_name}(),
            {function_name}(),
        )

        # Assert
        assert len(results) == 2
```
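The `@pytest.mark.asyncio` marker requires the `pytest-asyncio` plugin. Assuming the project configures pytest through `pyproject.toml` and has the plugin installed as a dev dependency (both assumptions, not confirmed by this document), a minimal configuration might look like:

```toml
# Assumes pytest-asyncio is installed as a dev dependency.
[tool.pytest.ini_options]
asyncio_mode = "strict"  # markers required; "auto" applies them implicitly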
File: templates/conftest_fixture.py

```python
"""Pytest fixtures for {test_module}."""

from __future__ import annotations

from collections.abc import Generator

import pytest


@pytest.fixture
def {fixture_name}() -> Generator[{ResourceType}, None, None]:
    """Provide {description} for tests.

    Yields:
        {ResourceType}: {Description of what is yielded}
    """
    # Setup
    resource = {ResourceType}()  # TODO: Initialize resource

    try:
        yield resource
    finally:
        # Teardown
        pass  # TODO: Add cleanup code
```
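As a sketch of the template filled in, here is a fixture for an in-memory database connection. `sqlite3` stands in for whatever database the project actually uses; the fixture name is illustrative:

```python
"""Filled-in fixture example; sqlite3 is a stand-in resource."""

from __future__ import annotations

import sqlite3
from collections.abc import Generator

import pytest


@pytest.fixture
def db_connection() -> Generator[sqlite3.Connection, None, None]:
    """Provide an in-memory SQLite connection for tests.

    Yields:
        sqlite3.Connection: An open connection, closed on teardown.
    """
    # Setup
    connection = sqlite3.connect(":memory:")

    try:
        yield connection
    finally:
        # Teardown
        connection.close()
```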
All generated tests MUST include:

✅ **Type Hints** - every test function annotated, including the `-> None` return type

✅ **Docstrings** - a Google-style docstring on every test

✅ **AAA Pattern**

```python
def test_example() -> None:
    """Test description."""
    # Arrange - Set up test data
    data = prepare_test_data()

    # Act - Execute the function
    result = function_under_test(data)

    # Assert - Verify the result
    assert result == expected
```

✅ **Proper Imports**

```python
from __future__ import annotations  # Always first

import pytest

from python_modern_template.module import function
```

✅ **No Excessive Mocking**
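As one illustration of the mocking guideline (the function and names below are hypothetical, not from this project): pytest's built-in `monkeypatch` fixture can replace stacked mocks for environment-dependent code.

```python
# Hypothetical example: instead of layering mock.patch calls, use pytest's
# built-in monkeypatch fixture to adjust only the true external boundary.
from __future__ import annotations

import os


def get_api_url() -> str:
    """Example function under test (hypothetical)."""
    return os.environ.get("API_URL", "https://localhost")


def test_get_api_url_respects_environment(monkeypatch) -> None:
    """Test get_api_url with a patched environment."""
    # Arrange
    monkeypatch.setenv("API_URL", "https://example.test")

    # Act
    result = get_api_url()

    # Assert
    assert result == "https://example.test"
```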
After generating tests, provide:
## Generated Test File: tests/test_{module}.py
**Summary:**
- Test Classes: X
- Test Functions: X
- Parametrized Tests: X
- Fixtures: X
- Lines of Code: ~XXX
**Test Coverage:**
- Function 1: 4 test cases (happy path, edge cases, error handling)
- Function 2: 3 test cases
- Total: X test cases
**Next Steps:**
1. Review generated tests and customize TODO sections
2. Add specific test data for your use case
3. Run tests to verify they fail (TDD red phase):
```bash
make test
make coverage
```

**Files Created/Modified:**
## Integration with Project Tools
This skill integrates with:
- `make test` - Run generated tests
- `make coverage` - Verify test coverage
- `make format` - Format generated code
- `tdd-reviewer` agent - Verify TDD compliance
## Script Usage
For programmatic generation, use:
```bash
# Generate from source file
python .claude/skills/test-generator/scripts/generate.py \
  --source src/python_modern_template/module.py \
  --output tests/test_module.py

# Generate from function name
python .claude/skills/test-generator/scripts/generate.py \
  --function validate_email \
  --module validators \
  --template parametrized
```
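The script itself is not shown in this document. Assuming only the flags documented above, its argument parsing might be sketched like this (the actual `generate.py` may differ):

```python
# Sketch of the generate.py CLI surface, inferred from the documented flags.
# This mirrors the invocations above; it is not the script's real code.
from __future__ import annotations

import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build a parser matching the documented command-line flags."""
    parser = argparse.ArgumentParser(description="Generate test boilerplate.")
    parser.add_argument("--source", help="Path to the source file to analyze")
    parser.add_argument("--output", help="Path for the generated test file")
    parser.add_argument("--function", help="Function name to generate tests for")
    parser.add_argument("--module", help="Module containing the function")
    parser.add_argument("--template", help="Template to use (e.g. parametrized)")
    return parser


args = build_parser().parse_args(["--function", "validate_email", "--template", "parametrized"])
print(args.function, args.template)  # prints validate_email parametrized
```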
**User Request:**

```
generate tests for the validate_email function in src/python_modern_template/validators.py
```

**Skill Actions:**

**User Request:**

```
I'm creating a UserManager class with methods: create_user, delete_user, find_user. Generate tests.
```

**Skill Actions:**

**User Request:**

```
Generate tests for async function fetch_data(url: str) -> dict
```

**Skill Actions:**
TDD Workflow:
This skill helps you start TDD right: tests first, always!