Use this skill to systematically test the NetBox MCP server after code changes, or to validate tool functionality on demand.
Before testing, verify:
The NetBox MCP server is connected (check with the /mcp command)
NETBOX_URL and NETBOX_TOKEN are set and valid

If prerequisites aren't met, testing cannot proceed reliably.
Execute comprehensive test scenarios autonomously, prioritizing high-value tests first.
Before beginning testing, assess the current state:
Discover available NetBox MCP tools by listing them. For each tool, document its name, expected parameters, and a short description of its purpose.
Apply systematic testing to each discovered tool. Capture response times and metrics throughout.
For each tool, test:
Valid inputs - Execute with typical, valid parameters
Invalid inputs - Execute with incorrect parameters
Edge cases - Exercise boundary conditions (e.g., filters that match nothing)
Data integrity - Validate returned data
Performance benchmarks - Monitor efficiency metrics
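The per-tool test loop above can be sketched as a small harness. This is a minimal illustration, not part of the skill itself; `call_tool` is a hypothetical stand-in for however the MCP client invokes a tool:

```python
import time

def run_tool_case(call_tool, tool_name, params, expect_error=False):
    """Run one test case against a hypothetical call_tool(tool, params)
    callable; record pass/fail status and response time in milliseconds."""
    start = time.perf_counter()
    try:
        result = call_tool(tool_name, params)
        # A result where an error was expected means the case failed.
        status = "fail" if expect_error else "pass"
    except Exception as exc:
        # An exception is a pass only when the case expected an error.
        result = repr(exc)
        status = "pass" if expect_error else "fail"
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"tool": tool_name, "params": params, "status": status,
            "result": result, "elapsed_ms": round(elapsed_ms, 1)}
```

Running every valid/invalid/edge case through one function like this keeps the timing and pass/fail bookkeeping uniform across tools.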
When encountering failures or unexpected behavior:
Document clearly - Capture error messages, inputs, expected vs. actual behavior, performance metrics, affected features, and severity
Investigate root causes - Don't just report symptoms
Propose solutions - Provide actionable recommendations
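One way to keep failure documentation consistent is a structured record that mirrors the checklist above. The field names here are illustrative, not a required schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailureRecord:
    """One documented failure: inputs, expected vs. actual behavior,
    performance data, severity, root cause, and a proposed fix."""
    tool: str
    inputs: dict
    expected: str
    actual: str
    error_message: str = ""
    elapsed_ms: Optional[float] = None
    severity: str = "medium"      # low / medium / high / critical
    root_cause: str = "unknown"   # filled in after investigation
    proposed_fix: str = ""        # actionable recommendation

    def summary(self) -> str:
        return (f"[{self.severity.upper()}] {self.tool}: "
                f"expected {self.expected}, got {self.actual}")
```

Capturing `root_cause` and `proposed_fix` as first-class fields makes it hard to stop at symptoms.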
When the NetBox MCP server code has been modified, immediately:
Reload the MCP server - Disconnect and reconnect the MCP server so the modified code is actually loaded
Re-run full test protocol to validate changes
Compare before/after results
Document impact of changes
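The before/after comparison can be reduced to a simple classification: for each test case, did a pass become a fail (regression), a fail become a pass (fix), or stay the same? A sketch, assuming each run is a mapping from case id to "pass"/"fail":

```python
def compare_runs(before, after):
    """Classify each test case across two runs as a regression,
    a fix, or unchanged."""
    report = {"regressions": [], "fixes": [], "unchanged": []}
    for case in sorted(set(before) | set(after)):
        b, a = before.get(case), after.get(case)
        if b == "pass" and a == "fail":
            report["regressions"].append(case)
        elif b == "fail" and a == "pass":
            report["fixes"].append(case)
        else:
            report["unchanged"].append(case)
    return report
```

The regressions list is what the "document impact of changes" step should lead with.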
Produce a detailed test report as the primary deliverable.
Test reports should follow this naming pattern:
TEST_REPORT_YYYYMMDD_HHMM_[scope].md
Examples:
TEST_REPORT_20251017_1430_full.md - Full test suite
TEST_REPORT_20251017_1530_post_code_change.md - After modifications
TEST_REPORT_20251017_1600_[tool_name].md - Specific tool

Use assets/TEST_REPORT_TEMPLATE.md as the structure - it includes all required sections (Executive Summary, Tool Inventory, Detailed Test Results with pass/fail/warnings, Performance Metrics, Recommendations, and Test Coverage Summary). Ensure proposed solutions from your investigations are included in the report.
IMPORTANT: Write test reports to the project root directory (not the assets folder). The template is in assets for reference, but the actual reports should be created at the root of the project.
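Generating the report path programmatically keeps the naming pattern and the project-root requirement consistent; a minimal sketch:

```python
from datetime import datetime
from pathlib import Path

def report_path(scope: str, project_root: Path = Path(".")) -> Path:
    """Build a report path at the project root following the
    TEST_REPORT_YYYYMMDD_HHMM_[scope].md naming pattern."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M")
    return project_root / f"TEST_REPORT_{stamp}_{scope}.md"
```

For example, `report_path("full")` yields a name like `TEST_REPORT_20251017_1430_full.md` in the project root, never under assets/.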
Balance comprehensive testing with time constraints.

When errors occur during testing, common causes include:
"Tool not found": MCP server isn't connected - verify with /mcp command
"Authentication failed": Token invalid or expired - check NETBOX_TOKEN value
"Connection refused": NetBox instance not accessible - verify NETBOX_URL
Invalid parameters: Review the tool's expected parameters in its docstring or description
Testing a hypothetical netbox_get_objects tool:
✅ Valid: {"object_type": "devices", "filters": {}} → 150ms response
❌ Invalid: {"object_type": "invalid-type", "filters": {}} → ValueError (expected)
⚠️ Edge: {"object_type": "devices", "filters": {"name": "nonexistent"}} → Empty list, 200ms
Adapt this pattern to whatever tools are actually available.
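The three cases above can be encoded as a small table plus a classifier. Everything here is hypothetical (`netbox_get_objects` and `call_tool` are stand-ins for whatever the server actually exposes):

```python
# Each case: label, parameters, expected outcome class.
CASES = [
    ("valid",   {"object_type": "devices", "filters": {}}, "list"),
    ("invalid", {"object_type": "invalid-type", "filters": {}}, "error"),
    ("edge",    {"object_type": "devices",
                 "filters": {"name": "nonexistent"}}, "empty"),
]

def classify(call_tool, params):
    """Classify a call's outcome as error, empty, or a non-empty list."""
    try:
        result = call_tool("netbox_get_objects", params)
    except ValueError:
        return "error"
    return "empty" if result == [] else "list"
```

Swapping in real tool names and the real invocation mechanism turns this table into the concrete test suite.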