Structured logging, debugging techniques (pdb/ipdb), profiling (cProfile/line_profiler), stack traces, performance monitoring, and metrics...
Comprehensive guide to logging, debugging, profiling, and performance monitoring in Python applications.
Structured logging with JSON format for machine-readable logs and rich context.
Why Structured Logging?
Structured (JSON) logs are machine-readable: log aggregators can filter and query on fields such as `user_id` instead of grepping free-form text.
Example:
import logging
import json
logger = logging.getLogger(__name__)
logger.info("User action", extra={
    "user_id": 123,
    "action": "login",
    "ip": "192.168.1.1"
})
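The `extra` fields above reach the output only if the formatter serializes them; the default formatter will not. A minimal JSON formatter sketch (the `JsonFormatter` name is illustrative, not from the docs):

```python
import json
import logging

# Attributes every LogRecord carries by default; anything else came from extra={...}
_STANDARD_ATTRS = set(logging.LogRecord("", 0, "", 0, "", (), None).__dict__)

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON object per line."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        for key, value in record.__dict__.items():
            if key not in _STANDARD_ATTRS:
                payload[key] = value  # fields passed via extra={...}
        return json.dumps(payload)
```

Attach it with `handler.setFormatter(JsonFormatter())` and each log line becomes one parseable JSON object.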
See: docs/structured-logging.md for Python logging setup and patterns
Interactive debugging with pdb/ipdb and effective debugging strategies.
Tools:
pdb Commands:
- `n` (next) - Execute current line
- `s` (step) - Step into function
- `c` (continue) - Continue execution
- `p variable` - Print variable value
- `l` - List source code
- `q` - Quit debugger

Example:
import pdb; pdb.set_trace() # Debugger starts here
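Since Python 3.7 the built-in `breakpoint()` is the preferred entry point: it defaults to `pdb.set_trace()` and honors the `PYTHONBREAKPOINT` environment variable (set it to `0` to disable all breakpoints, or to `ipdb.set_trace` to swap debuggers). A small sketch:

```python
def halve(n):
    breakpoint()  # drops into pdb here, unless PYTHONBREAKPOINT=0
    return n // 2
```

This lets you leave breakpoints in place and toggle them per-run from the environment instead of editing code.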
See: docs/debugging.md for interactive debugging patterns
CPU and memory profiling to identify performance bottlenecks.
Tools:
cProfile Example:
python -m cProfile -s cumulative script.py
Profile Decorator:
import cProfile
import pstats
from functools import wraps

def profile(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            result = func(*args, **kwargs)
        finally:
            profiler.disable()
        stats = pstats.Stats(profiler)
        stats.sort_stats('cumulative')
        stats.print_stats(10)  # Top 10 functions
        return result
    return wrapper

@profile
def slow_function():
    # Your code here
    pass
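`print_stats()` writes to stdout by default; to capture the report programmatically (for a log line or a test), `pstats.Stats` accepts a `stream` argument. A sketch:

```python
import cProfile
import io
import pstats

profiler = cProfile.Profile()
profiler.enable()
total = sum(i * i for i in range(100_000))  # workload to profile
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)  # top 5 entries, written to the stream
report = stream.getvalue()
print(report)
```

The captured `report` string can then be logged or asserted on instead of scrolling past on stdout.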
See: docs/profiling.md for comprehensive profiling techniques
Performance monitoring, timing decorators, and simple metrics.
Timing Patterns:
Simple Metrics:
Example:
import time
from functools import wraps
def timer(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        duration = time.perf_counter() - start
        print(f"{func.__name__} took {duration:.2f}s")
        return result
    return wrapper

@timer
def process_data():
    # Your code here
    pass
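For timing an arbitrary code block rather than a whole function, a context-manager variant works; a sketch using `time.perf_counter` (monotonic, suited to interval timing):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Time an arbitrary code block and print the duration."""
    start = time.perf_counter()
    try:
        yield
    finally:
        duration = time.perf_counter() - start
        print(f"{label} took {duration:.2f}s")

# Usage: wrap any block, not just a function body
with timed("square sum"):
    total = sum(i * i for i in range(10_000))
```

The `finally` clause ensures the duration is reported even if the block raises.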
See: docs/monitoring-metrics.md for stack traces, timers, and metrics
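The "simple metrics" idea can be as small as an in-process counter; a sketch (the `Metrics` class name is illustrative, not from the docs):

```python
from collections import Counter

class Metrics:
    """Minimal in-process counters; single-threaded use only."""

    def __init__(self):
        self._counts = Counter()

    def incr(self, name, amount=1):
        self._counts[name] += amount

    def snapshot(self):
        """Return a plain dict copy, e.g. for periodic log emission."""
        return dict(self._counts)

metrics = Metrics()
metrics.incr("requests")
metrics.incr("requests")
metrics.incr("errors")
```

A periodic task can log `metrics.snapshot()` as a structured log line, which is often enough before reaching for a full metrics backend.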
Debugging strategies and logging anti-patterns to avoid.
Debugging Best Practices:
Logging Anti-Patterns to Avoid:
See: docs/best-practices-antipatterns.md for detailed strategies
| Tool | Use Case | Details |
|---|---|---|
| Structured Logging | Production logs | docs/structured-logging.md |
| pdb/ipdb | Interactive debugging | docs/debugging.md |
| cProfile | CPU profiling | docs/profiling.md |
| line_profiler | Line-by-line profiling | docs/profiling.md |
| memory_profiler | Memory analysis | docs/profiling.md |
| Timer decorator | Function timing | docs/monitoring-metrics.md |
| Context timer | Code block timing | docs/monitoring-metrics.md |
import logging
# Setup
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
# Usage
logger.debug("Debug message") # Detailed diagnostic
logger.info("Info message") # General information
logger.warning("Warning message") # Warning (recoverable)
logger.error("Error message") # Error (handled)
logger.critical("Critical message") # Critical (unrecoverable)
# With context
logger.info("User action", extra={"user_id": 123, "action": "login"})
# pdb
import pdb; pdb.set_trace()
# ipdb (enhanced)
import ipdb; ipdb.set_trace()
# Post-mortem (debug after crash)
import pdb, sys
try:
    # Your code
    pass
except Exception:
    pdb.post_mortem(sys.exc_info()[2])
# CPU profiling
python -m cProfile -s cumulative script.py
# Line profiling
kernprof -l -v script.py
# Memory profiling
python -m memory_profiler script.py
# Sampling profiler (no code changes)
py-spy top --pid 12345
This skill uses progressive disclosure to prevent context bloat:
- docs/*.md files with implementation details (loaded on-demand)

Available Documentation:
- docs/structured-logging.md - Logging setup, levels, JSON format, best practices
- docs/debugging.md - Print debugging, pdb/ipdb, post-mortem debugging
- docs/profiling.md - cProfile, line_profiler, memory_profiler, py-spy
- docs/monitoring-metrics.md - Stack traces, timing patterns, simple metrics
- docs/best-practices-antipatterns.md - Debugging strategies and logging anti-patterns

Related Skills:
Related Tools:
FORBIDDEN:
- `print()` for production logging (MUST use structured logging)
- `except Exception:` (or `except Exception as e:`) without a subsequent `raise` or `logging.exception()`/`logger.error(..., exc_info=True)`
- `except: pass`: discards the exception with zero handling
- `except Exception: pass`: syntactically explicit but semantically identical to bare `except: pass`
- `contextlib.suppress()` wrapping error-critical operations without an inline justification comment
- `finally` blocks that contain `return`, `break`, or `continue` (these suppress any pending exception from the try body)

REQUIRED (compliant exception handling MUST use at least one of):
- `raise` (bare) or `raise NewError(...) from original_exc` to propagate the exception
- exc_info: `logger.error("Operation failed", exc_info=True)` or `logging.exception("Operation failed")` preserves the full stack trace without suppressing
- `contextlib.suppress()` with justification: acceptable ONLY for genuinely non-critical cleanup operations; MUST include an inline comment explaining why suppression is safe

# COMPLIANT: re-raise after logging
try:
    process(data)
except ValueError as exc:
    logger.error("Invalid data: %s", exc, exc_info=True)
    raise
# COMPLIANT: log with exc_info (caller gets full stack trace in logs)
try:
    send_metric(value)
except ExternalServiceError:
    logger.exception("Metric send failed; continuing without metric")
# COMPLIANT: contextlib.suppress with justification
import contextlib

with contextlib.suppress(FileNotFoundError):
    # Optional cache file; absence is expected on first run
    cache_path.unlink()
# NON-COMPLIANT: silent swallow
try:
    critical_operation()
except Exception:
    pass  # FORBIDDEN
# NON-COMPLIANT: log without exc_info and without re-raise
try:
    critical_operation()
except Exception as e:
    logger.error("Failed: %s", e)  # FORBIDDEN: no stack trace, exception swallowed
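The `raise NewError(...) from original_exc` form listed under REQUIRED can be sketched like this (the `DataError` and `load` names are hypothetical):

```python
class DataError(Exception):
    """Domain-level wrapper; the original cause stays attached as __cause__."""

def load(raw):
    try:
        return int(raw)
    except ValueError as exc:
        # Chain instead of swallowing: the ValueError and its traceback
        # are preserved on the new exception's __cause__
        raise DataError(f"Bad payload: {raw!r}") from exc
```

Callers catch the domain error, and the chained traceback shows both the wrapper and the original `ValueError`.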