---
name: coverage-analyzer
description: Advanced coverage analysis with actionable insights. Use to identify coverage gaps, suggest specific tests, track coverage trends, and highlight critical uncovered code. Essential for reaching 80%+ coverage target.
---
BEFORE starting coverage analysis, you MUST read and understand the following project documentation:
After reading these files, proceed with your coverage analysis task below.
Provide advanced test coverage analysis with actionable insights for improving coverage to meet the 80%+ requirement.
Run `make coverage` to understand gaps.

Capabilities:

- ✅ Detailed Coverage Analysis
- ✅ Actionable Recommendations
- ✅ Coverage Gaps Identification
- ✅ Test Suggestions
- ✅ Trend Tracking
Example requests:

```
# Generate detailed coverage analysis
analyze test coverage
```
Output: Comprehensive report with gaps and recommendations

```
# Focus on critical uncovered code
show critical coverage gaps
```
Output: High-priority uncovered code (error handling, security, edge cases)

```
# Get specific test recommendations
suggest tests for uncovered code in src/python_modern_template/validators.py
```
Output: Concrete test cases to add

```
# See coverage over time
show coverage trend for last month
```
Output: Graph showing coverage changes
```bash
# Generate coverage report
make coverage
```

This creates:

- `htmlcov/` HTML report
- `.coverage` data file

Read coverage data from multiple sources:

- Terminal output for overall stats
- `htmlcov/index.html` for detailed breakdown
- `.coverage` file for line-by-line data
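If the terminal and HTML reports are not enough, the same data can be pulled programmatically. A minimal sketch using the coverage.py API, assuming `make coverage` has already written `.coverage` to the project root (paths here are illustrative):

```python
# sketch: list per-file coverage and uncovered line ranges from .coverage
import coverage

cov = coverage.Coverage(data_file=".coverage")
cov.load()

for path in sorted(cov.get_data().measured_files()):
    # analysis2 -> (filename, statements, excluded, missing, missing_formatted)
    _, statements, _, missing, missing_fmt = cov.analysis2(path)
    if not statements:
        continue
    covered_pct = 100.0 * (len(statements) - len(missing)) / len(statements)
    if missing:
        print(f"{path}: {covered_pct:.1f}% covered, missing lines {missing_fmt}")
```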
For each source file, examine every uncovered section and create specific test suggestions, prioritized as follows (a rough heuristic sketch follows this list):

- **CRITICAL**: must cover immediately
- **HIGH**: should cover soon
- **MEDIUM**: good to cover
- **LOW**: optional
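How a gap lands in a bucket is ultimately a judgment call, but a first pass can be automated. Below is a rough sketch of one possible keyword heuristic; the keyword lists are illustrative assumptions, not this agent's actual rules:

```python
# sketch: crude keyword heuristic for triaging uncovered source lines (illustrative only)
PRIORITY_HINTS: dict[str, tuple[str, ...]] = {
    "CRITICAL": ("except", "raise", "auth", "password", "secret"),
    "HIGH": ("retry", "timeout", "return none", "elif"),
    "MEDIUM": ("logger.", "warning", "default"),
}


def triage(uncovered_line: str) -> str:
    """Assign an uncovered source line to a priority bucket by keyword."""
    text = uncovered_line.strip().lower()
    for priority, hints in PRIORITY_HINTS.items():
        if any(hint in text for hint in hints):
            return priority
    return "LOW"


# example: triage("except ValueError as e:") -> "CRITICAL"
```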
# Test Coverage Analysis
## Executive Summary
**Current Coverage:** 75.3%
**Target:** 80%+
**Gap:** 4.7% (23 uncovered lines)
**Status:** ⚠️ Below target
**Breakdown:**
- src/python_modern_template/: 73.2% (18 uncovered lines)
- tests/: 100% (fully covered)
---
## Critical Gaps (MUST FIX)
### 1. Error Handling in validators.py ⚠️ CRITICAL
**File:** src/python_modern_template/validators.py
**Lines:** 45-52 (8 lines)
**Function:** `validate_email()`
**Uncovered Code:**
```python
45:     except ValueError as e:
46:         logger.error(f"Email validation failed: {e}")
47:         raise ValidationError(
48:             "Invalid email format"
49:         ) from e
50:     except Exception:
51:         logger.critical("Unexpected validation error")
52:         return False
```

**Why Critical:** Error handling paths are not tested and could hide bugs.

**Recommended Test:**

```python
def test_validate_email_value_error_handling() -> None:
    """Test email validation handles ValueError correctly."""
    # Arrange
    invalid_email = "not-an-email"

    # Act & Assert
    with pytest.raises(ValidationError) as exc_info:
        validate_email(invalid_email)

    assert "Invalid email format" in str(exc_info.value)
    assert exc_info.value.__cause__ is not None


def test_validate_email_unexpected_error_handling() -> None:
    """Test email validation handles unexpected errors."""
    # Arrange
    # Mock to raise unexpected exception
    with patch('validators.EMAIL_REGEX.match', side_effect=RuntimeError("Unexpected")):
        # Act
        result = validate_email("test@example.com")

    # Assert
    assert result is False
```

**Impact:** Covers 8 lines, adds 3.5% coverage
---

### 2. Empty Configuration Handling in parser.py ⚠️ CRITICAL

**File:** src/python_modern_template/parser.py
**Lines:** 67-70 (4 lines)
**Function:** `parse_config()`

**Uncovered Code:**

```python
67:     if not config_data:
68:         logger.warning("Empty configuration provided")
69:         return DEFAULT_CONFIG
70:     # Unreachable line removed
```

**Why Critical:** Edge case handling is not tested.

**Recommended Test:**

```python
def test_parse_config_empty_data() -> None:
    """Test parser handles empty configuration."""
    # Arrange
    empty_config = {}

    # Act
    result = parse_config(empty_config)

    # Assert
    assert result == DEFAULT_CONFIG


def test_parse_config_none_data() -> None:
    """Test parser handles None configuration."""
    # Arrange
    # Act
    result = parse_config(None)

    # Assert
    assert result == DEFAULT_CONFIG
```

**Impact:** Covers 4 lines, adds 1.7% coverage
---

### 3. Retry Logic in api_client.py (HIGH)

**File:** src/python_modern_template/api_client.py
**Lines:** 112-118 (7 lines)
**Function:** `retry_with_backoff()`

**Uncovered Code:**

```python
112:     @retry(max_attempts=3, backoff=2.0)
113:     def retry_with_backoff(self, operation: Callable) -> Any:
114:         """Retry operation with exponential backoff."""
115:         try:
116:             return operation()
117:         except ConnectionError:
118:             logger.warning("Connection failed, retrying...")
```

**Why High:** Integration logic with the retry mechanism is untested.

**Recommended Test:**

```python
@pytest.mark.parametrize("attempt,should_succeed", [
    (1, True),   # Succeeds first try
    (2, True),   # Succeeds second try
    (3, True),   # Succeeds third try
    (4, False),  # Fails after max attempts
])
def test_retry_with_backoff(attempt: int, should_succeed: bool) -> None:
    """Test retry mechanism with various scenarios."""
    # Arrange
    client = APIClient()
    call_count = 0

    def flaky_operation():
        nonlocal call_count
        call_count += 1
        if call_count < attempt:
            raise ConnectionError("Connection failed")
        return "success"

    # Act & Assert
    if should_succeed:
        result = client.retry_with_backoff(flaky_operation)
        assert result == "success"
        assert call_count == attempt
    else:
        with pytest.raises(ConnectionError):
            client.retry_with_backoff(flaky_operation)
```

**Impact:** Covers 7 lines, adds 3.0% coverage
---

## Coverage by Module

| Module | Coverage | Uncovered Lines | Priority |
|---|---|---|---|
| validators.py | 65% | 12 | CRITICAL |
| parser.py | 80% | 4 | HIGH |
| api_client.py | 75% | 7 | HIGH |
| utils.py | 95% | 1 | LOW |
Total: 75.3% (23 uncovered lines)
---

## Quick Wins

These tests would quickly boost coverage:

1. Add error handling tests (validators.py)
2. Add edge case tests (parser.py)
3. Add integration tests (api_client.py)

**Total Impact:** +8.2% coverage (reaching 83.5%)
**Total Time:** ~30 minutes
---

## Coverage Trend

```
Week 1: 70% ███████░░░
Week 2: 72% ███████░░░
Week 3: 75% ████████░░
Week 4: 75% ████████░░  ← Current (stalled)
Target: 80% ████████░░
```

**Trend:** +5% over 3 weeks, then stalled
**Recommendation:** Focus on the quick wins above to break through 80%
---

## Test Type Breakdown

### validators.py
- Covered:
- Not Covered:
- Missing Test Types:

### parser.py
- Covered:
- Not Covered:
- Missing Test Types:

### api_client.py
- Covered:
- Not Covered:
- Missing Test Types:
---

## Recommended Actions

1. Add error handling tests to validators.py
   ```bash
   # Edit tests/test_validators.py
   # Add test_validate_email_value_error_handling()
   # Add test_validate_email_unexpected_error_handling()
   ```
2. Add edge case tests to parser.py
   ```bash
   # Edit tests/test_parser.py
   # Add test_parse_config_empty_data()
   # Add test_parse_config_none_data()
   ```
3. Add integration tests to api_client.py
   ```bash
   # Edit tests/test_api_client.py
   # Add test_retry_with_backoff()
   ```
4. Run coverage again
   ```bash
   make coverage
   ```

**Expected Result:** 83.5% coverage (exceeds the 80% target!)
## Verification

```bash
# Run tests to ensure they fail (TDD)
make test

# Verify uncovered code is exercised
# Fix any test issues

# Re-run coverage
make coverage
# Should see 80%+ coverage
```
## Testing Patterns

### Parametrized Tests

For multiple similar test cases:

```python
@pytest.mark.parametrize("email,valid", [
    ("test@example.com", True),
    ("invalid-email", False),
    ("", False),
    (None, False),
    ("test@", False),
    ("@example.com", False),
])
def test_validate_email_parametrized(email: str | None, valid: bool) -> None:
    """Test email validation with various inputs."""
    if valid:
        assert validate_email(email) is True
    else:
        assert validate_email(email) is False
```
### Fixtures

```python
@pytest.fixture
def sample_config():
    """Provide sample configuration for tests."""
    return {
        "api_url": "https://api.example.com",
        "timeout": 30,
        "retries": 3,
    }


def test_parse_config_with_defaults(sample_config):
    """Test config parsing with defaults."""
    result = parse_config(sample_config)
    assert result["timeout"] == 30
```
Priority order:
```bash
# Coverage is part of quality gates
make check
# Must pass 80%+ coverage requirement
```

```
# TDD reviewer checks coverage compliance
[tdd-reviewer]
# Includes coverage verification
```

```bash
# Generate HTML report
make coverage

# View in browser
open htmlcov/index.html
```
"Coverage percentage is a measure, not a goal." "Aim for meaningful tests, not just high numbers."
Good coverage means:
Bad coverage means:
Focus on quality coverage, not just quantity!
## Advanced Features
### Historical Tracking
Store coverage data over time:
```bash
# Save current coverage
echo "$(date +%Y-%m-%d),$(coverage report | grep TOTAL | awk '{print $4}')" >> .coverage_history
# View trend
cat .coverage_history
```
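To read that history at a glance, a small helper can print each entry with its change from the previous one. A sketch, assuming the two-column `date,percent` format written by the snippet above:

```python
# sketch: print the coverage history with deltas between entries
import csv
from pathlib import Path


def show_trend(history_file: str = ".coverage_history") -> None:
    """Print each recorded coverage value and its change since the previous entry."""
    previous: float | None = None
    with Path(history_file).open() as handle:
        for date, percent in csv.reader(handle):
            value = float(percent.rstrip("%"))
            delta = "" if previous is None else f" ({value - previous:+.1f}%)"
            print(f"{date}: {value:.1f}%{delta}")
            previous = value


if __name__ == "__main__":
    show_trend()
```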
### Per-Module Analysis

Analyze each module separately:

```bash
# Get coverage for specific module
coverage report --include="src/python_modern_template/validators.py"
```
### Branch Coverage

Track branch coverage, not just line coverage:

```toml
# Enable branch coverage in pyproject.toml
[tool.coverage.run]
branch = true
```

This reports uncovered branches (e.g., an `if` whose `else` path is never exercised).
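To see why branch coverage matters, here is a hypothetical helper whose line coverage reads 100% from a single test while branch coverage still flags the untaken `if`-false arc:

```python
# sketch: line coverage vs. branch coverage (hypothetical clamp helper)
def clamp(x: int) -> int:
    if x > 10:  # the test below only exercises the True arc
        x = 10
    return x


def test_clamp_caps_large_values() -> None:
    # Every line executes, so line coverage reports 100%,
    # but the x <= 10 branch is never taken, so branch coverage reports a gap.
    assert clamp(20) == 10
```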
### Diff Coverage

Focus coverage checks on changed lines only:

```bash
# Install diff-cover
pip install diff-cover

# Check coverage of the git diff (needs an XML report, e.g. from `coverage xml`)
diff-cover htmlcov/coverage.xml --compare-branch=main
```
Coverage analysis is a tool for improvement, not a report card. Use it to:
But always remember: 100% coverage ≠ bug-free code. Write meaningful tests!