testing-patterns
// This skill should be used when the user asks to "write tests", "test strategy", "coverage", "unit test", "integration test", or needs testing guidance. Provides testing methodology and patterns.
| name | Testing Patterns |
| description | This skill should be used when the user asks to "write tests", "test strategy", "coverage", "unit test", "integration test", or needs testing guidance. Provides testing methodology and patterns. |
| version | 2.0.0 |
<test_phase>Arrange: Set up objects and test data</test_phase>
cart = Cart.new
<test_phase>Act: Execute the code under test</test_phase>
total = cart.calculate_total
<test_phase>Assert: Verify expected outcomes</test_phase>
assert_equal 0, total
Separates setup, execution, and verification into distinct phases
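The three phases above can be sketched end-to-end. `Cart` here is a hypothetical class introduced only so the example runs standalone:

```ruby
# Hypothetical Cart class for illustration only.
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
  end

  def calculate_total
    @items.sum
  end
end

# Arrange: set up the object under test
cart = Cart.new
cart.add(5)
cart.add(7)

# Act: execute the code under test
total = cart.calculate_total

# Assert: verify the expected outcome
raise "expected 12, got #{total}" unless total == 12
```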
BDD-style test structure focusing on behavior
Is the test focused on business behavior rather than technical implementation?
Apply given-when-then pattern for BDD-style tests
Use arrange-act-assert for technical unit tests
<bdd_step>Given: Initial context (preconditions)</bdd_step>
given_a_user_with_an_empty_cart
<bdd_step>When: Action or trigger</bdd_step>
when_the_user_calculates_total
<bdd_step>Then: Expected outcome</bdd_step>
then_the_total_should_be_zero
Emphasizes business behavior over technical implementation
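The step names above can be implemented as plain methods so the test body reads like a sentence. A minimal sketch, with a hypothetical `Cart` class standing in for the real domain object:

```ruby
# Hypothetical Cart class for illustration only.
class Cart
  def initialize
    @items = []
  end

  def calculate_total
    @items.sum
  end
end

# Given: initial context (preconditions)
def given_a_user_with_an_empty_cart
  Cart.new
end

# When: action or trigger
def when_the_user_calculates_total(cart)
  cart.calculate_total
end

# Then: expected outcome
def then_the_total_should_be_zero(total)
  raise "expected 0, got #{total}" unless total == 0
end

cart  = given_a_user_with_an_empty_cart
total = when_the_user_calculates_total(cart)
then_the_total_should_be_zero(total)
```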
Provide canned responses for dependencies
Does the test need dependency responses but not interaction verification?
Apply stub pattern for canned responses
Use mock if interaction verification is needed
api_client = stub(
fetch_user: { id: 1, name: "John" }
)
Replace slow/unreliable dependencies
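The `stub(...)` call above is framework pseudocode (Mocha-style). Without a framework, an `OpenStruct` gives the same canned-response behavior; `greeting_for` is a hypothetical consumer added for illustration:

```ruby
require 'ostruct'

# A hand-rolled stub: canned responses, no interaction verification.
api_client = OpenStruct.new(fetch_user: { id: 1, name: "John" })

# Hypothetical code under test that only reads from the dependency.
def greeting_for(client)
  "Hello, #{client.fetch_user[:name]}!"
end

raise unless greeting_for(api_client) == "Hello, John!"
```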
Verify interactions occurred with dependencies
Does the test need to verify specific interactions occurred?
Apply mock pattern to verify method calls and arguments
Use stub if only canned responses are needed
email_service = Minitest::Mock.new
email_service.expect(:send_email, true, ["user@example.com", "Welcome"])
user_service.register(email_service)
email_service.verify
Ensure methods called with correct arguments
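A self-contained version of this flow using `Minitest::Mock` (bundled with Ruby); `UserService` is a hypothetical stand-in for the code under test:

```ruby
require 'minitest/mock'

# Hypothetical registration flow that should send a welcome email.
class UserService
  def register(email, email_service)
    email_service.send_email(email, "Welcome")
  end
end

email_service = Minitest::Mock.new
# expect(method_name, return_value, argument_list)
email_service.expect(:send_email, true, ["user@example.com", "Welcome"])

UserService.new.register("user@example.com", email_service)

# Raises MockExpectationError if the expected call never happened.
email_service.verify
```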
Record calls while using real implementation
Does the test need real behavior plus interaction verification?
Apply spy pattern to record calls while using real implementation
Use stub for canned responses or mock for behavior replacement
logger = spy(Logger.new)
service.process(logger)
assert_called logger, :log, with: "Processing complete"
Verify side effects without changing behavior
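The `spy(...)` helper above is framework pseudocode. A hand-rolled spy is just a wrapper that records calls and then delegates to the real object, so real behavior still runs:

```ruby
require 'logger'
require 'stringio'

# A minimal hand-rolled spy: records calls, then delegates to the real logger.
class LoggerSpy
  attr_reader :calls

  def initialize(real_logger)
    @real  = real_logger
    @calls = []
  end

  def info(message)
    @calls << [:info, message]
    @real.info(message)   # real behavior is preserved
  end
end

logger = LoggerSpy.new(Logger.new(StringIO.new))
logger.info("Processing complete")

raise unless logger.calls.include?([:info, "Processing complete"])
```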
Working implementation suitable for testing
Does the test need a simplified but working implementation?
Apply fake pattern for lightweight working implementation
Use stub for simple canned responses
class FakeDatabase
  def initialize
    @data = {}
  end

  def save(key, value)
    @data[key] = value
  end

  def find(key)
    @data[key]
  end
end
In-memory database, fake file system
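As a usage sketch, a test exercises the fake exactly as it would the real dependency (the class is repeated here so the snippet runs standalone):

```ruby
# Lightweight working implementation backed by an in-memory Hash.
class FakeDatabase
  def initialize
    @data = {}
  end

  def save(key, value)
    @data[key] = value
  end

  def find(key)
    @data[key]
  end
end

db = FakeDatabase.new
db.save(:user_1, "Ada")

raise unless db.find(:user_1) == "Ada"
raise unless db.find(:missing).nil?
```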
Test names that clearly describe scenario and outcome
Is this a technical unit test for a specific method?
Apply descriptive naming with method-scenario-result format
Consider should naming for BDD-style tests
test_calculateTotal_withEmptyCart_returnsZero
test_calculateTotal_withMultipleItems_returnsSumOfPrices
test_calculateTotal_withDiscount_appliesDiscountCorrectly
Format: test_[method]_[scenario]_[expected_result]
BDD-style naming that reads like natural language
Is this a behavior-focused test readable by non-technical stakeholders?
Apply should naming for natural language readability
Use descriptive naming for technical unit tests
calculateTotal_should_returnZero_when_cartIsEmpty
calculateTotal_should_applyDiscount_when_couponIsValid
calculateTotal_should_throwError_when_pricesAreNegative
Format: [method]_should_[expected_behavior]_when_[condition]
Generate random inputs to verify properties that should always hold
Does the function have a property or invariant that holds for all valid inputs?
Apply property-based testing to generate random inputs and verify invariants
Use example-based tests with arrange-act-assert
<note>Haskell (QuickCheck/Hedgehog)</note>
prop_reverse_involutive xs = reverse (reverse xs) == xs
<note>JS/TS (fast-check)</note>
fc.assert(
fc.property(fc.array(fc.integer()), (arr) =>
deepEqual(arr, reverse(reverse(arr)))
)
)
QuickCheck (Haskell), Hedgehog (Haskell), fast-check (JS/TS), Hypothesis (Python)
Serialization round-trips, sorting invariants, mathematical properties, parser correctness
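The same reverse-involution property can be checked in plain Ruby without a property-testing library, by generating random inputs in a loop (libraries like Hypothesis or fast-check add shrinking and better generators on top of this idea):

```ruby
# Hand-rolled property check: reversing twice must return the original array,
# for any array. Generate random inputs and verify the invariant on each.
100.times do
  arr = Array.new(rand(0..20)) { rand(-1000..1000) }
  unless arr.reverse.reverse == arr
    raise "reverse is not involutive for #{arr.inspect}"
  end
end
```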
Capture output and compare against a stored reference snapshot
Is the output complex and best verified by comparing against a known-good reference?
Apply snapshot testing to detect unintended output changes
Use explicit assertions for specific values
expect(renderComponent()).toMatchSnapshot()
Vitest 4.x supports visual regression testing via Browser Mode for comparing rendered UI screenshots against baseline images
Component rendering, serialized data structures, CLI output
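The mechanism behind `toMatchSnapshot()` can be sketched by hand: record the output to a reference file on the first run, then compare against it on later runs. `assert_matches_snapshot` is a hypothetical helper for illustration; real suites use Jest/Vitest snapshots or a gem such as rspec-snapshot:

```ruby
require 'tmpdir'

# Hand-rolled snapshot assertion: writes the snapshot on first run,
# compares against the stored reference on subsequent runs.
def assert_matches_snapshot(name, actual, dir)
  path = File.join(dir, "#{name}.snap")
  if File.exist?(path)
    expected = File.read(path)
    raise "snapshot mismatch for #{name}" unless actual == expected
  else
    File.write(path, actual)   # first run records the reference
  end
end

Dir.mktmpdir do |dir|
  assert_matches_snapshot("greeting", "Hello, world!", dir)  # records
  assert_matches_snapshot("greeting", "Hello, world!", dir)  # passes
end
```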
<best_practices>
Test happy path first
Start with the normal, expected flow before edge cases
test_userLogin_withValidCredentials_succeeds
test_userLogin_withInvalidPassword_fails
test_userLogin_withLockedAccount_fails
Test edge cases
Test boundary conditions and limits
Empty inputs, maximum values, null values, zero values, negative numbers
Test error cases
Verify error handling paths work correctly
Invalid inputs, network failures, permission errors, timeout scenarios
Isolate tests
Each test should be independent
Use setup/teardown to reset state
<example>
def setup
  @database = TestDatabase.new
  @service = UserService.new(@database)
end

def teardown
  @database.clear
end
</example>
Make tests readable
Tests serve as documentation
<example>
Good: Clear and descriptive
test_userRegistration_withExistingEmail_returnsError
<note>Bad: Unclear purpose</note>
test_user_reg_1
</example>
One assertion per concept
Each test should verify one logical concept
<example>
Good: Single concept
def test_userCreation_setsDefaultRole
  user = create_user
  assert_equal "member", user.role
end
<note>Avoid: Multiple unrelated assertions</note>
def test_userCreation
  user = create_user
  assert_equal "member", user.role
  assert_not_nil user.email
  assert_true user.active
end
</example>
Use test fixtures and factories
Extract common test data setup
Create reusable test data
def create_test_user(overrides = {})
  defaults = {
    name: "Test User",
    email: "test@example.com",
    role: "member"
  }
  User.new(defaults.merge(overrides))
end
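A runnable version of the factory pattern, with a hypothetical `User` struct added so the sketch is self-contained; callers override only the fields a given test cares about:

```ruby
# Hypothetical User type for illustration only.
User = Struct.new(:name, :email, :role, keyword_init: true)

# Factory: sensible defaults, with per-test overrides merged on top.
def create_test_user(overrides = {})
  defaults = {
    name: "Test User",
    email: "test@example.com",
    role: "member"
  }
  User.new(**defaults.merge(overrides))
end

admin = create_test_user(role: "admin")
raise unless admin.role == "admin"
raise unless admin.name == "Test User"   # default preserved
```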
Avoid magic numbers
Use named constants for test values
<example>
Good
VALID_USER_AGE = 25
MINIMUM_AGE = 18

def test_userValidation_withValidAge_succeeds
  user = User.new(age: VALID_USER_AGE)
  assert user.valid?
end
<bad_example>Bad</bad_example>
def test_userValidation_withValidAge_succeeds
  user = User.new(age: 25)
  assert user.valid?
end
</example>
Test corner cases
Test unusual combinations and scenarios
Concurrent access, timezone edge cases, leap years, DST transitions
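Leap years are a concrete instance of such a corner case: the century rule (divisible by 100 but not 400) is easy to get wrong, and Ruby's stdlib `Date.leap?` makes the boundary cases cheap to pin down in a test:

```ruby
require 'date'

# Corner-case checks around the Gregorian leap-year rules.
raise unless Date.leap?(2024)    # ordinary leap year
raise if     Date.leap?(1900)    # divisible by 100, not by 400: not a leap year
raise unless Date.leap?(2000)    # divisible by 400: leap year
raise if     Date.leap?(2023)    # ordinary common year
```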
<anti_patterns>
Testing implementation details instead of behavior
Focus on testing observable behavior and outcomes, not internal implementation details. Test what the code does, not how it does it.
Over-mocking dependencies throughout test suites
Use real implementations where practical; excessive mocking often indicates poor design. Only mock external dependencies or slow operations.
Tests that sometimes pass and sometimes fail
Ensure tests are deterministic by controlling time, randomness, and async operations. Use fixed timestamps, seeded random generators, and proper async handling.
Tests that take too long to run
Use unit tests for fast feedback; reserve slow integration/e2e tests for critical paths. Unit tests should run in milliseconds, not seconds.
Tests that depend on execution order or shared state
Make each test independent with proper setup/teardown and isolated state. Each test should create its own test data.
</anti_patterns>
Aim for high coverage but prioritize meaningful tests over coverage numbers
80%+ coverage is a good target for critical code paths
100% coverage does not guarantee bug-free code
Focus on testing behavior, not achieving coverage metrics
<error_escalation>
Minor coverage gap in non-critical path: note in report, proceed
Test flakiness detected: document issue, use AskUserQuestion for clarification
Critical path lacks test coverage: STOP, present options to user
Tests reveal security vulnerability: BLOCK operation, require explicit user acknowledgment
</error_escalation>
Keep guidance evidence-based and version-aware
Prefer project conventions over generic defaults
<related_agents>
Locate relevant code patterns
Review output consistency
</related_agents>
Follow project test patterns
Run tests after creation
Cover critical paths first
Creating tests without understanding implementation
Writing flaky or non-deterministic tests
Ignoring existing test conventions
<related_skills>
Use to define test requirements and acceptance criteria
Use to implement tests as part of feature development workflow
Use when debugging test failures or flaky tests
</related_skills>