integration-testing
// Integration/runtime test workflow. Apply when running tests against a live or simulated environment — any setup with external dependencies (servers, simulators, databases, message brokers, hardware emulators).
| name | integration-testing |
| description | Integration/runtime test workflow. Apply when running tests against a live or simulated environment — any setup with external dependencies (servers, simulators, databases, message brokers, hardware emulators). |
| user-invocable | false |
Apply these rules when running integration/runtime tests against a live or simulated environment.
BEFORE starting any integration test, actively probe every external endpoint declared in the test config — ping the host AND TCP port-check each port. Log results visibly. If ANY endpoint is down, abort the test run and investigate the environment FIRST before assuming a code bug. "Should be up" is not verification. When you see connection errors mid-test, return to this step before diagnosing the failure as a code bug.
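A minimal probe sketch, assuming bash and coreutils `timeout` are available; the host:port arguments are placeholders for whatever the test config declares:

```shell
# probe_endpoints: TCP-check each host:port argument before any test runs.
# Prints UP/DOWN per endpoint; returns non-zero if any endpoint is down.
probe_endpoints() {
  local failed=0 ep host port
  for ep in "$@"; do
    host=${ep%:*}
    port=${ep#*:}
    # /dev/tcp connect attempt, bounded to 3 seconds so a hang cannot stall the run
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "UP   $host:$port"
    else
      echo "DOWN $host:$port"
      failed=1
    fi
  done
  return "$failed"
}

# Example (endpoints are placeholders from a hypothetical test config):
probe_endpoints 127.0.0.1:65530 || echo "environment not ready -- aborting"
```

A non-zero return is the abort signal: the harness should stop and investigate the environment rather than start the first test case.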
Organize tests bottom-up — unit-level module tests first, then controller/service tests, then end-to-end scenarios, then error/recovery, then soak tests. Track each test case with ID, status (PASS/FAIL/SKIP), and notes.
For each test — trigger the action, capture logs immediately (process manager logs, container logs, system journal, etc.), verify (a) no runtime errors, (b) expected output, (c) expected state changes.
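A sketch of per-step log capture, using a stand-in file in place of a real pm2/docker/journald log; the paths and the injected error line are fabricated for the demo:

```shell
# Capture only the log lines produced by one test step, so errors are tied
# to the action that caused them.
LOG=/tmp/app_under_test.log
CAPTURE=/tmp/step_capture.log

: > "$LOG"                                     # stand-in for the live service log
before=$(wc -l < "$LOG")                       # mark log position, then trigger the action
echo "ERROR: demo runtime failure" >> "$LOG"   # stand-in for the action's log output
after=$(wc -l < "$LOG")

# Snapshot exactly the lines written during this step
tail -n "$((after - before))" "$LOG" > "$CAPTURE"

# Verification (a): no runtime errors in this step's window
grep -i "error" "$CAPTURE" && echo "step produced runtime errors -- stop and diagnose"
```

In a real run, the expected-output and state-change checks (b) and (c) would follow the same pattern: assert against the captured window, not the whole log.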
Runtime bugs belong to the source project, not the test harness or consumer. Fix root cause → rebuild → restart → retest. Grep for the same pattern project-wide before moving on.
Never SKIP a test due to harness/flow issues. On any test failure: (1) identify root cause (application code vs flow/harness/preconditions), (2) fix the source regardless of which project it lives in, (3) retest and verify PASS, (4) only then move to the next test. Consumer-specific bugs (wrong params in test harness, stale test data) are still bugs that must be fixed.
Diagnose blockers, don't bypass: When a test hits a stuck step, silent failure, or unexpected state, immediately diagnose the root cause. Never use workflow controls (STEP_OVER, retry, skip, force-cancel) or config changes to bypass without first understanding why. If the root cause is hard to fix, explain the analysis and proposed fix, get user approval, then proceed. Continuing past a blocker without understanding it means the next step's results can't be trusted.
When a test fails due to missing state (wrong initial data, wrong device position, stale records), set up the required preconditions and retest — never rationalize the failure as "simulator limitation", "expected fail", or "not a code issue" to justify skipping. Always attempt to set up required preconditions first using available tools/APIs. Only mark a test as SKIP with explicit user approval after demonstrating that precondition setup is technically impossible.
When the system under test connects to remote services (simulators, test environments, external APIs), verify whether failures originate locally or remotely. Use SSH, network tools, or remote logs to confirm before assuming a code bug.
When the user authorizes a specific test scope (e.g. "full cycle", "end-to-end", "start to end/cancel"), do not silently fall back to weaker verification (isolated unit test, init-only smoke test, code review) on hitting blockers. Hitting a blocker during authorized work requires one of: (a) diagnosing and fixing the blocker, then completing the full authorized scope, or (b) reporting the blocker and obtaining explicit user approval for a narrower scope.
Never declare a test "complete" or "passed" for a scope narrower than what was authorized.
After all manual tests pass, let the system run unattended for a defined period and verify zero errors in logs.
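The soak phase can be sketched as follows; `soak_check`, the log path, and the zero-second window are placeholders — a real run would use hours, not seconds:

```shell
# Soak phase: let the system run unattended for a fixed window, then require
# zero error lines in the full log before declaring the session done.
soak_check() {
  log=$1
  window=$2
  sleep "$window"                         # system runs unattended meanwhile
  errors=$(grep -ci "error" "$log" || true)
  echo "soak window done: $errors error line(s) in $log"
  [ "$errors" -eq 0 ]
}

# Demo with a fabricated clean log and a zero-length window:
printf 'boot ok\nheartbeat ok\n' > /tmp/soak_demo.log
soak_check /tmp/soak_demo.log 0 && echo "soak PASS"
```

The check greps the entire file rather than a tail window, so an error logged early in the soak period still fails the run.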
After testing, revert all test-specific config changes (environment switches, commented-out code, dependency overrides). Never commit test-only modifications.
Never truncate log files during testing (no `pm2 flush`, no `truncate`, no overwriting logs via `docker logs --tail` redirects). Intermittent errors may not reproduce on demand, and the only evidence is in the log file. Grep the full log file directly instead of relying on tail-window commands, which can miss errors that fall outside the window.
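A small demonstration of why tail windows are unsafe (the file path and contents are fabricated; assumes GNU sed):

```shell
# An intermittent error written early in the run is invisible to tail-based
# checks but found by grepping the whole file.
LOG=/tmp/full_run_demo.log
seq 1 5000 | sed 's/^/line /' > "$LOG"
sed -i '10s/$/ ERROR transient/' "$LOG"        # one error far outside the window

tail -n 200 "$LOG" | grep -c "ERROR" || true   # tail window: reports 0, misses it
grep -c "ERROR" "$LOG"                         # full file: reports 1
```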
Test-time only changes (config switches like config.test.sim, simulator-friendly preset comments, debug-mode interval shortening, mock auth headers, etc.) MUST be isolated and reverted — never committed to source.
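One way to keep such patches isolated is a labeled stash, sketched here in a throwaway repo; the project name, file, and patch content are placeholders:

```shell
# Park a test-time-only change in a labeled stash so it never reaches a commit.
demo=$(mktemp -d); cd "$demo"
git init -q
git config user.email t@example.com
git config user.name tester

echo 'interval = 60' > config.js
git add config.js
git commit -q -m "real config"

sed -i 's/60/1/' config.js                       # debug-mode interval shortening
git stash push -q -m "test-patches: demo-project" # park it under a findable label

git diff --quiet && echo "working tree clean -- safe to commit real work"
git stash list | grep "test-patches"             # labeled entry is recoverable
git stash pop -q                                 # re-apply for the next test run
```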
Keep test-time patches out of version history:
- Park them in a labeled stash (`git stash push -m "test-patches: <project>"`) or a dedicated test-only branch. Never commit them to feature/main branches.
- Document each applied patch; if the stash is lost (`git stash drop`, `git gc`), the documented list is the recovery source.
- To restore: `git stash list` → pop the labeled entry; if missing, re-apply manually from the documentation.
- Verify `git diff --staged` is clean of test-patch files before any commit.

When integrating a test branch into a separate working clone (e.g., a production v1 clone consumes the typescript-migration clone via a `file:` reference), move changes between clones via standard git mechanisms — never ad-hoc directory copy:
- Preferred: `git fetch ../<other-clone> <branch>` from the consuming clone, then `git checkout <branch>` or `git merge FETCH_HEAD`. Local-to-local fetch keeps the change set transparent and reversible.
- Alternative: `git format-patch` from the source clone → `git am` in the consuming clone.
- Never `cp -r <other-clone>/<files> <this-clone>/` or an rsync-style copy: it bypasses git tracking, mixes test changes with the working tree, and produces unreviewable state.
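The local-to-local fetch flow can be sketched end-to-end with throwaway clones (directory and branch names are placeholders for e.g. a bss-core/v1 clone consuming a migration clone):

```shell
# Move a test branch from one local clone to another using git only.
tmp=$(mktemp -d); cd "$tmp"

# Source clone with a test branch on it
git init -q source; cd source
git config user.email t@example.com
git config user.name tester
git commit -q --allow-empty -m base
git checkout -q -b test-branch
echo "patched" > feature.txt
git add feature.txt
git commit -q -m "test patch"
git checkout -q -                     # leave source on its original branch

# Consuming clone pulls the branch via local-to-local fetch, no file copying
cd "$tmp"; git clone -q source consumer; cd consumer
git fetch -q ../source test-branch
git merge -q FETCH_HEAD               # transparent and fully reversible
cat feature.txt                       # → patched
```

Because the change set arrives as commits, it can be reviewed with `git log`/`git diff` and backed out cleanly, which an ad-hoc `cp -r` can never offer.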
When the consumer links the source via a `file:` npm symlink and there are multiple parallel source clones (e.g., bss-core/v1 and bss-core/typescript-migration), there are two valid integration patterns. Confirm with the user which one applies before any environment write:
- Relink: edit the consumer's `package.json` `file:` to point at the test clone and run `npm install` to relink. Revert at session end. The source clones stay on their own branches.
- Branch-switch: leave the `file:` ref unchanged; in the source clone behind the symlink, `git checkout <test-branch>` so the working tree behind the symlink becomes the test code.

When the user authorizes a specific environment mutation (e.g., "switch nodered branch only", "do all work in dir X"), do NOT mutate anything outside that scope. Before any extra mutation (other files, other repos, package manifests, lockfiles, dependency caches, env vars, parallel clones), STOP and ask. Each unrelated mutation requires its own approval.
Out-of-scope mutations include: editing `package.json` `file:` refs when the user said "change branch", running `npm install` in a way that rewrites lockfiles, deleting `node_modules`, modifying sibling repos, applying patches to a "reference-only" clone, and touching configs the user did not name. When the user designates a clone Y as reference-only while work happens in X: no `git checkout`, `git apply`, `git stash apply`, or any write to Y. Apply the learned patterns inside X instead.