# ln-522-manual-tester
Performs manual testing of Story AC via executable bash scripts in tests/manual/. Use when Story implementation needs hands-on AC verification.
| Field | Value |
|---|---|
| name | ln-522-manual-tester |
| description | Performs manual testing of Story AC via executable bash scripts in tests/manual/. Use when Story implementation needs hands-on AC verification. |
| license | MIT |
Paths: File paths (references/, ../ln-*) are relative to this skill directory.
MANDATORY READ: Load references/ci_tool_detection.md → compact output flags, pipefail, and failure-artifact policy for bash/curl/Puppeteer scripts.
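As a minimal sketch of that policy (the authoritative flag list lives in references/ci_tool_detection.md), every test script should start with a fail-fast preamble:

```bash
#!/bin/bash
# Fail-fast preamble for manual test scripts:
#   -e           exit on the first failing command
#   -u           treat unset variables as errors
#   -o pipefail  a pipeline fails if ANY stage fails, not just the last
set -euo pipefail

# Without pipefail, `false | true` would report success; with it, it fails.
if false | true; then
    PIPE_RESULT="passed"
else
    PIPE_RESULT="failed"
fi
echo "pipeline $PIPE_RESULT"
```

Compact curl output (e.g. `--silent --show-error --fail`) follows the same spirit: fail loudly, print little.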
| Input | Required | Source | Description |
|---|---|---|---|
| storyId | Yes | args, git branch, kanban, user | Story to process |
Resolution: Story Resolution Chain. Status filter: To Review
Type: L3 Worker
Manually verifies Story AC on running code and reports structured results for the quality gate.
Output: executable test scripts in the tests/manual/ folder of the target project, plus a kanban comment (addComment) with pass/fail per AC and the script path.

CRITICAL: Tests MUST return 1 (fail) immediately when any criterion is not met.
Never use: print_status "WARN" + return 0 for validation failures, graceful degradation without explicit flags, silent fallbacks that hide errors.
Exceptions (WARN is OK): Informational warnings that don't affect correctness, optional features (with clear justification in comments), infrastructure issues (e.g., missing Nginx in dev environment).
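A sketch of the required fail-fast shape (the check function and its arguments are illustrative, and print_status is simplified from the config.sh helper shown later):

```bash
#!/bin/bash
set -euo pipefail

# Simplified stand-in for the print_status helper in config.sh.
print_status() { echo "[$1] $2"; }

# CORRECT shape: return 1 immediately when a criterion is not met.
check_status_code() {
    local actual=$1 expected=$2
    if [ "$actual" != "$expected" ]; then
        print_status "FAIL" "expected HTTP $expected, got $actual"
        return 1                      # NEVER print WARN and return 0 here
    fi
    print_status "PASS" "HTTP $actual"
}

check_status_code 200 200             # passes
if check_status_code 500 200; then    # fails, and the caller sees it
    RESULT=0
else
    RESULT=1
fi
echo "caller saw exit code $RESULT"
```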
CRITICAL: Tests MUST compare actual results against expected reference files, not apply heuristics or algorithmic checks.
Directory structure:
```
tests/manual/NN-feature/
├── samples/      # Input files
├── expected/     # Expected output files (REQUIRED!)
│   └── {base_name}_{source_lang}-{target_lang}.{ext}
└── test-*.sh
```
Heuristics acceptable ONLY for: dynamic/non-deterministic data (timestamps, UUIDs, tokens - normalize before comparison; JSON with unordered keys - use jq --sort-keys).
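Assuming jq is available (the templates below already require it), the compare-against-expected rule with minimal normalization can be sketched as follows; the file names and the created_at field are hypothetical:

```bash
#!/bin/bash
set -euo pipefail
cd "$(mktemp -d)"
mkdir -p results expected

# Hypothetical run output: same payload, different key order + volatile timestamp.
echo '{"status":"done","b":2,"a":1,"created_at":"2026-01-15T10:00:00Z"}' > results/response_demo.json
echo '{"a":1,"b":2,"status":"done"}'                                     > expected/response_demo.json

# Normalize ONLY the non-deterministic parts: drop volatile keys, sort the rest.
normalize() { jq --sort-keys 'del(.created_at)' "$1"; }

# Then byte-compare against the expected reference file.
if diff <(normalize results/response_demo.json) <(normalize expected/response_demo.json) > /dev/null; then
    echo "[PASS] result matches expected reference"
else
    echo "[FAIL] result differs from expected reference"
    exit 1
fi
```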
Test results saved to tests/manual/results/ (persistent, in .gitignore). Named: result_{ac_name}.{ext} or response_{ac_name}.json. Inspectable after test completion for debugging.
To create expected files:
1. Run the test once so outputs land in the results/ folder.
2. Validate the output, then copy it into the expected/ folder with proper naming.

IMPORTANT: Never blindly copy results to expected. Always validate correctness first.
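A sketch of that validate-then-promote flow (all file names, the 01-upload suite, and the grep sanity check are hypothetical stand-ins for real hands-on inspection):

```bash
#!/bin/bash
set -euo pipefail
cd "$(mktemp -d)"
mkdir -p results 01-upload/expected

# Hypothetical: a test run just produced this file in results/.
echo '{"status":"done","pages":3}' > results/result_upload_docx.json

# 1. Validate FIRST (at minimum a sanity check; ideally inspect by hand).
grep -q '"status":"done"' results/result_upload_docx.json

# 2. Only then promote into expected/ under the naming convention.
cp results/result_upload_docx.json 01-upload/expected/report_en-de.json
echo "[INFO] promoted: $(ls 01-upload/expected)"
```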
MANDATORY READ: Load references/input_resolution_pattern.md
Read docs/project/infrastructure.md → get port allocation, service endpoints, base URLs. Read docs/project/runbook.md → get Docker commands, test prerequisites, environment setup.

Check that the tests/manual/ folder exists in the project root; if missing, create:
- tests/manual/config.sh → shared configuration (BASE_URL, helpers, colors)
- tests/manual/README.md → folder documentation (see README.md template below)
- tests/manual/test-all.sh → master script to run all test suites (see test-all.sh template below)
- tests/manual/results/ → folder for test outputs (add to .gitignore)

Add tests/manual/results/ to the project .gitignore if not present. Source config.sh to reuse settings (BASE_URL, tokens). For browser-based AC, load references/puppeteer_patterns.md.

Create per Story:
- tests/manual/{NN}-{story-slug}/samples/ → input files (if needed)
- tests/manual/{NN}-{story-slug}/expected/ → expected output files (REQUIRED for deterministic tests)
- tests/manual/{NN}-{story-slug}/test-{story-slug}.sh
Save outputs to tests/manual/results/ and make scripts executable (chmod +x).

tests/manual/README.md:
tests/manual/test-all.sh:
MANDATORY READ: Load references/test_result_format_v1.md
Post a kanban comment (addComment, per test_result_format_v1.md) with:
- Script path: tests/manual/{NN}-{story-slug}/test-{story-slug}.sh
- Run command: cd tests/manual && ./{NN}-{story-slug}/test-{story-slug}.sh
- Scripts are saved under tests/manual/, NOT as temp files.

MANDATORY READ: Load references/test_planning_summary_contract.md, references/test_planning_worker_runtime_contract.md
Runtime profile:
- Worker: test-planning-worker (skill ln-522)
- Summary fields: worker, status, warnings, manual_result_path

Invocation rules:
- The caller provides runId and summaryArtifactPath.
- Report back the same runId and the exact summaryArtifactPath.

Test scripts always go to tests/manual/, never to the project root.
MANDATORY READ: Load references/monitor_integration_pattern.md
When running test scripts expected to take >30 seconds:
`Monitor(command="bash tests/manual/{suite}/test-{slug}.sh 2>&1", timeout_ms=300000, description="manual test: {slug}")`
Fallback: if Monitor is unavailable (Bedrock/Vertex), use Bash(run_in_background=true).
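Outside either tool, a plain-bash approximation with a hard time cap looks like this (the suite stub and 5-second cap are illustrative; real suites would use the 300 s budget above):

```bash
#!/bin/bash
set -uo pipefail          # no -e: we inspect the suite's exit code ourselves
cd "$(mktemp -d)"
mkdir -p results

# Hypothetical long-running suite, stubbed out so the sketch is self-contained.
printf '#!/bin/bash\necho testing\nsleep 1\necho done\n' > suite.sh
chmod +x suite.sh

# Hard time cap via coreutils timeout; tee so output survives for debugging.
if timeout 5 ./suite.sh 2>&1 | tee results/run_suite.log; then
    echo "[PASS] suite finished within the time limit"
    SUITE_RC=0
else
    echo "[FAIL] suite failed or timed out"
    SUITE_RC=1
fi
```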
Checklist:
- tests/manual/ structure exists (config.sh, README.md, test-all.sh, results/ created if missing).
- tests/manual/results/ added to project .gitignore.
- Test script created at tests/manual/{NN}-{story-slug}/test-{story-slug}.sh.
- expected/ folder created with at least 1 expected file per deterministic AC.
- Test outputs saved to tests/manual/results/ for debugging.

# Manual Testing Scripts
> **SCOPE:** Bash scripts for manual API testing. Complements automated tests with CLI-based workflows.
## Quick Start
```bash
cd tests/manual
./00-setup/create-account.sh # (if auth required)
./test-all.sh # Run ALL test suites
```

## Prerequisites

- Services running (docker compose ps)
- jq installed (apt-get install jq or brew install jq)

## Structure

```
tests/manual/
├── config.sh          # Shared configuration (BASE_URL, helpers, colors)
├── README.md          # This file
├── test-all.sh        # Run all test suites
├── 00-setup/          # Account & token setup (if auth required)
│   ├── create-account.sh
│   └── get-token.sh
└── {NN}-{topic}/      # Test suites by Story
    └── test-{slug}.sh
```
| Suite | Story | AC Covered | Run Command |
|---|---|---|---|
| … | … | … | … |
To add a new suite: create {NN}-{topic}/test-{slug}.sh, then register it in test-all.sh (add to the SUITES array).
### test-all.sh (created once per project)
```bash
#!/bin/bash
# =============================================================================
# Run all manual test suites
# =============================================================================
set -eo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/config.sh"
echo "=========================================="
echo "Running ALL Manual Test Suites"
echo "=========================================="
check_jq
check_api
# Setup (if exists)
[ -f "$SCRIPT_DIR/00-setup/create-account.sh" ] && "$SCRIPT_DIR/00-setup/create-account.sh"
[ -f "$SCRIPT_DIR/00-setup/get-token.sh" ] && "$SCRIPT_DIR/00-setup/get-token.sh"
# Test suites (add new suites here)
SUITES=(
# "01-auth/test-auth-flow.sh"
# "02-translation/test-translation.sh"
)
PASSED=0; FAILED=0
for suite in "${SUITES[@]}"; do
echo ""
echo "=========================================="
echo "Running: $suite"
echo "=========================================="
if "$SCRIPT_DIR/$suite"; then
((++PASSED))
print_status "PASS" "$suite"
else
((++FAILED))
print_status "FAIL" "$suite"
fi
done
echo ""
echo "=========================================="
echo "TOTAL: $PASSED suites passed, $FAILED failed"
echo "=========================================="
[ $FAILED -eq 0 ] && exit 0 || exit 1
```

### config.sh

```bash
#!/bin/bash
# Shared configuration for manual testing scripts
export BASE_URL="${BASE_URL:-http://localhost:8080}"
export RED='\033[0;31m'
export GREEN='\033[0;32m'
export YELLOW='\033[1;33m'
export NC='\033[0m'
print_status() {
local status=$1; local message=$2
case $status in
"PASS") echo -e "${GREEN}[PASS]${NC} $message" ;;
"FAIL") echo -e "${RED}[FAIL]${NC} $message" ;;
"WARN") echo -e "${YELLOW}[WARN]${NC} $message" ;;
"INFO") echo -e "[INFO] $message" ;;
esac
}
check_jq() {
command -v jq &> /dev/null || { echo "Error: jq required"; exit 1; }
}
check_api() {
local response=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_URL/health" 2>/dev/null)
if [ "$response" != "200" ]; then
echo "Error: API not reachable at $BASE_URL"
exit 1
fi
print_status "INFO" "API reachable at $BASE_URL"
}
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export SCRIPT_DIR
```
| Template | Use Case | Location |
|---|---|---|
| template-api-endpoint.sh | API endpoint tests (NO async jobs) | references/templates/template-api-endpoint.sh |
| template-document-format.sh | Document/file processing (WITH async jobs) | references/templates/template-document-format.sh |
Quick start:

```bash
cp references/templates/template-api-endpoint.sh {NN}-feature/test-{feature}.sh    # Endpoint tests
cp references/templates/template-document-format.sh {NN}-feature/test-{format}.sh  # Document tests
```
Related resources:
- tests/manual/ (production example)
- references/templates/test_task_template.md (or local docs/templates/ in target project)
- references/risk_based_testing_guide.md
- references/puppeteer_patterns.md
- references/test_result_format_v1.md

Version: 1.0.0 | Last Updated: 2026-01-15