| name | tool-fit |
| description | Strategic lens skill for evaluating when a Career Hub opportunity should be answered with an interactive tool (calculator, decision tool, or generator) rather than (or alongside) an article. Use only at Stage 1 Discovery and Stage 2 Strategy to tag opportunities, score moat strength, and apply the AI-substitution-resistance modifier. Does not generate code; hand off to tool-page-builder for implementation. |
| metadata | {"family":"seo","owner":"seo","last_reviewed":"2026-05-01T00:00:00.000Z","version":"1.0.0","related_skills":["seo-foundations","marketing-foundations","opportunity-discovery","tool-page-builder","tool-fit-validation","ai-seo"],"kpis":["AI-substitution-resistance multiplier applied to 100% of tool-fit opportunities","Tool-fit test pass rate ≥60% on candidates tagged interactive_tool (rejecting bad tool ideas before Strategy)","Median tool moat score of approved tool briefs ≥3.5/5"],"marketing_pillar":24,"seo_standard":"C","kpi_tier":123,"funnel_stage":"mofu_bofu","content_class":"transactional","maturity_stage":"prescriptive","used_by_stages":[1,2]} |
Tool-Fit Lens
Decide when a topic should be answered by an interactive tool instead of (or alongside) an article. The strategic teeth behind the interactive_tool 90/10 class.
Why This Exists
AI Overviews substitute long-form informational answers (Princeton/Aggarwal et al. KDD 2024 — AIO impressions reduce CTR by ~58% per seo-foundations). They cannot substitute usefulness — Schwartz Standard C Rule 2 (Product-Led SEO, 2021). Interactive tools convert that gap into durable, AI-resistant traffic and earn referring domains as linkable assets.
This lens prevents two failure modes:
- False negative — answering a do query with prose when a tool would compound. Discovery never tags it interactive_tool and the moat is left on the table.
- False positive — building a tool when an article wins (low volume, no moat across any of the six pillars, unbounded inputs, or no freshness commitment). Even with AI generation collapsing implementation cost, shipped tools still consume reviewer attention and dilute cluster uniqueness.
What "Moat" Means Here (Six Pillars, Not Just Owned Data)
The original framing of "moat = Indeed Flex proprietary data" was too narrow. With AI agents producing implementation cheaply and authoritative public data available to everyone, the durable moat shifts to what you do with the data. A tool earns its place when any one of these six pillars is strong:
- Owned data — Indeed Flex platform data
- Layered public data — IRS + BLS + Census + state combinations
- Update velocity — first to ship when authoritative numbers update (operationalized via ops/data-source-watcher)
- Audience translation — expert-domain language re-framed for hourly workers
- Output design — what users can do with the result that the SERP competitor can't
- Methodology rigor — published, citable assumptions
Score the highest of the six (not the average) per reference/moat-scoring.md Axis 2. A single strong pillar can carry a tool.
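The max-not-average rule above can be sketched in a few lines. The pillar keys and sample scores below are illustrative placeholders, not canonical values from reference/moat-scoring.md:

```python
# Illustrative only: pillar names paraphrase the six pillars above;
# the 1-5 scores are made-up sample values.
pillar_scores = {
    "owned_data": 2,
    "layered_public_data": 4,
    "update_velocity": 3,
    "audience_translation": 2,
    "output_design": 3,
    "methodology_rigor": 2,
}

def data_moat_score(pillars: dict[str, int]) -> int:
    """Axis 2 takes the HIGHEST pillar, not the mean:
    a single strong pillar can carry a tool."""
    return max(pillars.values())

print(data_moat_score(pillar_scores))  # 4 — layered_public_data carries it
```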
When to Use
- Stage 1 Discovery — tag every candidate with a tool-fit verdict before scoring. Apply the ai_substitution_resistance modifier to lift_per_effort per PIPELINE.md.
- Stage 2 Strategy — for any brief tagged interactive_tool or transactional, fill the moat scorecard and pick a tool sub-type (calculator / decision tool / generator).
Do Not Use When
- The opportunity is a refresh-existing route — ops/content-refresh and existing template logic apply.
- The implementation is already approved — hand off to code/tool-page-builder.
- The opportunity is a YMYL article (Pillar 3 financial guide) — use the article path with E-E-A-T standards from seo-foundations Standard B. Tools embedded inside YMYL articles are subject to extra disclaimer rules; route through tool-fit-validation.
Workflow
Step 1 — Run the Tool-Fit Test
Five binary questions at reference/tool-fit-test.md. The candidate must pass at least 4 of 5 to be tagged interactive_tool. Below 4 → tag stays striking_distance / template_extension / linkable_asset / other_with_text per the existing 90/10 taxonomy.
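A minimal sketch of this gate, assuming only what the text states (five binary questions, pass at 4 or more). The fallback tag label is a placeholder for whichever existing 90/10 class applies:

```python
# Sketch of the Step 1 gate; "existing_90_10_class" stands in for
# striking_distance / template_extension / linkable_asset / other_with_text.
def tool_fit_verdict(answers: list[bool]) -> tuple[int, str]:
    assert len(answers) == 5, "the test is exactly five binary questions"
    score = sum(answers)
    tag = "interactive_tool" if score >= 4 else "existing_90_10_class"
    return score, tag

score, tag = tool_fit_verdict([True, True, True, True, False])
print(score, tag)  # 4 interactive_tool
```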
Step 2 — Pick the Sub-Type
Per reference/tool-subtypes.md. Effort baselines are reviewer + integration hours under the AI-agent generation model — implementation hours collapse to near-zero, so the cost is editorial:
- Calculator — numeric → numeric. effort_baseline = new_calculator: 8.
- Decision tool — inputs → categorical recommendation. effort_baseline = new_decision_tool: 6.
- Lookup / generator — inputs → templated text. effort_baseline = new_generator_or_template_extension: 3.
Pillar 3 YMYL calculators are additionally subject to the reviewer-throughput cap (default 4/month) per marketing-foundations/reference/measurement-framework.md. Non-YMYL tools and generator extensions are not capped.
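The sub-type to effort-baseline mapping above can be captured as a small lookup table; the keys mirror the baseline names used in the Step 5 recommended_effort_baseline field:

```python
# Effort baselines are reviewer + integration hours, per the figures above.
EFFORT_BASELINES = {
    "calculator": ("new_calculator", 8),
    "decision_tool": ("new_decision_tool", 6),
    "generator": ("new_generator_or_template_extension", 3),
}

def effort_for(subtype: str) -> int:
    """Return the reviewer + integration hours for a chosen sub-type."""
    baseline_name, hours = EFFORT_BASELINES[subtype]
    return hours

print(effort_for("decision_tool"))  # 6
```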
Step 3 — Score the Moat
Five-axis 1-5 rubric at reference/moat-scoring.md:
- AI-substitution resistance
- Data moat strength (Schwartz Rule 1)
- Re-engagement potential
- Link-bait potential
- Conversion proximity to the app-download funnel
Median axis score becomes the basis for the ai_substitution_resistance multiplier (per marketing-foundations/reference/measurement-framework.md).
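As a rough sketch of how the multiplier could fall out of the rubric: the exact median-to-multiplier curve lives in marketing-foundations/reference/measurement-framework.md, so a linear map from median 1 to 0.8 up to median 5 to 1.4 (the range given in the Step 5 tag schema) is assumed here purely for illustration:

```python
from statistics import median

# ASSUMPTION: linear interpolation over the published 0.8-1.4 range.
# The canonical curve is defined in the measurement framework, not here.
def ai_substitution_multiplier(axes: dict[str, int]) -> float:
    med = median(axes.values())
    return round(0.8 + (med - 1) * (1.4 - 0.8) / 4, 2)

# Sample 1-5 scores for the five rubric axes (illustrative values).
axes = {
    "ai_substitution_resistance": 4,
    "data_moat": 3,
    "re_engagement": 3,
    "link_bait": 4,
    "funnel_proximity": 5,
}
print(ai_substitution_multiplier(axes))  # 1.25
```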
Step 4 — Check the Kill List
Reject the tool framing if any condition at reference/kill-list.md fires. The opportunity may still be valid as an article — tag stays striking_distance / template_extension.
Step 5 — Emit Tags
Append to the opportunity record (per opportunity-discovery output contract):
tool_fit:
  passes_test: true | false
  test_score: 0-5
  subtype: calculator | decision_tool | generator | none
  moat_score_median: 1-5
  moat_axes:
    ai_substitution_resistance: 1-5
    data_moat: 1-5
    re_engagement: 1-5
    link_bait: 1-5
    funnel_proximity: 1-5
  kill_list_triggered: false | "<reason-id>"
  ai_substitution_resistance_multiplier: 0.8-1.4
  recommended_effort_baseline: new_calculator | new_decision_tool | new_generator_or_template_extension | n/a
Cross-Skill Boundaries
- Tool-fit lens (this skill) — decides whether a tool is the right answer + scores the bet.
- ops/tool-fit-validation — pre-Stage-4 specialist that validates a build with a one-page report (demand floor, data moat, prototype spec, kill conditions). Runs only after Stage 3 routes a brief to create-new with tag interactive_tool.
- code/tool-page-builder — implementation skill for the actual repo diff once validated.
This skill does not produce code. It produces tags + a scorecard. The next stage decides what to do with them.
Anti-Rigidity Guard
Tool-fit is a lens, not a stage. If the moat score is low or the kill list fires, the opportunity falls back to its existing 90/10 class. The system never requires a tool — it just stops failing to consider one when AI substitution would otherwise eat the lift.
References