| name | research-and-draft |
| description | Researches each prospect in the current batch and drafts a Greg-style opener for each — campaign-scoped. Resolves the active campaign from `<project-dir>/.hhq-campaign.json` (default `default`). Two entry points — (1) normal flow after surface-next-5 ("let's go", "draft them", etc.) reads `/api/me/campaigns/{slug}/current-batch`, (2) quick-start flow after onboard-helperhq ("let's go on quick start", "research my 5") reads `config.quick_start.urls` from the user-level config and operates inside the `default` campaign (quick-start always runs in the default campaign in V1). Both paths use the Chrome connector to read each prospect's LinkedIn profile + recent posts, save per-campaign findings to `research` via PUT /api/me/campaigns/{slug}/contacts/{contactSlug}, draft a short signal-referenced opener using the user's base voice merged with any campaign voice_additions, append to per-campaign `messages`, set per-campaign status to `drafted`, then present all openers cleanly. Run AFTER surface-next-5 (normal) or AFTER onboard-helperhq with quick-start URLs queued. |
Research and Draft — Sales Helper Lite
You are doing pass-2 enrichment and opener drafting for the 5 prospects the user just confirmed in surface-next-5. For each prospect: read their public LinkedIn profile + recent posts via the Chrome connector, capture findings to the backend, draft a short signal-referenced opener in the Greg style, save artifacts, present the openers for copying.
This is the heaviest V1 skill and it's where the actual product value lives. Work through it carefully — the quality of the research and the drafts is the whole product.
When this skill runs
Two entry points: the normal flow (after surface-next-5) and the quick-start flow (after onboard-helperhq Phase 7 captured up to 5 LinkedIn profile URLs the user wants to reach out to immediately).
Normal flow triggers
Trigger when the user says any variant of:
- "let's go"
- "draft them"
- "start drafting"
- "go for it"
- "do it"
- "yes draft"
- "give me the openers"
…AND /api/me/campaigns/{slug}/current-batch returns a non-empty batch.
Quick-start flow triggers
Trigger when the user says any variant of:
- "let's go on quick start"
- "do the quick start"
- "research my 5"
- "research my quick start"
- "quick start"
…AND config.quick_start.urls is non-empty AND config.quick_start.status is "queued" (not yet completed).
Disambiguation
If a phrase like "let's go" is ambiguous (current_batch non-empty AND quick_start.status is "queued"), prefer the quick-start path — those URLs were captured most recently, so the user is more likely to mean them. Once quick start completes, "let's go" unambiguously routes to the normal flow.
If the user says one of the normal-flow phrases but the current batch is empty AND quick_start is unavailable, route them to surface-next-5 first ("There's no current batch — let's surface 5 prospects first. Say 'get me the next 5'.").
Phase 0 — Auth and campaign
Step 0a — Get the project folder
Use mcp__ccd_directory__request_directory (no arguments) to get the persistent Cowork project folder. The user accepts a permission prompt the first time. Save as <project-dir>.
If the tool isn't registered (rare CLI case), fall back to ~/.hhq/sales-helper/.
Step 0b — Resolve auth (per-project session)
Read <project-dir>/.hhq-session.json (per-project session auth, v0.11+).
- Found → parse backend_url, license_key, session_id, jwt, jwt_expires_at. Continue.
- Not found, but legacy <project-dir>/.hhq-auth.json exists → migrate by renaming to .hhq-session.json. Continue.
- Neither found → "No auth — say /hhq:connect to link this project (or /hhq:onboard if you're brand-new)." Stop.
If jwt_expires_at is past or within 60s of expiry, proactively refresh: POST <backend_url>/api/refresh with Authorization: Bearer <old jwt> (the endpoint accepts expired tokens). Save the new jwt + jwt_expires_at to .hhq-session.json (preserving other fields).
All API calls below use Authorization: Bearer <jwt> and curl -sk. Never log the JWT or licence key.
On 401 from any API call below, read error.code from the response body and recover ONCE:
- token_expired → POST <backend_url>/api/refresh with the current JWT. Save the new JWT. Retry the original call.
- session_revoked or invalid_token → POST <backend_url>/api/activate with the existing session_id + license_key from .hhq-session.json (NOT a fresh UUID — reusing the same UUID keeps this idempotent and avoids burning a slot). Save the new JWT. Tell the user: "Your session for this project had been released — I've re-established it. If that wasn't intentional, release it again from /sessions and close this chat." Retry.
- license_inactive → tell the user to contact help@helperhq.co. Stop.
On a 403 during recovery, relay the backend's error.message verbatim and stop (session_limit_reached includes the /sessions URL).
On a second 401 of the same call after recovery, surface honestly and stop. Never loop. Never generate a fresh session UUID — only /hhq:connect and /hhq:onboard mint new UUIDs.
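The recover-once rules can be sketched as a single decision function. This is an illustrative sketch only: `post_refresh`, `post_activate`, and the session dict shape are assumptions standing in for the real HTTP calls, not part of the plugin API.

```python
def recover_from_401(error_code, session, post_refresh, post_activate):
    """Decide the single recovery action for a 401, per the rules above.

    post_refresh(jwt) -> new_jwt or None (endpoint accepts expired tokens)
    post_activate(session_id, license_key) -> new_jwt or None
    Returns the new JWT to retry with, or None when the caller must stop.
    """
    if error_code == "token_expired":
        # Refresh with the current (possibly expired) JWT.
        return post_refresh(session["jwt"])
    if error_code in ("session_revoked", "invalid_token"):
        # Re-activate with the SAME session_id; never mint a fresh UUID here.
        return post_activate(session["session_id"], session["license_key"])
    # license_inactive or anything unrecognised: surface honestly and stop.
    return None
```

The retry itself happens at most once; a second 401 after recovery ends the run.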
Step 0c — Resolve current campaign
Read <project-dir>/.hhq-campaign.json to get campaign_slug. If missing, write {"campaign_slug": "default"} and use default.
This skill operates inside that campaign — its current-batch, per-prospect status, research, and messages all scope to it. Quick-start (Phase Q) always runs inside default regardless of pin (quick-start is a one-time onboarding artifact at user level).
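The read-or-initialise behaviour for the campaign pin is small enough to sketch exactly (a sketch; the file shape matches the description above):

```python
import json
from pathlib import Path

def resolve_campaign_slug(project_dir):
    """Read the pinned campaign slug, writing the default pin if absent."""
    pin = Path(project_dir) / ".hhq-campaign.json"
    if pin.exists():
        return json.loads(pin.read_text())["campaign_slug"]
    # First run in this project: pin to the default campaign.
    pin.write_text(json.dumps({"campaign_slug": "default"}))
    return "default"
```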
Phase Q — Quick-start branch (alternative entry point)
If the user invoked this skill via a quick-start trigger phrase (see "When this skill runs"), run Phase Q instead of the normal Pre-flight + Phase 1 loop. Phase Q uses the same research methodology (Phase 2) and drafting methodology (Phase 3) as the normal flow — the difference is the input (URLs, no contacts yet) and what we do at the end (create contacts after research, since they don't exist yet).
Step Q1 — Read user config and default-campaign config
User-level: GET <backend_url>/api/me/config.
- Read quick_start.urls (array of LinkedIn profile URLs).
- Read quick_start.status. If "completed" → "You've already completed quick start — say 'get me the next 5' to surface from your full contact list." Stop.
- Read voice_profile — the user's base voice for drafting.
- If quick_start.urls is empty/null → "No quick-start URLs queued. Want to onboard or surface a normal batch?" Stop.
Default-campaign: GET <backend_url>/api/me/campaigns/default/config.
- Read offer, offer_hook, offer_profile, icp, signals.weighted.
- Read voice_additions if present — layer additively on top of the user's base voice when drafting.
Quick-start always operates in the default campaign in V1 (quick-start lives at user level, was captured during initial onboarding, and the default campaign is the user's first/primary campaign). For other campaigns, the user surfaces normally via surface-next-5.
Step Q2 — Tell the user what's happening
"Right — researching your <N> quick-start prospects and drafting openers. Same methodology as the normal flow. Takes a few minutes; I'll work through them one at a time."
Step Q3 — Per-URL loop
For each URL in quick_start.urls, do the following sequentially:
Q3a — Read profile via Chrome. Same as Step 2c in the normal flow. Use mcp__Claude_in_Chrome__navigate then mcp__Claude_in_Chrome__read_page.
Capture from the profile:
- First name + last name
- Headline / current role + company
- Location
- About section (only if offer-relevant)
- Featured / pinned content (if any)
Q3b — Read recent activity. Same as Step 2d. URL: <profile_url>/recent-activity/all/. Read most recent 5–10 posts.
Q3c — Analyse (in-context, same as Phase 2 Step 2e). Apply the methodology in Phase 2 — what to look for, what counts as a signal, how to weight a recent post against a role change. The user picked these prospects intentionally, so the fit is presumed; lean into the recent signal in the opener.
Q3d — Draft opener (in-context, same as Phase 3). Apply the Greg-style methodology in Phase 3 below. Use the user's voice_profile (summary, do/dont, phrases) to shape phrasing — match their tone, follow the do/don't rules, optionally riff on a sound-check phrase. Use offer_hook and offer_profile.canonical_phrases for the angle and language; keep the actual ask soft.
Q3e — Create the contact via POST. Now that we have name, company, position from the profile read, create the contact:
POST <backend_url>/api/me/contacts/import
Content-Type: application/json
Authorization: Bearer <jwt>
{
"contacts": [
{
"first_name": "...",
"last_name": "...",
"company": "...",
"position": "...",
"linkedin_url": "<the URL>",
"connected_on": null,
"raw_csv": null
}
]
}
The endpoint upserts by linkedin_url, so if the prospect later appears in the bulk LinkedIn export, the existing record will be updated rather than duplicated. The response includes the contact's slug — use it for the next step.
Q3f — Save per-campaign research and message.
Quick-start writes to the default campaign. Per-campaign status, research, and messages live on campaign_contacts:
PUT <backend_url>/api/me/campaigns/default/contacts/{contactSlug}
{
"research": { ...the structured research from Q3c... },
"messages": [
{
"kind": "drafted_opener",
"drafted_at": "ISO-8601",
"body": "<the drafted opener>",
"source": "quick_start"
}
],
"status": "drafted"
}
If the profile read fails for a URL (private, 404, rate-limited), DO NOT crash the batch. Save what you can (the URL alone as linkedin_url, status "research_failed"), draft a generic opener using only what's visible (the URL itself contains the handle), flag the issue clearly when presenting, and continue.
Step Q4 — Present openers
Use the same presentation format as Phase 5 in the normal flow. List all <N> openers with research summaries, LinkedIn URLs, and notes-file paths. Same Greg-style block per prospect.
Step Q5 — Mark quick_start completed
PUT <backend_url>/api/me/config with the existing config + quick_start.status updated to "completed" and quick_start.completed_at set to the ISO timestamp. The URLs themselves stay in config as a record.
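Because the PUT carries the whole config, build the payload as a read-modify-write that touches only quick_start. A sketch of that merge (the helper name is an assumption; the field names match the config described above):

```python
from datetime import datetime, timezone

def mark_quick_start_completed(config):
    """Return the config object to PUT back: quick_start flipped, rest untouched.

    The URLs stay in place as a record; no other config field is modified.
    """
    updated = dict(config)
    qs = dict(updated.get("quick_start", {}))
    qs["status"] = "completed"
    qs["completed_at"] = datetime.now(timezone.utc).isoformat()
    updated["quick_start"] = qs
    return updated
```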
Step Q6 — Close
"Done — <N> quick-start openers ready. When you've sent any, mark them contacted like usual.
Pipeline view: https://helperhq.co/pipeline
When your LinkedIn bulk export lands, drop the CSV in and I'll work through the rest of your network from there."
Phase Q ends here. Do NOT continue into Phase 1 — the normal flow doesn't apply.
Prompts come from the backend
Drafting and research-analysis prompts are server-controlled as of v0.5. Two prompt templates live on the backend, admin-tunable via the Filament admin UI (and AI-assisted tuning), no plugin update needed when prompts change:
draft_opener — the prompt that turns research + offer + voice into an opening message
research_analyze — the prompt that turns raw profile + activity into a structured signal finding
How to use them
At the start of each invocation (once per session is enough — cache for the rest of the session):
GET <backend_url>/api/mcp/prompts/draft_opener
GET <backend_url>/api/mcp/prompts/research_analyze
Both return:
{
"name": "...",
"template": "<the prompt template, with {{placeholders}}>",
"version": 7,
"updated_at": "ISO-8601"
}
For each prospect, when you draft:
- Substitute the user's data into the template:
{{user_name_or_self}} — user's first name from auth, or "you"
{{offer_summary}} — config.offer_profile.summary or config.offer
{{offer_hook}} — config.offer_hook
{{voice_summary}} — config.voice_profile.summary or "(no voice profile yet)"
{{voice_tone}} — config.voice_profile.tone joined with commas
{{voice_do}} — config.voice_profile.do as a numbered list
{{voice_dont}} — config.voice_profile.dont as a numbered list
{{voice_phrases}} — config.voice_profile.phrases as a bulleted list
{{prospect_first_name}}, {{prospect_last_name}}, {{prospect_position}}, {{prospect_company}} — from contact record
{{research_summary}} — your structured analysis from Phase 2
{{research_signal}} — the specific signal text from Phase 2
- Run the substituted prompt as your drafting brief. The prompt itself defines the universal rules (no cliches, no exclamation marks unless mirroring voice, soft ask, etc.) — you don't need to re-encode them. Voice rules are a user-controlled overlay; universal rules win on conflict (the prompt says so).
- Same shape for research analysis — substitute {{profile_raw}}, {{activity_raw}}, {{offer_summary}}, {{icp_summary}}, {{prospect_first_name}}, etc., into research_analyze and run it.
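The substitution itself is plain string replacement over the {{placeholder}} slots. A minimal sketch (leaving unknown placeholders intact is a design choice here, so a template/plugin version skew stays visible instead of being silently blanked):

```python
def fill_template(template, values):
    """Substitute {{placeholder}} slots in a backend prompt template."""
    out = template
    for key, value in values.items():
        out = out.replace("{{" + key + "}}", str(value))
    return out
```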
Fallback
If the prompt fetch fails (404, network, backend down):
- Use the embedded baseline below ("Phase 3 — Fallback drafting methodology"). Treat it as a thin baseline, not the canonical IP.
- Flag it in the result block: [Drafted with embedded fallback prompt — backend prompt unreachable]. The user notices and we know to investigate.
- Don't crash the batch.
Why this matters
- Prompt tuning is a backend deploy. Brad can iterate on opener quality without bumping the plugin or asking users to update.
- Voice profile (per-user) and the global prompt (admin) compose cleanly: global = methodology + universal rules; voice = user's specific style. The prompt template substitutes both.
Pre-flight checks
Step A — Fetch the campaign batch
GET <backend_url>/api/me/campaigns/<campaign_slug>/current-batch
- HTTP 200 with non-empty batch → continue.
- HTTP 200 with empty batch → "No active batch in <campaign_slug> — say 'get me the next 5' first." Stop.
- HTTP 404 campaign_not_found → "This project is pinned to campaign <campaign_slug> but the backend doesn't have it. Either fix .hhq-campaign.json or run /hhq:new-campaign." Stop.
- HTTP 401 → run the auth fallback once and retry.
Hold the batch in memory: it's a list of {contact_id, surfaced_at, drafted_at, reasoning}.
Step B — Fetch user voice + campaign config
User-level: GET <backend_url>/api/me/config — read voice_profile (the user's base voice).
Campaign-level: GET <backend_url>/api/me/campaigns/<campaign_slug>/config — read offer, offer_hook, offer_profile, icp, signals.weighted, voice_additions.
The drafting heuristics in Phase 3 below use all of these:
- offer + offer_hook + offer_profile.canonical_phrases (campaign) shape the offer language and angle.
- voice_profile.summary + do + dont + phrases + tone (user-level base) shape voice.
- voice_additions.summary + do + dont + phrases + tone + notes (campaign-level) layer additively on top of the base. Concatenate the arrays, append the additions summary/notes — never replace.
- icp + signals.weighted (campaign) shape what counts as a relevant signal.
If campaign config is missing or incomplete (no offer at minimum), route to /hhq:new-campaign (or offer-review if the campaign already exists but is empty). Missing base voice_profile or offer_profile is fine — fall back to a more generic style and note it honestly ("voice not yet tuned — generic phrasing").
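The additive merge of base voice and campaign voice_additions can be sketched like this (a sketch, assuming the field names above; `merge_voice` is an illustrative helper, not plugin API):

```python
def merge_voice(base, additions):
    """Layer campaign voice_additions additively on the user's base voice.

    Arrays concatenate; the additions summary/notes append to the base
    summary; nothing replaces anything.
    """
    base = base or {}
    additions = additions or {}
    merged = dict(base)
    for key in ("do", "dont", "phrases", "tone"):
        merged[key] = list(base.get(key, [])) + list(additions.get(key, []))
    extra = " ".join(s for s in (additions.get("summary"), additions.get("notes")) if s)
    if extra:
        merged["summary"] = (base.get("summary", "") + " " + extra).strip()
    return merged
```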
Step C — Map contact_ids to slugs
For each contact_id in the batch, GET <backend_url>/api/me/campaigns/<campaign_slug>/contacts/<that contact's slug> to get both the contact identity and the campaign-scoped fields (current research, messages, status).
If you don't have the slug yet, do a single bulk fetch first: GET <backend_url>/api/me/contacts?per_page=500 (paginate if total > 500) to map id → slug. Then per-prospect detail GETs use the campaign endpoint.
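The id → slug bulk mapping with pagination looks roughly like this (a sketch; `fetch_page` and the `{"data": [...], "total": N}` response shape are assumptions standing in for the real paginated endpoint):

```python
def build_id_to_slug(fetch_page, per_page=500):
    """Map contact id -> slug across all pages of the contacts list.

    fetch_page(page, per_page) -> {"data": [...], "total": N} (assumed shape).
    """
    mapping, page = {}, 1
    while True:
        resp = fetch_page(page, per_page)
        for contact in resp["data"]:
            mapping[contact["id"]] = contact["slug"]
        if page * per_page >= resp["total"]:
            return mapping
        page += 1
```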
Tell the user what's about to happen
Briefly tell the user so they're not staring at a silent screen for 5 minutes:
"Got it — researching prospects and drafting openers. This takes a few minutes; I'll work through them one at a time and show you everything when I'm done."
Then start the per-prospect loop.
Phase 1 — Per-prospect loop (sequential, one at a time)
For each prospect in the batch, do Phases 2–4. Sequential is fine for V1 — 5 prospects × ~60-90s each is acceptable. Don't try to parallelise.
After each prospect, give a brief progress note in chat ("✓ 1/5 — Greg Coleman done. Next: Marina Park."). No emoji except the check mark for progress. Keeps the user oriented during the wait.
If a single prospect's research fails (page gone, rate-limit, network error), DO NOT crash the whole batch. Mark that prospect's research JSON with a failure_reason field, attempt the opener using only stored contact data (it'll be more generic — flag this in the message log), and move on to the next.
Phase 2 — Research a single prospect
For the current prospect:
Step 2a — Fetch full contact record (campaign-scoped)
GET <backend_url>/api/me/campaigns/<campaign_slug>/contacts/{contactSlug} to get the latest contact fields (linkedin_url, email, headline, etc.) PLUS the per-campaign state (current research, messages, status) in a single call.
The response shape:
{
"contact": {
"id": ...,
"slug": ...,
"first_name": ...,
"linkedin_url": ...,
"email": ...,
"headline": ...,
...
"campaign": {
"status": "surfaced",
"last_surfaced_at": "...",
"research": {...},
"messages": [...]
},
"cooldown_warnings": [...]
}
}
The campaign.research and campaign.messages are what you'll be appending to in this campaign.
Step 2b — Read local notes if any
Check if <project-dir>/contacts/<slug>/notes.md exists. If yes, read it — these are the user's own freeform notes ("Greg's away until Jan", "don't pitch testing — they have an in-house lab"). Use them to shape the opener.
If the file doesn't exist, that's fine — most prospects won't have notes.
Notes live as local files (not on the backend) for V1. The user owns this file; never overwrite it. If you create a placeholder later in this skill (Step 2e), only do so if the file is absent.
Step 2c — Navigate to LinkedIn profile
Use the Chrome connector tools (mcp__Claude_in_Chrome__navigate, then mcp__Claude_in_Chrome__read_page or mcp__Claude_in_Chrome__get_page_text).
URL: prospect's linkedin_url field. If empty/missing, fall back to a LinkedIn search by name+company — but accept that this is less reliable.
What you read from the profile:
- Current role + tenure (when did they start the current role?)
- Previous role (recency of any change)
- Location
- "About" section — only if it gives offer-relevant context
- Featured / pinned content — if any
Do NOT read: skills section, endorsements, recommendations, education history beyond current/latest, profile photo, banner. None are useful for the opener and they bloat context.
Step 2d — Navigate to recent activity
URL pattern: <linkedin_url>/recent-activity/all/.
Read the most recent 5–10 posts. For each:
- Date posted
- Post body (full text — these are usually short)
- Post type (their own / repost / comment)
If they have no public activity → note that ("No recent posts visible publicly").
Step 2e — Analyse and store research (canonical: backend prompt; this is the fallback)
Canonical path: fetch the research_analyze template from GET /api/mcp/prompts/research_analyze, substitute placeholders ({{profile_raw}}, {{activity_raw}}, {{offer_summary}}, {{icp_summary}}, {{prospect_*}}), run that prompt. See "Prompts come from the backend" near the top of this file.
The heuristics below are the fallback — used only when the backend prompt is unreachable. Iterate quality via the backend prompt, not here.
Heuristics for analysis (fallback):
- Recency cliff: a post in the last 7 days is hot. 7-30 days is warm. 30-90 days is lukewarm. >90 days is cold.
- Topical hits: does any recent post touch on the user's offer space? Be specific — "post about scaling sales teams" matches a sales-coach offer, "post about a recent funding round" matches a B2B SaaS offer. Generic "leadership lessons" posts are weak signals.
- Role-change signals: did they start their current role recently (last 3 months)? New roles = high openness to vendor conversations. Long tenure (>3 years) = harder to dislodge from their current setup.
- Shipping signals: did they announce something they built / shipped / launched? Strong hook.
- Vulnerability signals: did they post about a problem they're trying to solve? Strongest possible hook — they've publicly named the pain.
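The recency cliff maps cleanly to a small classifier, including the no-public-activity case (a sketch of the fallback heuristic above; the "silent" label matches the edge-case handling later in this file):

```python
from datetime import date

def activity_level(most_recent_post_date, today):
    """Classify post recency per the cliff: hot / warm / lukewarm / cold / silent."""
    if most_recent_post_date is None:
        return "silent"  # no public activity visible
    days = (today - most_recent_post_date).days
    if days <= 7:
        return "hot"
    if days <= 30:
        return "warm"
    if days <= 90:
        return "lukewarm"
    return "cold"
```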
Build a research JSON object with this shape:
{
"researched_at": "ISO timestamp",
"profile": {
"current_role": "Founder",
"current_company": "Magnetorquer Pty Ltd",
"tenure": "since Jan 2023",
"location": "Brisbane, Australia",
"previous_role": "Engineer at SatCo (2020-2022)"
},
"recent_activity": {
"most_recent_post": { "date": "2026-04-22", "summary": "Announced shipping the Magnetorquer prototype" },
"themes": ["satellite hardware", "team hiring"],
"activity_level": "hot"
},
"signals_matched": [
{ "signal": "post-relevance", "evidence": "Recent post about Magnetorquer shipping aligns directly with microgravity testing offer" },
{ "signal": "shipping-signal", "evidence": "Just announced launch — open window for vendor conversations" }
],
"hook_for_opener": "Just shipped the Magnetorquer prototype — congratulate, then offer microgravity validation before launch.",
"notes_for_draft": "User's local notes mention Greg is travelling — keep ask soft, no pressure on timeline."
}
If you couldn't get a profile read at all, set:
{
"researched_at": "ISO timestamp",
"failure_reason": "<short reason>",
"fallback": true
}
PUT the per-campaign research:
PUT <backend_url>/api/me/campaigns/<campaign_slug>/contacts/{contactSlug}
{
"research": <the JSON above>
}
This writes to campaign_contacts.research for THIS campaign only. Other campaigns retain their own research for the same person — different angle, different fit assessment, different hook.
If research already has prior data in this campaign (re-surfaced after the 30-day cooldown), the PUT overwrites. If you wanted to preserve history you'd push the old blob into a previous_runs array first — but for V1 simplicity just overwrite.
Step 2f — Create notes placeholder (only if absent)
If <project-dir>/contacts/<slug>/notes.md does NOT exist, create the directory + placeholder:
# Notes: <Full Name>
*Your own notes go here. Examples:*
*- "Greg's away until Jan, come back then."*
*- "Met at the SmallSat conference 2025."*
*- "Don't pitch testing — they have an in-house lab."*
If notes.md already exists, leave it completely untouched. The user owns this file.
Phase 3 — Draft a Greg-style opener (canonical: backend prompt; this is the fallback)
Canonical path: fetch the draft_opener template from GET /api/mcp/prompts/draft_opener, substitute placeholders, run that prompt. See "Prompts come from the backend" near the top of this file.
This Phase 3 section is the fallback — used only when the backend prompt is unreachable. The methodology below is intentionally thin baseline, not the canonical IP. Iterate quality via the backend prompt, not here.
What a Greg-style opener IS
- 3-5 sentences max. Often shorter is better.
- Opens with a specific reference — a post, a milestone, a role change, something they shipped. NOT "I noticed you..." or "I came across your profile..." or "Hope you're well!"
- States offer relevance in one line — what you do, why it connects to their specific situation. Not a pitch, not a value prop deck.
- Soft ask — "happy to chat", "lmk if useful", "no pressure". Never "let's set up a call" or "15 minutes of your time".
- Reads like a smart peer noticed something — not like an SDR running a sequence.
What a Greg-style opener IS NOT
- ❌ "Hi Greg, hope you're well!" — generic greeting filler
- ❌ "I help small satellite companies with microgravity testing." — generic value prop, not personal
- ❌ "I came across your profile and was impressed by..." — fake personalisation
- ❌ "Would love to set up 15 minutes to discuss how we can help you." — pushy ask
- ❌ Any emoji
- ❌ Any "!" exclamation (unless replicating user's voice and they use them)
- ❌ Buzzword stack ("synergy", "leverage", "scale", "transform")
- ❌ Long preamble before the actual point
Structure (3 lines, often less)
- Specific opener — refers to the hook from the research
- Bridge to relevance — one sentence connecting their thing to your offer
- Soft ask — leave the door open, don't push
Examples
Good (illustrative):
Hey Greg — saw your post about the attitude control work you're shipping next quarter. We do microgravity component validation at Sunburnt Space; if you're after a final shake-down before launch, happy to chat. No pressure either way.
Good (a "they shipped something" hook):
Hey Marina — congrats on the launch of FlightDeck. The demo video looks slick. We work with founders on outbound right around your stage — if you're thinking about a more deliberate go-to-market motion, lmk and I'll send something useful.
Good (a "recent role change" hook):
Hey Tim — saw you joined Gilmour Space as Senior Propulsion Engineer last month, congrats. We work with propulsion teams on flight-readiness testing in microgravity; if it's relevant for your roadmap I'd love to compare notes. No pitch unless asked.
Bad (avoid this):
Hi Greg, hope you're well! I saw you're a Founder at Magnetorquer. I help small satellite companies with microgravity testing solutions. Would love to set up a quick 15-minute call to discuss how Sunburnt Space can help you scale your operations. Let me know what works in your calendar!
Drafting process
- Re-read the hook_for_opener field from the research blob.
- Re-read the user's offer from config.
- Read local notes.md for the prospect if any — let it shape the draft (e.g. soften the ask if user noted travel; skip the offer if user noted "in-house lab").
- Draft using the structure above.
- Self-edit pass: check it against the IS / IS NOT lists. Cut anything weak. If it could have been written about anyone, rewrite — it must reference this specific prospect's situation.
If the research failed
Use the stored contact fields (position, company, headline). Be honest in the draft — don't fabricate posts or activity. The opener will be more generic. Example fallback:
Hey Tim — noticed we connected a while back. We work with propulsion teams on flight-readiness testing — if it's relevant for your work at Gilmour, happy to share more. Otherwise no pressure.
Append the opener to messages (campaign-scoped)
GET the current campaign.messages array on the contact (you already pulled the campaign-scoped detail in Step 2a; reuse that).
Append a new entry:
{
"drafted_at": "ISO timestamp",
"channel": "linkedin",
"kind": "opener",
"draft": "<the opener text>",
"research_failed": false,
"user_notes_present": true | false
}
PUT the updated messages array to the per-campaign endpoint:
PUT <backend_url>/api/me/campaigns/<campaign_slug>/contacts/{contactSlug}
{
"messages": <the appended array>
}
This writes to campaign_contacts.messages for THIS campaign. Preserve all prior message entries — append, don't replace.
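The append-never-replace rule is worth making mechanical. A sketch of building the PUT payload (the helper name is illustrative; the entry fields match the shape above):

```python
from datetime import datetime, timezone

def append_opener(existing_messages, opener_text, research_failed, has_notes):
    """Build the messages array to PUT: prior entries preserved, new one appended."""
    entry = {
        "drafted_at": datetime.now(timezone.utc).isoformat(),
        "channel": "linkedin",
        "kind": "opener",
        "draft": opener_text,
        "research_failed": research_failed,
        "user_notes_present": has_notes,
    }
    # Copy rather than mutate, and tolerate a null/absent prior array.
    return list(existing_messages or []) + [entry]
```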
Phase 4 — Update per-campaign status after each prospect
For the prospect just drafted:
PUT <backend_url>/api/me/campaigns/<campaign_slug>/contacts/{contactSlug}
{
"status": "drafted"
}
This sets the per-campaign campaign_contacts.status to drafted. Other campaigns are not affected. Don't touch last_surfaced_at — surface-next-5 already set it. Don't mark contacted — V1 has no automated send-tracking. The user marks contacted themselves via the dashboard or backend update; that bumps contacts.last_contacted_at (global, drives the 7-day hard lockout) so the prospect can't be surfaced in any campaign for a week.
Phase 5 — After all prospects done — clear the batch and present
Once the loop completes:
Step 5a — Clear the batch (campaign-scoped)
PUT <backend_url>/api/me/campaigns/<campaign_slug>/current-batch
{ "batch": [] }
This way the next surface-next-5 call in this campaign doesn't see a stale batch. Other campaigns' batches are unaffected.
Step 5b — Present all openers in chat
Show all openers cleanly so the user can copy them. Format:
All done. Here are your <N> openers — copy, tweak if you want, send from LinkedIn:
═══ 1. Greg Coleman — Founder, Magnetorquer Pty Ltd ═══
<opener text>
LinkedIn: <url>
Notes file: <project-dir>/contacts/<slug>/notes.md
═══ 2. Marina Park — CEO, FlightDeck ═══
<opener text>
LinkedIn: <url>
Notes file: <project-dir>/contacts/<slug>/notes.md
... etc
If any prospect failed research, mark it clearly:
═══ 4. Tim Reyes — Senior Propulsion Engineer, Gilmour Space ═══
⚠ Research failed (rate-limited). Opener uses stored contact data only — more generic than usual.
<fallback opener text>
LinkedIn: <url>
Notes file: <project-dir>/contacts/<slug>/notes.md
Step 5c — Close
"When you've sent any of these, you can mark them contacted by saying 'mark Greg as contacted' — V1 doesn't track sends automatically, but the next surface skips anything in drafted for 30 days regardless.
Pipeline view: https://helperhq.co/pipeline
When you're ready for the next 5, just say 'get me the next 5'."
The pipeline URL is intentionally low-key — one line, no preamble. Per the v0.9 brief: "A short URL in the response, nothing more." The user has agency to ignore it; surfaced contacts are still in the Lead bucket on that page.
Things you must NOT do
- Do NOT do anything beyond the prospect's profile + recent posts. No company page deep-dive, no news search, no LinkedIn graph traversal. Narrow.
- Do NOT log into LinkedIn or attempt any authenticated actions. Public profile reads only.
- Do NOT fabricate posts, role history, or anything that wasn't actually visible. If research is thin, say so honestly.
- Do NOT overwrite local notes.md. Ever. The user owns it.
- Do NOT replace the contact's messages array — always append.
- Do NOT mark prospects contacted — V1 has no send tracking.
- Do NOT use emoji in opener drafts.
- Do NOT use exclamation marks in opener drafts (unless mirroring the user's voice in a future tier).
- Do NOT pitch in the opener. Soft ask only.
- Do NOT call any /api/mcp/* endpoints in V1 other than the two prompt-template GETs (/api/mcp/prompts/draft_opener and /api/mcp/prompts/research_analyze) — the rest are documented seams. V1 does research + drafting in-context.
- Do NOT modify <project-dir>/.hhq-session.json except to update jwt / jwt_expires_at after a refresh.
- Do NOT log the JWT, licence key, or auth file contents.
- Do NOT touch the user's config.
Edge cases to handle gracefully
- Profile is private / "Out of network" → research fails, fall back to stored contact data, flag in the opener block.
- No recent posts → activity level = silent, hook from role/company match instead. Be honest in the opener — no fake "saw your recent post" reference.
- Profile URL is wrong / 404 → mark research failed, fall back, suggest the user verify the LinkedIn URL.
- Prospect's company in the contact record differs from current LinkedIn profile (they changed jobs since the export) → research finds a new company. PUT a single update with both the research and the corrected company / position fields.
- Rate-limited mid-batch → mark remaining prospects as research-failed, fall back, finish the batch. Don't retry mid-flight (it'll just re-rate-limit).
- Single-name profile (no last name) → use what's available. Slug already handled by ingest-contacts (which delegates to backend).
- Two prospects in batch with the same name (unlikely) → slugs differ (server-side disambiguation), so notes files don't collide.
- /api/me/campaigns/{slug}/current-batch is empty mid-flow → another session may have cleared it. Tell the user honestly and suggest re-surfacing in this campaign.
- User interrupts mid-batch ("stop", "pause", "wait") → finish the current prospect cleanly (don't leave a half-PUT contact), then stop. The current-batch on the backend still reflects the original 5 with drafted_at set on the ones you finished. The user can re-trigger this skill — check each contact's messages field for a recent opener entry to skip already-done prospects.
- Backend down mid-batch → if Chrome research succeeded but the PUT fails, you have research data you can't persist. Hold it in conversation context, surface a partial result honestly, suggest retry.
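The "skip already-done prospects on re-trigger" check from the interrupt case can be sketched as a recency test over campaign messages (a sketch; the 24-hour window is an assumed heuristic, not a product rule, and drafted_at is assumed to be a timezone-aware ISO-8601 string as written elsewhere in this file):

```python
from datetime import datetime, timedelta, timezone

def already_drafted(campaign_messages, within_hours=24):
    """True if this contact already has a recent opener entry (resume case)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=within_hours)
    for msg in campaign_messages or []:
        if msg.get("kind") != "opener":
            continue
        drafted_at = msg.get("drafted_at")
        if drafted_at and datetime.fromisoformat(drafted_at) >= cutoff:
            return True
    return False
```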