---
name: meta-ads-daily-review
description: Daily Meta Ads campaign review for the media buyer. Pulls live API data, runs an ad set health check, a creative × audience matrix, kill/scale/watch verdicts, Meta quality flags, anomaly detection, and top priorities. Replaces the manual screenshot-to-Claude workflow. Uses threshold-based logic (0.8% kill, 1.5% target, £50 cost per form submit, 8% form submit rate, 50/30/20 split) with stage-aware checks. Supports weekly phase logic (weeks 1-6+).
---
# Meta Ads Daily Review
Automated daily campaign review skill for the media buyer. Replaces the manual process of screenshotting Meta Ads Manager results and pasting them into Claude. Pulls live API data and produces the same analysis format in one run.
## Purpose
The daily review of BOF Meta Ads campaigns currently runs by hand:
- Opening Meta Ads Manager
- Screenshotting ad set + creative performance
- Pasting screenshots into Claude
- Asking for kill/scale verdicts, quality flags, anomalies, priorities
This skill automates steps 1-3 via the Meta API and produces the same output format without any manual data entry.
## What This Skill Does
When invoked:
- Fetches live data from Meta Ads API (campaign, ad set, ad, creative, insights endpoints)
- Runs an ad set level health check (impressions, CTR, CPC, form submits, learning status)
- Builds creative × audience matrix across all ad sets
- Assigns kill/scale/watch verdicts per creative using threshold logic
- Flags Meta quality rankings (below average detection)
- Detects anomalies (CTR vs quality mismatch, delivery issues, conversion gaps)
- Generates top priorities list for the week
- Outputs a formatted markdown report matching the Claude reference output
## Analysis Framework
All verdicts use these thresholds (configurable via config.yaml):
| Metric | Threshold | Action |
|---|---|---|
| CTR kill threshold | < 0.8% | Flag for kill after sufficient impressions |
| CTR target | ≥ 1.5% | Healthy baseline |
| CTR scale signal | ≥ 2.5% consistent across ad sets | Priority scale candidate |
| Cost per form submit target | £50 | Flag if higher (BOF only) |
| Landing page form submit rate | 8% | Flag if lower (BOF only) |
| Budget split (Warm/Lookalike/Interest) | 50/30/20 | Flag if off (BOF only) |
| Min impressions for verdict | 1000 per ad set | Below = "insufficient data" |
**Stage-aware checks:** The skill detects the campaign's funnel stage from its Meta objective (OUTCOME_AWARENESS / OUTCOME_TRAFFIC / OUTCOME_VIDEO_VIEWS = TOF, OUTCOME_ENGAGEMENT = MOF, OUTCOME_LEADS / OUTCOME_SALES = BOF). Form-submit, cost-per-submit, and 50/30/20 split checks are suppressed for TOF and MOF. Judging a Profile Visits campaign on form submits is a category error.
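A minimal sketch of that stage gate, assuming illustrative constant and function names throughout (the mapping is this skill's convention, not a Meta API concept):

```python
# Sketch of the objective-to-stage mapping described above. The objective
# strings are real Meta campaign objectives; everything else is illustrative.
STAGE_BY_OBJECTIVE = {
    "OUTCOME_AWARENESS": "TOF",
    "OUTCOME_TRAFFIC": "TOF",
    "OUTCOME_VIDEO_VIEWS": "TOF",
    "OUTCOME_ENGAGEMENT": "MOF",
    "OUTCOME_LEADS": "BOF",
    "OUTCOME_SALES": "BOF",
}

# Checks that only make sense at the bottom of the funnel.
BOF_ONLY_CHECKS = {"cost_per_form_submit", "form_submit_rate", "budget_split"}

def active_checks(objective: str) -> set[str]:
    """Return the check names that apply at this campaign's funnel stage."""
    stage = STAGE_BY_OBJECTIVE.get(objective, "BOF")  # unknown objective: assume BOF
    base = {"ctr", "cpc", "learning_status", "quality_rankings"}
    return base | (BOF_ONLY_CHECKS if stage == "BOF" else set())
```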
### TOF: Cost per IG Follower (Profile Visits)
For TOF campaigns (Profile Visits in particular), the scale KPI is cost per IG follower, not CTR/CPC. Because Meta Ads API doesn't return a campaign-level follower count, the skill:
- Calls the IG Business Account endpoint on every run: `GET /{IG_BUSINESS_ACCOUNT_ID}?fields=followers_count,username` (using the existing `META_ACCESS_TOKEN`).
- Appends the reading to `ig_followers_baseline.json` with today's date.
- Computes `followers_acquired = current_total - most_recent_prior_snapshot`.
- Surfaces `cost_per_follower = campaign_spend / followers_acquired` in the report.
- Pro-rates `followers_acquired` to each ad set by its spend share so per-ad-set CPF is visible.
The first run of the day creates the baseline only (0 acquired); re-running the next day gives the first real delta. The snapshot file is committed so the baseline persists across machines.
Override: set `IG_BUSINESS_ACCOUNT_ID` in the root `.env` to point at a different IG account (default: Ben Hawksworth's business account, 17841400157052776).
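A minimal sketch of the snapshot-and-delta logic, assuming the Graph API version pin, the helper name, and the snapshot file layout (none of these are confirmed by the actual fetch_daily_review.py; `followers_count` and `username` are real IG Business Account fields):

```python
import datetime
import json
import os
import pathlib

import requests

SNAPSHOT = pathlib.Path("ig_followers_baseline.json")

def record_and_compute_cpf(campaign_spend: float) -> dict:
    """Append today's follower count and derive cost per follower."""
    resp = requests.get(
        f"https://graph.facebook.com/v19.0/{os.environ['IG_BUSINESS_ACCOUNT_ID']}",
        params={
            "fields": "followers_count,username",
            "access_token": os.environ["META_ACCESS_TOKEN"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    total = resp.json()["followers_count"]

    history = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else []
    prior = history[-1]["followers_count"] if history else None
    history.append({"date": datetime.date.today().isoformat(), "followers_count": total})
    SNAPSHOT.write_text(json.dumps(history, indent=2))

    if prior is None:  # no earlier snapshot: baseline run, nothing to compare
        return {"followers_acquired": 0, "cost_per_follower": None}
    acquired = total - prior
    return {
        "followers_acquired": acquired,
        "cost_per_follower": campaign_spend / acquired if acquired > 0 else None,
    }
```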
**Naming note:** The skill reports `form_submits`, not `calls_booked`. A Meta pixel lead event is a form submission, not a booked call. The historical ratio is roughly 4.5 form submits per booked call. The JSON output also includes an `estimated_bookings` field (`form_submits / 4.5`) flagged as directional until the client tracker sheet ground truth lands. This aligns with the dashboard schema (2026-04-17 rename).
## Learning Phase Awareness
- Every ad set is checked for `effective_status` (LEARNING, ACTIVE, LEARNING_LIMITED)
- If in the learning phase, verdicts are marked directional, not final
- No kill recommendations until 50 conversion events per ad set
- Budget changes are discouraged during learning
## Week Phase Logic
- Week 1: Pure data collection, only "watch" verdicts allowed
- Week 2: First kill/scale decisions on creatives with enough impressions
- Weeks 3-5: Copy testing phase (statement vs metric vs question hook variants)
- Week 6+: Scale winners, rotate losers
The skill reads the current week number from `config.yaml` or accepts it via the `--week` CLI flag.
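A hypothetical `config.yaml` layout; the key names and range syntax are illustrative guesses, only the values come from the thresholds and week phases documented above:

```yaml
thresholds:
  ctr_kill: 0.008              # < 0.8% flags for kill
  ctr_target: 0.015            # >= 1.5% healthy baseline
  ctr_scale: 0.025             # >= 2.5% consistent = priority scale candidate
  cost_per_form_submit_gbp: 50 # BOF only
  form_submit_rate: 0.08       # BOF only
  budget_split: [50, 30, 20]   # Warm / Lookalike / Interest, BOF only
  min_impressions: 1000        # below this: insufficient data
week: 2                        # current week, overridable via the --week CLI flag
week_phases:
  "1": watch_only
  "2": first_kill_scale
  "3-5": copy_testing
  "6+": scale_winners
```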
## Instructions for Claude
When this skill is invoked:
1. **Confirm the campaign to review**
Use the AskUserQuestion tool to ask:
Question 1: Campaign
- Header: "Which campaign?"
- Options: Pulled dynamically from the Meta API (list of active campaigns)
Question 2: Week Number
- Header: "What week is this campaign in?"
- Options: Week 1, Week 2, Week 3, Week 4, Week 5, Week 6+
Question 3: Date Range (Optional)
- Header: "Date range"
- Options: Last 7 days (default), Last 14 days, Last 30 days, Campaign lifetime
2. **Fetch live Meta Ads data:**
```bash
cd 1-meta-ads/meta-ads-daily-review && bash setup.sh --campaign "<name>" --week <N> --days <N>
```
Script pulls:
- Ad set level: impressions, clicks, CTR, CPC, spend, form submits, `effective_status`
- Ad level: same metrics + creative ID + `quality_ranking` + `engagement_rate_ranking` + `conversion_rate_ranking`
- Delivery issues: `issues_info`, `delivery_info`
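A hedged sketch of that pull, assuming the requests library, a v19.0 version pin, and a helper name that is not from the actual script (`level`, `date_preset`, and the listed fields are real Marketing API insights parameters):

```python
import os

import requests

def fetch_ad_insights(days: int = 7) -> list[dict]:
    """Pull ad-level insights for the last N days (7, 14 or 30)."""
    account_id = os.environ["META_ACCOUNT_ID"]
    resp = requests.get(
        f"https://graph.facebook.com/v19.0/act_{account_id}/insights",
        params={
            "level": "ad",
            "date_preset": f"last_{days}d",  # last_7d, last_14d, last_30d
            "fields": ",".join([
                "ad_id", "ad_name", "adset_name",
                "impressions", "clicks", "ctr", "cpc", "spend", "actions",
                "quality_ranking", "engagement_rate_ranking",
                "conversion_rate_ranking",
            ]),
            "access_token": os.environ["META_ACCESS_TOKEN"],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["data"]  # pagination omitted for brevity
```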
3. **Run the analysis framework:**
**Step A. Ad Set Health Check**
- Build table: Ad set | Impressions | CTR | CPC | Form submits | Learning status
- Mark overall health (healthy / concerning / learning)
**Step B. Creative × Audience Matrix**
- Build table: Creative | Warm retargeting CTR | Cold lookalike CTR | Cold interest CTR | Signal
- Signal column categories:
- ⭐ Strongest overall (≥2.5% across all 3)
- ⭐ Consistent across all 3 (≥ target in all)
- ⭐ [Audience] standout (only strong in one audience)
- Solid, watch (close to target)
- Mixed (varies significantly)
- Below target across board (< 1.5% in 2+ audiences)
- Too little data (< 1000 impressions)
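One way to derive the Signal column, sketched under assumptions: CTRs arrive as fractions keyed by audience, and the "close to target" band is a guess since the doc does not pin it down:

```python
def classify_signal(ctrs: dict[str, float], min_impressions: int,
                    target: float = 0.015, scale: float = 0.025) -> str:
    """Map a creative's per-audience CTRs to the Signal categories above."""
    if min_impressions < 1000:
        return "Too little data"
    values = list(ctrs.values())
    if all(v >= scale for v in values):
        return "⭐ Strongest overall"
    if all(v >= target for v in values):
        return "⭐ Consistent across all 3"
    standouts = [aud for aud, v in ctrs.items() if v >= scale]
    if len(standouts) == 1:
        return f"⭐ {standouts[0]} standout"
    if sum(v < target for v in values) >= 2:
        return "Below target across board"
    if all(v >= target * 0.9 for v in values):  # "close to target" band: assumption
        return "Solid, watch"
    return "Mixed"
```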
**Step C. Kill / Scale / Watch Verdicts**
Build three buckets:
Watch closely for scaling:
- Creatives with CTR ≥ 2.5% in 2+ ad sets
- Flag as "priority scaling at week [N+1]"
- Note the strongest audience for each
Watch, need more data:
- Creatives with impressions 1000-3000 per ad set
- Creatives with mixed signals (strong in 1, weak in 2)
Concern, underperforming:
- CTR < 1.5% in 2+ ad sets WITH sufficient impressions
- Quality ranking "Below Average" flagged by Meta
- Note: do NOT kill during learning phase
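A sketch of the bucketing, including the week-1 and learning-phase gates, assuming a pre-aggregated per-creative dict with illustrative field names:

```python
def verdict(c: dict, week: int) -> str:
    """Assign one creative to a kill/scale/watch bucket (illustrative)."""
    if week == 1:
        return "watch"  # week 1 is pure data collection
    if c["min_impressions_per_adset"] < 1000:
        return "watch, need more data"
    if c["adsets_with_ctr_ge_2_5pct"] >= 2:
        return "watch closely for scaling"
    learning = c["in_learning_phase"] or c["conversions"] < 50
    weak = c["adsets_with_ctr_lt_1_5pct"] >= 2 or c["quality_below_average"]
    if weak:
        # never a hard kill during learning; verdicts stay directional
        return "concern, underperforming (directional)" if learning else "concern, underperforming"
    return "watch"
```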
**Step D. Meta Quality Flags**
- List every ad with `quality_ranking = BELOW_AVERAGE`
- List every ad with `engagement_rate_ranking = BELOW_AVERAGE`
- Explain why it matters (Meta deprioritises delivery)
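A small helper for the two filters, reusing the `fetch_ad_insights` sketch from step 2. The exact enum strings Meta returns vary (there are graded below-average tiers), so a defensive prefix match is assumed here:

```python
def is_below_average(ranking: str | None) -> bool:
    # Matches BELOW_AVERAGE and any graded variants, whatever the casing.
    return bool(ranking) and ranking.upper().startswith("BELOW_AVERAGE")

ads = fetch_ad_insights()
quality_flags = [a for a in ads if is_below_average(a.get("quality_ranking"))]
engagement_flags = [a for a in ads if is_below_average(a.get("engagement_rate_ranking"))]
```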
**Step E. Anomaly Detection**
Detect and flag:
- CTR + quality mismatch (high CTR but below average quality → clickbait risk)
- Delivery starvation (creative getting <500 impressions while others get 5000+ → auction losing)
- Click-but-no-form-submit (ads working, landing page broken)
- Spend concentration (one creative eating >40% of ad set budget)
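A sketch of three of the four checks over the raw ad rows, reusing `is_below_average` from the Step D sketch; the click-but-no-form-submit check needs the `actions` field unpacked, so it is omitted. Thresholds are the ones listed above, the helper name is an assumption:

```python
def detect_anomalies(ads: list[dict]) -> list[str]:
    flags = []
    spend_by_adset: dict[str, float] = {}
    for a in ads:
        spend_by_adset[a["adset_name"]] = spend_by_adset.get(a["adset_name"], 0.0) + float(a["spend"])
    max_impressions = max(int(a["impressions"]) for a in ads)

    for a in ads:
        ctr = float(a["ctr"]) / 100  # the insights API reports CTR as a percentage
        if ctr >= 0.025 and is_below_average(a.get("quality_ranking")):
            flags.append(f"{a['ad_name']}: high CTR but below-average quality (clickbait risk)")
        if int(a["impressions"]) < 500 and max_impressions >= 5000:
            flags.append(f"{a['ad_name']}: delivery starvation, losing the auction to siblings")
        adset_spend = spend_by_adset[a["adset_name"]]
        if adset_spend and float(a["spend"]) / adset_spend > 0.40:
            flags.append(f"{a['ad_name']}: over 40% of its ad set's spend")
    return flags
```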
**Step F. Top Priorities This Week**
Generate ordered list of actions:
- Landing page issues (always first if present)
- Learning phase blockers (don't change budget until learning complete)
- Priority scale candidates (name specific creatives)
- Quality ranking fixes (name specific creatives)
- Kill candidates for next week review
4. **Output formatted report:**
- Save to `outputs/DAILY-REVIEW-[campaign-name]-[YYYY-MM-DD].md`
- Use the structure from the Claude reference output (ad set level, creative analysis, verdicts, anomalies, priorities)
## Data Caveat: Always Prepend
Every generated report must open with a short "DATA CAVEAT" block before the ad set table. Current caveat as of 2026-04-21:
- `form_submits` are Meta pixel lead events (form fills), not booked calls. Ground truth for booked calls lives in `calendly_bookings.json` from the dashboard pipeline (2026-04-20). Paid-ads bookings are tagged via the `-pa` slug.
- `estimated_bookings = form_submits / 4.5` is a directional fallback only. Prefer the real Calendly number when available.
- Pipeline accuracy was confirmed 2026-04-20: Calendly numbers match Ads Manager 1:1 after the double-count fix landed in the dashboard repo.
Drop the caveat entirely once estimated_bookings is replaced by the live Calendly count in the report body.
## Output Format
```markdown
# Daily Review: [Campaign Name]
**Date:** 2026-04-15
**Week:** Week 2
**Date Range:** Last 7 days
**Generated by:** meta-ads-daily-review skill
**Data freshness:** [data_fetched_at timestamp]
---
## ⚠️ DATA CAVEAT: READ FIRST
[2-4 sentences: what's fixed, what's pending ground-truth validation, directional vs exact]
---
## AD SET LEVEL: HEALTH CHECK
[Context paragraph: learning phase status, overall health]
| Ad set | Impressions | CTR (all) | CPC (all) | Form submits | Status |
|---|---|---|---|---|---|
| Warm / Retargeting | 13,640 | 2.26% | £0.70 | 0 | Learning |
| Cold / Lookalike | 11,126 | 2.26% | £0.87 | 1 | Learning |
| Cold / Interest | 13,586 | 2.27% | £0.86 | 3 | Learning |
[Summary paragraph]
---
## CREATIVE ANALYSIS: ACROSS ALL 3 AD SETS
| Creative | Warm retargeting | Cold lookalike | Cold interest | Signal |
|---|---|---|---|---|
| Graphic 2 / Claude specialist | 2.46% | 1.66% | 2.79% | ⭐ Strongest overall |
| ... | ... | ... | ... | ... |
---
## KILL / SCALE / WATCH VERDICTS
**Note:** [Impression volume caveat if relevant]
### Watch closely for scaling
- **[Creative]**: [reason]. Action next week: [action]
### Watch, need more data
- [Creative list]
### Concern, underperforming
- **[Creative]**: [reason]. Recommended action: [action]
### Meta quality flags to act on
- **[Creative]** in [ad set]: [quality issue explanation]
---
## ANOMALIES WORTH FLAGGING
- **[Creative/issue]**: [explanation + why it matters]
---
## TOP PRIORITIES FOR THIS WEEK
1. [Highest priority action]
2. [Second priority]
3. [Third priority]
4. [Fourth priority]
---
**Next review:** [suggested date]
```
## Required Environment
`.env` file (in the skill folder):
```
META_ACCESS_TOKEN=<token>
META_ACCOUNT_ID=<account_id>
```
Reuses the same token as the meta-ad-copy skill. Antonio will provide the new token once it's configured.
Files
skill.md: this file
fetch_daily_review.py: Meta API data fetcher + analysis runner
config.yaml: threshold logic + week phase rules (configurable)
setup.sh: venv + dependency installer, entrypoint
outputs/: generated daily reports
## Ben's Voice Rules (for the output narrative)
The skill uses plain analytical prose, not Ben's marketing voice. But for any commentary/recommendations:
- British English (£, optimise, behaviour, organisation)
- No dashes, no AI filler phrases
- Short, direct sentences
- "Business owners and service providers" (never "coaches")
- No corporate jargon
## Output Rules: Hard Nevers
These are the framework rules from 1-meta-ads/CLAUDE.md §6, applied to every report this skill generates. Claude must respect all of them when writing the prose sections (read, summary, anomalies, priorities).
Never in output:
- Kill recommendations on ad sets with `status=PAUSED`. If it's already paused, there's nothing to kill. Filter by active status before issuing kill verdicts.
- "Simplify the form", "reduce landing page friction", "remove qualification". The application form is the qualification filter by design. A form-submit-to-booking drop is the filter working as intended, not a UX problem.
- Treat pixel `lead` events as booked calls. Always cross-reference Calendly.
- Team-member names in free-text recommendations. Write about the system and the data, not the people maintaining it.
- Internal snake_case field names (`paid_calls_booked`, `taken_no_outcome`, etc.) in prose. Use natural language.
- Assert "the call happened" as fact. The system sees scheduled events, not attendance. Write "scheduled calls" or "bookings", not "calls taken".
- Speculate about sales-call quality, pitch clarity, onboarding friction. The data shows scheduling and revenue, not conversation content.
- Directive "kill immediately". Reframe as "watch closely" or "concern" with the reasoning.
- Em-dashes (—), en-dashes (–) anywhere. Use colon, comma, parentheses, or split into two sentences.
- Pad numbers to look precise when the underlying data is directional. If the metric is ±10%, do not write it to the penny.
Always in output:
- Data freshness stamp (when the numbers were last refreshed, from `data_fetched_at`).
- Data caveat block for any Meta pixel number (use the canonical caveat from the Data Caveat section above).
- Stage-aware verdicts. Never issue a verdict without declaring the stage.
- Per-stage reasoning. If recommending kill or scale, explain which stage rule is being applied.
- Pipeline state when revenue-related. If there are pending paid bookings, say so explicitly rather than implying "0 revenue means ads do not convert".
## Integration
Daily workflow:
- Morning: run `/meta-ads-daily-review`
- Answer 3 quick questions (campaign, week, date range)
- Skill pulls data + generates report in ~30 seconds
- Review report, action top priorities
- No screenshots, no copy-paste
Weekly complement: Output feeds into the meta-ads-weekly-intelligence skill (reads from the dashboard GitHub repo) for the bird's-eye funnel view at the founder level.
## Version
v1.0 (initial): built 2026-04-15 based on the manual screenshot review workflow it replaces, and Ben's Meta Ads brief.
Pending:
- New Meta Ads token configuration
- Raw data push to the dashboard repo
- README in the dashboard repo pointing to clean/raw data files