| name | loci-preflight |
| description | Execution-aware preflight analysis (control-flow, timing/energy) on the functions an edit touches and the callees of any new code, using compiled artifacts, to catch problems while the design is still cheap to change.
| when_to_use | MANDATORY in /plan mode when user describes new logic or a modification. Triggers: "implement", "add", "write a function", "new feature", "how should I", "modify", "refactor", "guard". Do NOT invoke for review/explain requests or direct edits outside plan mode.
loci-preflight
This skill is a thinking tool, not a write-gate. Run it during planning —
while you are still deciding what to write — so the execution fit is visible
before any code changes. The output shapes how you write, not just whether.
Preflight requires compiled artifacts. It does not fall back to source-level
reasoning. If the project cannot be compiled or the architecture is not
supported, the skill stops and tells the user why.
Tool boundary: asm-analyze only — never objdump
All assembly, CFG, symbol, and ELF inspection in this skill goes through
<asm-analyze-cmd>. Do not use objdump, readelf, addr2line, or
nm as substitutes — asm-analyze produces the annotated CFG and per-block
CSV the LOCI MCP expects, and binutils output is not equivalent. If
asm-analyze returns an error, surface it and stop; do not fall back to
objdump.
Always pass --arch <loci_target> on every asm-analyze call, reading the
value verbatim from the SessionStart LOCI target: line.
When to run
Run preflight as part of forming your plan, immediately after you understand
what function(s) you need to write and before you issue any Edit/Write call:
- User describes the task
- You read the relevant files to understand the call site and surrounding code
- ← run preflight here, while thinking
- Adjust the plan based on findings
- Write the code
Plan mode: Always emit the full preflight report (Execution, CFG Analysis,
Execution fit, footer) in the response text — never inside the plan body.
The plan body should contain only the adjusted implementation steps that
incorporate preflight findings. The user must see the complete structured
report in the response, not a summary buried in the plan context.
Step 0: Check session context
Check that the loci MCP is connected and authenticated and that its tools are
visible before running the preflight steps that require it. If the MCP is
unavailable, tell the user:
LOCI MCP server must be authenticated and connected for this skill to function.
Please run /mcp in Claude Code to manage MCP servers, then approve the loci
server. If it does not appear, restart Claude Code — the plugin registers it
automatically on startup.
Read the persisted detection results from the <project-context> path (the
per-session keyed file, listed as project context: in this session's
context). It is written by session-init.sh at session start and is the single
source of truth for compiler, architecture, and build system.
Do NOT re-run detection scripts.
{
"compiler": "...",
"build_system": "...",
"architecture": "...",
"loci_target": "...",
...
}
If the file does not exist, stop and tell the user:
LOCI session context not found. Please restart Claude Code so the plugin
setup runs and detects the project environment.
Also check the system-reminder block emitted at session start for:
Target: <target>, Compiler: <compiler>, Build: <build>
LOCI target: <loci_target>
Map the LOCI target to loci MCP supported architectures and binary targets:
| LOCI target | CPU |
|---|---|
| aarch64 | A53 |
| armv7e-m | CortexM4 |
| armv6-m | CortexM0P |
| tc399 | TC399 |
The CPU column identifies which real silicon hardware the LOCI timing and
energy predictions are traced from.
If the architecture is not in this table, emit and stop:
## Preflight: STOPPED
Architecture not supported.
Supported: aarch64, armv7e-m, armv6-m, tc399
If no compiler was detected, emit and stop:
## Preflight: STOPPED
No compiler detected in session context.
Action: resolve the build environment, then re-run preflight.
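To make the Step 0 gate concrete, here is a minimal sketch of the same checks
in Python; the field names follow the JSON example above, the stop messages
mirror the templates in this step, and the helper itself is illustrative
rather than part of the plugin.

```python
import json
from pathlib import Path

# Real silicon each supported LOCI target is traced from (table above).
SUPPORTED_TARGETS = {
    "aarch64": "A53",
    "armv7e-m": "CortexM4",
    "armv6-m": "CortexM0P",
    "tc399": "TC399",
}

def check_session_context(context_path: str) -> dict:
    """Hypothetical sketch of the Step 0 gate: read the persisted detection
    results and stop (raise) with the same messages the skill would emit."""
    path = Path(context_path)
    if not path.exists():
        raise RuntimeError(
            "LOCI session context not found. Please restart Claude Code so "
            "the plugin setup runs and detects the project environment."
        )
    ctx = json.loads(path.read_text())

    if ctx.get("loci_target", "") not in SUPPORTED_TARGETS:
        raise RuntimeError(
            "## Preflight: STOPPED\nArchitecture not supported.\n"
            "Supported: aarch64, armv7e-m, armv6-m, tc399"
        )
    if not ctx.get("compiler"):
        raise RuntimeError(
            "## Preflight: STOPPED\nNo compiler detected in session context."
        )
    return ctx
```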
Step 1: Compile the affected source(s) via build-metadata
Always compile the source file(s) whose callees the new code will invoke
through <build-metadata-cmd>. Do not reuse an existing .o or .elf
from the project's own build — LOCI needs the compiler, flags, and version it
controls so that the post-edit rebuild can diff apples-to-apples.
Read build-metadata command:, asm-analyze command:, venv python:, and
plugin dir: from the SessionStart context. For each source:
<build-metadata-cmd> compile \
--source <path/to/src.cpp> \
--loci-target <loci_target> \
--context "<project-context>" \
--phase preflight
build-metadata resolves flags through a typed cascade — each step is
recorded in the .meta.json sidecar under flag_source_v2.attempts:
- User override (.loci-build/flags.json, LOCI_EXTRA_CFLAGS)
- compile_commands.json (exact)
- make --dry-run against the project's own makefile (exact)
- Sibling .obj/.o DWARF in the build directory (high)
- Same-stem .obj/.o DWARF near the source (high)
- Linked ELF DWARF (medium; prefers CU whose DW_AT_name matches source)
- TI .projectspec XML — -I/-D only, CPU stripped (medium, partial)
- Makefile regex scan — augmenter only (low, partial)
- Hardcoded defaults — last resort with a warning
It guarantees -g and -c, and writes .loci-build/<loci_target>/<basename>.o
plus .loci-build/<loci_target>/<basename>.o.meta.json. The compiler /
flags / version / discovery tier are recorded in the sidecar; post-edit
calls build-metadata diff to verify parity. Do not print the
build-metadata block to the user — the sidecar is the source of truth,
and the block is intentionally suppressed to keep the skill output focused
on the analysis.
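If you want to eyeball the flag provenance yourself, a hedged sketch of
reading the sidecar follows; flag_source_v2.attempts is the only field name
taken from this skill, and the entries are printed verbatim because their
internal layout is not documented here.

```python
import json
from pathlib import Path

def flag_discovery_summary(meta_path: str) -> None:
    """Hypothetical: print which cascade steps were tried for this .o.
    Only 'flag_source_v2.attempts' comes from the skill text; the shape of
    each attempt entry is an assumption and may differ in practice."""
    meta = json.loads(Path(meta_path).read_text())
    attempts = meta.get("flag_source_v2", {}).get("attempts", [])
    for attempt in attempts:
        # Each entry is printed verbatim so no assumed keys are required.
        print(json.dumps(attempt))
```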
Validate the .o — a standalone -c compile can exit 0 yet produce an
empty object file when the source is wrapped in #if / #ifdef guards whose
defines (-D) were not on the command line. After build-metadata compile
succeeds, run:
<asm-analyze-cmd> extract-symbols --elf-path .loci-build/<loci_target>/<basename>.o --arch <loci_target>
If the result shows 0 symbols or returns an error mentioning "no code" or
"preprocessor", the target function was compiled out. In that case ask the
user for the -D flags the project build system uses, re-run
<build-metadata-cmd> compile, and re-validate.
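A sketch of that validation, assuming the extract-symbols output is JSON with
a symbols array (an assumption, not a documented schema); the decision logic
is what matters: zero symbols or a "no code" / "preprocessor" error means the
function was compiled out.

```python
import json
import subprocess

def object_has_code(asm_analyze_cmd: list[str], obj_path: str, arch: str) -> bool:
    """Hypothetical sketch of the compiled-out check for a freshly built .o."""
    proc = subprocess.run(
        [*asm_analyze_cmd, "extract-symbols",
         "--elf-path", obj_path, "--arch", arch],
        capture_output=True, text=True,
    )
    blob = (proc.stdout + proc.stderr).lower()
    if proc.returncode != 0:
        if "no code" in blob or "preprocessor" in blob:
            return False                   # compiled out behind #if/#ifdef guards
        raise RuntimeError(proc.stderr)    # any other error: surface it and stop
    try:
        symbols = json.loads(proc.stdout).get("symbols", [])  # assumed field name
    except json.JSONDecodeError:
        return True   # output not JSON: don't guess, treat the object as populated
    return len(symbols) > 0
```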
Secondary path: existing binary
Use a full binary (.elf, .out) for analysis only if the callees span multiple
compilation units and linking is needed. You MUST still run
<build-metadata-cmd> compile for the relevant source file — the .o +
.meta.json pair is what the pre-edit hook snapshots, and what post-edit
compares against. Skipping it breaks the entire pre/post chain.
Hard stop: build-metadata compile fails
If <build-metadata-cmd> compile exits non-zero, emit stderr verbatim and
stop. Do NOT paraphrase, do NOT proceed to analysis. The stderr already
carries the source, flag-source trace, and remediation options.
## Preflight: STOPPED
build-metadata compile failed for <source>.
<stderr from the command, verbatim>
Step 2: Call graph and timing/energy analysis
Read asm-analyze command:, venv python:, and plugin dir: from the LOCI session context (system-reminder at session start). Use these as <asm-analyze-cmd>, <venv-python>, and <plugin-dir> in the commands below.
The goal is to analyze the functions the edit will affect — for new code, the
callees it will invoke; for a modification, the function itself (plus any new
callees) — before writing anything.
Extract assembly
Extract CFGs for the callees the new function will invoke:
<asm-analyze-cmd> extract-assembly --elf-path <.o or binary> --functions <callee_1,callee_2...> --arch <loci_target>
The JSON output contains the control_flow_graph field with annotated CFGs in
a text format optimized for LLM analysis, plus the timing_csv_chunks,
timing_csv, and timing_architecture fields needed for the MCP call.
Extract fields with jq, not python -c. Save the extract JSON inside
the project (e.g. .loci-build/extract.json — never /tmp on Windows), then:
<asm-analyze-cmd> extract-assembly --elf-path <…> --functions <…> --arch <loci_target> > .loci-build/extract.json
jq -r '.control_flow_graph' .loci-build/extract.json # annotated CFG text
jq -r '.timing_architecture' .loci-build/extract.json # arch string for MCP
jq -c '.timing_csv_chunks[]' .loci-build/extract.json # one chunk per line, pass to MCP
Timing and energy via LOCI MCP
Immediately after extraction, get hardware-accurate timing and energy for the
callees:
Call mcp__plugin_loci_loci__get_assembly_block_exec_behavior for all chunks in
parallel (one call per chunk, all in the same response):
csv_text: the chunk
architecture: the timing_architecture field from the output above
IMPORTANT: Issue all chunk calls simultaneously in a single message — do NOT
call them sequentially. Concatenate the result CSVs (skip duplicate headers)
before computing per-callee metrics.
Compute per-callee:
- Worst path = execution_time_ns + std_dev_ns
- Energy = energy_ws (report in uWs; convert from Ws by multiplying by 1e6)
The MCP response CSV columns are exactly: function_name, std_dev_ns,
execution_time_ns, energy_ws. Reference those column names literally
when reading rows — there is no bare std_dev column.
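As a sketch of that aggregation, assuming the block rows have already been
filtered to the hot-path blocks identified from the CFG; the column names are
exactly the four listed above, and the helper names are illustrative.

```python
import csv
import io
from collections import defaultdict

def merge_chunk_csvs(chunk_results: list[str]) -> list[dict]:
    """Concatenate the per-chunk result CSVs, keeping a single logical header."""
    rows: list[dict] = []
    for chunk in chunk_results:
        rows.extend(csv.DictReader(io.StringIO(chunk)))
    return rows

def per_callee_metrics(hot_path_rows: list[dict]) -> dict[str, dict]:
    """hot_path_rows must already be filtered to the hot-path blocks chosen
    from the CFG. Blocks are grouped by callee by stripping the _0x<hex>
    block suffix from function_name."""
    out: dict[str, dict] = defaultdict(lambda: {"worst_ns": 0.0, "energy_uws": 0.0})
    for row in hot_path_rows:
        callee = row["function_name"].rsplit("_0x", 1)[0]
        out[callee]["worst_ns"] += float(row["execution_time_ns"]) + float(row["std_dev_ns"])
        out[callee]["energy_uws"] += float(row["energy_ws"]) * 1e6   # Ws to uWs
    return dict(out)
```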
Sum worst-case timings and energy across the hot-path call chain — but
not by adding the bare CSV execution_time_ns of every hot-path
block. Hot-path blocks that end in bl / blx are call sites: the
MCP-returned cost for that single block reflects only the branch-only /
single-instruction call-site cost, NOT the cost of the callee's body.
You MUST expand every such block first (see next sub-step) before summing.
If the cumulative expanded chain exceeds a known deadline or energy
budget, flag it now — before any code is written.
Expand bl / blx call-site rows
For every block on the hot path whose disassembly ends in bl / blx
(or whose CFG terminator is annotated (external-call ...),
→ <callee_symbol>, or (unresolved reloc)):
- Identify the callee. Read the symbol from the CFG annotation and/or the
bl instruction's target. Strip any _0x<hex> block suffix — you want the
function name (e.g. ClockP_start, xTimerCreateStatic).
- In-binary callee — rows whose function_name starts with <callee>_ are
present in the same MCP response. Walk the callee's hot path through its
CFG, then compute:
callee_worst_ns = Σ over callee hot-path blocks of (execution_time_ns + std_dev_ns)
callee_energy_ws = Σ over callee hot-path blocks of energy_ws
Replace the call-site cost with bl_cost + callee_worst_ns (and energy with
bl_energy + callee_energy_ws). If the callee itself contains a bl to
another in-binary symbol, recurse one more level. Stop at recursion depth
2 to bound work; if a deeper chain is on the hot path, surface it as a CFG
note rather than recursing indefinitely.
- External callee — function_name prefix <callee>_ is NOT in the response
(the callee's .o was not in --functions / --elf-path, e.g. FreeRTOS /
vendor library symbols). Keep bl_cost as a lower bound for this site.
Do NOT silently accept it as the call-site cost. You MUST:
  - Add a CFG-Analysis line: ⚠️ external callee body unmeasured — <callee>
    figure is a lower bound.
  - Append (≥ <total> ns — external callees unmeasured) to the Latency
    row's Note in the conclusion table.
  - Where reasonable, suggest re-extracting with the callee's .o added so
    the next pass measures the body.
The hot-path total is the sum over all hot-path blocks after every
bl-terminated block's cost has been replaced by its expanded form per the
rules above. Treating a bare bl row as the full call-site cost silently
understates timing whenever the hot path traverses an in-binary callee, and
it understates external-callee cost without ever flagging the figure as a
lower bound.
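A minimal sketch of that expansion bookkeeping, assuming per-function
hot-path block lists and call-site targets have already been derived from
the CFG; every structure and helper name here is illustrative, not plugin
API.

```python
from dataclasses import dataclass

@dataclass
class Cost:
    worst_ns: float = 0.0       # sum of (execution_time_ns + std_dev_ns)
    energy_ws: float = 0.0
    lower_bound: bool = False   # True if any external callee body is unmeasured

def expand_hot_path(fn: str,
                    hot_blocks: dict[str, list[dict]],
                    call_target: dict[str, str],
                    depth: int = 0) -> Cost:
    """hot_blocks[fn]   : MCP rows for fn's CFG-selected hot-path blocks.
       call_target[blk] : callee symbol for a block ending in bl/blx, if any.
       Expansion is capped at recursion depth 2; deeper chains become notes."""
    total = Cost()
    for row in hot_blocks[fn]:
        cost_ns = float(row["execution_time_ns"]) + float(row["std_dev_ns"])
        energy = float(row["energy_ws"])
        callee = call_target.get(row["function_name"])
        if callee and callee in hot_blocks and depth < 2:
            body = expand_hot_path(callee, hot_blocks, call_target, depth + 1)
            cost_ns += body.worst_ns            # bl cost + callee body worst
            energy += body.energy_ws
            total.lower_bound |= body.lower_bound
        elif callee and callee not in hot_blocks:
            total.lower_bound = True            # external: bl cost stays a lower bound
        total.worst_ns += cost_ns
        total.energy_ws += energy
    return total
```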
If modifying an existing function and a .o.prev exists, also extract timing
and energy for the baseline (pre-edit) function. Compute delta:
diff_pct = ((post_value - pre_value) / pre_value) * 100
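A small sketch of the delta and noise-margin classification that the later
"Reason over results" step applies; the helper name is hypothetical.

```python
def classify_delta(pre_ns: float, post_ns: float, std_dev_ns: float) -> tuple[float, str]:
    """Return (diff_pct, verdict): 'stable' when |delta| is within measurement
    noise (std_dev_ns), 'real change' when it exceeds it."""
    delta = post_ns - pre_ns
    diff_pct = (delta / pre_ns) * 100.0
    verdict = "stable" if abs(delta) < std_dev_ns else "real change"
    return diff_pct, verdict
```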
If the MCP is unavailable, skip timing/energy, note "(timing/energy
unavailable — MCP not connected)", and follow the Step 0 guidance for
telling the user.
If the MCP tool returns an error containing "limit reached" or "quota",
stop the skill entirely — do not continue with CFG analysis or
escalation triggers. Instead, output the quota message with reset time
and upgrade CTA:
LOCI usage quota reached — preflight analysis skipped.
<server error message verbatim — includes usage/limit, reset countdown, and upgrade link>
The server message already contains reset time and upgrade CTA, e.g.:
"Daily token limit reached (31,000 / 30,000 tokens). Resets in 4h 23m.
Upgrade to Premium at auroralabs.com for 300,000 tokens/day."
Show it verbatim. Then end the skill.
If a chunk call returns any other error (not quota, not "unavailable"),
treat it as MCP unavailable for that chunk's callees: skip timing, flag
each affected callee with ⚠️ RISK: timing data unavailable for <callee>
in CFG Analysis, and continue with CFG-only analysis.
Analyze the CFG output
Check the CFG text from the extract-assembly output for structural hazards:
- Missing declarations: are callees present in the binary with the expected
signatures? If a callee is absent, flag a missing forward declaration or
linkage issue.
- Indirect calls: any bl to a register in a callee's CFG — flag as a
potential CFI hazard.
- Recursion/cycles: back edges in the CFG with no visible exit condition —
flag unbounded recursion.
- Latency: use the MCP timing results above; flag any callee whose worst
path violates a timing budget, or where the cumulative hot-path chain
exceeds a known deadline.
- Energy: use the MCP energy results above; flag any callee or hot-path
chain whose energy cost is notably high relative to the use case (e.g.,
battery-powered device, ISR context, tight power budget).
Reason over results
After analyzing the CFG and receiving LOCI results, reason through the
following before proceeding to output. This is a mandatory thinking step —
do not skip it when results look clean. Increment R (reasoning cycle
counter) by 1 now.
Interpretation questions:
- What is this function's role in the system — is it on a hot path, ISR,
periodic task, or called once? This determines whether any timing delta
is critical, advisory, or irrelevant.
- If .o.prev exists: is |delta| < std_dev_ns? If yes — change is within
measurement noise, treat as stable. If |delta| > std_dev_ns — change is
real; flag it. If no .o.prev: this is the first measurement — record these
numbers as the baseline and note no prior exists for comparison.
- Does std_dev_ns indicate a stable path or high hardware variance — and why
(cache sensitivity, branch misprediction, pipeline stalls visible in CFG)?
- Is a timing budget known from the session context? If yes, compare hot-path
worst against it and flag if exceeded. If no budget is known, report the
number and skip the fit assessment.
- What does the CFG structure explain about the timing — which blocks
dominate, are there expensive paths the new code will always hit?
- Has every hot-path bl / blx site been expanded per the
"Expand bl / blx call-site rows" step? If a callee's body rows are present
in the MCP response but its bare bl cost is still what's flowing into the
Latency total, the number is the entry-block understatement — re-aggregate
before continuing. If a callee is external (no <callee>_* rows), is the
lower-bound annotation in the Latency Note?
- Is the hot-path energy distribution balanced across callees, or does one
callee dominate? If dominated, that callee is the leverage point — plan
to cache its result, call it less frequently, or substitute a lighter alternative.
- Do any CFG findings (indirect calls, recursion, missing declarations) change
the design — does the plan need a guard, a different callee, or a linkage fix?
- Synthesize per-row Status: when multiple sub-findings roll up to the
same Gate (e.g. several CFG hazards under Safety, both worst-case latency
and dominance under Performance), the row's Status is the worst of the
contributors and the Note lists them comma-separated, worst-first.
- Verdict cause comes from sub-findings, not Gate names: the
ADJUST PLAN / STOP one-sentence cause lifts the lead item from the
driving row's Note (e.g. "STOP — unbounded recursion blocks plan", not
"STOP — Safety row is ❌"). Gate names are for the table; the verdict
speaks in concrete findings.
Escalation triggers (run skill inline, then reason over its results):
Escalate to stack-depth when — increment R by 1 at trigger:
- Execution context is ISR, HWI, or interrupt callback, AND call chain
depth > 3 levels visible in CFG, OR
- Recursion already flagged in CFG analysis above, OR
- Plan adds a new RTOS task (xTaskCreate, Task_construct, osThreadNew) that
needs stack sizing, OR
- Plan introduces large local variables on stack (buffers, arrays, C++ objects
with non-trivial constructors), OR
- Plan adds a known-deep callee (printf, snprintf, crypto, TLS functions).
After stack-depth returns, reason over its results — increment R by 1:
- Does worst-case stack depth fit the task's or ISR's configured stack budget?
- Are there large frames that could move to static or heap allocation?
- Does any frame in the chain add cost the plan can eliminate?
- Could the call chain be flattened to reduce depth?
→ adjust plan based on conclusion before proceeding.
Escalate to memory-report when — increment R by 1 at trigger:
- The plan introduces significant new static allocations (large buffers,
global arrays, static structs) visible from reading the source, OR
- .o.prev exists and the plan grows or restructures existing data sections.
After memory-report returns, reason over its results — increment R by 1:
- Does the new allocation fit within available ROM/RAM headroom?
(answerable only if map file was provided — memory_regions shows usage %;
without map file, report section size delta only)
- Which region is under most pressure after the change?
- Does the plan need to reduce static footprint before proceeding?
→ adjust plan based on conclusion before proceeding.
Re-query loop
After reasoning, check whether a better candidate exists before committing to
the plan. If any of the following is true, go back to Extract assembly with
the alternative callees and repeat through Reason over results:
- Reasoning identified a lighter or safer alternative callee worth evaluating
- A flagged callee (timing violation, CFI hazard, recursion) has a named alternative
visible in the source files already read
- Hot-path energy is dominated by one callee that may have a lighter variant
- The plan for the new function changed (different call sequence, new callees
introduced) and those callees have not yet been measured by LOCI — re-query
with the new callee set before finalizing the plan
Increment R by 1 and M by the number of new MCP calls for each re-query cycle.
Cycle limit: 3 re-query iterations maximum. If the limit is reached without
a stable plan, emit the best candidate found and note the cycle limit was hit.
Convergence condition — exit the loop when (see the sketch after this list):
- The plan is stable (no new callees to evaluate and no unresolved flags), OR
- All remaining flags are ✗ BLOCK (require user decision, not further querying), OR
- The cycle limit is reached.
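The exit logic of the loop, expressed as a hedged sketch; the flag and
candidate structures are placeholders for whatever reasoning state you are
tracking.

```python
MAX_REQUERY_CYCLES = 3

def should_requery(cycle: int,
                   new_callees_to_evaluate: list[str],
                   unresolved_flags: list[dict]) -> bool:
    """Exit when the plan is stable, when every remaining flag is a BLOCK
    that needs a user decision, or when the cycle limit is hit."""
    if cycle >= MAX_REQUERY_CYCLES:
        return False                      # emit the best candidate, note the limit
    if not new_callees_to_evaluate and not unresolved_flags:
        return False                      # plan is stable
    if unresolved_flags and all(f.get("level") == "BLOCK" for f in unresolved_flags):
        return False                      # user decision required, not more querying
    return True
```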
Output format
Emit the preflight report in the response text, before describing what
you will write. In /plan mode, the report goes in the response — NOT
inside the plan body.
The output has three blocks in order: (1) conclusion table, (2) voice
remark, (3) LOCI footer. No free-form prose sections, no multi-paragraph
reasoning write-ups, no per-callee enumerations. The reasoning happens
in Step "Reason over results" above — it's mandatory and increments R
— but the OUTPUT of the reasoning lands as Status + Note in table rows.
The build-metadata block from build-metadata compile is intentionally
NOT shown to the user. Compiler/flag provenance lives in the .meta.json
sidecar; build-metadata diff surfaces its own LOCI · build mismatch
block on its own when parity actually breaks, and that is the only case
the user needs to see it.
Conclusion table — structure
Header:
## Preflight: <FunctionName>
Followed by the conclusion table. Icon vocabulary: ✅ PASS · ⚠️ WARNING ·
❌ FAIL.
Row-inclusion rules:
- Include a row only if the gate actually executed this run.
- Include a row only if there is something to report (skip "Recursion ✅
none" noise rows).
- Every ⚠️ / ❌ row MUST cite a reason in the Note column — no icon
without a cause. The Note is the one-line synthesis of the "Reason
over results" pass for that gate.
- Skipped gates are omitted (no fourth "N/A" icon).
Row catalogue (order when present):
- Safety — fires when CFG analysis surfaces a structural hazard
(missing declaration, indirect call, recursion / cycle). Status:
❌ for unbounded recursion or a BLOCK-level missing declaration;
⚠️ for benign-but-noteworthy hazards (function-pointer dispatch,
bounded recursion, weak-symbol miss); otherwise the row is omitted.
Note names the specific hazard(s).
- Performance — fires when MCP timing returned. Captures hot-path
worst-case latency, hot-path dominance (one callee >60% of budget),
and noise margin (only when .o.prev exists: did the delta exceed
std_dev_ns?). Status: ✅ within budget and within noise; ⚠️ near
budget OR delta exceeds std-dev; ❌ over budget. Note format:
worst <X> µs (vs. <budget> when known); dominant: <callee> (<pct>%).
- Energy — fires when MCP returned energy. Threshold follows the
target context: ISR / battery-powered tighter than once-per-boot.
Note format: <X> µWs.
- Stack — only when stack-depth was invoked this run. Note:
stack: <N> B (<usage>%) — <verdict> (verbatim from stack-depth).
- Memory — only when memory-report was invoked this run. Note:
memory: ROM <X>% / RAM <Y>% — <verdict>.
Build success and symbol-resolution are NOT table rows. The
LOCI · build block at the top already reports compiler/flags/target.
If compile or symbol-extract fails, the skill STOPs before reaching
the conclusion table, so there is no state in which a "Build ✅" row carries
new information.
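The worst-of-contributors rollup described in "Reason over results" can be
pictured like this; the record shapes are illustrative and the icons match
the table vocabulary.

```python
SEVERITY = {"✅": 0, "⚠️": 1, "❌": 2}

def roll_up_gate(findings: list[tuple[str, str]]) -> tuple[str, str]:
    """findings: (status_icon, one-line cause) pairs for one Gate.
    The row's Status is the worst contributor; the Note lists causes
    worst-first, comma-separated."""
    ordered = sorted(findings, key=lambda f: SEVERITY[f[0]], reverse=True)
    status = ordered[0][0]
    note = ", ".join(cause for _, cause in ordered)
    return status, note

# e.g. roll_up_gate([("⚠️", "near budget"), ("❌", "over budget on hot path")])
# -> ("❌", "over budget on hot path, near budget")
```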
Conditional per-callee breakdown (between table and verdict)
Per-callee timing is usually hidden to keep clean runs compact, but it
appears automatically when the engineer needs it. Render a "Hot-path
breakdown" block between the table and the verdict line WHEN any of
these triggers match:
- The Performance row's status is ⚠️ or ❌, OR
- The Performance Note names a dominant callee (>60% of hot-path worst)
Show top-5 callees along the hot path, sorted by
worst_ns_summed_across_callee_hot_path desc. The per-callee
worst_ns here is the summed body cost, NOT the entry-block
worst — same expansion as the Step 2 sub-step. External callees
appear with ≥ <bl_cost> and a (body unmeasured) tag:
Hot-path breakdown (top-5 by worst):
<in_binary_callee_1> <summed_worst_ns> (<pct>%) <summed_energy_uWs>
<in_binary_callee_2> ...
<external_callee> ≥ <bl_cost_ns> (<pct>%) ≥ <bl_energy_uWs> (body unmeasured)
...
Omit this block when neither trigger matches (clean runs stay short).
When fewer than 5 callees contributed to the hot path, show what's
there — don't pad.
Table footer (always): bolded single-line verdict.
Execution fit: **GOOD** — proceed with plan ·
**ADJUST PLAN** — <one-sentence change> ·
**STOP** — <one-sentence reason>
Template
## Preflight: <FunctionName>
| Gate | Status | Note |
|--------------------------|:------:|-----------------------------------|
| <row 1 when applicable> | ? | <cited reason> |
| ... | ? | ... |
<Hot-path breakdown block — only if Performance ⚠️/❌ or its Note names a dominant callee>
Execution fit: **<GOOD|ADJUST PLAN|STOP>** — <one sentence>
Example (typical clean run, ~10 lines)
## Preflight: process_message
| Gate | Status | Note |
|--------------|:------:|-----------------------------------|
| Safety | ⚠️ | dispatch via function pointer — benign |
| Performance | ✅ | hot-path worst 1.8 µs |
| Energy | ✅ | 0.05 µWs |
Execution fit: **GOOD** — proceed with plan
For modifying an existing function with .o.prev available, the
Performance row's Note carries the noise-margin sub-finding
(|delta| vs std_dev_ns). The Before/After comparison lives inside
that Note, not as a separate Delta block.
Re-reasoning triggers (table-driven)
Before emitting the final conclusion table, inspect what the first-pass
analysis produced. If any of the row patterns below matches, loop back
— re-query MCP, escalate, or re-read source — BEFORE emitting. Each
looped-back pass increments R (co-reasoning); each extra MCP call
increments M. The table the user sees is the post-loop version, not
the first-pass draft.
| Row pattern | Trigger |
|---|---|
| Performance Note shows dominance > 80% | Re-query MCP on the dominant callee's per-block timings (not just the entry block). One extra MCP call. Often reveals a specific block as the leverage point, which the hot-path-summary hid. |
| Safety ❌ with missing-decl sub-finding | Before STOP: re-read the source to check for alternate callees that share the name (macro redefinition, weak symbol, LTO-inlined). Don't STOP on the first miss; verify. |
| Safety with indirect-call sub-finding AND function is on an ISR path | Escalate to stack-depth even if usual triggers don't match — indirect dispatch can hide call-graph depth from static analysis. |
| Safety with recursion sub-finding | Escalate to stack-depth (already the existing rule, restated here for table-completeness). |
| Performance Note shows \|delta\| > std_dev_ns | |
Per-callee timing detail appears in the conditional "Hot-path breakdown"
block above, but only when the Performance row is ⚠️/❌ or its Note names
a dominant callee — clean runs skip it to stay short. If the engineer
needs per-block breakdown beyond top-5 callees, re-extract via
asm-analyze extract-assembly directly.
Adjusting the plan based on findings
The value of running preflight during thinking is that findings change the
plan, not just add comments:
- A missing forward declaration → add it as a step before the function edit
- An unbounded loop in a callee → plan to add a termination guard or budget
- A callee timing violation → plan to cache the result, call asynchronously,
or choose a lighter alternative before committing to the design
- An energy concern → plan to batch calls, use a lighter alternative, or move
work off the hot path
Write the adjusted plan, then write the code. Do not write the code and then
note risks afterward — that defeats the purpose.
LOCI voice remark
Before the footer, add one short LOCI voice remark (max 15 words) that
acknowledges the user's work grounded in a specific number from the
analysis. Attribute improvements to the user ("clean work", "smart move",
"tight code"). For concerns, be honest and constructive with specifics.
Skip if the analysis produced no results or the user needs raw data only.
LOCI footer
After emitting the preflight report (or all-clear shorthand), append the
footer as the last thing printed — only if N > 0 (at least one
function was sent to LOCI). If no functions were processed (MCP
unavailable or no functions to measure), do NOT emit the footer.
Record cumulative stats (run via Bash before rendering the footer).
Pass --verdict "<verbatim-verdict-line>" so the verdict ride-along
ships alongside the per-function trends payload — the line is the same
string already rendered to chat (Execution fit: GOOD — proceed with plan,
Execution fit: ADJUST PLAN — <reason>, or Execution fit: STOP — <reason>),
unbolded, no surrounding asterisks.
Also pass --gates '<gates-json>' — a compact JSON object capturing
the per-row Status from the conclusion table just rendered. Map the
icons: ✅→pass · ⚠️→warn · ❌→fail. Only include gates that fired
this run (omitted gates were not part of the table). Allowed gate
names: Safety · Performance · Energy · Stack · Memory.
Example for the clean-run preflight example:
{"Safety":"warn","Performance":"pass","Energy":"pass"}.
<venv-python> <plugin-dir>/lib/loci_stats.py record --context-file "<project-context>" --skill preflight --functions <N> --mcp-calls <M> --co-reasoning <R> --verdict "<verbatim-verdict-line>" --gates '<gates-json>'
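A sketch of assembling the --gates payload and the record invocation; shlex
handles the quoting, and the paths are placeholders read from the session
context.

```python
import json
import shlex

ICON_TO_STATUS = {"✅": "pass", "⚠️": "warn", "❌": "fail"}

def build_record_command(venv_python: str, plugin_dir: str, context_file: str,
                         n: int, m: int, r: int, verdict: str,
                         table_rows: dict[str, str]) -> str:
    """table_rows maps fired gate names to their icon, e.g. {"Safety": "⚠️"}.
    Omitted gates simply do not appear, mirroring the conclusion table."""
    gates = {gate: ICON_TO_STATUS[icon] for gate, icon in table_rows.items()}
    return " ".join([
        shlex.quote(venv_python),
        shlex.quote(f"{plugin_dir}/lib/loci_stats.py"), "record",
        "--context-file", shlex.quote(context_file),
        "--skill", "preflight",
        "--functions", str(n), "--mcp-calls", str(m), "--co-reasoning", str(r),
        "--verdict", shlex.quote(verdict),
        "--gates", shlex.quote(json.dumps(gates)),
    ])
```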
Record per-function measurements (single Bash call for all functions).
Pipe all measurements as JSONL via stdin. Skip functions where MCP timing
was unavailable.
echo '<jsonl_records>' | <venv-python> <plugin-dir>/lib/loci_stats.py record-measurement --context-file "<project-context>" --stdin --skill preflight
Where each line is one function:
{"fn":"<func1>","worst_ns":<execution_time_ns>,"energy_uws":<E>}
{"fn":"<func2>","worst_ns":<execution_time_ns>,"energy_uws":<E>}
The worst_ns field name is the storage-schema key consumed by
loci_stats.py (preserved for compat with prior on-disk measurements);
pass execution_time_ns into it.
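A sketch of building that JSONL payload; the input mapping is a placeholder
for whatever per-function results you computed in Step 2.

```python
import json

def measurement_jsonl(per_function: dict[str, dict]) -> str:
    """per_function: fn -> {"execution_time_ns": ..., "energy_uws": ...}.
    Functions with no MCP timing are simply absent from the input.
    worst_ns is the on-disk key name; execution_time_ns goes into it."""
    lines = [
        json.dumps({"fn": fn,
                    "worst_ns": vals["execution_time_ns"],
                    "energy_uws": vals["energy_uws"]})
        for fn, vals in per_function.items()
    ]
    return "\n".join(lines)
```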
Render the footer — compact by default
One line. Icon-led, no surrounding bars, middle-dot separators, spaces
around any → arrow:
<icon> LOCI preflight · <N> functions · fit <GOOD|ADJUST|STOP>
<icon> — mirrors the body's Execution-fit verdict: ✅ for GOOD,
⚠️ for ADJUST, ❌ for STOP.
Worked example (clean run):
✅ LOCI preflight · 2 functions · fit GOOD
Clean-escalation suffix
When preflight escalated into stack-depth or memory-report AND the
escalated skill returned clean, append a space-separated +<skill>
marker to the primary scalar so the compact line still surfaces that
the deeper check ran:
✅ LOCI preflight · 2 functions · fit GOOD +stack-depth
✅ LOCI preflight · 5 functions · fit GOOD +stack-depth +memory-report
A non-clean escalated result already flips a Stack/Memory row in the
preflight conclusion table to ⚠️/❌ and the verdict to ADJUST/STOP, so
+<skill> only ever appears next to a green icon. The conclusion
table itself carries the bad news — the footer stays compact regardless
of verdict, and the cumulative branch-stats line is not included.
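Footer assembly, sketched with the icon mapping above; the clean-escalation
list is whichever escalated skills ran and returned clean.

```python
VERDICT_ICON = {"GOOD": "✅", "ADJUST": "⚠️", "STOP": "❌"}

def render_footer(n_functions: int, fit: str, clean_escalations: list[str]) -> str:
    """fit is GOOD, ADJUST, or STOP; clean_escalations e.g. ["stack-depth"]."""
    line = f"{VERDICT_ICON[fit]} LOCI preflight · {n_functions} functions · fit {fit}"
    for skill in clean_escalations:
        line += f" +{skill}"
    return line

# render_footer(2, "GOOD", ["stack-depth"])
# -> "✅ LOCI preflight · 2 functions · fit GOOD +stack-depth"
```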
Counter definitions (used by loci_stats.py record above):
- N = unique functions whose assembly was sent to LOCI (callees of
new code, or modified functions themselves)
- M = MCP calls to mcp__plugin_loci_loci__get_assembly_block_exec_behavior
- R = co-reasoning: 1 for the initial LOCI result pass, +1 for each
re-query loop iteration, +2 for each escalated skill (1 at trigger,
1 when reasoning over results)