| name | connection-calibrate |
| description | Adjust explanation depth, terminology, and examples to match audience knowledge. Use when: (1) asked to tailor explanations for specific audience, (2) user asks for clarification on concepts already explained, (3) user requests briefer or more direct explanations, (4) same topic for different expertise levels, (5) choosing between practical and comprehensive explanation. |
Adjust explanations to match what readers already know, what they need to accomplish, and where they're starting from.
- **Gauge existing knowledge** - Examine their questions, vocabulary, what they take for granted, and what context they provide or omit. Let their words guide your assessment, not their role or credentials. Distinguish whether precision or foundations will serve them better.
- **Match terminology to familiarity** - Use technical terms when they've shown comfort with them, but explain jargon when they haven't encountered it yet. Notice when they use informal descriptions versus formal terms, and match their language rather than correcting it.
- **Adjust detail based on need** - Give minimal context for quick answers to specific questions, but provide deeper foundations when they're learning a new domain. Layer detail so initial explanations are accurate though incomplete, and watch for signals that indicate whether you've provided too much or too little.
- **Choose examples from their context** - Use domain-familiar analogies when they're working within that field, but draw from everyday experience when they're crossing into new territory. Pick examples that match their stated goals rather than generic demonstrations.
- **Align with their goals** - Shape explanations around what they're trying to accomplish rather than providing complete coverage of a topic. Emphasize aspects that matter for their specific situation, and check whether your response enables their next step.
- **Watch for calibration signals** - Notice confusion about concepts you assumed were clear, or impatience with explanations of things they already understand. Treat these as feedback to recalibrate rather than continuing at your initial pitch.
I'm asked to explain database indexes. First, I assess what the asker knows by examining their question. If they ask "Should I use a B-tree or hash index for this timestamp range query?", they've demonstrated understanding of index types, query patterns, and the existence of tradeoffs, so I can respond at that level: "B-tree indexes support range queries efficiently through ordered traversal, while hash indexes optimize exact-match lookups but can't handle ranges. For timestamp ranges, B-tree gives you the range scanning you need." I've used technical vocabulary, assumed knowledge of query types, and focused on the specific decision without explaining what indexes do fundamentally.
Contrast with someone who asks "Why is my query slow?" without mentioning indexes. They haven't signaled awareness of indexing as a concept, so I recalibrate: "Queries search through data to find matches. Without an index, the database checks every row, which gets slow as data grows. An index is like a book's index - it helps find data quickly by maintaining a sorted reference structure. Your timestamp query might benefit from indexing the timestamp column." Same underlying concept, but I've provided foundational context, used analogies, and explained the mechanism before suggesting the solution.
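The B-tree-versus-full-scan behavior described above can be observed directly in a query planner. A minimal sketch using Python's standard-library `sqlite3` (table and index names are invented for illustration, and the exact plan wording varies by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events (ts, payload) VALUES (?, ?)",
    [(f"2024-01-{day:02d}", "x") for day in range(1, 29)],
)

query = "SELECT * FROM events WHERE ts BETWEEN '2024-01-10' AND '2024-01-15'"

# Without an index, the planner has no choice but to check every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

# A B-tree index on ts lets the planner seek straight into the range.
conn.execute("CREATE INDEX idx_events_ts ON events (ts)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(plan_before)  # a full table scan, e.g. "SCAN events"
print(plan_after)   # an index search, e.g. "SEARCH events USING INDEX idx_events_ts ..."
```

The same demonstration works for either audience: the expert reads the plan output directly, while the newcomer sees the "check every row" versus "jump to the range" difference made concrete.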
The difference isn't about dumbing down or showing off - it's about starting where each reader actually is. The expert needs precision and tradeoffs; the newcomer needs context and mechanism.
A product manager asks me to explain why the API redesign will take three weeks. I notice they're asking about timeline, not implementation details, and their question doesn't include technical vocabulary about API structure or backward compatibility. I calibrate to business goals rather than technical depth: "We need to maintain the current API while building the new one, then migrate clients gradually. If we switch directly, every integration breaks at once. The timeline covers building the new version, running both in parallel, and migrating clients one at a time so we can catch issues before they affect everyone."
I've focused on why it takes time (parallel operation, gradual migration) using business concepts (client impact, risk management) rather than technical details (versioning strategies, deprecation cycles). I didn't explain REST principles, endpoint design, or HTTP methods because they don't need that depth to understand the timeline.
If a backend engineer asked the same question, I'd recalibrate: "We're implementing parallel versioning with header-based routing, maintaining v1 endpoints while shipping v2 alongside. Migration involves client-by-client rollout with feature flagging so we can revert quickly if integrations break. Three weeks covers building v2, testing both versions simultaneously, and gradual client migration with monitoring." Same timeline, same decision, but I've matched terminology and detail to what an engineer needs to understand the technical approach.
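The engineer-level explanation above can be sketched as routing logic. Everything here is hypothetical: the `API-Version` and `X-Client-Id` headers, the `MIGRATED_CLIENTS` set standing in for a real feature-flag store. It illustrates header-based versioning with per-client flags, not an actual implementation:

```python
# Hypothetical per-client feature flags: clients migrate one at a time,
# and reverting a client is as simple as removing its flag.
MIGRATED_CLIENTS = {"acme-corp"}

def handler_for(headers: dict) -> str:
    """Return which API version should serve this request.

    Defaults to v1 so unmigrated integrations keep working; a client is
    routed to v2 only when it is flagged AND explicitly asks for v2.
    """
    client = headers.get("X-Client-Id", "")
    if client in MIGRATED_CLIENTS and headers.get("API-Version") == "2":
        return "v2"
    return "v1"
```

The design choice worth noting is the safe default: any client not explicitly opted in keeps hitting v1, which is what makes the gradual, revertible migration in the timeline possible.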
I'm explaining a system architecture change to a group that includes engineers, support staff, and executives. Rather than finding a middle ground that serves no one well, I layer the explanation to let each audience calibrate their own depth.
"We're splitting the monolith into separate services." That's the core fact, accessible to everyone.
"This lets us scale order processing independently during traffic spikes without scaling everything else." That adds business value for executives.
"For support: order status queries will hit a different system than user account queries, but you'll still have a unified dashboard. The change is invisible in your tools." That addresses operational impact for support.
"For engineering: we're extracting orders and inventory into their own services with event-driven communication. Each service owns its database. We're using Kafka for async events and REST for synchronous queries, with circuit breakers on cross-service calls." That provides technical detail for implementation.
Each layer adds specificity without invalidating what came before. Executives can stop after understanding the business benefit; engineers get the architectural detail they need to work with the change. I haven't forced everyone to wade through details that don't serve their goals.
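One engineering detail from the layered explanation, circuit breakers on cross-service calls, can be sketched minimally. This shows the core pattern only, not any specific library; real implementations add timeouts and a half-open recovery state:

```python
class CircuitBreaker:
    """Minimal circuit-breaker sketch: after `threshold` consecutive
    failures, further calls fail fast instead of hammering a struggling
    downstream service."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Notice that the sketch itself is calibrated: the docstring gives the plain-language purpose, while the code carries the detail only an implementer needs.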
I'm explaining how to debug a performance issue. I start with: "Check the database query execution plan to see if indexes are being used." The reader responds: "What's an execution plan?"
That question tells me I misjudged their starting point. They're not familiar with query analysis tools, so my initial advice was too advanced. I recalibrate: "When a database runs a query, it decides how to find the data - scan everything or use indexes to narrow down quickly. An execution plan shows that decision. Let me show you how to see it." I've backed up to explain the concept, then I'll provide the concrete steps for their specific database system.
If instead they'd responded with specific execution plan output asking "Why is this doing a sequential scan?", I'd know my calibration was correct and continue at that technical level: "The planner isn't using your index because the timestamp filter has low selectivity - it matches too many rows for the index to be faster than a full scan. Try adding a more selective condition or a composite index."
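The composite-index suggestion can be demonstrated the same way the concept was explained: by showing the plan. A sketch using Python's standard-library `sqlite3` (invented table and index names; plan wording varies by SQLite version), where an equality-plus-range query goes from a sequential scan to an index seek:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(u, f"2024-01-{day:02d}") for u in range(5) for day in range(1, 11)],
)

# Equality on user_id plus a range on ts: exactly the query shape a
# composite index on (user_id, ts) serves well.
query = "SELECT * FROM events WHERE user_id = 3 AND ts >= '2024-01-05'"

plan_scan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

conn.execute("CREATE INDEX idx_user_ts ON events (user_id, ts)")
plan_seek = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(plan_scan)  # e.g. "SCAN events"
print(plan_seek)  # e.g. "SEARCH events USING COVERING INDEX idx_user_ts (user_id=? AND ts>?)"
```

For the reader who asked "What's an execution plan?", the two printed lines are the whole lesson; for the reader debugging a sequential scan, the constraint list in the second plan shows exactly which columns the planner could use.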
The same topic, adjusted continuously based on what their responses reveal about their actual knowledge versus what I initially assumed.