| name | understanding-assumption-surface |
| --- | --- |
| description | Surface implicit assumptions to verify them against observable code artifacts before building on unverified claims. Use when: (1) claims about code behavior are made without checking what the implementation actually does, (2) planned changes depend on properties not yet verified against tests, types, or runtime behavior, (3) reasoning is based on how software typically works rather than how this specific codebase demonstrably works, (4) assertions about system properties reference expected patterns rather than observed artifacts. |
Identify what I'm assuming without evidence and verify against observable reality.
When I need to ensure my understanding is grounded in fact rather than assumption, I systematically surface what I'm taking for granted and verify it against the code, tests, history, and documentation that actually exist.
1. **Catch the claim** - I pause when I'm about to assert something about behavior, design, or requirements, recognizing that stating "this does X" or "users need Y" means I believe something to be true that I may not have verified.
2. **Enumerate assumptions** - I make explicit what must be true for my claim to hold, listing the conditions I'm relying on such as "this function handles null inputs," "these services communicate synchronously," or "the database enforces this constraint," understanding that unstated assumptions are where reality most often diverges from expectation.
3. **Identify the scope and artifacts** - I determine what level I'm reasoning about, whether function, component, system, or organization, and recognize what observable evidence exists at that scope, from reading implementation code and tests at the function level to examining architecture documents and deployment configurations at the system level.
4. **Verify against observable reality** - I actually check the assumption by reading the relevant code, examining test coverage, reviewing version control history for context, or consulting documentation, treating my initial belief as a hypothesis to be tested rather than a fact to be trusted.
5. **Distinguish verified from unverifiable** - I separate assumptions I've confirmed against artifacts from those I cannot verify with available evidence, marking the latter as risks or unknowns: an assumption I cannot verify must be treated as an explicit risk requiring stakeholder discussion, or as a decision point where more information is needed before proceeding.
6. **Document what verification revealed** - I make newly-verified facts explicit for future reference and flag where my assumptions were wrong, understanding that the gap between what I assumed and what I verified tells me about my own blind spots and the system's actual behavior.
7. **Adjust based on findings** - I change my approach when verification contradicts my assumptions, recognizing that discovering I was wrong early is far cheaper than building on false foundations and discovering the error when others depend on my work.
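The workflow above can be sketched as a small assumption ledger. All names and types here are illustrative, not part of any real tool; the point is that each claim is recorded and explicitly resolved before I build on it:

```typescript
// An assumption ledger: record each claim, then mark it verified, refuted,
// or unverifiable, so nothing unverified is silently built on.
type Status = "unchecked" | "verified" | "refuted" | "unverifiable";

interface Assumption {
  claim: string;     // what I believe to be true
  artifact: string;  // where observable evidence would live (code, tests, docs)
  status: Status;
  note?: string;     // what verification revealed
}

// check() inspects the artifact; null means no evidence exists at this scope.
function verify(a: Assumption, check: () => boolean | null): Assumption {
  const result = check();
  if (result === null) {
    return { ...a, status: "unverifiable", note: "treat as explicit risk" };
  }
  return { ...a, status: result ? "verified" : "refuted" };
}

// Only assumptions confirmed against artifacts are safe to build on;
// anything refuted or unverifiable forces an adjustment or a discussion.
function safeToProceed(ledger: Assumption[]): boolean {
  return ledger.every(a => a.status === "verified");
}
```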
I'm adding error handling to a data processing function and claim "the upstream parser always returns valid JSON objects." I pause and recognize this is an assumption about behavior. I enumerate what must be true: the parser validates input, it rejects malformed JSON, it never returns null or primitives. I'm working at function scope, so the observable artifacts are the parser's implementation and its test suite. I read the parser code and discover it actually returns null when input is empty, and its tests confirm this. My assumption was partially wrong - it does validate JSON structure, but empty input is handled differently than I thought. I document this by adding a comment explaining the null case and adjust my error handling to explicitly check for null before processing. Had I not verified, my code would have crashed on empty input, and debugging that later would have been far more expensive than the two minutes spent reading the parser implementation now.
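A minimal sketch of the adjusted error handling, with `parseRecord` as an invented stand-in for the upstream parser whose code I read (the real parser is not shown in this document):

```typescript
type ParsedObject = { [key: string]: unknown };

// Stand-in for the verified parser behavior: a valid JSON object comes back
// as an object; empty input yields null -- the case my assumption missed.
function parseRecord(input: string): ParsedObject | null {
  if (input.trim() === "") return null; // verified: empty input -> null
  const value = JSON.parse(input);      // throws on malformed JSON
  if (value === null || typeof value !== "object" || Array.isArray(value)) {
    throw new Error("parser only returns plain objects");
  }
  return value as ParsedObject;
}

// Error handling adjusted after verification: check for null explicitly
// instead of assuming every return value is an object.
function processRecord(input: string): string {
  const record = parseRecord(input);
  if (record === null) return "skipped: empty input";
  return `processed ${Object.keys(record).length} field(s)`;
}
```

The null check is the two-minute payoff: without it, `Object.keys(null)` would have thrown at runtime on the first empty payload.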
I'm planning to optimize a React dashboard and assume "the DataTable component is pure and only re-renders when props change." I recognize I'm making claims about component design without verification. What must be true: the component has no internal state, it doesn't use context, it's wrapped in React.memo or is a class with shouldComponentUpdate, and it has no side effects in render. At component scope, the artifacts are the component implementation, its prop types, and how it's used across the codebase. I read DataTable.tsx and discover it uses a useEffect hook that fetches data based on a prop, and it maintains internal pagination state. My assumption was wrong - it's not pure and has complex re-render behavior. I use grep to find all usages and see it's rendered in six different places with different update patterns. I document this by noting in my optimization plan that DataTable needs refactoring before optimization, and I adjust my approach from "just memoize parent" to "first extract data fetching to parent, then optimize." Verifying saved me from an optimization that wouldn't have worked and would have confused the next developer.
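The refactoring direction from this example can be sketched without React, using invented names; this shows the shape of the extraction, not a drop-in implementation of DataTable:

```typescript
type Row = { id: number; label: string };

// Child after the refactor: a pure function of its inputs -- no fetching,
// no internal pagination state, so memoization becomes safe.
function renderTable(rows: Row[], page: number, pageSize: number): string[] {
  const start = page * pageSize;
  return rows.slice(start, start + pageSize).map(r => `${r.id}: ${r.label}`);
}

// Parent owns data fetching and pagination, which is what "first extract
// data fetching to parent, then optimize" means in practice.
async function dashboardParent(fetchRows: () => Promise<Row[]>): Promise<string[]> {
  const rows = await fetchRows(); // the useEffect fetch moves up here
  return renderTable(rows, 0, 25);
}
```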
I'm designing a retry strategy for a payment service and state "the payment provider API is idempotent, so retries are safe." I'm assuming behavior about a system integration. What must be true for this: the provider uses idempotency keys, duplicate requests return the same result without double-charging, their documentation guarantees this, and our integration actually sends idempotency keys. At system scope, the artifacts are the provider's API documentation, our integration code, deployment configurations, and any existing retry logic. I read our integration client and find we're not sending idempotency keys at all, and the provider's docs say retries without keys may cause duplicate charges. I check version control history to see if this was ever implemented and find a comment from six months ago saying "TODO: add idempotency keys before enabling retries." My assumption was completely wrong - retries are currently unsafe. I document this by filing a ticket to implement idempotency key generation and adjust my design from "add retry logic" to "first implement idempotency keys, then add retries in a follow-up." Catching this assumption before shipping prevented a serious production incident where users could have been double-charged.
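The prerequisite fix can be sketched with an invented provider client; any real provider's API will differ, so treat this as the shape of the change, not an integration guide:

```typescript
import { randomUUID } from "node:crypto";

type ChargeRequest = { amountCents: number; idempotencyKey: string };

// Simulated provider that honors idempotency keys: a repeated request with
// the same key returns the original charge instead of charging again.
class FakeProvider {
  private seen = new Map<string, string>();
  charge(req: ChargeRequest): string {
    const existing = this.seen.get(req.idempotencyKey);
    if (existing !== undefined) return existing; // duplicate: no double charge
    const chargeId = `ch_${this.seen.size + 1}`;
    this.seen.set(req.idempotencyKey, chargeId);
    return chargeId;
  }
}

// The verified prerequisite: one key per logical payment, reused on retry,
// so a retried request replays safely instead of double-charging.
function chargeWithRetry(provider: FakeProvider, amountCents: number): string {
  const key = randomUUID();
  const first = provider.charge({ amountCents, idempotencyKey: key });
  const retry = provider.charge({ amountCents, idempotencyKey: key }); // simulated retry
  return first === retry ? first : "ERROR: duplicate charge";
}
```

Only once this shape exists in the integration client does "add retry logic" become a safe follow-up.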