| name | integrity-surface-assumptions |
| description | Identify and make explicit the unstated premises reasoning depends on. Use when: (1) asked to reveal premises or assumptions, (2) text uses "obviously" or "clearly" without justification, (3) argument depends on specialized knowledge, (4) recommendation presented as objective fact. |
Identify and make explicit the unstated beliefs, background knowledge, and value judgments that writing depends on but doesn't acknowledge.
When writing explanations, arguments, or technical content, examine what you're taking for granted that readers might not share, making those premises visible so they can evaluate the reasoning on its own merits.
- **Identify logical dependencies** - Trace through each claim to find what must be true for it to hold, making visible the premises the argument depends on but doesn't state. Look for gaps where you've skipped steps because they seemed obvious, and consider what someone who disagrees would challenge as an unjustified leap.
- **Surface background knowledge assumptions** - Notice when you're using specialized terminology, referencing concepts, or building on prior knowledge without establishing that foundation first. What feels like common ground to you might be specialized expertise to readers, so make explicit what they need to know to follow the reasoning.
- **Make value judgments explicit** - Examine claims about what matters, what's important, or what should be prioritized to see where you're asserting values as if they were facts. Words like "obviously," "clearly," or "of course" signal that you're treating a judgment as self-evident. State directly when you're making a value claim rather than a factual one.
- **Clarify definitional assumptions** - Identify key terms that could be understood multiple ways and make explicit how you're using them. Abstract concepts like "quality," "performance," "simplicity," or "good design" mean different things to different people. Define what you mean by these terms in this specific context rather than assuming shared understanding.
- **Test for independence** - Read through the writing from the perspective of someone who doesn't already agree. Could they understand why you reached these conclusions even if they don't accept them? Could they articulate your reasoning accurately without filling in unstated premises from their imagination? Transparent reasoning enables genuine evaluation rather than forced interpretation.
- **Distinguish necessary from unnecessary disclosure** - Evaluate which assumptions genuinely need to be stated versus which can reasonably be treated as shared common ground. Consider your audience's background and the context where this will be read. Focus on assumptions that shape the argument's validity, not exhaustive enumeration of every possible premise.
- **Position assumptions appropriately** - Place the disclosure of assumptions where readers need them to evaluate the reasoning. State foundational premises before building arguments that depend on them, make definitional clarifications when introducing key terms, and surface value judgments before making recommendations that rely on those values.
I'm writing documentation that explains why our system uses eventual consistency instead of strong consistency for a particular feature. My initial draft says: "We chose eventual consistency because it provides better availability and performance. Strong consistency would create unacceptable latency for users distributed globally." Reading this back, I notice I'm assuming readers understand the CAP theorem, know what eventual versus strong consistency means, share my values about what constitutes "unacceptable" latency, and agree that availability matters more than consistency for this use case. Someone unfamiliar with distributed systems can't evaluate whether this was the right choice because they don't know what trade-offs I'm making. I revise to make the assumptions explicit: "In distributed systems, we face a trade-off between consistency (all nodes see the same data immediately) and availability (the system responds even if some nodes are unreachable). For this feature, we prioritized availability because users expect the UI to respond instantly even during network partitions, and we determined that showing slightly stale data (up to 5 seconds old) is preferable to showing error messages. This assumes that users value responsiveness over perfect accuracy for this particular interaction." Now readers can evaluate whether that assumption about user values is correct, whether 5 seconds is an acceptable staleness window, and whether the consistency-availability trade-off was properly understood and applied.
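The staleness trade-off described above can be made concrete with a toy model. This is an illustrative sketch, not real replication code: `EventuallyConsistentStore`, its 5-second `replication_delay`, and the explicit `now` timestamps are all invented here to show how a replica read can return stale data instead of an error.

```python
class EventuallyConsistentStore:
    """Toy model of a replicated store: writes land on the primary
    immediately, but replicas see them only after a replication delay."""

    def __init__(self, replication_delay=5.0):
        self.replication_delay = replication_delay  # max staleness, in seconds
        self.primary = {}   # key -> value, always current
        self.replica = {}   # key -> value, lags behind the primary
        self.pending = []   # (apply_at, key, value) not yet replicated

    def write(self, key, value, now):
        self.primary[key] = value
        self.pending.append((now + self.replication_delay, key, value))

    def read_replica(self, key, now):
        # Apply replication events whose delay has elapsed, then answer
        # from local state: possibly stale, but never an error.
        still_pending = []
        for apply_at, k, v in self.pending:
            if apply_at <= now:
                self.replica[k] = v
            else:
                still_pending.append((apply_at, k, v))
        self.pending = still_pending
        return self.replica.get(key)


store = EventuallyConsistentStore(replication_delay=5.0)
store.write("status", "shipped", now=0.0)
stale = store.read_replica("status", now=1.0)  # inside the staleness window: old view
fresh = store.read_replica("status", now=6.0)  # after the delay: converged
```

The point of the sketch is the shape of the trade-off: during the staleness window a read succeeds but reflects the old state, which is exactly the behavior the revised documentation asks readers to judge acceptable or not.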
I'm writing a technical recommendation that says: "We should migrate to microservices because our monolith has become unmaintainable." Reviewing this, I realize I'm making several hidden value judgments: that maintainability problems justify the cost and complexity of migration, that the specific maintainability issues we face would be solved by microservices rather than other approaches, and that we're willing to trade operational complexity for development flexibility. Someone could reasonably disagree with any of these values but my original framing presents them as obvious conclusions. I revise: "Our current monolith deployment takes 45 minutes and requires coordinating releases across teams, which has slowed our deployment frequency from daily to weekly. I'm recommending microservices because I believe faster independent deployment per team is worth the increased operational overhead of running multiple services. This assumes we value deployment flexibility over operational simplicity, that our team has the expertise to manage distributed systems, and that the coordination cost of the monolith exceeds the integration cost of services. If we don't share those assumptions, we might instead improve the monolith's modularity without distributing it." Now someone who disagrees can engage with the actual values at stake rather than just seeing the recommendation as technically obvious.
I'm writing about code quality and my draft says: "This codebase has serious quality problems that need immediate attention." Reading this, I notice that "quality problems" could mean anything - performance issues, bugs, security vulnerabilities, poor readability, lack of tests, tight coupling, or any combination. I'm assuming readers will interpret "quality" the same way I do, but quality is a multidimensional concept where different people prioritize different dimensions. I revise to make my definition explicit: "I'm defining quality problems here as issues that increase the cost of making changes safely. Specifically, this codebase lacks automated tests (95% of code has no test coverage), uses inconsistent naming conventions that make code hard to navigate, and has tight coupling between modules that causes changes to ripple across unexpected areas. These aren't performance or security problems - the code works correctly in production. The issue is that making changes requires extensive manual testing and carries high risk of breaking unrelated functionality. When I say this needs immediate attention, I'm prioritizing our ability to evolve the system over short-term feature velocity." Now readers understand exactly what I mean by quality, can evaluate whether those specific issues deserve the priority I'm assigning, and can disagree with my definition while engaging with the actual concern rather than arguing about an ambiguous term.
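If "quality" is defined as the cost of changing code safely, one concrete first remedy for the missing test coverage is a characterization test: a test that pins down what the code does today, before any refactoring. The sketch below assumes a hypothetical production function, `format_invoice_total`, invented here purely for illustration.

```python
# A characterization test records current behavior so that later changes
# can be verified against it. It asserts what the code *does*, not what
# we wish it did.

def format_invoice_total(amount_cents):
    # Hypothetical stand-in for existing, untested production code.
    dollars = amount_cents // 100
    cents = amount_cents % 100
    return f"${dollars}.{cents:02d}"

def test_format_invoice_total_current_behavior():
    assert format_invoice_total(123456) == "$1234.56"
    assert format_invoice_total(5) == "$0.05"
```

Writing a handful of such tests around the riskiest modules directly attacks the stated problem: it replaces "extensive manual testing" with an automated check that changes haven't rippled into unrelated behavior.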
I've written an explanation of why we chose PostgreSQL over MongoDB for a project. My draft lists technical reasons: "PostgreSQL offers better transaction support, more mature tooling, and stronger consistency guarantees." I test whether a reader could evaluate this reasoning independently by imagining someone who doesn't already agree. They would need to know: what kind of data we're storing, whether we need transactions, what "mature tooling" means in this context, whether consistency matters for this use case, and what trade-offs we're accepting by not using MongoDB's strengths. None of that is in my draft. I'm stating conclusions that depend on unstated context about the problem we're solving. I revise: "We're building a financial reporting system where data accuracy and auditability are regulatory requirements. For this context, I chose PostgreSQL over MongoDB because: (1) our data has a fixed schema defined by accounting standards, making MongoDB's schema flexibility unnecessary, (2) we need ACID transactions to ensure report consistency when multiple writers update related records, and (3) we need SQL's declarative query language for complex analytical queries that auditors will write. This assumes that correctness and queryability matter more than MongoDB's advantages in horizontal scaling and flexible schemas. If we were building a system where schema evolution and write scalability were the primary concerns, MongoDB might be the better choice." Now a reader can evaluate whether the stated requirements actually necessitate PostgreSQL's strengths, whether the trade-offs are appropriate, and whether the characterization of both databases is accurate.
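The ACID requirement in point (2) can be demonstrated with a small sketch. This uses Python's built-in `sqlite3` as a stand-in for PostgreSQL (both provide transactional SQL); the `ledger` table and `post_entry` function are invented for illustration. The key behavior: when a multi-row posting fails partway, the transaction rolls back and neither row persists, so the books still balance.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (account TEXT, amount INTEGER)")

def post_entry(conn, debit_account, credit_account, amount):
    """Double-entry posting: both rows must commit together or not at all."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO ledger VALUES (?, ?)",
                         (debit_account, -amount))
            if amount < 0:
                raise ValueError("negative amount")  # simulated mid-transaction failure
            conn.execute("INSERT INTO ledger VALUES (?, ?)",
                         (credit_account, amount))
    except ValueError:
        pass  # the `with` block already rolled back the partial write

post_entry(conn, "cash", "revenue", 100)  # commits both rows
post_entry(conn, "cash", "revenue", -50)  # fails mid-way: neither row persists
total = conn.execute("SELECT COALESCE(SUM(amount), 0) FROM ledger").fetchone()[0]
```

Without the transaction, the failed posting would have left a lone debit row and an unbalanced ledger, which is precisely the kind of inconsistency the regulatory requirements in the revised explanation rule out.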