| name | clarity-illustrate |
| description | Make abstract concepts concrete through examples that reveal essential properties. Use when: (1) asked to illustrate or demonstrate or show examples, (2) user acknowledges explanation but asks what it looks like, (3) explaining boundaries between similar concepts, (4) user can define term but cannot apply it. |
Make abstract ideas graspable by showing concrete instances that reveal their essential character.
When making abstract concepts concrete through examples:
- **Identify what resists understanding** - Determine whether the difficulty is definitional, structural, practical, or boundary-based. Different understanding gaps need different illustration approaches.
- **Choose representative instances** - Select examples that capture essential properties rather than accidental features. Look for typical cases where defining characteristics are clearly visible.
- **Show variation to reveal the pattern** - Present multiple examples that differ in surface features but share the underlying pattern. Variation helps people extract what's essential from what's coincidental.
- **Use counter-examples for boundaries** - Show near-misses that look similar but lack essential properties. Make the distinction explicit by highlighting which critical feature is absent. Boundaries become clear through contrast.
- **Avoid misleading particulars** - Watch for features that appear prominently but aren't essential to the concept. Either choose different examples or explicitly note which features to ignore, since examples always contain more information than the concept they illustrate.
- **Layer from simple to complex** - Start with the clearest, most stripped-down instance and progressively introduce complexity. People grasp basic patterns before they can appreciate variations and edge cases.
- **Test the mental model** - Check what pattern someone would extract from seeing these instances together. Adjust examples when they risk creating incorrect mental models, even if each example is individually valid.
I'm explaining technical debt to someone who understands the term abstractly but doesn't recognize it in practice. Abstract definitions like "future cost of expedient decisions" haven't created recognition. I need concrete instances that make the concept visible.
I start with a clear example: "You're building a user registration system and hard-code the validation rules directly in the form handler because it's faster than creating a separate validation layer. This works perfectly now. Six months later, you need the same validation in three other places - the API endpoint, the batch import process, and the mobile app. You now have to implement the same rules four times and keep them synchronized, or spend a week refactoring to extract the validation you could have separated initially." This shows the core pattern: a shortcut that creates future work.
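The shortcut and its payoff can be sketched in code. This is a hypothetical, minimal Python illustration; the function and field names are invented for the example, not taken from any real system:

```python
# The shortcut: validation rules hard-coded inside the form handler.
def handle_registration_form(form):
    if "@" not in form.get("email", ""):
        return {"ok": False, "error": "invalid email"}
    if len(form.get("password", "")) < 8:
        return {"ok": False, "error": "password too short"}
    return {"ok": True}

# Six months later the API endpoint needs the same rules, so they get
# copied, and now every rule change must be made in multiple places:
def handle_registration_api(payload):
    if "@" not in payload.get("email", ""):
        return {"ok": False, "error": "invalid email"}
    if len(payload.get("password", "")) < 8:
        return {"ok": False, "error": "password too short"}
    return {"ok": True}

# Paying down the debt: one validation layer shared by every caller.
def validate_user(data):
    errors = []
    if "@" not in data.get("email", ""):
        errors.append("invalid email")
    if len(data.get("password", "")) < 8:
        errors.append("password too short")
    return errors
```

The duplicated handlers work today; the cost only appears when the rules change and must be kept synchronized by hand.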
I add variation to show the pattern across contexts: "You copy-paste a component instead of abstracting it because you're in a hurry. Now bug fixes require changing five files instead of one." Different scale, same pattern. "You skip writing migration scripts for database changes and document manual steps instead. Every deployment requires someone to remember and execute those steps, and eventually someone forgets and breaks production." Different domain, same pattern. The variations reveal that technical debt isn't about any specific practice but about choosing speed now over ease later.
I include a counter-example to clarify boundaries: "You write a simple linear algorithm instead of an optimized one because the dataset is small. The simple approach is easier to understand and maintain, and performance is fine. This stays fine even as the product evolves." This looks like it might be technical debt - choosing the simple option - but it's not, because there's no future cost. The counter-example shows that not every expedient choice creates debt; only those that increase future friction.
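A code sketch of that counter-example, with hypothetical names, shows why it isn't debt: the simple choice never compounds into future work when the dataset stays small.

```python
def find_plan(plans, plan_id):
    """Linear search: O(n), but n is the handful of pricing plans we offer.

    A lookup table would be faster in theory, but this version is easier
    to read and the dataset is small and bounded, so there is no future
    cost to repay. Simple here does not mean indebted.
    """
    for plan in plans:
        if plan["id"] == plan_id:
            return plan
    return None

plans = [{"id": "free"}, {"id": "pro"}, {"id": "enterprise"}]
```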
I test the mental model by asking what pattern emerges from these examples. The pattern should be: shortcuts that save time now but multiply work later. If instead someone extracts "avoid copying code" or "always optimize," the examples emphasized wrong features. The instances need to converge on the time-cost-tradeoff pattern, not specific practices.
I'm teaching user-centered design to engineers who think it means "make things pretty" or "ask users what they want." The abstraction hasn't connected to practice. I need examples that reveal the essential activity: designing based on user needs discovered through observation and testing rather than assumptions or aesthetic preferences.
First instance: "A team building a dashboard assumes users want to see all available data because more information seems better. They observe users actually using the dashboard and discover that people ignore most of it and express frustration about finding the three metrics they check daily. The team redesigns around those three metrics with everything else collapsed or removed. This is user-centered design - testing assumptions about user needs against actual behavior and adjusting based on evidence." This shows the core: observation over assumption.
Variation in a different domain: "A documentation team writes comprehensive reference material because complete information seems valuable. They watch new users trying to accomplish tasks and see people getting lost in the reference docs, succeeding only when they find incomplete but task-focused tutorials. The team restructures documentation around common tasks rather than complete API coverage." Same pattern - evidence-based adjustment - different medium.
Another angle: "A feature team receives user requests for advanced filtering options and starts building them. Partway through, they interview users about their actual workflows and discover that people want filtering to solve a specific problem: finding items they worked on recently. The team implements a simple 'recent items' view instead of complex filtering. This is still user-centered design even though they didn't build what users asked for, because they designed for the underlying need rather than the stated solution." This variation shows that user-centered doesn't mean user-commanded; it means understanding the need behind the request.
Counter-example for boundaries: "A team builds an internal tool and designs it based on their own needs as users of the tool, without formal research or external user testing. They're users of what they're building, so they make decisions based on their direct experience of the problem. This might be good design, but it's not user-centered design in the formal sense - that term specifically refers to designing for others and testing assumptions rather than trusting your own intuition." This clarifies that user-centered is about a specific practice of validating assumptions with users, not just "good design."
The pattern across examples: repeatedly testing design assumptions against evidence from actual users, whether through observation, testing, or research. Not aesthetics, not feature requests, not completeness - observation-driven iteration.
I'm explaining refactoring and people keep conflating it with debugging, rewriting, or adding features. The abstraction is too loose. I need examples that show what refactoring is and counter-examples that show what it's not, making the boundaries explicit.
Positive instance: "You have a 200-line function that's hard to follow. You extract related steps into smaller named functions so the main function reads like an outline of the process. The behavior stays identical - same inputs produce same outputs - but the code is easier to understand. This is refactoring: changing structure while preserving behavior." This establishes the defining characteristic: structural change without behavioral change.
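An extract-function refactoring can be shown directly. This is a hypothetical order-total example; the point is that both versions return identical results for every input, so the change is structural only:

```python
# Before: one function doing everything inline.
def order_total_before(items, discount_code):
    subtotal = 0.0
    for item in items:
        subtotal += item["price"] * item["qty"]
    if discount_code == "SAVE10":
        subtotal *= 0.9
    return round(subtotal + subtotal * 0.2, 2)  # 20% tax

# After: the same steps extracted into named functions, so the main
# function reads like an outline of the process.
def subtotal_of(items):
    return sum(item["price"] * item["qty"] for item in items)

def apply_discount(amount, discount_code):
    return amount * 0.9 if discount_code == "SAVE10" else amount

def add_tax(amount, rate=0.2):
    return round(amount + amount * rate, 2)

def order_total_after(items, discount_code):
    return add_tax(apply_discount(subtotal_of(items), discount_code))
```

Because behavior is preserved, a test suite that passed before the change must pass unmodified after it; that equivalence is what makes this refactoring rather than rewriting.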
Counter-example for fixing bugs: "You have a function that calculates the wrong result due to a logic error. You fix the logic so it returns the correct value. This is not refactoring - it's debugging or bug fixing. Refactoring explicitly preserves existing behavior, whether that behavior is correct or not. You might refactor code to make bugs easier to find, but the refactoring itself doesn't change what the code does." This clarifies that behavior preservation is essential, not incidental.
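The contrast is visible in even a tiny hypothetical diff: the fix below changes what the function returns, so it is a bug fix rather than a refactoring, however small the edit.

```python
def average_buggy(values):
    return sum(values) / (len(values) + 1)  # off-by-one: wrong divisor

def average_fixed(values):
    return sum(values) / len(values)  # correct, but the output changed
```

A refactoring of `average_buggy` would have to keep returning the wrong answer; correcting it is a behavioral change by definition.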
Counter-example for adding features: "You have a simple user service that only handles registration. You add password reset functionality, authentication tokens, and session management. This is not refactoring - it's feature development. Even if you restructure the code while adding features, the addition of new behavior means it's not pure refactoring." This shows that expanding capability violates the definition.
Counter-example for rewriting: "You have code written with callbacks and you rewrite it using async/await. The structure changes completely, even though the behavior might be similar. This might be refactoring if behavior is truly identical, but if there are any subtle changes in error handling, timing, or edge cases, it's a rewrite rather than refactoring. The distinction is whether behavior is provably preserved or merely approximated." This highlights that behavioral identity is strict, not approximate.
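A small hypothetical sketch (in Python rather than JavaScript, with invented names) shows one way such a rewrite silently changes behavior: error delivery moves from a callback to a raised exception.

```python
import asyncio

# Callback style: a missing user is delivered to on_error, never raised
# at the call site.
def fetch_user(user_id, users, on_done, on_error):
    try:
        on_done(users[user_id])
    except KeyError as exc:
        on_error(exc)

# The async/await "rewrite": the happy path looks equivalent, but a
# missing user now raises KeyError at the await site instead of invoking
# a callback. Behavior is approximated, not preserved, so this is a
# rewrite rather than a refactoring.
async def fetch_user_async(user_id, users):
    return users[user_id]
```

The only way to call this refactoring would be to prove the error-handling, ordering, and timing semantics identical, which a style conversion rarely does.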
Together, these examples converge on the essential property: refactoring is the subset of code changes where structure changes but behavior doesn't. The counter-examples prevent common misconceptions by showing where the boundary lies. Someone who grasps these examples understands that refactoring is a specific, constrained activity, not a synonym for "improving code."
I'm explaining microservices architecture and need to choose examples carefully because examples naturally contain more detail than the concept requires, and readers will extract patterns from everything present.
Problematic example: "Netflix uses microservices with each service deployed to AWS Lambda, communicating via REST APIs, with separate databases for each service." This example is factually accurate but contains misleading particulars. Someone might conclude that microservices require Lambda (serverless), REST (this protocol), or AWS (this provider), when these are incidental to this instance rather than essential to the pattern.
Better approach - start with essential properties stripped of implementation details: "Microservices architecture means decomposing an application into separate services where each service handles a distinct business capability, can be deployed independently, and owns its data. For example, an e-commerce system might separate user accounts, product catalog, shopping cart, and order processing into independent services. Each service can be updated without redeploying the others." This establishes the pattern without suggesting specific technologies.
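The service boundaries can be sketched without committing to any technology. This is an in-process Python toy with hypothetical classes; real microservices would run as separately deployed processes, but the essential properties - each service owns its data and exposes only an interface - are the same:

```python
class CatalogService:
    def __init__(self):
        # Data owned exclusively by this service; no other service
        # reads this store directly.
        self._products = {"sku-1": {"name": "Mug", "price": 9.0}}

    def get_product(self, sku):
        return self._products.get(sku)


class CartService:
    def __init__(self, catalog):
        self._carts = {}          # owned data, invisible to other services
        self._catalog = catalog   # depends only on the catalog's interface

    def add_item(self, user_id, sku):
        # Cross-service call through the public interface, standing in
        # for whatever transport (HTTP, queue, gRPC) a real system uses.
        if self._catalog.get_product(sku) is None:
            return False
        self._carts.setdefault(user_id, []).append(sku)
        return True
```

Swapping the in-process call for HTTP, a message queue, or gRPC changes the transport, not the pattern, which is exactly the point the varied implementations below make.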
Then layer in variation to show the pattern transcends any particular implementation: "One team might deploy microservices as containers orchestrated by Kubernetes, another might use serverless functions, another might run them as traditional server processes. Some communicate via HTTP, others via message queues, others via gRPC. The microservices pattern is about service boundaries and independence, not the specific technologies used to implement them." The variation prevents readers from associating the concept with any particular technology stack.
I check for misleading particulars by examining whether someone reading this example would think X is required when it's actually optional. If the first example they see uses Docker, REST, and AWS, they might think those are defining features. By showing multiple implementations using different technologies, I signal that those specifics are variable while the service decomposition and independence are constant.
This approach - starting with a stripped-down statement of essential properties, then showing varied implementations - prevents the common mistake where people equate a concept with its most frequently seen implementation.