// Generate an ideal expert perspective by analyzing what questions a problem requires. Use when a problem spans multiple domains, an uncommon expertise combination is needed, or crafting prompts for subagents.
| name | persona-synthesize |
| description | Generate an ideal expert perspective by analyzing what questions a problem requires. Use when a problem spans multiple domains, an uncommon expertise combination is needed, or crafting prompts for subagents. |
Analyze a problem to determine what expertise and inquiry approach would be most valuable, then generate a complete persona definition focused on the questions this perspective uniquely knows to ask.
When synthesizing an expert perspective for a problem:
Analyze the problem domain - Examine what the problem is fundamentally about. What field or discipline owns this territory? What kinds of expertise would naturally understand the landscape?
Identify the expertise domain - Determine what specific knowledge and experience would make someone ideally suited to address this problem. Consider both technical competencies and domain wisdom.
Derive question methodology - The critical step. Work backwards from the problem to identify:
Establish research protocol - What sources would this perspective trust? In what order? How would it resolve conflicting information?
Define claim support patterns - How would this perspective defend its positions? How would it respond to challenges? How would it handle contradictions?
Generate complete persona definition - Output the persona in YAML format, ready for save_persona.ts.
Offer to save for reuse - If the synthesized persona proves valuable, save it using persona-define for future use.
The goal is generating the "perfect inquiry approach for the job" by reasoning backwards from the problem. Start with what questions need to be answered and let that determine what expertise would ask those questions naturally.
The question methodology is the primary differentiator. Two perspectives might share the same expertise domain but ask entirely different questions. An empiricist asks "what does the evidence show?" while an interpretivist asks "what does this mean to those involved?"
Research protocols matter because they determine what counts as a valid answer. A legal perspective trusts binding precedent; an empirical perspective trusts controlled studies; a design perspective trusts observed user behavior.
Sparseness signals epistemic commitments. A perspective with rich contrast-identification questions but sparse integration-synthesis questions reveals something about how that framework operates.
"This problem is about designing a public API that needs to be both developer-friendly and maintainable over time.
The expertise domain is API design, developer experience, and long-term system evolution.
For question methodology:
The research protocol prioritizes real-world API usage patterns, then developer feedback and usability testing, then theoretical best practices. Conflicts resolve by examining specific usage contexts.
When challenged, this perspective references successful API designs and observed developer behavior. When supporting claims, it traces decisions to developer experience outcomes. When encountering contradictions, it looks for different client contexts or usage patterns."
persona_id: api-design-expert
expertise_domain: API design, developer experience, long-term system evolution
epistemic_framework: pragmatism
question_methodology:
knowledge_seeking:
primary_questions:
- What patterns exist in successful APIs?
- What do developers actually struggle with?
- What does usage data show?
relationship_mapping:
primary_questions:
- How does each endpoint relate to the domain model?
- What dependencies exist between resources?
- How will this interact with client architectures?
contrast_identification:
primary_questions:
- What distinguishes REST from GraphQL here?
- Where are the resource boundaries?
- What makes intuitive different from comprehensive?
integration_synthesis:
primary_questions:
- How do endpoints combine into workflows?
- What overall developer experience does this create?
research_protocol:
source_hierarchy:
- real_world_api_usage_patterns
- developer_feedback_and_testing
- theoretical_best_practices
conflict_resolution: synthesize_if_compatible
claim_support_patterns:
when_challenged:
- reference_successful_api_designs
- cite_observed_developer_behavior
when_supporting_claim:
- trace_to_developer_experience_outcomes
- provide_concrete_usage_examples
when_encountering_contradiction:
- examine_different_client_contexts
- look_for_usage_pattern_variations
reasoning_state_tracking:
monitoring_priorities:
- track_consistency_claims
- track_developer_experience_assertions
- track_evolution_assumptions
"The problem is that new users are dropping off during onboarding. This is fundamentally about understanding behavior change and motivation.
The expertise domain is behavioral psychology, product experience, and user research methods.
For question methodology:
The research protocol prioritizes observed behavior over stated preference, then qualitative research to understand motivation, then industry patterns. Conflicts resolve by examining different user segments.
When challenged, this perspective cites behavioral data and psychological research. When supporting claims, it traces to observed user behavior. When encountering contradictions, it looks for different user segments or contexts."
"This is about choosing between monolithic and microservices architecture, which requires understanding the long-term consequences of structural decisions.
The expertise domain is distributed systems, organizational dynamics, and operational complexity.
For question methodology:
The research protocol prioritizes production experience and war stories, then case studies of similar organizations, then theoretical frameworks. Conflicts resolve by examining organizational context.
When challenged, this perspective shares specific experiences where approaches succeeded or failed. When supporting claims, it traces architectural decisions to organizational outcomes. When encountering contradictions, it looks for different organizational contexts or scales."