prompt design research

user-centered insights for building adaptive prompt systems

project overview

Context: Conducting foundational research to inform a prompt design system.

Objective: Validating the components and the iterative process involved in designing prompts.

Role: User Researcher, Conversation Designer

Duration: 3 months

Methods: In-depth interviews, think-aloud sessions

challenge

Organizations are applying engineering paradigms to natural language prompts to create modular, reusable systems, but a fundamental question often goes unasked: how does this actually help us write prompts?

Before validating any specific design system, we needed to understand at a foundational level how users conceptualize prompts and what building blocks they find useful when writing them. This research aimed to capture real user behavior, not assumed best practices.

research approach

Key Questions:

  • How do users naturally approach prompt writing?

  • Which components do they prioritize based on the task?

  • How does this vary with factors like available data, task complexity, and industry context?

Methodology: I recruited 40 participants across cross-functional teams using a comprehensive screener survey to ensure diversity across role types, technical expertise, and AI tool experience. From this pool, I conducted 8 intensive 45-minute think-aloud sessions with participants as they tackled a real-world task: writing follow-up emails to prospects after discovery calls.

The think-aloud protocol allowed me to observe not just what participants said they did, but what they actually did when constructing prompts under realistic conditions.

what we discovered

Component Fluidity: Participants consistently merged or skipped components that frameworks typically treat as discrete building blocks.

  • A Solutions Architect noted that context and goals could be the same depending on the prompt.

  • A Technical Writer found role-setting unnecessary, saying that role and tone can be inferred from context.

  • A Content Designer was surprised to discover that asking Claude for a "professional but warm" tone automatically resulted in "confident but not pushy" messaging - the tool adapted to the specified business context.

platform intelligence changes everything

Users have adapted to the reality that conversational AI platforms often have tone and style baked in (via backend prompts). What you need to specify depends entirely on your tool:

  • Many platforms already handle tone and style automatically through system-level prompting (sketched in the example after this list).

  • Simple contextual commands work effectively when platforms understand domain context.

  • Guardrails vary dramatically by task and industry - they’re essential in regulated contexts but can block good results in tasks requiring flexibility.
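To make the first point above concrete, here is a minimal sketch - assuming the Anthropic Python SDK, with an illustrative model name, system prompt, and task - of how a platform can own tone and style at the system level so the user-facing prompt stays short:

```python
# Minimal sketch (not from the study): tone and style live in a system-level
# prompt, so the end user's prompt can stay short. Assumes the Anthropic
# Python SDK; the model name, system prompt, and task are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a sales assistant for a B2B software company. "
    "Write in a professional but warm tone: confident, concise, never pushy."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=500,
    system=SYSTEM_PROMPT,  # platform-owned tone and style
    messages=[
        {
            "role": "user",
            # The user-facing prompt stays minimal because the system prompt
            # already carries tone, style, and domain context.
            "content": "Draft a follow-up email for yesterday's discovery call with Acme.",
        }
    ],
)

print(message.content[0].text)
```

Because tone lives at the system level, the user only has to specify what is unique to the task - which is the adaptation participants described.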

the "lazy prompting" discovery

Minimal Start, Maximum Iteration: Perhaps the most significant finding was that effective users have discovered that "lazy prompting" - starting minimal and iterating - works better than trying to craft the perfect prompt upfront (a code sketch of the pattern follows the observations below).

What We Observed:

  • Technical Writer: No required structure, 3-5 iterations as standard practice

  • Project Manager: Scientific method approach - strip to essentials, build up systematically

  • Product Manager: Staged validation - summary → context → audience in separate steps

  • Customer Success Manager: Has the AI state its knowledge gaps to enable confidence-based evaluation
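As an illustration of this pattern - a sketch of staged refinement in code rather than a record of how participants actually worked, again assuming the Anthropic Python SDK with an illustrative model name and invented refinement steps - a "lazy" prompting loop starts with a bare request and layers in context and audience over subsequent turns:

```python
# Minimal sketch of "lazy prompting": start with a bare prompt, then refine
# across turns instead of front-loading a perfect specification. Assumes the
# Anthropic Python SDK; model name and refinement steps are illustrative.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # illustrative model name


def ask(history: list[dict]) -> str:
    """Send the running conversation and return the assistant's latest reply."""
    reply = client.messages.create(model=MODEL, max_tokens=500, messages=history)
    return reply.content[0].text


# Start lazy: no role, tone, or format specified upfront.
history = [{"role": "user", "content": "Draft a follow-up email after a discovery call."}]
draft = ask(history)

# Refine in stages (summary -> context -> audience), reusing the model's own
# draft each turn rather than rewriting the prompt from scratch.
refinements = [
    "Add context: the call covered pricing concerns and a Q3 rollout timeline.",
    "Rewrite it for a skeptical CFO and keep it under 150 words.",
]
for step in refinements:
    history += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": step},
    ]
    draft = ask(history)

print(draft)
```

Each turn builds on the model's previous draft, mirroring the 3-5 iteration cadence and the staged summary → context → audience flow observed above.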

the validation challenge

But this iterative approach comes with pain points that traditional software development hasn't solved:

  • Engineers struggle to understand the implications of changes when modifying prompts.

  • Product teams find that acceptance criteria have to adapt to the AI's unstable output.

Users are developing new, sophisticated validation approaches specifically tailored to AI's unpredictable nature.

mental model tension

A Solutions Architect revealed a key contradiction in how users think about AI models: she believed that more information in a prompt is always better, yet was repeatedly surprised by the model's effectiveness with minimal input.

from prompts to dialogue

Users are naturally moving beyond single-shot prompting toward conversational refinement:

  • "Prompts are never a one-and-done." (Product Manager)

  • Users teach the model like a partner through back-and-forth interaction.

  • AI is most effective when treated dialogically.

This represents a fundamental shift from prompt engineering to conversation design - treating AI as a collaborative partner rather than a system to be programmed.

prompt writing is a process, not a structure

This research challenged the assumption that effective prompting requires upfront specification. Instead, users have naturally evolved toward treating prompt writing as an iterative conversation rather than a structural engineering problem. Users don't follow rigid frameworks - they engage in collaborative problem-solving with AI through rapid iteration and refinement.

designing for natural workflows

The research validated four key design principles:

  1. Optimize for iteration over specification.

  2. Allow natural overlap between components.

  3. Support learning-by-doing.

  4. Emphasize that prompt design requires a unique blend of linguistic intuition and technical skills.

These insights shifted our system design approach from amassing prompt templates toward encouraging learning-by-doing and creating space for experimentation and validation of outcomes. Rather than standardizing prompt "anatomy," we focused on supporting the iterative discovery process that effective users had already developed.

Next

voice experience design for smart glasses