User research scripts

A user research script is the difference between a useful 30-minute conversation and a friendly chat that goes nowhere. When you’re short on time, ‘winging it’ usually means biased questions, missing context and messy notes. A solid script keeps you focused, keeps sessions comparable and keeps the interview fair to the person on the other side. Done well, it also makes research easier to share with sales, product, delivery and leadership without turning into a debate.

In this article, we’re going to discuss how to:

  • Build a user research script that gets consistent, decision-ready answers.
  • Run interviews that reduce bias, confusion and ‘nice chat’ drift.
  • Turn conversations into actions, owners and deadlines, not a pile of notes.

What A User Research Script Is And When You Need One

A user research script is a structured set of prompts for an interview or usability session. It usually includes: a short intro, consent language, warm-up questions, core questions, prompts to dig deeper and a wrap-up. Think of it as ‘guardrails’, not a word-for-word recital.

You need a script when you care about consistency and auditability, for example:

  • Product discovery: comparing themes across 8 to 12 users without moving the goalposts.
  • Sales and retention: understanding why deals stall or why customers churn, in their words.
  • Hiring: standardising interview panels so ‘gut feel’ doesn’t win by default.
  • Distributed teams: keeping sessions comparable across time zones and interviewers.

As a baseline, user-centred design standards stress building solutions based on user needs and feedback (Source: ISO 9241-210:2019). A script is one of the simplest ways to make that repeatable.

How To Write A User Research Script (Step By Step)

Below is a practical build process. It’s written for operators: you want answers you can act on, with enough context to trust them.

Step 1: Define The Decision You’re Trying To Make

Write one sentence: ‘After these sessions, we will decide ____.’ If you can’t fill that in, your script will drift. Examples:

  • ‘Choose between onboarding flow A and B.’
  • ‘Decide whether to ship feature X in Q2 or drop it.’
  • ‘Fix the top 3 reasons users abandon at step 4.’

Then list the 3 things you must learn to make that decision. Anything else is a ‘nice to know’ and should be parked.

Step 2: Choose A Session Type And Timebox

Most teams default to a 30- to 45-minute interview because it’s easy to schedule and cheap to run. Usability tasks often need 45 to 60 minutes because people need time to think and try things.

Pick one primary format per session:

  • Interview: you’re learning about behaviour, context, constraints and current workarounds.
  • Usability test: you’re watching someone attempt tasks with a product or prototype.
  • Concept test: you’re checking comprehension and value of a proposed idea, not asking for feature requests.

Whatever you pick, timebox each section in your script. A script without timeboxes turns into a ‘follow-up later’ promise you won’t keep.
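If it helps to make the timeboxing concrete, here is a minimal Python sketch that checks whether a script’s sections actually fit the session. The section names and durations mirror the 40-minute template later in this article, but are otherwise illustrative; adapt them to your own script.

```python
# Sanity-check that a script's timeboxed sections fit the session length.
# Section names and durations are illustrative, not prescriptive.

def check_timeboxes(sections, session_minutes):
    """Return (total, leftover) and fail loudly if the script overruns the session."""
    total = sum(minutes for _, minutes in sections)
    if total > session_minutes:
        raise ValueError(
            f"Script runs {total} min but the session is {session_minutes} min: cut something"
        )
    return total, session_minutes - total

# A 40-minute interview broken into timeboxed sections.
sections = [
    ("Welcome and set-up", 3),
    ("Warm-up and context", 5),
    ("Walk me through the last time", 15),
    ("Dig into the hard parts", 10),
    ("Concept check (optional)", 5),
    ("Wrap-up", 2),
]

total, buffer_minutes = check_timeboxes(sections, session_minutes=40)
print(f"{total} minutes planned, {buffer_minutes} minutes of buffer")
```

Notice there is zero buffer here: if the optional concept check runs long, the wrap-up disappears. That trade-off is easier to see written down than mid-call.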

Step 3: Write Questions That Start With Past Behaviour

Prefer questions about what happened, not what someone thinks would happen. This reduces polite answers and wishful thinking. It’s a basic interviewing principle in UX practice (Source: Nielsen Norman Group, user interview guidance).

Use this pattern:

  • Past: ‘Tell me about the last time you…’
  • Specifics: ‘What happened next?’ ‘What did you try?’
  • Costs: ‘What did that cost you in time, money or risk?’
  • Workarounds: ‘How do you deal with it today?’

Keep ‘Would you use…?’ questions to a minimum. If you ask them, treat the answer as weak evidence unless it’s backed by real examples.

Ready-To-Use User Research Script Template (Copy And Adapt)

This is a general-purpose user research script you can copy into your doc. Replace the bracketed parts and keep it tight.

Session Name: [Onboarding pain points interview]

Length: [40 minutes]

Goal (one sentence): [Decide what to change in onboarding to reduce drop-off after step 2]

Must-learn (max 3): [1] [2] [3]

Roles: Interviewer [name], Note-taker [name], Observer(s) [names]

1) Welcome and set-up (3 minutes)

‘Thanks for your time. I’m going to ask about your recent experience with [product/process]. There are no right or wrong answers, and you won’t hurt our feelings. If something’s unclear, that’s on us.’

‘We’d like to [take notes / record audio] so we don’t miss anything. Is that OK?’

Information only: If you record calls, make sure you handle consent and transparency in a way that fits your situation and local rules (Source: UK ICO guidance on recording and data protection principles).

2) Warm-up and context (5 minutes)

  • ‘What’s your role, and what does a normal week look like?’
  • ‘What were you trying to get done when you came to [product/process]?’

3) Walk me through the last time (15 minutes)

  • ‘Tell me about the last time you [did the key job]. Where did it start?’
  • ‘What tools did you use? Who else was involved?’
  • ‘What slowed you down or created rework?’
  • ‘What did you do when you got stuck?’

4) Dig into the hard parts (10 minutes)

  • ‘Which step felt most risky or frustrating, and why?’
  • ‘What information did you need that you didn’t have?’
  • ‘If you could change one thing about the process, what would it be?’

5) Quick concept check (optional, 5 minutes)

Show one concept only. ‘What do you think this is? What would you expect it to do? What would you do next?’

6) Wrap-up (2 minutes)

  • ‘Is there anything I didn’t ask that I should have?’
  • ‘Can we contact you if we need to clarify anything?’

Question Bank: Prompts That Get Useful Answers

Use these as add-ons, not a shopping list. Pick the ones that map to your decision.

Understanding motivation

  • ‘What made this important enough to do now?’
  • ‘What would have happened if you did nothing?’

Understanding trade-offs

  • ‘What did you give up to get this done?’
  • ‘What would you accept being worse, if something else improved?’

Understanding risk and trust

  • ‘Where do you double-check things? What are you worried might go wrong?’
  • ‘What would make you stop using this immediately?’

Understanding success

  • ‘How do you know you did a good job?’
  • ‘What does “done” look like for you?’

Running The Session: An Operator Checklist

Scripts fail less from bad questions and more from sloppy execution. This checklist keeps you honest.

Before the call

  • Write the decision statement at the top of the doc.
  • Confirm who’s taking notes and who’s watching silently.
  • Prepare your consent wording and stick to it.
  • Set up a simple note structure: Context, Quotes, Observations, Actions.

During the call

  • Ask one question at a time. Silence is fine.
  • When you hear a strong claim, ask: ‘Can you give me an example from the last time?’
  • Tag moments that matter: pain, workaround, decision point, handoff, delay.
  • Park off-topic items visibly so you don’t lose them.

After the call (within 30 minutes)

  • Write a 5-line summary: what they were trying to do, what broke, what they did instead, what it cost, what would help.
  • Log 1 to 3 actions with owners and dates.
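If your notes live in a tool rather than a doc, the 5-line summary and the action log above can be captured as a small structure so every call produces the same fields. This is only a sketch; the field names and the example values are hypothetical, not a required schema.

```python
# Minimal sketch of the post-call note structure described above.
# Field names and example values are illustrative; adapt to your own template.
from dataclasses import dataclass, field

@dataclass
class Action:
    task: str
    owner: str
    due: str  # keep it simple: an ISO date string

@dataclass
class CallSummary:
    trying_to_do: str   # what they were trying to do
    what_broke: str     # what broke
    workaround: str     # what they did instead
    cost: str           # what it cost them
    would_help: str     # what would help
    actions: list = field(default_factory=list)  # 1 to 3 actions with owners and dates

summary = CallSummary(
    trying_to_do="Finish onboarding for a new team member",
    what_broke="Step 2 asked for data they did not have to hand",
    workaround="Emailed support and waited a day",
    cost="Roughly a day of delay per new starter",
    would_help="Let them save progress and come back",
)
summary.actions.append(Action(task="Draft copy change for step 2", owner="Sam", due="2025-07-01"))
```

The point is not the code, it’s the constraint: if a field is empty after a call, you know within 30 minutes that you missed something, while you can still remember the answer.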

If you regularly lose details between calls, using a tool that produces searchable meeting transcripts and summaries can reduce the ‘where did we hear that?’ problem, as long as you still review and correct the output.

After The Call: Turn Notes Into Decisions

Your goal isn’t a perfect research repository. It’s a clear decision trail: what you heard, how often, and what you’re doing about it.

Use this lightweight workflow:

  1) Clean up: remove filler, keep direct quotes where they change the meaning.
  2) Cluster: group evidence by theme (for example, ‘missing info’, ‘handoff delays’, ‘trust checks’).
  3) Score confidence: note sample size and how strong the evidence is (behaviour beats opinions).
  4) Decide: write what you’re changing, what you’re not changing, and why.
  5) Assign: list tasks with an owner and deadline.
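The clustering and confidence-scoring steps are mechanical enough to sketch. Here is one hedged way to do it in Python: tag each piece of evidence with a theme and whether it describes observed behaviour or an opinion, then count per theme. The themes, tags and threshold are assumptions for illustration, not a standard.

```python
# Sketch of 'cluster' and 'score confidence': group tagged evidence by theme,
# then weigh how much of it is observed behaviour vs stated opinion.
# Themes, tags and the 'strong' threshold are illustrative assumptions.
from collections import defaultdict

evidence = [
    {"theme": "missing info", "kind": "behaviour",
     "quote": "I had to email support to get the account ID"},
    {"theme": "missing info", "kind": "opinion",
     "quote": "I guess better docs would help"},
    {"theme": "handoff delays", "kind": "behaviour",
     "quote": "It sat with finance for three days"},
]

clusters = defaultdict(list)
for item in evidence:
    clusters[item["theme"]].append(item)

for theme, items in clusters.items():
    behavioural = sum(1 for i in items if i["kind"] == "behaviour")
    confidence = "strong" if behavioural >= 2 else "weak"
    print(f"{theme}: {len(items)} item(s), {behavioural} behavioural, confidence {confidence}")
```

Whatever threshold you choose, the principle from the article holds: an opinion without a recent real example should never move a theme from ‘weak’ to ‘strong’.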

For teams that live in calls, it helps if action items are created consistently and pushed into the workflow you already use. A system for automated action items from meetings can help, but treat it as a draft that a human signs off.

Conclusion

A good script keeps research honest, comparable and usable across the business. Start with the decision, timebox the session, and default to questions about real past behaviour. Then do the unglamorous part: turn notes into actions with owners and dates.

Key Takeaways

  • Write your user research script around a single decision, and limit must-learn questions to three.
  • Ask about recent real events first, then use follow-ups to get specifics, costs and workarounds.
  • Summarise fast, cluster evidence, and ship actions with owners and deadlines.

FAQs For User Research Scripts

How Long Should A User Research Script Be?

Keep the script to one or two pages so you can scan it live. The structure matters more than the word count, as long as it covers intro, consent, context, core questions and wrap-up.

How Many People Do I Need To Interview?

It depends on how varied your users are and how big the decision is, but many teams start seeing repeat themes after a small set of sessions. Treat early patterns as a signal, then top up if the evidence is mixed or high risk (Source: Nielsen Norman Group guidance on qualitative sample sizes).

Should I Share The Questions In Advance?

Share the topic and what you’ll cover, but usually not the full list of questions. If people rehearse answers, you get polished stories instead of what actually happened.

Can I Record User Interviews For Note-Taking?

Often yes, but you need to handle consent and transparency properly and store recordings securely. This is information only, and you should check what applies in your context (Source: UK ICO guidance on recording and data protection principles).

Keep Research Notes Consistent Across Calls

If you’re running interviews every week, the real cost isn’t the call, it’s the admin debt afterwards. If you want a controlled way to capture conversations, produce consistent summaries and keep actions from slipping, Jamy can help.

  • See how Jamy supports an AI meeting notes workflow
  • Use multilingual meeting summaries for global research calls
  • Standardise follow-ups with meeting action items and owners
