
Writing Effective Usability Test Scripts with Examples

January 24, 2024

What is a usability test script?

A usability test script outlines the specific tasks, scenarios, and instructions given to participants during a usability test. It serves as a guide for moderators or facilitators, providing a structured framework for conducting the test. The script commonly includes an introduction, background questions, and a series of tasks and prompts for participants. It is typically part of a larger usability test plan document and functions as a framework that maintains consistency, keeps the focus on objectives, and improves the overall efficiency of the testing process.

To learn more about writing a usability test plan, refer to the guide to usability test plans or the guide to creating a UX research plan.


Why do you need a usability test script?

  1. Consistency: A script ensures that each participant experiences the same tasks and scenarios, promoting consistency across the testing sessions. This allows for fair comparisons between participants.
  2. Focus on Objectives: The script helps maintain a focus on the research objectives by aligning tasks with the goals of the usability test. It ensures that the test is structured to gather relevant data.
  3. Reproducibility: With a script, the usability test can be reproduced in the future or replicated by other researchers. This is valuable for validating findings or conducting follow-up studies.
  4. Task Complexity: The script assists in carefully designing tasks with appropriate levels of complexity. It allows for a controlled introduction of features and functionalities to observe user interactions.
  5. Efficiency: Facilitators or moderators benefit from a script by having a clear roadmap for each session. It minimizes the risk of overlooking tasks and ensures that the test runs smoothly.
🖌️ Quick Tip
✓ Run through the script to estimate how long the overall study and each task would take.

✓ Have team members review key questions to ensure that they accurately reflect the research questions and the overall learning objectives.

Key components of an effective usability test script

Whether you are a seasoned researcher or not, having a study script helps the moderator stay on task. Even though not every question and detail in the script has to be addressed during each session, using the script as a reference guide helps you cover all the important questions. Below are the key components that should be included in a script:

Introduction

The initial step is to make an introduction and outline the structure of the upcoming usability testing session. Establishing mutual understanding, setting clear expectations, communicating the estimated duration, and addressing any potential concerns are essential.

It's crucial to remember that a usability test involves human interaction. Participants perform tasks while being observed, which creates a dynamic that relies heavily on communication. Thus, building a rapport with participants becomes important to the success of the usability test. Take the opportunity to introduce yourself and your team, aiming to break the ice by finding common ground—whether it's a shared location, mutual interests, or similar professional experiences.

  • The importance of the introductory paragraph is often overlooked. Take the time to provide the context of the study and build rapport with participants by clarifying what the study is about and what to expect, whether your study is moderated or not.
  • Cultivating a comfortable atmosphere before the test commences contributes to more genuine feedback. This, in turn, results in more accurate and valuable outcomes.

Background questions

When running a usability test, avoid jumping straight into the tasks. Instead, take the time to introduce yourselves and learn more about the participant's background. When discussing their background, be specific about their experience related to the research goals and the product you're testing.

  • Include introductory questions that will help you learn more about the participants. What does their background look like? What do their typical behaviors and toolsets look like in relation to the research questions you’ve set out?
  • Remember to keep the questions open and conversational. These questions will not only help you learn more about the participants, but also help them warm up and get comfortable before diving into the tasks.
  • Get participants to talk through and respond to the background questions by avoiding closed-ended questions.
  • Probe the key questions needed to understand participants’ backgrounds. Their responses to these background questions can later be tied to their task results.
    ❌ Don’t: Do you use tool X?
    ✅ Do: What tools do you use to go about achieving goal Y?
  • As shown above, instead of asking a binary question, be specific about their product usage in the context of the research.

Examples of common background questions

Background questions in a usability test are crucial for gaining context about the participants and understanding their experiences, expectations, and proficiency levels. Here are some key background questions to consider:

Background and workflow

  • Please tell us about yourself, your role, and your professional background.
  • What does your typical day look like?
  • Walk me through your typical workflow.
  • How do you go about achieving a job/task?
  • When was the last time you went about doing job/task?

Product usage

  • What tools do you use and why?
  • How does the tool fit in the context of your typical workflow?
  • How familiar are you with product X?
  • How often and when do you use a certain tool?
  • When was the last time you used the product?

Challenges and pain points

  • What are common challenges or pain points?
  • How do you currently go about solving the pain points?
  • What alternatives are you using to address the challenges?

Framing usability tasks

When framing usability tasks for user research, it's crucial to create scenarios and instructions that encourage natural user behavior. The way you frame the tasks can heavily affect how participants go about each task. Here are some key considerations to keep in mind:

  • Never blame the participants; instead, emphasize that you’re interested in testing how your product performs. Especially when participants struggle or fail to accomplish a task, be careful not to imply that the failure is their fault. Otherwise, participants will feel discouraged and lose confidence, which will likely affect subsequent tasks and questions.
  • Encourage participants to think out loud by explaining how it helps you collect data and make sense of their interactions with the product. If the study is moderated, feel free to chime in and prompt thinking aloud when participants fall silent or become too focused on the task itself.
  • Depending on the fidelity of the prototype or product you are testing, set expectations by emphasizing that it is a prototype and that not everything might work, such as certain buttons and menus. Participants who are less familiar with usability testing might otherwise mistake the prototype for an incomplete product and comment on buttons that weren’t supposed to work.
  • Frame the questions so that they are relevant to the goal, or to what your actual users would normally want to accomplish. Develop scenarios and tasks that mirror real-world situations users might encounter, and ensure they resonate with participants and align with typical use cases. As you write the tasks, don’t give away too much in the instructions.
  • Sequence tasks logically, considering a natural flow of interactions. Begin with simple tasks to build participants’ confidence before progressing to more complex scenarios.

Minimizing bias in usability tasks

The way you frame and word the tasks influences how participants approach them. Below are some common strategies to minimize potential biases:

  • Refrain from providing hints or leading participants toward specific features or solutions. Instead, keep tasks open-ended and allow users to explore and discover on their own.
  • Maintain a neutral tone to minimize any unintentional bias or influence, whether you are moderating the study or writing out tasks for an unmoderated study. Use language that does not sway participants toward a particular response.
  • Be deliberate about the words and terms you use, as they might bias the participants. Sometimes participants get attached to certain keywords and only look for those key terms in the product. For example, if you are testing a redesigned settings page, avoid using the actual term “settings” and instead frame the task as “How would you go about managing your account details?”

Post-task/wrap-up questions

When crafting post-task questions in a usability test, it's essential to relate them to participants' interactions with the prototype. Depending on the goal of the usability task, some of the points below will be more relevant to your study. Here are some considerations to keep in mind, followed by example questions:

  • Encourage participants to express their thoughts freely. Very often, participants hold back honest feedback because they believe negative feedback will hurt the product team’s feelings. Use open-ended questions to allow for detailed responses.
  • Overall experience: Ask about the specific task just completed to gather insights into the user's process and experience. Focus on what worked well and what didn’t, what could be improved, and whether anything was surprising. It’s also useful to point out a particular interaction and follow up on why they behaved that way.
  • Satisfaction: Assess participants’ satisfaction with the task or feature. Inquire about any elements that contributed to a positive or negative experience.
  • Expectations: Explore whether participants’ expectations align with their actual experience. Inquire about any surprises or deviations from what they anticipated.
  • Ease of use: Ask participants to rate the ease of completing the task. Ask questions about any difficulties encountered and the reasons behind them.
  • Discoverability: Assess how easily participants found specific features or information. Dig into any challenges they faced in locating certain features or elements in the design.
  • Preference and Feedback: Ask participants about their preferences regarding the design, layout, or functionality. Solicit general feedback on the overall user interface.
  • Suggestions for Improvement: Encourage participants to provide constructive suggestions for enhancements. More importantly, inquire about how those changes would affect their usage or overall experience.
  • Prioritization: If there were multiple tasks, ask participants to rank them based on perceived importance or difficulty. Also, it is important to understand their reasoning for prioritizing tasks in a certain order.

Examples of common post-task questions

For the example questions below, you can further gauge participants' experience by having them rate it on a numerical scale. Capturing their experience in numbers adds a quantitative dimension to your insights. Also, make sure to follow up on why they gave a particular rating.

  • How was the overall experience using feature X? Why?
  • What do you think went well or didn’t go well?
  • On a scale of 1 to 10, with 1 being very difficult and 10 being very easy, how would you rate the overall ease of completing the task?
  • If any, what was surprising or unexpected during the tasks?
  • If any, what was most challenging during the task?
  • How did the design or layout of the interface contribute to completing the task?
  • How difficult was it to discover a specific feature or piece of information?
  • Based on your experience, how well did the task align with your initial expectations?
  • What changes or improvements would you suggest to make this a better experience?
  • How would the changes you suggested actually affect your experience?
  • If you were to prioritize the tasks you performed today, how would you rank them in terms of importance or difficulty?
🖌️ Quick Tip
When having participants rate their experience, make sure to follow up with why. Ask them what a perfect score would look like to them.

Reflection and closing questions

Closing questions in a usability study are useful for gathering overall impressions, addressing any remaining concerns, and obtaining feedback on participants’ holistic experience.

  • Use the closing questions to gauge their overall experience with the product or prototype. Focus on capturing the overarching impressions.
  • Relate back to the early introduction questions and see how the product experience compares with participants’ initial expectations. You can also probe how the experience compares with other toolsets or current solutions they employ, and have them rate their willingness to recommend the product or switch to it.
  • Probe which elements of the product are likely to be memorable for the participants, and explore which aspects made a lasting impact. These could be either positive or negative, so it is important to follow up with why.
  • Use the last few questions to let participants exhaust any remaining thoughts and comments. Encouraging them to share anything loosely related can lead to new insights.
  • If your study is moderated, check in with teammates before wrapping up to see if they have any follow-up questions.
  • Last but not least, thank the participants for their time and participation. Go a step further by describing how their feedback and the findings will be used and contribute to product development.

Examples of common closing questions

  • How was the overall experience with the study? Why?
  • How did your actual experience today compare with your expectations before the study?
  • Which aspects of the product do you think you will remember after 6 months? Why?
  • If any, which part of the study stood out most to you? Why?
  • Today, we looked at improving the user experience in area X. In regards to that, do you have any last comments or thoughts you want to share with me or the team?
  • Before closing, I want to check if any of the teammates have any questions.


Example of a full usability test script

The script below is an example, with text in brackets suggesting additional tips for moderating a usability study.

Introduction:

*Hello, thank you for joining us today. My name is XXX. I’m a YYY at company ZZZ. We also have team members joining the call to observe and learn from your experience today. Have you done this kind of study before? [Take a brief moment to make participants feel comfortable]. For today, you’ll be asked a series of introductory questions so that we can learn more about you and for the later part, you’ll be using our tool to perform some tasks. As a reminder, we are not testing you or your particular skill, but we are testing our new tool. For any of the questions and tasks, there are no right or wrong answers [Make them feel comfortable—Never blame the participants for their poor performance]. Your honest feedback will help us shape the direction of the product. Do you have any questions before we get started?*

Can I begin recording for note-taking purposes? [Get permission to record as needed]

Background questions:

  • Tell us a little bit about yourself and the work that you do related to XXX.
  • What are some tools that you use for your task XXX?
  • How do you use those tools in your current workflow?
  • When was the last time you had to perform work XXX?
  • If any, what are some challenges when you are performing XXX or using the tool?

Usability tasks:

*For the next part of the study, we want to observe and learn from your experience as you work with our tool. We are testing the tool, not you [Before diving into the tasks, reassure them that their task performance is not what's being judged. We want participants to feel as comfortable as possible]. It is also extremely valuable for us if you think aloud, verbalizing what you see and think as you go [Remind them to think aloud].*

  • Task 1: You are provided with tool XXX to perform your work YYY. Show us how you would go about using this tool.
  • Task 2: Your next task is to achieve a specific goal ZZZ using our tool XXX. Walk us through how you would go about it.
  • Task 3: What other ways in the tool could you achieve the previous task?

Post-task questions:

  • How was the task? What went well or didn’t go well? Why?
  • What was most surprising or unexpected when doing the task? Why?
  • How was the discoverability or ease of use for performing XXX using the feature?
  • What changes or improvements would you suggest to make this a better experience?
  • How does this task compare to the previous task? What are the differences?

Closing questions:

  • How was your overall experience today?
  • When was the last time this tool XXX could’ve been useful to you? [Probe and ground for actual context of how and when they would actually utilize the tool].
  • On a scale of 1 to 10, how likely are you to use tool XXX? Why?
  • On a scale of 1 to 10, how likely are you to recommend tool XXX to your colleagues? Why?
  • On a scale of 1 to 10, how different is tool XXX from other solutions out there? Why?
  • What will you remember most from this session six months from now? Why?
  • What are some changes that you wish to see in tool XXX?
  • How would those changes improve your experience with tool XXX?
  • As we are looking to improve the experience in XXX, do you have any last-minute thoughts or comments to share with me or the team?
  • Check in with teammates for questions.
🖌️ Quick Tip
Prioritize important questions by bolding them in the script. In case time runs out during a session, the moderator will know which key questions still need to be addressed.


Tips for effective test scripts

Having a well-thought-out test script ensures the study is structured and consistent throughout multiple sessions. Below are some tips and recommendations to consider when developing your next study.

1. Set clear study expectations

Communicate the high-level purpose of the study, how long it will take, and what its general flow will be like so that participants can set expectations. Stating what the study will be about is an important part of the script's introduction.

Additionally, sharing a few bullet points on what to think about in advance can help participants better prepare for the session. An easy, effective way is to email participants about what the study will cover. For example, have participants recall their most recent relevant experience so that their feedback during the study is grounded in actual context.

2. Provide clear instructions and avoid jargon

Think about the audience that will be participating in the study. In particular, avoid jargon that your product team uses internally. What is obvious to product teams who live and breathe the product area might not be as obvious to participants.

Moreover, be intentional about the terms that you use during the study. Oftentimes, participants become attached to certain keywords, which affects their overall behavior or performance during the tasks. Thus, to avoid unknowingly biasing participants, use neutral terms and avoid emotive words.

3. Refrain from giving too much detail in the tasks

With prototype tasks, don’t give away too much detail in the instructions. Frame structured tasks that focus on the end goal the participants should ideally achieve. In other words, what job do your usual customers want to accomplish by using your product or a certain feature?

4. Remind participants to think out loud during the tasks

Encourage participants to think aloud as they go through tasks. Having them read out the instructions ensures that they don't skip any important descriptions. Moreover, having them use their mouse cursor to indicate where they’re looking makes it easy for you to pair their reactions with the specific components they’re engaging with.

5. Follow-up with why and how questions

Unless the participants are well trained in thinking out loud, it is tempting for them to answer questions without elaboration and quickly move on to the next. It is the role of the moderator and the study design to ensure that you collect as much qualitative data as possible. Just as we add logic to surveys for a smooth flow of the study, consider adding conditional follow-ups to your script: what if a participant answers..., what if a participant prefers design B..., and so on. It is critical to always hear the reasoning behind participants' ratings.

6. Review and pilot-test the test script

Lastly, test the study internally to make final changes and catch any potential misinterpretations. Have key stakeholders contribute by reviewing key questions. Double-check that the questions being asked clearly reflect the learning objectives of the project, and have a second pair of eyes review the flow of the study and read through the instructions.


Frequently Asked Questions

What is a usability test script?

A usability test script is a detailed plan outlining the tasks, scenarios, and questions that participants will engage with during a usability testing session. It serves as a guide for the moderator to ensure consistency and thoroughness throughout the testing process.

Why is a usability test script important?

A usability test script helps maintain consistency across testing sessions, and ensures that all relevant tasks and scenarios are covered in a structured framework.

To learn more, see the sections above on why you need a usability test script.

How do I create task scenarios for usability testing?

Task scenarios should be realistic and representative of typical user interactions with the product or interface. They should be specific, actionable, and focused on the goals of the testing session.

How many tasks should be included in a usability test script?

The number of tasks in a usability test script depends on factors such as the complexity of the product or interface being tested, the duration of the testing session, and the goals of the research. A usability test typically includes around 5-10 tasks to ensure thorough coverage without overwhelming participants.

Jin Jeon

UX Researcher
Jin is a UX researcher at Hubble who helps customers collect user research insights. Jin also helps the Hubble marketing team create content related to continuous discovery. Before Hubble, Jin worked at Microsoft as a UX researcher. He graduated with a B.S. in Psychology from UC Berkeley and an M.S. in Human-Computer Interaction from the University of Washington.