What is unmoderated testing?
Unmoderated testing is a user research method in which participants interact with a product, service, or prototype independently, without a researcher present. With software tools such as Hubble, product teams can build studies digitally and capture participants’ screen and video recordings for analysis. Because the test can be completed remotely, unmoderated studies are typically conducted in participants’ own environments and at their own pace.
Unmoderated testing vs. moderated testing
The main difference between unmoderated and moderated testing is whether a researcher is present during the study. Moderated testing requires a researcher in each session to guide participants, provide instructions, and ask questions in real time.
There is no clear formula for choosing between the two methods; the choice depends on your research goals, resources, timeline, and the desired level of control over the testing environment. The absence of a researcher is one of the primary benefits of unmoderated studies, but it is also one of their major weaknesses.
- Research goals: Consider the learning objectives for the study and what types of data would help you achieve them. For in-depth qualitative data, moderated studies can yield greater insight than unmoderated ones, since researchers can ask follow-up questions during the sessions. That said, unmoderated testing doesn’t necessarily yield poor qualitative data, and there are other factors to consider.
- Money and time: Because moderated testing requires a researcher’s presence, each session must be scheduled, which can limit the breadth of the participant pool due to time-zone differences. Moderated sessions are therefore more time- and cost-intensive, since recruiting, scheduling, and facilitation are all hands-on.
On the other hand, unmoderated testing offers a relatively cost- and time-effective alternative, as it can run remotely at participants’ convenience. Online research tools make a diverse participant pool readily available and quickly fill the targeted data points.
- Control over environment: Moderated testing may be more appropriate if the study requires a high degree of control over the test environment. For example, if the test involves an abstract concept that needs clarification, or requires setting up a developer environment with specific instructions, having team members present to guide participants is the safer option.
Unmoderated testing vs. concept testing vs. usability testing. What are the differences?
We often hear terms like concept testing, usability testing, and moderated/unmoderated testing used together or interchangeably. While the terms are all related, they have distinct meanings.
As mentioned above, unmoderated and moderated refer to whether a researcher moderates the study; they describe the context or setting of the study design. Concept testing and usability testing, on the other hand, are research methodologies with specific research objectives.
Concept testing: is suitable when you have early design concepts that you want to validate before building the entire product. For example, during a discovery phase, your team may have dozens of brilliant ideas that haven’t yet been validated for market fit. By presenting concepts and ideas with low-fidelity prototypes or storyboards, you can test and compare multiple concepts.

Concept testing focuses on attitudinal data, capturing participants’ overall feelings or beliefs about the concept. The essence is validating the concept by communicating its value, gathering feedback, and ideally iterating on the concepts. If the concept existed as an actual product, would it solve a particular need that your customers have? Some key questions for concept testing are listed below:
- How likely is it that this concept would solve a problem or fulfill customers’ needs?
- How well does this concept solve the problem in a way that’s meaningful to customers?
- How believable is the concept? (Especially relevant if the concept is at an early stage and the value proposition is abstract.)
- How different is this concept from the other solutions customers currently use?
Usability testing: examines how usable, effective, or pleasant a product or prototype is. Participants are typically given a specific task to complete while product teams observe how they interact and react to the product.

Unlike concept testing, usability studies are behavioral, focusing on observing participants and how they react to the product. Below are some key questions for running usability studies:
- How was the overall experience?
- What worked well and didn’t work well?
- What was most surprising to the customers? (Surprise can be both good and bad).
- What was most frustrating to the customers?
The question lists for concept testing and usability testing are not mutually exclusive, and many questions can be adapted to either method. However, understanding the distinction between the two is important for designing a research plan that prioritizes the project’s key objectives.
What are the pros and cons of unmoderated testing, and when should I use it?
If you are wondering whether your next concept or usability study should be unmoderated, here are some pros and cons of running unmoderated studies. As described in this article, unmoderated testing offers advantages such as scalability, cost-effectiveness, and access to a diverse participant pool, making it efficient for large-scale quantitative insights and iterative testing. However, it lacks real-time feedback and guidance. If you need in-depth qualitative feedback, or your setup or tasks are complex enough to require moderation, moderated studies can be more effective.
Pros of unmoderated testing:
- Easy and fast to collect data
- Time flexibility
- More diverse participant panel (no geographical limits)
- Low cost in money and time to run study iterations
Cons of unmoderated testing:
- Lack of a moderator to clarify questions or ask follow-ups
- Relative lack of in-depth, qualitative feedback
- Depending on the type of study, reviewing videos and data can be time consuming
The cons listed above can be remedied somewhat by simplifying the study tasks, having a structured set of questions, and piloting the study to ensure that everything flows as expected before having external participants go through it.
After weighing the pros and cons of unmoderated testing, remember that a study doesn’t have to be strictly one or the other. Consider iterating or running multiple stages of the project to incorporate both moderated and unmoderated testing. For example, unmoderated tests are often used to collect additional data points when existing data show conflicting results, or when you simply need a larger sample size for statistical significance.
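To get a feel for what “a larger sample size for statistical significance” means in practice, here is a minimal sketch using the standard proportion-based sample-size formula. It assumes you are measuring a proportion such as a task completion rate, uses the conservative p = 0.5, and defaults to a 95% confidence level (z = 1.96); it is an illustration, not a substitute for a proper power analysis.

```python
from math import ceil

def sample_size(margin_of_error, p=0.5, z=1.96):
    """Estimate participants needed to measure a proportion
    (e.g. a task completion rate) within +/- margin_of_error.

    p=0.5 is the most conservative assumption about the true rate;
    z=1.96 corresponds to a 95% confidence level.
    """
    return ceil(z * z * p * (1 - p) / margin_of_error ** 2)

# Completion rate within +/-10 percentage points at 95% confidence:
print(sample_size(0.10))  # 97 participants
# Tightening to +/-5 points roughly quadruples the requirement:
print(sample_size(0.05))  # 385 participants
```

The quadratic relationship between margin of error and sample size is why unmoderated studies, which can recruit at scale cheaply, are often the practical way to reach these numbers.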
So how do I run an unmoderated study?
- Define Objectives: The very first thing to do is clearly outline the research goals and what you want to learn from the study. The learning objectives will shape the study design, the type of data to collect (quantitative or qualitative), the research methods, and more.
- Identify Customer Profile: Once the research goals are set, identify the characteristics of your ideal target audience. Begin with basic demographic background, then expand to what their workflow looks like, which tools they use, and more. Ultimately, you should have a few key characteristics that set the target audience apart from similar groups. If the customer profile is set too broadly, the results may also be too broad to interpret.
- Choose a Platform: Select a reliable testing platform that aligns with your research needs and budget. Compare different features available among various platforms to decide which one will best suit your needs.
- Create Tasks: Develop clear, concise tasks or scenarios for participants to complete during the test. Make sure the setup and tasks are straightforward enough that participants can complete the study on their own. Every question should be meaningful, and avoid jargon your internal product teams may use, to prevent confusion. This article offers additional tips on writing clear instructions for unmoderated studies. When running concept or usability tests with Figma prototypes, clearly communicate that the product is merely a prototype and that not all features or buttons may be functional. Encourage participants to think aloud, expressing their thoughts as they navigate the prototype.
- Launch the Test and Collect Data: Deploy the test to participants, ensuring they understand the instructions and expectations. With Hubble, you can review results immediately as responses are submitted.
- Analyze Results: Interpret the collected data to draw meaningful conclusions and insights. What was expected, and what was surprising? Connect how the findings relate to the original research learning goals. In Hubble, you can view summarized results and detailed prototype results.
- Share the Results with Stakeholders: This article shares quick tips and suggestions for creating compelling research reports. Below are some additional suggestions for sharing research insights with stakeholders:
- Create an executive summary page that highlights the research findings in a few bullet points: What are the top three takeaways you want stakeholders to walk away with? Make the results memorable by grounding them in the actual context of a user.
- Highlight the journey of a particular user or participant who stood out: Narrate their background, needs, pain points, and successes along the journey.
- Don’t limit share-outs to presentations: Presentations can be one-directional and less engaging. Get creative by organizing workshops to collaboratively analyze results or share findings.