How to Make the Most of Your Human Factors Usability Study
Written by Alex Therrien and Kelly Catale
Human Factors (HF) usability studies often seem chaotic, unpredictable, and confusing, but they don’t have to be. Investing a small amount of planning and keeping some of the guidelines below in mind will make your experience better. The Human Factors experts at Sunrise Labs have served multiple medical device and combination product companies in both the client role and the consultant role, which enables us to offer these tips based on our real-world experience. Here are some things you can do as a client to help your Human Factors consultants make the most of your upcoming usability study.
1. Make sure your Human Factors Engineering team is investing time into learning how your product functions
When seeking a Primary Care Provider, we often want someone who works hard to understand our health history, current conditions, and other important factors that shape who we are as a person, to help inform their medical decisions and diagnoses. Similarly, when seeking an HF partner to conduct your usability study, you should always choose to engage a team that is capable of, and committed to, understanding as much as possible about your system’s technology, features, and capabilities, to help inform their project work.
Your HF partner should have the technical depth to think like system engineers. Understanding the core functions of your system and how they are achieved is as critical to usability testing as understanding the user profiles and use environment.
Make sure that your HF partner is investing in learning how your system functions and is curious about how the subsystems interact to achieve the product’s purpose. A testing team is only as effective as its level of engagement with the product; the team needs to understand how the system works in order to evaluate its usability.
2. Make time to work with the testing team to understand your system
This point goes hand-in-hand with point #1. Once you find a capable and curious HF partner, make sure to work with them to encourage understanding of your system’s key principles and functionality. Make a point to invest time in providing an overview and sharing important information about your product, rather than asking your HF consultants to learn on their own.
Your time investment might take the form of a brief presentation about the product’s history, key design decisions, and an explanation of technology, followed by a Q&A session. Demonstrations of the system can be very beneficial as well! Some of our best experiences with usability studies have come from clients that dedicated a full workshop to educating the testing team, which ensured that the system was well understood.
Time spent up front helping your HF consultants learn more about your system will pay off significantly, later in the project. The testing team will be able to identify usability issues and their root causes more easily during test sessions and the testing report will be richer and more comprehensive. Importantly, your HF consultants will be able to provide realistic design recommendations that align with the system’s technology and design progression, which makes them directly actionable for the team. This saves you money and time, compared to filtering through weaker, less relevant recommendations.
3. Dry run, dry run, dry run!
What is a dry run? Think of it as a dress rehearsal before the “real testing” begins. Dry running your usability study protocol early catches problems with the protocol, test setup, and the prototype that could lead to bad data or wasted testing time during the usability study.
There is often a rush at the end of every prototype build to just “get things working”, which inevitably causes bugs and unexpected issues. Dedicating time to beta test your protocol and prototype before running formal usability study sessions ensures:
- The testing team is comfortable with the use scenarios and understands the expected functionality of the prototype,
- Bugs are found and fixed before they create noisy testing data that will mislead regulatory reviewers, and
- Required infrastructure, like a connection to a cloud database, functions as expected and doesn’t cripple the team during field testing, which is the most expensive portion of the testing cycle.
4. Accept that you will have usability findings
Imagine you are heading into a summative usability test. You have prepared and worked hard to reach this moment. The first session starts, and a participant does something unexpected.
Don't panic. Trust the process and know that you don't have to fix everything.
Every study has findings. Make sure your study design allows ample time to review all findings and identify what the end result (harm) would be. If your design mitigations are comprehensive and you have spent time iterating your design, it’s very likely that the harm from observed use errors has already been prevented or reduced.
Sometimes, your findings indicate that additional changes to the system design are required. Make sure you have time to digest and address key findings after a usability study. This is very important. You invested time and money into conducting the study for a reason!
If your system’s use-related risks have been adequately managed with a robust set of design mitigations, your team can evaluate whether any remaining usability findings would impact your business case. If a finding presents no risk to the user or patient and no impact to the business case, add it to your product roadmap for a future release.
5. Invest in realistic set dressing to avoid distraction
Have you ever looked at “fake” screens or mockups of web pages or forms and thought “Wow, that name sounds funny” or “Those numbers don’t seem possible”? The same kinds of thoughts can happen when usability study participants view mock data sets in prototypes.
Preparing a clinically accurate data set to include in the prototype (with real numbers and accurate symptoms) is just as important to a prototype's function as the features being simulated. Engaging with your clinical advisors or gathering this type of data during early research will help your prototype seem more accurate and realistic to the study participants.
Even if the study is simulated use, keeping participants invested in the illusion that the study accurately represents a real healthcare experience gets them to act more naturally. More natural behavior means your data is more accurate and helps you identify real shortcomings rather than inaccurate findings caused by testing-induced confusion.
6. It is important to listen to participants; it’s even more important to observe their actions
Participants aren't always aware of what they are doing, which means they are typically unaware when they commit a use error. Additionally, the cause and timing of the use error may be completely disconnected. We rarely encounter participants who are self-aware enough to articulate a truthful and accurate root cause.
Observation of the participant’s behavior is often a much more accurate indicator of the root cause of a use error. Watching where their attention was focused and tracking interaction patterns gives a much better indication of the relationship between the use error and the device's interface.
That said, there is still value in asking users about their mindset and thoughts leading up to a use error. Their impression of how a use error occurred can help further clarify how the participant believes the system works.
Emphasizing the participant's assessment is risky because humans are notoriously unreliable at self-reporting. Participants will attempt to help explain what happened to the best of their ability, but their reporting will be heavily influenced by trying to tie together what they believe to be related elements into a cohesive story: their understanding of the system, their understanding of the task, and their limited recollection of what just happened. Stay skeptical of these potentially "clean answers" from the participant unless they are supported by your direct observations during the task.
7. Stay calm even when your participants are a mess
Humans can be difficult, nervous, clumsy, or forgetful. So can study participants. Your initial reaction to study participant behavior might be that these complications are an unrealistic burden for the study and should be treated as a testing artifact. Resist this urge. Testing artifacts are rare in a well-designed and well-executed study. These conditions may be frustrating, but they are realistic.
Your system users will not always be in a peaceful and focused mindset. They have lives, job responsibilities, finances, and families, all of which contribute to distraction during system use. Ideally, testing would be limited to calm participants in a distraction-free state. Your patients’ day-to-day lives don’t function that way, and they will bring at least some of that reality to the usability test.
Accept that your users are human, and consider that your system may need an innovative way to cut through distraction. Designing a system that works with distracted, unfocused users will reduce the urge to over-correct the usability study design to exclude these types of users.
8. Add time to create prototypes and ensure they address what you want to learn
The key to a valuable usability study is preparedness, which includes well prepared prototypes. First, take time to understand what you want to learn from your usability study. Are you looking to gather feedback on a certain function or system interaction? Do you want to evaluate different potential design mitigations and see which option is most effective? Do you want to confirm that you have mitigated a particularly high-risk use error? Are you evaluating a usability validation protocol? Answers to questions like these will help you determine what functionality is required for your study prototype.
Additionally, when creating your project schedule, consider what resources and time will be required to generate software, hardware, labeling, and other materials needed for study prototypes. Dedicate time to this effort and know that it is value-added, even if some or all of the prototypes are made solely for the usability test and will not be included in the final product. Schedules that allow for purposeful prototyping end up leading to more meaningful test results.
9. Use a whiteboard to track “top line” findings
In nearly every room behind the one-way mirror or study observer collaboration space, there is a whiteboard -- use it! After conducting the first couple of study sessions, keep a running list of important trends and findings that you consider valuable. You can update this list in real-time during the study, and use its contents to help support post-session debrief discussions.
Keep the list in a place that the entire team can see (rather than just your personal notebook), so that everyone can be engaged in the process of updating and adding to the list. At the end of the study, the list can be translated into a preliminary summary for sponsors and stakeholders, while the full report is still in progress.
10. Eat healthy snacks and meals, and get enough rest
Usability testing is a marathon, not a sprint. Taking care of yourself during the testing week can make you feel more energized, engaged in the process, and motivated to listen to, and learn from, your study participants’ feedback. Eating balanced meals, drinking enough fluids to stay hydrated, and getting enough rest are all key investments for a top performing team.
Alex Therrien is the Director of User-Centered Design and Kelly Catale is a Principal Human Factors Engineer at Sunrise Labs. Check out our User-Centered Design Experience