AF CyberWorx is focusing on Human-Centered Design for the month of October. It’s the secret ingredient of all that we undertake and can be an extremely valuable addition to most processes. Please follow us on social media to gain more insight into the value of Human-Centered Design and enjoy this week’s blog post by our lead UX/UI designer, Mr. Larry Marine!
THE SECRET TO A SUCCESSFUL USABILITY TEST
Among the more popular forms of user research, usability testing is often conducted to gain insight into the users’ perceptions of a design. Like any other tool, usability testing yields the best results when performed correctly. An incorrectly performed usability test can lead you down the wrong path.
A common test design approach is to include specific directives on how to use a product, such as “print out a receipt.” While this seems innocuous enough, it actually biases the users’ behaviors and consequently the results. If users think they know the objective, they inherently try to please the tester. It’s in our nature. Users may even ‘game the system’ (focus more intently than normal on the stated outcome) to try to please us.
Rather than telling the users to do something, try using a question that can only be answered by performing a set of tasks. The advantages of using questions are plentiful:
- As humans, we love to solve problems or answer questions. Using a question in a test creates an intrinsic motivation that elicits a more realistic behavior. If users know what you are looking for, they are likely to alter their natural reactions and focus more attention on that specific action.
- Questions can hide your intentions. Ask about an aspect of the task that occurs after the action you are actually focusing on. For instance, instead of telling the user to print a receipt, ask them, “Can you tell how many pages are included in the printed receipt?” They will print a receipt to answer the question without realizing that printing is what you are observing.
- Questions reduce the tension a user might feel to perform a task. If they are given a directive, they believe it can be done. Thus, if they cannot complete the task, they feel it is their failure, not the design’s. If they cannot answer a question, they can legitimately say they don’t know the answer, which is less stressful and does not influence their behavior.
Another test design failure involves unintentionally biasing users by providing an incomplete prototype that lacks the screens users encounter when they veer off course. It’s not necessary to have a complete prototype, but at least include the screens a user would likely reach through the common errors in the tasks you are testing.
In the real world, users won’t immediately realize they have made a mistake and will continue to click around a few more screens. This observed behavior gives real insight into how the users finally determine that they have made an error and then how they try to get back to where they made the error.
A prototype that only has screens and interactions for the happy path unintentionally informs the users when they have taken a wrong turn, thus interrupting their natural behaviors. This limits your ability to identify what cues the users are relying on to determine if they are going down the right path and how they recognize when they haven’t. A key tenet of usability testing is that you learn more from watching users making and recovering from mistakes than you do from them NOT making mistakes.
Another common mistake is to rely on usability testing as the only user research method. Usability testing is an excellent method for capturing evaluative insights but offers little generative insight. Watching people use your design only informs you about the existing design, not about truly innovative design approaches or unmet needs. Those require more generative methods, such as interviews or observations.
Usability testing can be a very useful tool, or it can mislead you if performed incorrectly. Biasing the users with a poorly conducted test yields inaccurate insights, leading to design changes that solve the wrong problems. There is a major difference between what users say and what they do, and you should always weight performance over stated preference. You must strive to design your test to collect accurate information; otherwise you are just wasting your time and resources.
The team at AF CyberWorx can help you plan your next usability test (or any user research) to gain the maximum benefit.
*The postings on this blog reflect individual team member opinions and do not necessarily reflect official Air Force positions, strategies, or opinions.