We need to ensure that our digital services are accessible and easy to use.
We do usability testing to find out how people use the digital services we design and build. We record how people use our website and forms as they attempt to complete a set task and we ask for their feedback. This helps us understand how people use things in ‘real life’ and if this matches our expectations and designs. We take the feedback from usability testing and use it to improve the user experience (UX) as we build.
I wanted to share some things I learned from our most recent usability testing sessions.
Alex and I carried out some usability testing on a new online service that will enable people to get information about their Council Tax account online.
I came away really happy: the sessions went well. We met some people, ran through some scenarios and learnt some very useful things about how people were navigating through the task.
At the end of the day I wrote up some brief notes on how the sessions went and the things that I wanted to take forward to use in the next usability testing session. I smiled as I typed them up. Life was good. It was a beautiful spring afternoon. The sun shone. I ate lunch in the park.
However, and there’s no easy way to break this: the next day back in the office I was devastated to find that we had lost all the audio from our testing sessions (in fact, it never recorded). This was despite checking all our equipment and set-up before each session. (Cue very sad faces, and disbelief at how this could possibly happen.) The cause? Pausing the recording software before each participant began.
Tips for running usability testing
Check your recordings after each participant
The lesson here isn’t to ‘not pause software’. It’s that if we had checked the quality of the previous session, before we started the next one, we would have spotted the error straight away, and been a lot less sad.
If you carry out usability testing, and are looking to learn from this blog, then the one thing I would like you to take away is to check your recordings. Check them after each participant and catch your unexpected equipment/software failures before they ruin your week.
You may find that you need to make smaller improvements, like adjustments to lighting or microphones.
We were able to salvage the situation with our notes, and the screen recordings without audio were more enlightening than I thought they would be.
Finally, here are the four other things that I felt were key for helping a day of usability testing run smoothly.
Ask what device people usually use
We use a laptop (MacBook Pro) for our usability testing as that’s where our screen recording software is installed.
We asked about internet use when recruiting participants for our testing, but we didn’t ask people about experience of different devices.
However, some of our participants had never used a Mac before and appeared a little daunted. We added a simple question to our script, asking “do you usually use a Mac or a PC?”. Asking this question at the start of the session enabled us to reassure our participants that they didn’t need to know anything about Macs to complete their task.
Are you sitting comfortably?
Explicitly inviting participants to “settle in” made a real difference. Things to consider here are the position of the mouse and the keyboard. I positioned the mouse below the centre of the keyboard. This prompted participants to move it to where they needed it to be. I also encouraged people to adopt a good posture (mostly pulling the chair up closer to the table). As a bonus for us, getting closer to the screen captured a better recording on the laptop’s webcam. (If we had had an external webcam, we could have adjusted its position.)
Encouraging “thinking out loud”
At the start of each session we ask our participants to describe what they are thinking, doing and looking at as they go through the task. However, sometimes this doesn’t happen.
Reminding participants, as they started the task, made a big difference to the amount of verbalisation. It’s really helpful to us when participants verbalise what they are doing and thinking as they complete the task. It provides an additional layer of feedback and creates a powerful impact when we share user testing sessions with our team. Of course verbalising is only useful if you capture the audio (*weeps*).
“Why am I here?” Reiterating the goal
We found that reminding participants during the session of the goal or task we’d set was really valuable. Losing track of the goal could be a sign of high cognitive load, or that the process design is not supporting the task as it should; after all, we are dealing with an artificial scenario in an artificial situation.
Onwards and upwards to the next session.