User Testing in Realistic Environments
Designing for mobile means designing for multiple environments. Most of the time, we don’t use our phones in quiet spaces like conference rooms, yet that’s where we usually test our designs. Recently, the Prolific SF team explored testing designs in more realistic contexts of use and learned why getting out of the office for user research can be crucial.
Real life isn’t like a testing lab.
Most contexts modern humans move through don’t facilitate deep focus, so your user is probably distracted while she’s interacting with your mobile product. Formal testing is great for catching usability issues or ironing out flows that serve engaged users, but it doesn’t answer questions like, “Will a user keep this app on her phone?” or “What is this experience like while I’m commuting on the bus?” To dig into these questions, we decided to meet our target users in chaotic environments as well as orderly ones.
How do I create a realistic testing environment?
You fake it. Given Prolific’s agile process, we decided the best approach was to create our own down-and-dirty take on Contextual Design methods. Stalking our target users during their daily routines didn’t seem viable (or legal). Instead, we found or created environments that mirrored real-life contexts of use as closely as possible.
We realized that two key factors influence how people interact with an interface: attention and intent. If a substitute environment can evoke the same levels of attention and intent that users have in your real environment, then you’ve created an effective simulation.
The first key to faking it: Attention.
This is that whole focus/distraction thing I keep talking about. Is your user paying attention? Or is she in an environment or mindset that makes focusing difficult?
The second key to faking it: Intent.
If attention is the orchestra, intent is the conductor. When your user has high intent, she has a clear goal in mind and can better tune out distractions and direct her attention.
The terms “high intent” and “low intent” are often used in the context of e-commerce, but they don’t only apply to retail. Your users can have goals besides buying something, and the strength of that desire can vary.
Can I have some examples, please?
How we test discovery
That user discovering your content or product for the first time? She probably isn’t giving you her full attention. What better way to simulate that vibe than interrupting people in public spaces and asking them to give five minutes of feedback on a prototype?
We try to do some testing like this every week of a project if new user acquisition is a primary business goal. If the design truly grabs people’s attention, we’re onto something.
Testing Environment: Guerrilla-style ambush (Public spaces in office buildings or coffee shops are great options.)
Attention: What is that person next to me doing? I like this song! Karen, why did you move our meeting again!?
Intent: IDK, LOLZ CATS. Or work. Not this, go away.
How we test shopping
By the time clients decide to actually buy a product they found on your platform, their intent has shifted to high. One trend we’ve seen in consumer behavior is using an app or website as a shopping guide in an actual storefront. Public spaces can be distracting, but intent to purchase keeps people relatively focused.
When working on e-commerce platforms that also have a brick-and-mortar presence, we work with partners to intercept high-intent shoppers in their storefronts. With these tests, we evaluate how effective the prototype is at guiding clients to a product or facilitating conversation with store employees.
Testing Environment: A real-life store
Attention: Lights, People, Oh! What I need is by that sign.
Intent: I want to buy this product.
How we test educational content
Finally, some of the products we work on need to actually teach people things. At the point where a user commits to trying out a lesson or watching a tutorial, the intent to complete a task is high and attention is laser focused. In the past, we’ve actually gone to users’ homes and watched them complete tutorials in their own workspaces to evaluate how effective those tutorials were.
Testing Environment: A private, quiet space
Attention: Watch, Do, Repeat
Intent: I want to do this thing.
But does this actually produce better designs?
We think so! By testing video tutorials with users in their natural workspaces, we learned that chaptering videos to match the workflow was key. We made it easier for users to rewatch key sections until they “got it” without having to restart the video or manually find the right spot. Similarly, by guerrilla testing a browsing experience, we found that using gifs on a category detail page surprised and delighted users, grabbing their attention.
Keep the Discussion Going
We’d love to chat more about contextual research. As we work on new types of products, we’re always brainstorming new ways to simulate contexts of use! Feel free to reach out if you’d like to riff on the possibilities.