Balancing Self-Reported Data with Behavioral Insights
Prolific Interactive

Political pollsters will tell you that the order and wording of questions matter a great deal. Ask voters their position on a controversial issue, then ask whether they plan to vote for a candidate who falls on one side of that issue. The results will differ from what you would get by asking about candidate support first and the issue position second.

It’s hard to control what is top-of-mind for anyone at any particular moment. What commands our memory, attention, and emotion is often driven by external factors. What people tell you they do and what they actually do can be two very different things.

Why It Matters

From constructing personas and journey maps to refining interaction patterns just before launch, collecting feedback directly from users is now common practice among designers, strategists, and product managers. But how that feedback is collected plays a major role in what it tells you, and both interviewer and interviewee may fall prey to a host of biases that are difficult to eliminate from the process. This introduces the worrisome possibility that the research shaping your product isn’t painting an honest picture of what users want and need.

And sometimes that is okay. Design is always at least a little experimental, and every good researcher understands that you rarely prove an idea absolutely right; at best, you show that it isn’t absolutely wrong. To perpetuate the apocryphal but still thought-provoking quote attributed to Henry Ford: “If I asked people what they wanted, they would have said faster horses.” Many interpret this to mean that disruption involves taking things one step further than your research can verify.

However, disruption and uncertainty aren’t crutches to lean on in the absence of quality research. With a little elbow grease applied to your session guides, there are some simple ways to check whether self-reported data matches behavioral observations.

Implementing User-Generated Content with Lilly Pulitzer

Research into implementing user-generated content (UGC) in the second version of the Lilly Pulitzer app posed an interesting challenge: how accurate would users be in reporting how often they look at non-professional photography in the process of making a clothing purchase?

Unsurprisingly, we got responses across the full spectrum: some users reported never looking or caring to look at pictures of other people wearing products, some users said they always do, and many said, “I’m not sure.”

To start uncovering actual behavior, we knew we had to open the subject of UGC less directly.

What social apps do you use most often? We hoped they’d say Instagram or Pinterest.

What accounts do you follow? We hoped they’d talk about stores or fashion bloggers.

Why do you follow them? We hoped users would say something like “I get style or outfit ideas” or, even better, “sometimes they’re promoting an item and I’ll buy it.”

Have you ever made a purchase you wouldn’t have otherwise because of something you saw on social media? If so, what about the picture or post prompted you to go to the store page?

Next, we wanted to see how those answers mapped to what users might recognize about non-professional product photography. We asked our participants to go through about 30 product photos, roughly ten each drawn from three buckets (standard product photography, UGC, and model photography), and tell us what signifiers on the page led them to believe a photo belonged to one of the three.

Now we had two sets of data: the role user-generated content plays in their purchasing process, and how they identified the differences between UGC and all other types of content. But we weren’t done yet!

Our next step was to test UGC concepts, though we didn’t ask users to tell us what they thought about the UGC, or even point out that there was UGC on the screen. We simply asked: “Walk me through how you’d judge whether or not this is a product you’d want to purchase.” We were able to trace the flows that most often led our research participants to view UGC, and we rapidly iterated our prototypes to better reflect those flows. Outcomes from users who followed a process that mirrored their Discovery and Picture Identification exercise responses were weighted more heavily in our final design recommendations.

The last step was to run a bare-bones diary study with the users whose step-by-step walkthroughs deviated the most from their Discovery and Picture Identification answers. We gave gift cards to a small set of our research participants and asked them to chart the process they went through to make a purchase on an app or website that we knew featured user-generated content. We then followed up with them a few days later to see what they could remember about their purchase process and what they felt was missing. Some participants reported checking non-professional photography or reviews before making their purchase, and we were able to use some of this trailing evidence to polish the iconography and tie it more directly to Lilly’s delightful user review feature.

What began with a lot of confusion ended with highly rated designs that have accumulated positive reviews on the App Store. The process was straightforward: first, find out what users say they do and why they say they do it; second, find out whose words best match their behavior, and build that into a primary use case; third, find out how to simplify edge cases and polish the visual application of the experience.

Conclusion

Finally, and it is often forgotten, any kind of discovery research, no matter how thorough, has its limitations. Particularly when it comes to new and experimental features, well-crafted and cared-for experiences are ones where designers and strategists have worked together from square one to tag and track interactions and to surface funnel data.
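As a rough illustration of what tagging and tracking interactions can look like in practice, here is a minimal sketch in Swift. The AnalyticsClient wrapper and the event names are hypothetical, introduced only for this example; they are not drawn from the actual Lilly Pulitzer implementation.

```swift
import Foundation

// Illustrative event names only; not the real Lilly Pulitzer event taxonomy.
enum UGCEvent: String {
    case ugcCarouselViewed = "ugc_carousel_viewed"
    case ugcPhotoTapped = "ugc_photo_tapped"
    case addToBagAfterUGC = "add_to_bag_after_ugc"
}

// Hypothetical thin wrapper; a real app would forward these calls
// to whatever analytics SDK the team has chosen.
struct AnalyticsClient {
    func track(_ event: UGCEvent, properties: [String: String] = [:]) {
        print("track:", event.rawValue, properties)
    }
}

// Tag the moments the research cares about, so funnel data can later
// confirm or contradict what interviewees reported.
let analytics = AnalyticsClient()
analytics.track(.ugcCarouselViewed, properties: ["product_id": "12345"])
analytics.track(.addToBagAfterUGC, properties: ["product_id": "12345"])
```

With events like these defined up front, the funnel data that accumulates after launch can corroborate or challenge what participants said in interviews and diary studies.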

Our work on implementing user-generated content in the Lilly Pulitzer app is just one example of how to marry what users say they do in an interview setting with what they actually do when they’re checking the app on a lunch break or just before they go to bed.