Research Beyond Usability Testing

Often when I hear designers talk about user research, they’re specifically referring to one of two things: usability testing or validation testing. While these two types of testing are important, there’s so much more in the user research toolkit that we can use.

Usability and validation testing, by their very nature, take place fairly late in the product development lifecycle. It’s not until you have something built or prototyped that you can use these methods. So how do you decide what to pursue building? How do you know you’re focused on the right thing?

What Defines Research?

According to the dictionary, research is defined as: “The systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.”

By this definition, anything from talking with customer support agents to looking at analytics falls under research. Oftentimes customer support is siloed into its own department and has very little communication with design or product management unless there’s a massive issue. Looking at usage data and analytics can be seen as the job of a business analyst.

If we want to understand where our users are hitting pain points we can resolve, where they bounce, or where they spend most of their time, looking at this data and working with these people is imperative.
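
To make that a little more concrete, here’s a rough sketch of pulling exit pages out of a raw page-view log. The event shape and field names are assumptions, not any particular analytics tool’s API, but most tools can export something similar.

```typescript
// Hypothetical page-view event shape; adjust to whatever your analytics pipeline emits.
interface PageView {
  sessionId: string;
  path: string;
  timestamp: number;
}

// Count how often each page is the last one viewed in a session,
// i.e. the page users exited from.
function exitCountsByPage(events: PageView[]): Map<string, number> {
  const sessions = new Map<string, PageView[]>();
  for (const event of events) {
    const views = sessions.get(event.sessionId) ?? [];
    views.push(event);
    sessions.set(event.sessionId, views);
  }

  const exits = new Map<string, number>();
  for (const views of sessions.values()) {
    views.sort((a, b) => a.timestamp - b.timestamp);
    const lastPath = views[views.length - 1].path;
    exits.set(lastPath, (exits.get(lastPath) ?? 0) + 1);
  }
  return exits;
}
```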

Just Enough Research

Stealing the title here from Erika Hall’s book: when thinking about what paths to pursue and what features to build, we need to know just enough to make a decent bet. Talking to users is always going to be valuable, but it’s not always viable, and you can’t be certain that the people you’re talking to are representative of your larger user base; survey bias is a thing, after all.

This is where data comes into play. A user can’t lie about the actions they take within a product or experience. Knowing what data to collect and pay attention to can provide near real-time information on what your users are doing. Tracking user flows and navigation through an app lets you spot trends emerging in the data. Is a large share of users bouncing from a specific page? Are users navigating to, or between, a select group of pages? With this information we can highlight flows that may need to be promoted within navigation, or take some time to explore what about a specific page might be causing users to leave.
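
Here’s what that flow analysis might look like in practice: a small sketch that counts page-to-page transitions within sessions, using the same assumed event shape as the earlier sketch.

```typescript
// Hypothetical page-view event shape, as in the earlier sketch.
interface PageView {
  sessionId: string;
  path: string;
  timestamp: number;
}

// Count "from -> to" page transitions within each session to surface common flows.
function transitionCounts(events: PageView[]): Map<string, number> {
  const sessions = new Map<string, PageView[]>();
  for (const event of events) {
    const views = sessions.get(event.sessionId) ?? [];
    views.push(event);
    sessions.set(event.sessionId, views);
  }

  const transitions = new Map<string, number>();
  for (const views of sessions.values()) {
    views.sort((a, b) => a.timestamp - b.timestamp);
    for (let i = 0; i < views.length - 1; i++) {
      const key = `${views[i].path} -> ${views[i + 1].path}`;
      transitions.set(key, (transitions.get(key) ?? 0) + 1);
    }
  }
  return transitions;
}
```

Sorting the transition counts gives you a quick picture of the paths people actually take, which you can compare against the paths you designed for.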

This same analysis should be used pre-emptively and throughout the product development lifecycle to validate assumptions and reduce the risk of proposed work. If you assume that making a change to feature X will improve user behavior Y, run a small-scale experiment, doing as little work as possible. Did the change you proposed move the needle on user behavior, or did it reveal a different problem that you actually need to solve?
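
One lightweight way to set that up, sketched below with illustrative names: bucket users deterministically so everyone sees a consistent experience, then compare how often “behavior Y” happens in each bucket. In practice you’d probably lean on an existing feature-flag or experimentation tool rather than rolling your own.

```typescript
// Deterministically bucket a user into control or variant so they always
// see the same experience. The hash here is intentionally simple.
function bucketForUser(userId: string, experiment: string): "control" | "variant" {
  const key = `${experiment}:${userId}`;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0;
  }
  return hash % 2 === 0 ? "control" : "variant";
}

// Hypothetical per-user outcome record: did this user do "behavior Y"?
interface OutcomeEvent {
  userId: string;
  didBehaviorY: boolean;
}

// Rate of behavior Y within one bucket of the experiment.
function behaviorRate(
  events: OutcomeEvent[],
  experiment: string,
  bucket: "control" | "variant",
): number {
  const inBucket = events.filter(e => bucketForUser(e.userId, experiment) === bucket);
  if (inBucket.length === 0) return 0;
  return inBucket.filter(e => e.didBehaviorY).length / inBucket.length;
}
```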

Avoiding Footguns

You just launched a killer new feature! Pop the champagne and pay out the bonuses! A few weeks go by and you notice usage numbers aren’t really increasing. Your feature is seeing good adoption across the user base, but the overall numbers aren’t improving.

It turns out that as part of your latest release you unintentionally caused a negative impact on another core part of your app. Without looking at and understanding usage data, you might not have caught this issue.

Another benefit of collecting and analyzing usage data across multiple products and flows is that you can see how any new feature or release impacts other parts of your application. Ideally you notice and catch these trends while running tests or experiments, but even post-launch you need to be monitoring and comparing data to understand the impact you’re having.
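
One simple way to keep an eye on this, sketched below with placeholder metrics, is to snapshot a guardrail metric per feature before and after a release and flag anything that dropped sharply.

```typescript
// Hypothetical per-feature health snapshot; swap in whatever metric matters for you.
interface MetricSnapshot {
  feature: string;
  weeklyActiveUsers: number;
}

// Compare before/after snapshots and flag features that dropped more than the threshold.
function regressions(
  before: MetricSnapshot[],
  after: MetricSnapshot[],
  threshold = 0.1,
): string[] {
  const prior = new Map<string, number>();
  for (const snapshot of before) {
    prior.set(snapshot.feature, snapshot.weeklyActiveUsers);
  }

  const flagged: string[] = [];
  for (const snapshot of after) {
    const previous = prior.get(snapshot.feature);
    if (previous === undefined || previous === 0) continue;
    const change = (snapshot.weeklyActiveUsers - previous) / previous;
    if (change < -threshold) {
      flagged.push(`${snapshot.feature}: ${(change * 100).toFixed(1)}% drop`);
    }
  }
  return flagged;
}
```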

Adam Sedwick

I work on design systems and advocate for accessibility on the web.
