Design is creativity with a specified purpose. Design always has goals and intention behind it. So how do we know when we’ve achieved those goals? Measuring the success of design can be incredibly simple, but it can also be one of the most challenging aspects of a design career. With advertising, measuring success is straightforward: if more people purchase the product, you’ve done a good job.

What about product design, though? If your product is complex, with many interwoven functions and multiple design teams, how do you know whether your design change is successful? Did sales go up because of something you did, or something someone else did? Did conversion rates increase because of your design, or was it coincidence? And the opposite: if a metric tanks, was it your design, or did someone else cause the issue?

Measuring Design

When we talk about measuring the success of design, it’s important to start with a good understanding of what we’re measuring and why those metrics matter. Anyone familiar with product or business KPIs will know the familiar favorites: click-through rate, bounce rate, session duration, and NPS.

I’d like to argue, however, that these metrics are not as valuable when discussing the success of design, and can often indicate the opposite of what we’re actually trying to achieve. Bounce rate and session duration illustrate this well. If the goal of a product is to provide the desired information quickly and easily, we’d likely see an increase in bounce rate and a decrease in session duration. That doesn’t mean our product is bad or our users are unhappy. It may actually indicate that we’ve succeeded in providing exactly what the user needed to complete the task quickly and easily; they don’t need to search around or spend excess time in our product.

A few more valuable metrics for evaluating the success of design are task success rate, time on task, error rate, and error recovery rate. I particularly like starting with these four metrics because they facilitate conversations about the outcomes we’re trying to achieve with a design, as well as the tweaks we can make to help the user achieve those outcomes.
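To make these four metrics concrete, here’s a minimal sketch of how they might be computed from raw usability-test records. The record shape and field names are my own illustration, not from any particular analytics tool:

```python
# Hypothetical usability-test records: each dict is one user's attempt at one task.
# Field names ("completed", "seconds", etc.) are illustrative assumptions.
sessions = [
    {"completed": True,  "seconds": 42, "errors": 0, "errors_recovered": 0},
    {"completed": True,  "seconds": 65, "errors": 2, "errors_recovered": 2},
    {"completed": False, "seconds": 90, "errors": 3, "errors_recovered": 1},
    {"completed": True,  "seconds": 51, "errors": 1, "errors_recovered": 1},
]

def design_metrics(sessions):
    n = len(sessions)
    completed = [s for s in sessions if s["completed"]]
    total_errors = sum(s["errors"] for s in sessions)
    recovered = sum(s["errors_recovered"] for s in sessions)
    return {
        # Share of attempts that finished the task at all.
        "task_success_rate": len(completed) / n,
        # Average time among successful attempts only.
        "time_on_task": sum(s["seconds"] for s in completed) / len(completed),
        # Errors per attempt.
        "error_rate": total_errors / n,
        # Of the errors made, the share the user recovered from.
        "error_recovery_rate": recovered / total_errors if total_errors else 1.0,
    }

print(design_metrics(sessions))
```

Even on a handful of sessions, numbers like a 75% success rate or a low recovery rate give the team something specific to discuss.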

Task Performance Indicators

I’m certainly not the originator of these ideas; Gerry McGovern first wrote about TPIs back in 2016. Task Performance Indicators (TPIs) are a “stable, reliable, repeatable metric to test top tasks over time.” In practice, they’re a repeatable metric that lets you continually check on the health of specific tasks as you make changes to the design of your product.

The real magic here is that it doesn’t take much to get usable results. As with all research-related work, there’s a bit of up-front planning: you need to define what success looks like, identify representative user segments, and generate your tasks and questions, but you only really need to do that once. Because the intention of TPIs is to check health and progress over time, you should test with the same segments (though different users) and ask the same questions every time you run the study.

What TPIs provide is a clear breakdown of how quickly and successfully users can use your product. You get an indication of time-to-completion for each task and a solid indicator of whether the tasks themselves are easy to complete; if 40% of testers cannot complete a task, you’ve got some work to do. TPIs also serve as a health check on the product. If your scores go down, that’s a flag to investigate why. With continuous development, things change, get removed, or break all the time. Regularly evaluating your TPIs means you catch these issues and investigate them proactively, rather than learning at a quarterly review that monthly active users have plummeted and no one is sure why.
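That health-check idea can be automated in a few lines. This is a sketch under assumed data: a history of task success rates per study run (the task names, scores, and 5-point threshold are all hypothetical):

```python
# Hypothetical TPI history: task success rate per top task, one entry per study run.
tpi_history = {
    "find_pricing":   [0.82, 0.85, 0.84, 0.70],
    "update_profile": [0.91, 0.92, 0.93, 0.94],
}

def flag_regressions(history, threshold=0.05):
    """Flag any task whose latest score dropped more than `threshold`
    below the previous run, signaling a regression worth investigating."""
    flagged = []
    for task, scores in history.items():
        if len(scores) >= 2 and scores[-2] - scores[-1] > threshold:
            flagged.append(task)
    return flagged

print(flag_regressions(tpi_history))  # → ['find_pricing']
```

Running a check like this after each study turns the TPI from a report into an early-warning system: the 14-point drop on the first task surfaces immediately instead of at a quarterly review.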

Successful Design, Successful Business

At the end of the day, everything we do is in service of two groups: our users and the business, or as Marty Cagan puts it, we’re “creating solutions for our users, that work for the business.” Part of knowing whether our design changes “work for the business” is knowing what we’re measuring, why it’s the right thing to measure, and how it impacts the business.

Traditional business-centric and vanity metrics may not tell us whether we’ve been successful, and may even reflect the opposite. If we want our users to be happy and to keep using our products, we should focus on metrics like task completion rate, time on task, and error rates to inform our design and product decisions.

If you’re interested in learning more about design-related KPIs and measuring the impact of design, I suggest checking out these videos by Vitaly Friedman and his new course, Smart Interface Design Patterns.

Adam Sedwick

I work on Design systems and Advocate for Accessibility on the web.
