As I've mentioned previously, surveys are a means of validating assumptions. A week or so ago, a colleague and I were chatting about how to discover which UI design for a specific screen our users enjoyed most. Three iterative designs were in play, so I suggested an A/B/C test to validate his assumption: the majority of users would like B the most, A the second most, and C the least. He had already hypothesized the answer; now he needed to validate it.
One really effective way to validate these assumptions, and one that I stand by, is the Kano Model. In short, Kano seeks to measure the "delight" that features, functionality, and experiences bring a customer. Because delight is an emotion, Kano's ability to help you visualize emotional attachment to your product is powerful - possibly the most powerful driving force behind product affinity.
Before we analyzed our data using Kano, we had to, not surprisingly, gather the data. We undertook an omni-channel feedback approach, one that took customer feedback from numerous, sometimes disparate sources and compiled it into one meaningful illustration. In our case, those sources were direct feedback from site visits, our annual user conference, an ideas page, surveys, and what I refer to as "excited utterances" - things you pick up from customers just by being in an industry. We went through the feature requests and whittled them down to four popular features we wanted to validate.
With those features, we then built a Kano survey. This isn't a run-of-the-mill "do you like this?" type of survey; there is a very specific format to follow (a sketch of one way to record the answers for analysis follows the questions below). Each question should be formatted like this:
1) If the software did [x], how would you feel?
-I like it that way
-It must be that way
-I am neutral
-I can live with it that way
-I dislike it that way
2) If the software didn't do [x], how would you feel?
-I like it that way
-It must be that way
-I am neutral
-I can live with it that way
-I dislike it that way
3) How important is [x] to you? (1-10)
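For the analysis later, it helps to capture each respondent's three answers per feature in a structured form. Here's a minimal Python sketch; the field names are my own for illustration, not part of the Kano method:

```python
from dataclasses import dataclass

# The five answer options, in the order they appear above.
SCALE = [
    "I like it that way",
    "It must be that way",
    "I am neutral",
    "I can live with it that way",
    "I dislike it that way",
]

@dataclass
class KanoResponse:
    feature: str        # the [x] being tested
    functional: str     # answer to "If the software did [x]..."
    dysfunctional: str  # answer to "If the software didn't do [x]..."
    importance: int     # self-stated importance, 1-10
```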
The first question, "If the software did [x], how would you feel?" is meant to help you understand the customer's delight if a feature is included. The second, the opposite of the first, tells you how "undelighted" they would be without that feature. The importance question is then used to validate those responses on a 1-10 scale, 1 being low importance and 10 being high.
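Practitioners typically score the first two answers together using the standard Kano evaluation table, which maps each functional/dysfunctional answer pair to a category: Attractive (A), One-dimensional (O), Must-be (M), Indifferent (I), Reverse (R), or Questionable (Q). Here's a minimal Python sketch of one common version of that table; the short answer keys and helper name are mine, not part of the method:

```python
# Rows are the functional answer ("did do [x]"), columns the
# dysfunctional answer ("didn't do [x]"), both in the SCALE order above.
EVALUATION_TABLE = {
    #               like  must  neutral live  dislike
    "like":        ["Q",  "A",  "A",    "A",  "O"],
    "must-be":     ["R",  "I",  "I",    "I",  "M"],
    "neutral":     ["R",  "I",  "I",    "I",  "M"],
    "live-with":   ["R",  "I",  "I",    "I",  "M"],
    "dislike":     ["R",  "R",  "R",    "R",  "Q"],
}

ANSWER_KEYS = ["like", "must-be", "neutral", "live-with", "dislike"]

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return EVALUATION_TABLE[functional][ANSWER_KEYS.index(dysfunctional)]

# e.g. "I like it that way" if present and "I dislike it that way" if
# absent -> "O" (one-dimensional: satisfaction scales with presence).
assert classify("like", "dislike") == "O"
```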
You ask these questions for each of the items you're validating. In the end, you have some powerful visualizations showing which items you should put on your roadmap and which you shouldn't.
Here are the visualizations:
The above image shows the positioning of the features based on respondents' reactions to the first two "how would you feel?" questions. For our purposes, the X-axis tells us how dissatisfied customers would be if the feature were not included; the Y-axis indicates how satisfied they would be if it were. If you look at the purple and red dots, you'll note that not only would those features make users relatively happy if included, their absence would also make users relatively unhappy. Those two items, without much question, should be your top two candidates for roadmapping.
The blue dot tells us that the feature is moot - it is only neutrally satisfying when included, and its absence causes relatively little dissatisfaction. In other words, you shouldn't spend your time here.
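If you want to compute those plot coordinates yourself, charts like this are typically built from Berger et al.'s customer satisfaction coefficients, the same coefficient mentioned below. A sketch building on the classify() helper above:

```python
from collections import Counter

def satisfaction_coefficients(categories):
    """Berger et al.'s customer satisfaction coefficients.

    categories: the Kano categories (one per respondent) for a single
    feature, as returned by classify(). Returns (better, worse):
    'better' (0..1) is expected satisfaction if the feature ships,
    'worse' (-1..0) is expected dissatisfaction if it doesn't.
    R and Q responses are excluded by convention.
    """
    c = Counter(categories)
    a, o, m, i = c["A"], c["O"], c["M"], c["I"]
    total = a + o + m + i or 1  # avoid division by zero
    better = (a + o) / total    # Y-axis in the chart above
    worse = -(o + m) / total    # X-axis (magnitude = dissatisfaction)
    return better, worse

# Hypothetical responses for one feature:
print(satisfaction_coefficients(["O", "O", "A", "M", "I", "O", "M"]))
# -> (0.571..., -0.714...): worth roadmapping, like the red/purple dots
```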
To further ensure the results in the customer satisfaction coefficient chart aren't an anomaly, we use the self-stated importance ranking to validate them. In this example, the red and purple dots correspond to Q02 and Q04. As you can see, the pattern persists in the importance responses, indicating a strong preference for the red and purple features.
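The cross-check itself is just an average of the self-stated scores per feature. Assuming the KanoResponse records sketched earlier, it could look like this:

```python
from statistics import mean

def mean_importance(responses, feature):
    """Average self-stated importance (1-10) for one feature."""
    return mean(r.importance for r in responses if r.feature == feature)
```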
The importance scale also came in handy when a few responses came through in which the respondent indicated "It must be that way" on the first question, "I dislike it that way" on the second, and ranked the importance as a 0 or 1. The first two answers say the feature is required; an importance score of 0 or 1 says it barely matters. Because the answers contradict each other, we had to exclude those responses.
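If you want to screen for those automatically, a small filter along these lines works; the importance threshold is illustrative, not a standard from the Kano literature, and classify() is the evaluation-table helper from above:

```python
def is_contradictory(r):
    """Flag answer sets that undercut themselves, like the ones we
    excluded: a must-be answer pair next to a rock-bottom importance
    score, or a pair the evaluation table marks Questionable."""
    category = classify(r.functional, r.dysfunctional)
    return category == "Q" or (category == "M" and r.importance <= 1)

# clean = [r for r in all_responses if not is_contradictory(r)]
```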
The final piece of this effort came this week, when we sent out invitations to beta-test the high-ranked features. Within an hour, we had received several enthusiastic responses. That's notable because our beta opportunities generally aren't met with such response rates or enthusiasm. The enthusiasm we saw seems to validate Kano's ability to measure delight.
I know there are quite a few approaches to executing Kano. If you have one, please send me a message about it or comment below. I hope this helps!