About 15 years ago, Net Promoter Score (NPS) was born, giving companies a new way to measure loyalty and brand appreciation. It became an easy way to gauge customer engagement and customers' emotional disposition toward the company.
Sidebar: If you're unaware of what NPS is, it's a single question: "How likely would you be to recommend our company/service/product to family or friends?" rated on a 0-10 scale. A rating of 0-6 makes you a "detractor," a 7 or 8 makes you a "passive," and a 9 or 10 makes you a "promoter." To compute NPS, you subtract the percentage of detractors from the percentage of promoters.
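For concreteness, the arithmetic behind the sidebar can be sketched in a few lines of Python (the function name and sample data are mine, not part of any official NPS tooling):

```python
def nps(ratings):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, detractors rate 0-6; NPS is the percentage
    of promoters minus the percentage of detractors (-100 to +100).
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 middle responses, 2 detractors out of 10
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 5]))  # 50% - 20% = 30.0
```

Note that the three 7s and 8s in the example simply vanish from the score, which foreshadows the scale problems discussed below.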
So, NPS seems like a pretty good measure of customer experience. It's based on the same psychological principle as review sites like Yelp or Amazon: people want to talk about their experiences. If your experience is good, you want to tell others that they should enjoy it too. If you have a bad experience, you want to warn others off (and perhaps catch the attention of the owner so they can try to make it right).
However, we may have reached a tipping point where NPS no longer serves as a reliable, effective measurement tool. There are three key reasons why.
1. Too many people know about it. With so many companies using the same question, it's no surprise that people whose own companies measure NPS are asked the same question when they're out in the world as consumers. Does knowing what answer the question is fishing for skew the results? And if you know the game, do you decide not to play, declining to answer and shrinking the respondent pool?
2. It's being asked too often – and doesn't always apply. For example, I recently used my credit card company's website to pay my bill. The next day, I got an e-mail from the company... you guessed it: They asked me to rate, on a scale of 0-10, whether I would recommend their company and website to family and friends based on my experience.
In this particular circumstance, it was a routine exercise, and they met my baseline expectations for executing on promised functionality. Did they go above and beyond my needs to make it an extraordinary experience? No – but based on what I was trying to accomplish, I wouldn't have expected that, nor do I really see a way they could've made it "10-worthy" (except maybe if they'd declared my bill paid without taking money from my bank account). Had the function failed, that would've been another story, but what I did was a pass-fail interaction.
3. Other rating scales – and human psychology – work against it. Setting aside the argument that NPS is actually an 11-point scale, there are too many other rating systems out there that alter how people respond to the NPS question. When people are presented with a 5-point scale, there are usually clear directions for the sliding scale, and the midpoint is typically a "middle-ground/okay" response. Not so with NPS, where a middle number actually means you're still a detractor.
Similarly, an NPS response counts as a success only if you hit the top of the scale. But people are conditioned against giving perfect scores; we buffer ourselves from reacting in extremes unless the experience was truly exceptional. Given the perception of 10 as the top of a sliding scale, respondents likely feel comfortable giving a 7 or 8 with the intent of registering a positive rating – a response that NPS doesn't count toward the score at all.
Like it or not, we live in a ratings-based society, where decisions on what we buy and where we go are heavily influenced by other people's reviews on Amazon and Yelp. But there needs to be less reliance on the single NPS question to gauge emotional attachment to a product or company. Who will come up with the next "magic ratings bullet"?