Businesses today realize that one of the keys to success in a competitive marketplace is effective customer management. Companies see customer relationships as a strategic advantage and have invested considerable effort in ensuring that Customer Relationship Management (CRM) is high on the priority list. However, few companies have invested in a continuous measurement strategy that can signal potential dips in real time.
With the explosive growth of Do-It-Yourself research projects using online surveys, conducting customer satisfaction surveys has become more of an in-house operation. Many companies are asking themselves – How can we improve if we can’t effectively measure?
Customer satisfaction programs generally answer this question. We believe that for a customer satisfaction program to be effective and accepted, it should be more than just one survey sent out to all your customers annually. It should be an ongoing strategy of continuous measurement and improvement based on the feedback received. Validation of the improvements can be measured directly in the form of satisfaction indices.
With the advent of online ASP-based software solutions for CRM as well as online surveys, implementing a customer satisfaction program does not require a $100,000 budget. A simple and effective customer satisfaction program can be instituted for as little as $3,000 per year. However, the lower cost comes with some caveats. With the do-it-yourself strategy, although you can rein in costs, you also have to do the work yourself (as the term implies). You can, however, follow some guiding strategies and principles that help you avoid tactical mistakes.
Here we will go through the following items to help you get familiar with the different issues (and possible mitigation strategies) in conducting an effective customer satisfaction program for your business.
Overall Product/Service Satisfaction
Ongoing Support and Customer Service
Cancellation reasons and drop-outs
Effective strategies for presentation and data collection
Options for data-analysis and interpretation
Effective strategies for Presentation - For the most part, in our opinion, a 5-point scale (Very Dissatisfied -> Very Satisfied) battery of options imposes a fairly low degree of cognitive stress. Five points are usually enough to accommodate the spectrum of social perception. A battery of options (matrix) is generally preferred because it keeps the survey compact and lets respondents rate every component on the same consistent scale.
The basic principle is to put together a list (between 3 and 7) of "components" of your service that you'd like to measure. Add a final "Overall" satisfaction rating as well.
Options for data-analysis and interpretation
The "overall" satisfaction score should be close to the average of the satisfaction scores of the individual components. If the overall satisfaction is significantly out of line with the other component scores, it usually means that some form of bias is taking place, or that we are missing a component in the matrix.
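This sanity check can be sketched in a few lines of Python. The component names and scores below are hypothetical, and the 0.5-point threshold is just a rule of thumb we assume here, not a statistical standard:

```python
# Compare the "Overall" score with the average of the component scores.
# Ratings are on a 1-5 scale (1 = Very Dissatisfied, 5 = Very Satisfied);
# the component names and numbers are hypothetical.
component_means = {
    "Registration Process": 4.1,
    "Survey Authoring": 4.4,
    "Survey Distribution": 4.2,
    "Survey Analysis": 3.9,
}
overall_mean = 4.2

component_avg = sum(component_means.values()) / len(component_means)
gap = overall_mean - component_avg
print(f"Component average: {component_avg:.2f}, Overall: {overall_mean:.2f}, gap: {gap:+.2f}")

# Assumed rule of thumb: a gap larger than ~0.5 points suggests bias
# or a missing component in the matrix.
if abs(gap) > 0.5:
    print("Warning: overall score is out of line with component scores")
```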
Regression analysis can be performed on the data to derive importance scores for each of the components.
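A minimal sketch of such a derived-importance regression, using NumPy's least-squares solver. The component names and respondent ratings below are made up purely for illustration:

```python
import numpy as np

# Derive importance weights by regressing the overall satisfaction rating
# on the component satisfaction ratings. Each row is one (hypothetical)
# respondent's 1-5 ratings for three components.
X = np.array([
    [4, 5, 3],
    [3, 4, 2],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 3],
    [5, 4, 5],
])
overall = np.array([4, 3, 5, 2, 4, 5])

# Add an intercept column, then solve ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, overall, rcond=None)

for name, beta in zip(["Survey Authoring", "Survey Distribution", "Survey Analysis"], coefs[1:]):
    print(f"{name}: derived importance = {beta:.2f}")
```

Larger coefficients indicate components whose satisfaction scores move the overall score the most, which is what "importance" means in this derived sense.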
Now, for the fun part. Generally, when customers are comparison shopping, they are really comparing the options that are important to them. Measuring the importance of the different components of your product or service is a little more challenging than measuring satisfaction, because importance is generally relative. As Fred Van Bennekom points out in his excellent article: when taking a flight, what is important and what is very important? Price? Skymiles? No stopovers? Comfortable seats? Different people have widely different perceptions of importance and need.
Accordingly, we simply cannot take the same approach we took with measuring satisfaction. A five-point scale (Not Very Important -> Very Important) is simply not going to give you data that can be called actionable. Moreover, another 5-point scale that looks and feels very much like the previous (satisfaction) scale becomes monotonous and uninteresting. Remember, we always strive to make the survey engaging.
In our opinion, the easiest and most effective way of measuring importance is a simple multiple-choice question (select more than one option): display all the components and have your users choose the top three factors they consider important. For example:

Choose the three most important issues you consider when selecting an online survey vendor:
There are two parts to the data analysis that can guide us here: basic frequency analysis and TURF (Total Unduplicated Reach and Frequency) analysis.
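Both analyses can be sketched with the standard library alone. The option names and respondent selections below are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Each set is one (hypothetical) respondent's "top three" selections.
responses = [
    {"Price", "Ease of use", "Support"},
    {"Price", "Reporting", "Support"},
    {"Ease of use", "Reporting", "Branding"},
    {"Price", "Ease of use", "Branding"},
    {"Support", "Reporting", "Branding"},
]

# Frequency analysis: how often each option was chosen.
freq = Counter(opt for r in responses for opt in r)
print(freq.most_common())

# TURF analysis: which pair of options "reaches" (is chosen by)
# the largest number of distinct respondents?
best = max(
    combinations(sorted(freq), 2),
    key=lambda combo: sum(1 for r in responses if r & set(combo)),
)
reach = sum(1 for r in responses if r & set(best))
print(f"Best pair: {best}, reach: {reach}/{len(responses)}")
```

Frequency tells you which options are popular overall; TURF tells you which small set of options would together satisfy the largest share of your audience.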
One of the most effective measures of loyalty is the degree to which your customers will vouch for you. If your customers go out of their way to recommend your product or service to others, that is an effective measure of their perception.
Effective Strategies for Presentation - Again, in our opinion, simplicity is the key. A single question can give you a measure of how loyal your customers are. Asking your customers how likely they are to recommend your product or service to their colleagues and friends gives you a fairly good indication of how they perceive your service or product.
Real-Life Example: QuestionPro - How likely are you to recommend QuestionPro to your friends or colleagues?
Options for data-analysis and interpretation - If your mean for this question is close to 1 (option 1) you should be in good shape. For a positive growth environment, the mean should be between 1 and 1.5. Most of your customers should feel good about recommending your services or products to others.
A great deal of research has shown that customer loyalty is intrinsically tied to the fact that people still value word-of-mouth.
What is a Customer Satisfaction Index? Indices are very popular (the University of Michigan's Consumer Sentiment Index, The Conference Board's Consumer Confidence Index, etc.) in part because of their ability to effectively and accurately represent the underlying data with a single number. In absolute terms, an index does not have much value; it is the rise (or fall) of the index over time that actually makes a difference.
Generally indices are developed based on specific models. These models are specific to industries and are really beyond the scope of the current discussion. However, it is fair to say that indices are mathematical representations of the different components of the data that you collect.
In our example above: “Please tell us how satisfied (or not) you are with each of the following aspects of QuestionPro”
The model we use is as follows: QP-Satisfaction Index = ([Mean(Registration Process) × 0.5] + [Mean(Survey Authoring) × 1] + [Mean(Survey Distribution) × 1] + [Mean(Survey Analysis) × 1]) / 3.5
All we are doing here is weighting the "Registration Process" component down (0.5), because we believe that the registration process is half as important as the other components. The divisor, 3.5, is simply the sum of the weights (0.5 + 1 + 1 + 1).
We track the QP-Satisfaction Index on a daily/weekly basis and it is directly reflective of how we are doing as a company. As you can see above, the indices need not be very complicated. You can start off simple and as time goes along adjust the model to fit what you believe is correct.
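The model above comes down to a weighted average and can be computed in a few lines. The mean scores below are hypothetical daily values:

```python
# Weighted satisfaction index: weighted mean of component means.
# The means are hypothetical; the weights follow the model above.
means = {
    "Registration Process": 4.0,
    "Survey Authoring": 4.4,
    "Survey Distribution": 4.2,
    "Survey Analysis": 3.8,
}
weights = {
    "Registration Process": 0.5,  # weighted down: half as important
    "Survey Authoring": 1.0,
    "Survey Distribution": 1.0,
    "Survey Analysis": 1.0,
}

# Divide by the sum of the weights (3.5), so the index stays on the 1-5 scale.
qp_index = sum(means[k] * weights[k] for k in means) / sum(weights.values())
print(f"QP-Satisfaction Index: {qp_index:.2f}")
```

Because the divisor is the sum of the weights, you can add or re-weight components later without changing the scale of the index.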
With the growth of online surveys, long and unwieldy questionnaires are becoming increasingly common. It is relatively easy and tempting to create long surveys so that granular data points are collected. While on one hand this gives you all the data you need to make business decisions, it also introduces an important concept in online research called non-respondent bias.
What exactly is Non-Respondent Bias? Let's say you have 200 customers and you send a customer satisfaction survey to all of them. You get a response rate of 20%, so you have 40 responses. Now, the question is: do these 40 customers speak for all your customers? How confident are you that the responses from 20% of your customer base can be applied to most of your customers? What if only the very satisfied or the very dissatisfied customers actually took the time to complete the survey? Non-Respondent Bias is the bias, or skew, in the analysis and interpretation of your data that arises because a large percentage of your customers did not respond to the survey.
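Even before asking who responded, it helps to know how much raw sampling error a 40-of-200 sample carries. A rough sketch, assuming the standard margin-of-error formula with a finite-population correction:

```python
import math

# Margin of error for the 40-of-200 example at 95% confidence.
# Uses the worst case p = 0.5 and a finite-population correction.
# Note: this quantifies sampling error only -- it says nothing about
# *who* responded, which is the real risk with non-respondent bias.
N, n, p, z = 200, 40, 0.5, 1.96

moe = z * math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")
```

Roughly a 14-point margin either way, and that is the optimistic case where the 40 respondents are a random slice of your customer base.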
While there are many effective ways to make sure your response rates are high enough, our experience and research has shown that the primary factor behind abandoned surveys is survey length. Promotions and incentives will always increase response rates, but they are mechanisms for working around the issue rather than fixing the core problem: keeping your surveys short and simple.
So, how do you balance your analytical and business requirements while still keeping your surveys interesting and short enough to sustain high response rates? Obviously it is a balancing act; this is part of what market research agencies and consultants do. However, there are certain things we can suggest that help mitigate the issue.
The success of a customer satisfaction program will depend, in large part, upon how comprehensively and cohesively the data can be presented to business decision makers. One challenge that in-house (as well as external) research projects constantly face is that if the data differs significantly from what the business decision makers expect, it is often dismissed as anecdotal or a one-time phenomenon. To mitigate this issue, we strongly suggest exploring a continuous program in which real-time feedback can be provided to executive management on demand. Solutions like Customer Satisfaction Dashboards come in very handy for securing such buy-in, and they also build confidence in the solution.
The key to success for any customer satisfaction research study is dependent upon how well you balance conflicting data-analysis requirements against the need for simplicity. Customer satisfaction studies need not be all-encompassing. They can be short and still give you the data points needed to make informed business decisions. You can leverage technology to segment out populations that need further data collection (Very Unsatisfied users, etc.) and delve into the reasons behind their customer experience. Remember, you can never improve what you cannot measure effectively.