Many consumers give a different customer satisfaction score when re-interviewed.

  • Report 109
  • John Dawes and Lara Stocchi
  • May 2022

Abstract

We find that many consumers give a different satisfaction score when re-interviewed, even though the overall satisfaction score for the brand stays the same. This effect is seen even among people who did not make a purchase from the service provider between surveys, so it is not due to an intervening service experience changing their satisfaction level. The overall proportions of people in our study who said they were satisfied, neutral to somewhat satisfied, or dissatisfied stayed the same between the two surveys, so the overall brand score was stable. But the proportion of people within each group who gave the same score in both surveys was not 100%: it was 83%, 56% and 43% for satisfied, neutral to somewhat satisfied, and dissatisfied, respectively. There is more stability for the more popular response of ‘satisfied’, and less stability for the less popular initial response of ‘dissatisfied’. We conclude that satisfaction scores behave like brand attributes: stable in aggregate but unstable at an individual level. Moreover, just as with brand attributes, there is more over-time stability for survey responses that received higher initial agreement.

This report draws three marketing implications from the findings: the first on evaluating intervention efforts, the second on customer word of mouth, and the third a more general takeaway about marketing metrics.

Introduction

We know that only around half of the consumers who say a brand has a certain attribute will say so again when re-surveyed. For example, of all the consumers who agree that Dove ‘leaves my skin soft’, or that HSBC ‘is a bank for people like me’, only around half will say the same thing when re-surveyed later (Dall’Olmo Riley et al., 1997).

We wondered: could there be a similar effect for customer satisfaction? To what extent will consumers give the same or different scores about how satisfied they are with a service provider? The answer will be useful to marketers who commission and interpret satisfaction research. While satisfaction metrics change with fashion (e.g. the arrival of NPS – see our three sponsor reports on NPS* – or ‘CX’, customer experience), it has remained popular to measure customer satisfaction and to run marketing interventions aimed at customers with differing levels of satisfaction. But the question arises: how stable are individuals’ satisfaction levels with a brand?

The study

To answer this question, we obtained data from a large European retail chain that runs a loyalty card program. The chain surveys members on their satisfaction and matches these scores to individual-level purchases. A sample of these consumers answered the satisfaction survey on two occasions, so from this data we can examine the over-time stability of customer satisfaction scores. We use data only from respondents with no recorded purchase at the retailer in the period between the two surveys, which ensures that score changes are not due to a recent service experience.

Consumers were asked two questions on a 1 to 7 scale, where 1 is completely dissatisfied and 7 is completely satisfied. The questions (brand and category names masked) were:

  • In total, how satisfied are you with brand X?
  • Taking into account all aspects related to buying [category] Y, I am very satisfied with brand X.

Respondents were re-surveyed six weeks later. For simplicity, we re-coded their responses from each survey into three categories: Dissatisfied (1 to 3 out of 7), Neutral to somewhat satisfied (4 or 5 out of 7) and Satisfied (6 or 7 out of 7).
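To make the recoding concrete, here is a minimal sketch in Python (the example responses are hypothetical, not data from the study):

```python
# Band 1-7 satisfaction scores into the three categories used in this report.
def band(score: int) -> str:
    if score <= 3:
        return "Dissatisfied"                   # 1-3 out of 7
    if score <= 5:
        return "Neutral to somewhat satisfied"  # 4-5 out of 7
    return "Satisfied"                          # 6-7 out of 7

# Hypothetical example responses, for illustration only.
print([band(s) for s in [7, 4, 2, 6, 5]])
```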

 

High satisfaction scores

In the first survey, most respondents were satisfied, with 69% and 63% of them scoring 6 or 7 out of 7 for the two questions. This result is consistent with other studies that find most brands get high customer satisfaction scores (Fornell, 1995; Peterson and Wilson, 1992). The scores in aggregate were virtually identical in the second survey, so overall satisfaction remained stable.

 

The satisfaction score ‘repeat-rate’

Next, the ‘repeat-rate’ – the proportion of people giving the same satisfaction score in the second survey – is 83% for those who were satisfied, 56% for those who were neutral to somewhat satisfied, and 43% for those who said they were dissatisfied. So a larger proportion of people moved from dissatisfied to a higher score than moved from satisfied to a lower score. This result is remarkably similar to work on the stability of brand attributes: attributes have a repeat-rate of around 50%, and the repeat-rate is higher for brand attributes that received a higher initial response (Castleberry et al., 1994; Dall’Olmo Riley et al., 1997).
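For each initial band, the repeat-rate is simply the share of its respondents who fall in the same band in the second survey. A minimal sketch of the calculation in Python, using hypothetical paired responses rather than the study data:

```python
from collections import Counter

# Each tuple is one respondent's (survey 1 band, survey 2 band).
# Hypothetical pairs, for illustration only.
pairs = [
    ("Satisfied", "Satisfied"),
    ("Satisfied", "Neutral"),
    ("Neutral", "Neutral"),
    ("Dissatisfied", "Neutral"),
    ("Dissatisfied", "Dissatisfied"),
]

totals = Counter(first for first, _ in pairs)    # respondents per initial band
repeats = Counter(first for first, second in pairs if first == second)

for band, n in totals.items():
    print(f"{band}: repeat-rate {repeats[band] / n:.0%}")
```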

Averaged results for the two satisfaction questions are shown in Table 1.

Table 1. Proportions of people giving the same or a different satisfaction level over two surveys

Satisfaction level in survey 1         Same level in survey 2 (repeat-rate)   Different level
Satisfied (6-7 out of 7)               83%                                     17%
Neutral to somewhat satisfied (4-5)    56%                                     44%
Dissatisfied (1-3)                     43%                                     57%
Weighted average                       74%                                     26%

The overall weighted average repeat-rate is 74%. Recall that the brand’s overall satisfaction score stayed the same from survey to survey. This result is therefore an example of regression to the mean: a well-known statistical effect whereby extreme observations made at one point in time tend to move toward their overall mean level when re-observed, even though overall results stay stable. The upshot is that while the overall results of a survey of many individuals are usually an accurate estimate of population-level customer satisfaction, individual-level scores are far less fixed, and far less reflective of ‘true’ satisfaction levels, than has been thought – especially if they are negative scores.
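The mechanics can be reproduced in a small simulation. In the sketch below (all numbers simulated, not from the study), each respondent has a fixed ‘true’ satisfaction level and each survey answer adds random noise; nothing changes between the two surveys, yet respondents who looked dissatisfied the first time score higher the second time while the aggregate mean stays put:

```python
import random

random.seed(1)

# Each respondent has a stable 'true' level; every survey answer adds noise
# and is clamped to the 1-7 scale. Nothing changes between the two surveys.
def answer(true_level: float) -> int:
    return max(1, min(7, round(true_level + random.gauss(0, 1))))

true_levels = [random.uniform(3, 7) for _ in range(10_000)]
survey1 = [answer(t) for t in true_levels]
survey2 = [answer(t) for t in true_levels]

mean = lambda xs: sum(xs) / len(xs)
print(f"aggregate means: {mean(survey1):.2f} vs {mean(survey2):.2f}")  # stable

# Respondents who scored 'dissatisfied' (3 or less) the first time drift up.
drifters = [b for a, b in zip(survey1, survey2) if a <= 3]
print(f"initially dissatisfied, survey 2 mean: {mean(drifters):.2f}")
```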

 

Implications

Three management implications arise. First, what if marketers design interventions aimed at buyers based on their satisfaction levels? Suppose we target satisfied customers with a reward, or target dissatisfied customers with an incentive to make them like us a bit more. Perhaps we find the reward has little positive effect – the satisfied customers now seem not so satisfied. What a shame! And we also find that many of our dissatisfied customers are not dissatisfied any more – the incentive must have worked! But we will have been completely hoodwinked by regression to the mean, and our conclusions about the targeted interventions will be wrong. Some of our satisfied customers will naturally say they are not quite as satisfied when surveyed a second time – they were at the peak of the satisfaction scale the first time, so they cannot go any higher. And some of the dissatisfied customers will naturally give a higher score the second time: there was an element of random chance in their initial score, and they can hardly go much lower, so the average score of this group will be higher the second time around. These effects occur without any marketing intervention. So marketers in this example, and in many other contexts, need to distinguish real effects from regression-to-the-mean effects when evaluating customer data over time, or when evaluating interventions aimed at particular groups.
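One practical way to make that distinction is a holdout control group: select the target group on their first-survey scores, intervene with only half of them, and compare. The regression-to-the-mean drift shows up in both halves, so the gap between them isolates the real effect. A sketch with simulated data and an assumed intervention lift (neither is from the study):

```python
import random

random.seed(2)

# Observed score = stable true level + survey noise (+ any intervention lift),
# clamped to the 1-7 scale. All data here are simulated.
def observe(true_level: float, lift: float = 0.0) -> float:
    return max(1.0, min(7.0, true_level + lift + random.gauss(0, 1)))

true_levels = [random.uniform(2, 7) for _ in range(20_000)]
first = [observe(t) for t in true_levels]

# Customers who *looked* dissatisfied (3 or less) in the first survey,
# split into an incentive group and an untouched control group.
looked_dissatisfied = [t for t, f in zip(true_levels, first) if f <= 3]
treated, control = looked_dissatisfied[::2], looked_dissatisfied[1::2]

second_treated = [observe(t, lift=0.3) for t in treated]  # assumed +0.3 lift
second_control = [observe(t) for t in control]            # no intervention

mean = lambda xs: sum(xs) / len(xs)
print(f"control mean: {mean(second_control):.2f}  (regression to the mean alone)")
print(f"treated mean: {mean(second_treated):.2f}  (drift plus the real effect)")
```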

Second, we have all heard the old line that a satisfied customer will tell a few people about their experience, but an unhappy customer will tell many more. We are certainly not implying that one should ignore dissatisfied customers. However, the results here show that dissatisfied customers, on average, move upward in their scores when re-surveyed more than satisfied customers move downward. Their dissatisfaction is often not firmly held, so they may not be especially inclined to talk to others about it. This finding is consistent with previous research by East, Hammond and Wright (2007), who reported that positive word of mouth is actually around three times as prevalent as negative (see Institute Sponsor Report 34: Good News about Bad News: Talking about Word of Mouth).

For the third marketing implication, recall that we found a very large proportion of this retail chain’s customers were satisfied. In fact, this is the norm: it is unusual for a brand to get poor satisfaction scores. Moreover, in past work we have found that

(1) brands in a category tend to get quite similar customer satisfaction scores; and

(2) those satisfaction scores don’t tend to change that much in aggregate over time.

In turn, this tells us that customer satisfaction, while undeniably important, is not the difference between big and small brands, or between those that are growing and those that are declining. Those differences are far better explained by mental and physical availability. The business needs to ensure it is measuring how well it links to purchasing, usage and situational cues in category buyers’ minds, and the extent to which it is present and prominent in all the places people shop and buy the category.

 

Thanks to Carl Driesener, Cathy Nguyen, Byron Sharp and Zac Anesbury for giving feedback on previous versions of this report.

 

*Three sponsor reports on Net Promoter Scores (NPS):

Why Net Promoter Score is Actually a Bad Tool and What to Use Instead

Net Promoter Score (NPS) Does Not Predict Growth – it’s fake science

The Net Promoter method makes scores vary too much over time

 

REFERENCE LIST

Castleberry SB, Barnard NR, Barwise TP, Ehrenberg A and Dall’Olmo Riley F. 1994. Individual attitude variations over time. Journal of Marketing Management, 10 (1-3): 153-162.

Dall’Olmo Riley F, Ehrenberg A, Castleberry SB, Barwise TP and Barnard NR. 1997. The variability of attitudinal repeat-rates. International Journal of Research in Marketing, 14 (5): 437-450.

East R, Hammond K and Wright M. 2007. The relative incidence of positive and negative word of mouth: a multi-category study. International Journal of Research in Marketing, 24 (2): 175-184.

Fornell C. 1995. The quality of economic output: empirical generalizations about its distribution and relationship to market share. Marketing Science, 14 (3, Part 2): G203-G211.

Peterson R and Wilson W. 1992. Measuring customer satisfaction: fact and artifact. Journal of the Academy of Marketing Science, 20 (1): 61-71.
