I would love to get the Ehrenberg-Bass scientists’ point of view on the scope of Social Listening tools to effectively track and measure progress made on Brand Consideration, Mental Availability and understanding Brands’ Associative networks.
We’re considering adopting a social media monitoring tool to help us understand:
- What people are saying about our brands
- Which of our memory assets contribute to distinctiveness and whether these are the aspects consumers are discussing
- Our category entry points and our ownership of them in terms of share of mentions
- How to measure mental availability
Given the principles of Mental Availability and the Laws outlined in “How Brands Grow”, I’d like to know what key metrics we should monitor to assess whether our brand is growing in awareness, consideration, and love, and what key questions the tool should answer to align with our strategic goals.
Firstly, I’ll preface by saying that social listening tools are great for some things, but have their limitations. They can be a great source of knowledge for identifying new ways people are using your product/category, which can aid innovation and portfolio development. They can also be useful to track sentiment and, in particular, to benchmark negative sentiment so you can identify jumps and shed light on the reasons behind them. These uses can help growth efforts by keeping a finger on the pulse of issues that might hinder mental or physical availability (e.g. issues with distribution, barriers to purchase, new uses of the product).
However, social listening data comes from a biased and incomplete sample of typically heavy category users, and we see extremes of positive and negative sentiments that don’t reflect the ‘norm’.
This makes it difficult to use social listening tools to understand Category Entry Points (CEPs), Mental Availability and distinctiveness. For example, for CEPs, you’ll likely pick up the unusual and shareable, rather than the typical, everyday occasions/contexts that bring the most buyers into the category, so the CEPs you detect will likely be less valuable. Moreover, social listening tools don’t allow you to look at buyers and non-buyers separately. We know that to grow, brands need to increase penetration, which means reaching non-buyers. Tracking non-buyers on these metrics is important, and the skew towards heavy category users who talk online means you are unlikely to get a representative view of them. It’s also hard to know what you are measuring – for example, with brand awareness, you don’t know the cue that stimulated the brand mention, so you can’t be sure whether you’re measuring unaided or aided brand awareness (e.g. did the person type the brand into a Google search, or are they responding to a post/comment that mentions it). As I mention below, one is more valuable than the other. With distinctiveness, while social listening tools may provide some insight into fame, another key aspect is uniqueness, and social listening tools don’t allow you to be sure that people don’t link the asset to other brands, as opposed to just not mentioning it online.
Therefore, we recommend measuring Mental Availability, CEPs and distinctiveness with consumer surveys from a representative sample of category buyers. We also don’t recommend measuring these metrics as frequently as social listening tools would allow. Mental Availability can be measured approximately once a year (depending on the category and its stability), and attitudes even less often. Social listening tools can encourage too-frequent tracking of these metrics, which is unnecessary and risks the business reacting to results that might be transient. We offer both measurement of Mental Availability, CEPs and Distinctiveness at the institute and help to integrate these metrics into existing brand health trackers. For more information on our Research Services, click here.
With regard to the key metrics to monitor, Better Brand Health goes into much more detail, but hopefully the below gives you a good starting point.
Brand awareness:
There are two main reasons for measuring brand awareness: category identification (e.g. to check if category buyers are aware the brand is a member of a specific category) and ease of retrieval (e.g. to assess how easily category buyers retrieve the brand from memory). As discussed in Better Brand Health, using brand awareness to measure the latter has limitations, as it doesn’t effectively capture how memory works; Mental Availability is a better measure of brand retrieval. Hence, I’ll focus on measuring brand awareness for category identification purposes.
There are three different measures of brand awareness: Top of Mind (TOM), spontaneous awareness (unprompted/unaided), and prompted/aided awareness. As I mentioned above, we know that non-buyers are a key part of a brand’s growth. Unfortunately, these three metrics are not equal when it comes to effectively capturing responses from non-buyers. TOM measures, especially, are difficult (they require the brand to be the first to be recalled given the category cue) and thus fewer responses come from non-buyers. Our recommendation for tracking brand awareness is to measure prompted brand awareness of the brand’s non-buyers. It’s important to note that prompted brand awareness does not easily erode, so collecting a benchmark for brand buyers and then ongoing tracking of non-buyers is sufficient.
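To make this concrete, here is a minimal sketch (invented data and column names, not a template from us) of prompted awareness split by buyer status, so you can benchmark buyers once and then follow the non-buyer figure over waves.

```python
# Minimal sketch with invented data and column names: prompted brand awareness
# split by buyer status, so non-buyers can be tracked wave to wave against a
# one-off buyer benchmark.
import pandas as pd

survey = pd.DataFrame({
    "wave": ["2024", "2024", "2024", "2025", "2025", "2025"],
    "is_brand_buyer": [True, False, False, True, False, False],
    "aware_prompted": [1, 1, 0, 1, 0, 1],  # 1 = recognised the brand from a prompted list
})

awareness = (
    survey.groupby(["wave", "is_brand_buyer"])["aware_prompted"]
          .mean()
          .unstack("is_brand_buyer")
)
print(awareness)  # the non-buyer column is the ongoing tracking metric
```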
Brand consideration:
When measuring brand consideration, our recommendation is to remember that buyers do not have one single consideration set that they always use to make decisions; rather, the set of brands they consider varies across time and contexts, depending on the cue that is available. For this reason, we prefer the measure of Mental Availability, which captures the likelihood of a brand coming to mind (so it can be considered) across relevant purchase occasions. Mental Availability is a multi-cue measure of retrieval and helps to address some of the above concerns with unprompted awareness measures.
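To make the multi-cue idea concrete, here is a minimal sketch (invented data, brands and CEP labels; not Institute code) showing how CEP-by-brand survey responses could be rolled up into Mental Availability summary metrics of the kind described in Better Brand Health, such as Mental Penetration, Network Size and Mental Market Share.

```python
# Minimal sketch with invented data (not Institute code): rolling multi-cue
# CEP x brand survey responses up into Mental Availability summary metrics.
import pandas as pd

# One row per respondent x CEP x brand; linked = 1 if the respondent
# associates the brand with that Category Entry Point.
responses = pd.DataFrame({
    "respondent": [1, 1, 1, 2, 2, 3, 3, 3],
    "cep": ["quick lunch", "with kids", "late night",
            "quick lunch", "with kids",
            "quick lunch", "with kids", "late night"],
    "brand": ["A", "A", "B", "A", "B", "B", "B", "A"],
    "linked": [1, 1, 1, 0, 1, 1, 1, 1],
})

links = responses[responses["linked"] == 1]
n_respondents = responses["respondent"].nunique()

# Number of CEPs each respondent links to each brand
per_person = (
    links.groupby(["brand", "respondent"]).size().rename("n_ceps").reset_index()
)

summary = per_person.groupby("brand").agg(
    # share of category buyers linking the brand to at least one CEP
    mental_penetration=("respondent", lambda s: s.nunique() / n_respondents),
    # average number of CEPs linked, among those with at least one link
    network_size=("n_ceps", "mean"),
)
# brand's share of all brand-CEP associations in the category
summary["mental_market_share"] = links.groupby("brand").size() / len(links)
print(summary.round(2))
```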
If you want to use a consideration measure, then capturing consideration ‘post-decision’ may be more effective than ‘pre-decision’. For example, it may be more insightful to ask consumers which brand(s) they purchased (or which retailers they visited) and which others they considered, rather than ‘which brands/retailers would you consider’. This shifts the purpose of the measure away from retrieval.
Brand love:
When measuring brand love, I presume the assumption is that this is more desirable and important to strive for than someone just ‘knowing of’ the brand – e.g. brand lovers will have a higher propensity to buy the brand. In Better Brand Health, we talk about a few challenges with this viewpoint, e.g. by showing that few people hold these extreme attitudes; that attitudes often reflect behaviour rather than drive it; and that buyers often do not need to hold a positive attitude to start or continue buying a brand. Reflecting these findings, when we plan brand attitude questions (brand love being an attitude), we recommend having a question that measures buyers’ and non-buyers’ full spectrum of attitudes towards the brand – e.g. positive, negative and neutral. In particular, we recommend benchmarking the distribution of responses for buyers and non-buyers, and checking it’s normal for a brand of your size within your category (NB brand attitude scores follow a Double Jeopardy pattern, where big brands have a slightly higher attitude score and smaller brands a slightly lower score). In addition, check that brand rejection scores for both buyers and non-buyers are low and in line with competitors, and identify any reasons for rejection. This will be much more useful for diagnosing issues than striving for brand love.
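As a sketch of that recommendation (again with invented data and column names), the snippet below compares the full distribution of attitude responses, plus a simple rejection rate, for buyers versus non-buyers – the kind of output you would then benchmark against brands of a similar size in your category.

```python
# Minimal sketch with invented data: the distribution of attitude responses
# and a simple rejection rate, compared for buyers vs non-buyers of the brand.
import pandas as pd

survey = pd.DataFrame({
    "is_brand_buyer": [True, True, True, False, False, False, False, False],
    "attitude": ["positive", "neutral", "positive",
                 "neutral", "neutral", "negative", "positive", "neutral"],
    "rejects_brand": [0, 0, 0, 0, 1, 1, 0, 0],  # e.g. a 'would never buy' question
})

# Full distribution of attitudes within each group (rows sum to 1)
attitude_dist = (
    survey.groupby("is_brand_buyer")["attitude"]
          .value_counts(normalize=True)
          .unstack(fill_value=0)
)
rejection_rate = survey.groupby("is_brand_buyer")["rejects_brand"].mean()

print(attitude_dist.round(2))
print(rejection_rate.round(2))  # benchmark against competitors of similar size
```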
R.F.
20 March 2025
Link: https://sponsors.marketingscience.info/frequently-asked-questions/what-guidance-can-the-ehrenberg-bass-scientists-provide-on-the-use-and-scope-of-social-listening-tools-to-effectively-track-and-measure-progress-in-brand-consideration-mental-availability-and-unders/