The Institute is somewhat famous for demonstrating the importance of Distinctive Assets, and showing how they should be measured.
One of the most popular implicit measures is response latency, where you present an asset and measure the time taken to link it to a brand. Some of our sponsors have asked why we don't advocate using this type of approach to measure the strength of Distinctive Assets. The reason is that testing has shown the approach produces less useful and less trustworthy results. In this short article we explain why.
Any measure of Distinctive Assets needs to deliver the key metrics of Fame and Uniqueness, which speak directly to what you want assets to achieve in any situation (not just on shelf). To properly gauge Uniqueness, respondents must be free to name competitor brands, and as many as they can. The measurement task should, if possible, mimic real-world retrieval: recalling the brand, in the absence of the brand, from a cluttered (mental and physical) environment.
Today, our best practice measurement approach involves exposing an asset and asking category buyers to state the brands they associate with the asset (without prompting for brands).1
This method replicates the thought process that category buyers undertake in the natural environment. Empirical testing shows this approach is less prone to guessing (which inflates Fame) and elicits the widest range of competitor linkages (which avoids underestimating Uniqueness), outperforming both brand-cued measures and measures where brand lists are provided.2
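To make these metrics concrete, here is a minimal sketch (in Python, with invented data) of how freely elicited responses can be scored. The data, brand names and exact formulae are illustrative assumptions for this article, not the Institute's scoring code: Fame is taken as the share of category buyers linking the asset to the brand, and Uniqueness as the brand's share of all brand links the asset attracts.

```python
# Hypothetical data: each respondent's answer is the list of brands
# they freely associated with the exposed asset (empty = no brand).
responses = [
    ["BrandA"],
    ["BrandA", "BrandB"],
    [],
    ["BrandB"],
    ["BrandA"],
]

FOCAL = "BrandA"  # the brand whose asset is being measured

n_respondents = len(responses)
all_links = [brand for r in responses for brand in r]

# Fame: share of category buyers who link the asset to the focal brand.
fame = sum(FOCAL in r for r in responses) / n_respondents

# Uniqueness (one common formulation): the focal brand's share of all
# brand links the asset attracts; competitor links dilute this score.
uniqueness = all_links.count(FOCAL) / len(all_links)

print(f"Fame: {fame:.0%}, Uniqueness: {uniqueness:.0%}")
# -> Fame: 60%, Uniqueness: 60%
```

Because respondents can name any brand, competitor links surface naturally and pull Uniqueness down, which is exactly the signal a brand-cued or fixed-list measure can miss.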
Response latency measures, where respondents are timed, are intuitively attractive, but unfortunately suffer from several problems that make these approaches less useful when benchmarking and setting up a strategic framework for building Distinctive Assets. These problems are:
- Measuring response timing/latency solves the wrong problem. Focusing on (often trivial) differences in response times distracts from the bigger challenge brands face: retrieval in a cluttered mental and physical environment. When Distinctive Assets don't work for the brand, it is retrieval failure, not a slightly longer response, that is the problem.
- Unless you can anticipate all likely competitors, Uniqueness scores are likely to be inflated, because they are limited to the brands you include in the survey. Uniqueness is the harder metric for marketers to change, which makes realistic measurement very important.
- With response latency measures, it is also difficult to see the structure of mental competition in category buyers' minds: what other brands are in there, how many people link a single brand, how many link multiple brands, and the composition of mental competitors (see the chapter on Uniqueness in Building Distinctive Brand Assets). With incomplete knowledge it is difficult to determine a strategy (or establish if one is possible) to combat mental competition and reclaim an asset. The sketch after this list shows the kind of tabulation unprompted responses make possible.
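As a hedged illustration of that last point, the same kind of unprompted data can reveal the structure of mental competition, something a timed task over a pre-specified brand list cannot show. Again, the data and brand names below are hypothetical.

```python
from collections import Counter

# Hypothetical data, as before: unprompted brand links per respondent.
responses = [
    ["BrandA"],
    ["BrandA", "BrandB"],
    [],
    ["BrandB"],
    ["BrandA", "BrandC"],
]
FOCAL = "BrandA"

# How many respondents link no brand, a single brand, or multiple brands.
link_structure = Counter(
    "none" if not r else "single" if len(r) == 1 else "multiple"
    for r in responses
)

# Which competitor brands share the asset, and how often.
competitors = Counter(b for r in responses for b in r if b != FOCAL)

print(link_structure)  # Counter({'single': 2, 'multiple': 2, 'none': 1})
print(competitors)     # Counter({'BrandB': 2, 'BrandC': 1})
```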
It's not just us questioning this approach. Psychology researchers have also raised concerns about the validity of implicit measures, that is, about what these measures are really measuring.3 Without confidence in what might be causing delays in responses, it is difficult to propose actions based on these results. It is also worth noting that researchers in this field turned to implicit measures to solve a specific problem: measuring attitudes that might be present but that the person might not be comfortable directly admitting. In our experience, category buyers have no such reluctance when it comes to Distinctive Assets.
Are all response latency measures bad?
Once you know your brand's Distinctive Assets, metrics like 'time to find the brand on shelf' might be useful to test different executions of those assets in shopping environments. We are assessing this further in our R&D. But this is only of value once you know the assets you want to build, and your comparisons are of different executions within a specific context. It's not useful for non-shopping assets, nor for setting big-picture, long-term Distinctive Asset strategy.
We are happy to discuss this further if you are considering this approach.
Finally, for those who are critical of a direct-question survey approach and feel it is less rigorous than other options: we do put a number of controls in place to avoid contaminating responses, and to check the accuracy of the responses, that is, that they are real associations and not just made up for the survey. Every method has its flaws, so we continue to undertake R&D to develop best practice in Distinctive Asset measurement.
Interested in measuring your Distinctive Assets? Find out more.
Footnotes:
1 Some companies use an approach which asks people if they recognise an asset as belonging to a category first, then to identify brand associations. This is not our approach and could lead to some falsely high correlations between Fame and Uniqueness.
2 ROMANIUK, J. & NENYCZ-THIEL, M. 2014. Measuring the strength of color brand-name links: The comparative efficacy of measurement approaches. Journal of Advertising Research, 54, 313-319.
3 http://nymag.com/scienceofus/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html