The Facts
1 A free choice, pick-any approach is quicker to administer than rating scales or ranking measures. It also provides a wider range of brand perceptions.
Three of the most common methods to measure brand perceptions are:
- Pick-any – where respondents are given a list of brands and asked which brands, if any, they associate with each attribute;
- Rating – where respondents are asked to rate brands on attributes using 5-, 7- or 11-point scales; and
- Ranking – where respondents are asked to rank brands on how closely they are associated with the given attributes.
Past research has shown that these measures result in similar brand orders (see Table 1). Individuals also respond consistently when asked about brand perceptions via these different measures (Barnard and Ehrenberg, 1990; Driesener and Romaniuk, 2006). However, we have found that the measures are not identical, and that the pick-any approach has advantages over the others. We find that the pick-any approach is around 50% quicker to administer than the other measures. Additionally, consumers are more likely to associate at least one brand with each attribute when asked about perceptions in this manner. The result is a wider range of perceptions generated for each brand. This doesn’t influence the ranking of the brands, only the absolute number of perceptions. This is important if you are using these perceptions to measure brand salience.
Therefore we recommend using a free choice, pick-any approach when measuring the perceptions consumers hold about brands.
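To make the pick-any format concrete, the following is a minimal sketch in Python of how such responses can be tabulated into brand-by-attribute association percentages. The brands, attributes and responses are purely hypothetical and the code is illustrative only, not part of our method.

```python
import pandas as pd

# Hypothetical pick-any data: one row per respondent / attribute / brand tick.
responses = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3],
    "attribute":  ["good value", "good value", "good value", "tastes great", "tastes great"],
    "brand":      ["Brand A", "Brand B", "Brand A", "Brand A", "Brand C"],
})

n_respondents = responses["respondent"].nunique()

# Percentage of respondents associating each brand with each attribute.
association_pct = (
    responses.groupby(["attribute", "brand"])["respondent"]
    .nunique()
    .div(n_respondents)
    .mul(100)
    .unstack(fill_value=0)
)
print(association_pct)
```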
2 Always provide consumers with a list of brands. Failing to do so makes it difficult to detect perceptions for smaller share brands or non-users.
When using a free choice, pick-any method, a question arises – should we prompt the respondents with the list of brands or not? In categories with many brands, prompting means you need to decide which brands to include. In categories where someone needs to recall brands (e.g., financial services), some argue that it is better not to prompt, to more closely replicate consumers’ thinking process.
We have researched both scenarios. We find that the rank order of brands is not influenced by prompting (see Table 2). However, prompting does increase the number and variety of perceptions that are evoked for smaller share brands (Romaniuk, 2006). The reason for this is that larger share brands are more likely to be accessible than smaller share brands. They have more people who use them and typically spend more on advertising. At the start of the attribute battery, larger share brands are more easily drawn into working memory. As working memory becomes full of these higher share brands, the retrieval of little known brands becomes inhibited. Thus non-users and light users, in particular, find it increasingly difficult to associate attributes with these brands as the battery progresses. When you provide consumers with lists of brands, each brand starts on an equal footing and there are no inhibition effects.
Growing brands and new entrants are likely to be these smaller share brands. Also, most growth comes from non-users becoming customers (see Report #31). Therefore we recommend prompting for brands regardless of category, to ensure you capture the perceptions for smaller share brands and from non-users.
3 How recently and how much someone uses a brand affects their chance of giving brand perceptions.
Users of a brand are not homogeneous. They differ in terms of (1) how much they use the brand and (2) the time elapsed since their last experience with that brand. These differences influence their propensity to give brand perceptions.
Weight of usage – Some consumers use a brand more than others, and heavier usage increases the propensity to give a brand association (see Table 3).
It is common to assume that differences in the number of perceptions given by customer segments explain differences in those segments’ brand preferences. This confuses the cause with the outcome. People who use the brand more often than other brands should give more perceptions simply because of their relatively greater experience.
Recency of experience – The more recently we have bought something, the more salient it is. This means that more recent buyers are more likely to give brand perceptions than less recent buyers (see Table 3). This can become an issue when comparing tracking data over time. If you compare the brand perceptions generated at a time of the year when many people interact with the brand (e.g., a department store at Christmas) with a time of the year when few people do (e.g., in January), you will see systematic differences in the level of brand perceptions because of the change in the number of customers recently interacting with the brand.
We recommend always checking the composition of the sample in terms of respondents’ brand usage before comparing perceptions across sub-groups or results over time. Comparisons must be adjusted for any differences in usage.
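As an illustration of the kind of check we mean, the sketch below (with hypothetical waves, usage bands and responses) first looks at whether the usage composition of the sample has shifted between waves, and then compares perception levels within the same usage band rather than across raw wave totals.

```python
import pandas as pd

# Hypothetical tracking data: one row per respondent per wave, with a usage band
# and whether the respondent gave the brand a particular association.
df = pd.DataFrame({
    "wave":      ["Dec", "Dec", "Dec", "Dec", "Jan", "Jan", "Jan", "Jan"],
    "usage":     ["recent", "recent", "lapsed", "non-user", "recent", "lapsed", "lapsed", "non-user"],
    "perceived": [1, 1, 0, 0, 1, 1, 0, 0],
})

# 1. Has the usage composition of the sample shifted between waves?
print(pd.crosstab(df["wave"], df["usage"], normalize="index"))

# 2. Compare perception levels within the same usage band, not across raw wave totals.
print(df.groupby(["usage", "wave"])["perceived"].mean())
```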
4 Larger share brands get more perceptions than smaller share brands for (pretty much) any attribute.
We talk more about things we have experienced. This also applies to brands. People who currently use a brand are about twice as likely to give an association for that brand as people who don’t use it. So larger share brands, because they have more users, typically get higher scores than smaller brands with fewer users (Bird et al., 1970). Therefore, a poor score for a larger share brand may be a good score for a smaller share brand. This ‘usage bias’ is present regardless of how you measure brand perceptions.
So brand association scores should be interpreted in the context of brand size and not in isolation.
Also, samples that skew towards the user base of any particular brand will produce results that favour that brand. A common trap is to merge a sample of brand customers with a sample of random category buyers without realising that this produces skewed results.
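One simple way to read a score in the context of brand size is to relate each brand’s attribute score to the size of its user base across all brands in the category, and then look at each brand’s deviation from that trend. The sketch below uses invented figures and an ordinary least-squares line purely for illustration; it is one possible benchmark, not the only one.

```python
import numpy as np

# Hypothetical category: % of respondents using each brand, and the % who
# associated the brand with a given attribute.
brands    = ["A", "B", "C", "D"]
usage_pct = np.array([45.0, 30.0, 15.0, 8.0])
score_pct = np.array([38.0, 22.0, 14.0, 9.0])

# Simple trend of attribute score on brand size across the category.
slope, intercept = np.polyfit(usage_pct, score_pct, 1)
expected = intercept + slope * usage_pct

# A positive deviation means the brand scores above what its size alone would suggest.
for brand, actual, exp in zip(brands, score_pct, expected):
    print(f"Brand {brand}: actual {actual:.0f}%, expected {exp:.0f}%, deviation {actual - exp:+.1f} pts")
```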
5 The relationship between the brand perceptions given by users and non-users is predictable.
Brand perceptions come from three main sources: buying/consuming the brand, seeing advertising and word-of-mouth. While brand users may get perceptions from all three sources, non-users have only the latter two.
Therefore brand users have a much higher propensity to respond to most attributes than non-users. The relationship between users and non-users is also predictable for the vast majority of attributes. This provides a useful benchmark for examining different types of attributes. Our modelling has revealed that non-users’ responses are typically between one-third and one-half those of users for the same category. Most attributes will follow the same pattern within a category, but we have found that some attributes show consistent deviations across categories. These are (a) attributes that represent extremely positive overall evaluations, (b) attributes that describe functional qualities and (c) negative attributes (see Table 4). This is important to understand, as combining deviating attributes with normal attributes in multivariate analysis (such as perceptual mapping) can give rise to misleading results.
Consistent deviations are useful as they enable us to predict which attributes will deviate prior to data collection. The ones we have identified, and their explanations, are now discussed in more detail as we find most attribute batteries contain at least a couple of these.
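If you are unsure which attributes in your battery deviate, the usage/association relationship itself can flag them. The sketch below, using invented response rates, computes the non-user response as a proportion of the user response for each attribute and flags anything falling outside the typical one-third to one-half band.

```python
import pandas as pd

# Hypothetical response rates (% giving the association) among users and non-users.
rates = pd.DataFrame({
    "attribute":   ["good value", "tastes great", "best brand", "is Australian", "boring"],
    "user_pct":    [40.0, 36.0, 30.0, 50.0, 10.0],
    "nonuser_pct": [16.0, 14.0, 6.0, 44.0, 9.0],
})

# Non-user response as a proportion of user response for each attribute.
rates["ratio"] = rates["nonuser_pct"] / rates["user_pct"]

# Attributes outside the typical one-third to one-half band deserve separate treatment.
rates["pattern"] = rates["ratio"].apply(
    lambda r: "normal" if 1 / 3 <= r <= 1 / 2 else "deviating"
)
print(rates)
```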
6 Extremely positive overall evaluations are less likely to stimulate responses from non-users.
Extremely positive overall evaluations include attributes such as ‘best brand’ or ‘worth more’. These attributes represent constructed evaluations (made up on the spot because you asked the question) rather than reflections of actual links or associations between Brand A and Attribute X. Further, they often encourage people to give only one response, with adjectives such as best, highest, most or excellent. Given that the attributes themselves represent extremely positive evaluations, this makes it even less likely that non-users will respond. Therefore we see a greater difference between brand user and non-user responses, with users three or four times, rather than twice, as likely to respond.
7 Attributes representing functional qualities are more likely to stimulate responses from non-users.
Functional attributes reflect qualities that we don’t have to experience a brand to appreciate, such as ‘is Australian’ or ‘has many stores’. We can make our judgement based on advertising, the nature of the category, or just guess with reasonable accuracy (e.g., private labels will cost a little less). Attributes that represent the brand’s position in the category also follow this pattern (e.g., a ‘leader’ will normally be associated with the largest share brand; ‘different’ might be a brand that is small, new or functionally different). For these attributes, brand users are only slightly more likely to associate the brand with the attribute than non-users.
Note: (Good) Advertising reaches non-users. Therefore attributes that represent heavily and consistently advertised messages also display a similar pattern (Barwise and Ehrenberg, 1985).
8 Negative attributes are equally likely to generate responses from users and non-users.
Negative attributes are qualities that are generally undesirable for a brand to hold. Examples are ‘boring’, ‘difficult to find’ or ‘inflexible’. Sometimes these are included in attribute batteries alongside more positive qualities. As negative attributes may be seen as negative evaluations of a brand, we might expect them to follow the reverse pattern, with non-users more likely to respond than users. However, we have found that negative attributes follow the same pattern as functional attributes, where users and non-users are similarly likely to give a response (Winchester and Romaniuk, 2003).
Be careful about mixing positive overall evaluations, attributes representing functional qualities, or negative attributes in multivariate analysis along with ‘normal’ attributes.
Because of the lack of literature about negative attributes, the final two facts focus on this aspect of attribute measurement.
9 Big brands get higher scores for negative attributes than smaller brands.
What level of response should your brand expect for a negative attribute? As with positive attributes, it depends on your market share. Larger share brands gain more responses to negative attributes than smaller share brands. Therefore when interpreting a score for a negative attribute, you need to take into account market share, rather than just interpret the raw percentage.
This has implications for the role that negative perceptions play in brand choice. If consumers rejected brands prior to purchase because of negative qualities, we would expect smaller share brands to gain more negative responses than larger share brands. However, we see the opposite pattern. This is one of a number of reasons why we talk about small brands suffering from a salience problem rather than an attitude problem.
10 Responses to negative attributes come mainly from former users; this is why non-users (as a group) have a higher propensity to respond.
Responses to negative attributes follow different underlying patterns to other attributes. Consumers who have used the brand previously, and then switched, are the most likely to give a negative response (Winchester et al., forthcoming). These negative qualities may be why they switched brands, but may also be generated as post-hoc rationalisations after switching, and may partly reflect their higher brand salience. The second most likely group is current users of the brand, with those who have never tried the brand trailing last (see Table 5).
Because of this difference in underlying patterns, we recommend always collecting past usage information and separating the non-user group into lapsed and never-tried customers. If you are looking at brand perceptions across usage subgroups, compare across the three groups rather than just the typical user/non-user split. If you do this for both positive and negative attributes, it highlights the lack of salience among non-users. Consumers think very little about brands they don’t use, and so the challenge is usually not to change perceptions, but to develop any at all!
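As a simple illustration of this three-group comparison, the sketch below (with hypothetical usage flags and responses) classifies respondents as current users, lapsed users or never-tried, and compares the propensity to give a negative association across the three groups.

```python
import pandas as pd

# Hypothetical respondent-level data: current usage, past usage, and whether the
# respondent gave the brand a particular negative association.
df = pd.DataFrame({
    "uses_now":    [1, 1, 1, 0, 0, 0, 0, 0],
    "used_before": [1, 1, 1, 1, 1, 1, 0, 0],
    "negative":    [1, 0, 0, 1, 1, 0, 0, 0],
})

def usage_group(row):
    if row["uses_now"]:
        return "current user"
    if row["used_before"]:
        return "lapsed user"
    return "never tried"

df["group"] = df.apply(usage_group, axis=1)

# Compare response propensity across the three groups, not just a user/non-user split.
print(df.groupby("group")["negative"].mean())
```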
Summary
We have covered 10 facts that can help with data collection, analysis and interpretation of brand perceptions. Based on these, our recommendations are:
- Use a pick-any approach
- Prompt for brands, in any market
- Check for differences/changes in usage weight and recency amongst users
- With non-users, distinguish between lapsed customers and those who have never tried
- Both positive and negative attributes need to be interpreted in the context of market share to determine if a score is good, bad or as expected for a brand
- In attribute lists, identify and analyse separately extremely positive evaluations, representations of functional qualities and negative attributes. Use the usage/association relationship to identify such attributes if you are in doubt.
We hope these recommendations help you to collect and use brand perceptions more effectively. In addition, we hope that these facts simplify your life by resolving some of the concerns you may have encountered about brand tracking.
We continue with our R&D in this important area. If you would like to receive any of the publications we have produced, please do not hesitate to email us. Our future reports will examine topics such as the impact of changing brand lists, uncovering duplications in attribute lists and the stability of attribute responses over time. Please send us any questions or issues that have arisen that might be useful additions to our research agenda.