How We Study Advertising
The study of advertising has been going on since advertising began. At its most rudimentary, measurement of advertising consists of counting responses to the messages. At its most sophisticated, research attempts to study the hidden influences of the messages, which can take effect psychologically well after a message has been received.
The study of product advertising incorporates such measurements, but it also draws on the databases and experience of Accountability Information Management (AIM): more than 800,000 individual responses to advertising collected over the years from Starch, Readex and other independent research companies. These companies offer independent verification of message transmission; that is, they study the sophisticated effects of advertising. In fact, Starch, one of the foremost of these companies, adopted AIM’s mathematical formula for advertising performance in 1990, after it appeared in our article in MARKETING RESEARCH magazine. Starch took the math and called it “Readership Ratios.” It was a high compliment to our thinking, and it served as the basis for our ongoing inquiry into advertising effectiveness.
For advertising studies, AIM includes all magazines that carried the company’s advertising. The publishers are contacted and asked to provide information about the performance of the client’s advertising within their properties (print and digital). We want each publisher to provide specific benchmarks against which we can measure performance as objectively as possible. Part of the problem with advertising is that everyone has an opinion about it; effectiveness measurement, however, strives to capture results, not opinions or votes. Specifically, each publisher is asked for the following items:
- Average inquiries via the published channels (e-lead, printed BRC, fax) for advertisements and PR in each issue where the ads and/or PR ran. This gives us a benchmark against which to measure the inquiries the ads drew within these channels.
- Total number of inquiries generated per issue for each issue where advertising ran. This gives us a baseline for how the client’s ads perform against all the ads in an issue (a simple indexing sketch follows this list).
- For client banners, the total number of impressions and click-throughs, along with averages for comparable-size banners over the same time period as a reference for performance. Digital metrics, as we will point out, are difficult, despite promises of knowing “more” than what a print ad would deliver. Indeed, a client will see that reporting on such performance is often muddied by assumptions that, when investigated, don’t hold water.
- The names of the top ten advertisers who generated leads, ranked highest to lowest (#1 = most leads, #2 = next most; the publisher does not have to tell us the number of leads for each advertiser), including the page numbers (we usually have the magazines on hand, since we receive over 1,000 magazines each month in our office). This metric allows us to see the type of creative that generates results in a magazine, which in our experience differs by audience. What surprises us is the response to this request by certain publishers.
- Any readership studies performed on those issues (e.g., Harvey, Readex) or on issues from the previous year that may or may not have included product ads. These are important measurements because they tell us (via the independent research companies) about an advertisement’s ability to “stop” the audience and turn those stops into readers. Unfortunately, magazines are continually cutting their investment in these very important studies.
- 100 random names from the publication’s circulation. This allows us to compare the inquiries the client’s ads generated with a “picture” of the magazine’s actual circulation.
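To make the indexing concrete, here is a minimal sketch, with purely hypothetical numbers (no client or publisher data), of how an ad’s inquiries can be indexed against the publisher-supplied issue average:

```python
# Hypothetical illustration of indexing a client ad's inquiries against
# the publisher-supplied issue average. None of these numbers are real.
client_ad_inquiries = 42    # inquiries to the client's ad in one issue
avg_inquiries_per_ad = 35   # publisher-supplied average for ads in that issue

index = client_ad_inquiries / avg_inquiries_per_ad * 100
print(f"Client ad index vs. issue average: {index:.0f}")  # 100 = average; here, 120
```

An index of 100 means the ad pulled exactly the issue average; anything above or below shows, in one number, how the ad performed against everything else in the book.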
The response from some of these publications surprises us each time we talk with publishers. For example, larger publishers provide much of the information (though once we question what they have provided, they often “reconsider” what they originally submitted), while smaller publishers outright refuse to supply some of what we ask for.
Such requests are an interesting exercise in themselves. Many reps have trouble providing what we request: they don’t understand what we need, and they often indicate the information is not easy to get, so the process takes much longer than originally anticipated. Their lack of understanding of the metrics behind what they sell is unfortunate (we can’t be the only ones asking these questions, can we?).
Total Exposure
Overall, a certain number of professionals are exposed to the ads; if we include some of the “free” publicity clients receive, the exposure goes higher. These calculations are accomplished by multiplying total circulation by the number of insertions, factored by average seeing scores (a sketch of the arithmetic follows). Exposure can also be broken out by audience, since some markets carry more insertions than others (reaching more architects than distributors, for example).
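Here is a minimal sketch of that exposure arithmetic; the magazines, circulations, insertion counts, and seeing scores are all hypothetical placeholders, not figures from any AIM study:

```python
# Minimal sketch of the total-exposure arithmetic described above.
# All figures are hypothetical placeholders, not client data.

insertions = [
    # (magazine, circulation, number_of_insertions, avg_seeing_score)
    ("Magazine A", 40_000, 6, 0.42),  # seeing score: share of readers the ad "stops"
    ("Magazine B", 25_000, 4, 0.38),
]

gross_impressions = sum(circ * n for _, circ, n, _ in insertions)
estimated_exposure = sum(circ * n * score for _, circ, n, score in insertions)

print(f"Gross impressions:  {gross_impressions:,}")        # 340,000
print(f"Estimated exposure: {estimated_exposure:,.0f}")    # 138,800
```

The gap between gross impressions and estimated exposure is exactly the point of factoring in seeing scores: circulation alone overstates how many people actually encountered the ad.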
Overall, however, we believe you can gather enough material to make a broad observation about an advertising campaign, such as:
In terms of exposure, the campaign achieved above-average penetration in the marketplace because of the size of ads that ran; however, in terms of inquiry production, the campaign fell short of what could have been generated based on observations of what other advertisers were able to produce.
Of course, now you have to prove it.
Overview of AIM Reports
AIM reports are divided by magazine, and each magazine’s report has the following sections:
- Circulation and Circulation Description – provided as a base for reviewing the marketplace in which the advertising appeared. This description gives the reader a picture of the target audience for the advertisement.
- Media Schedule – provides an overview of when ads appeared in that magazine/digital channel. Each magazine’s section includes the corresponding media schedule.
- Results of Print Ads – each table in each individual report provides metrics, including comparatives between client ads and top advertisers. When top advertisers are not provided, we use more general measurements (e.g., average inquiries per issue).
- Results of Electronic Ads – same as above, but based on “click-throughs.”
- Comparative Analysis – a detailed narrative analysis of the campaign’s performance in the magazine and its digital channels, provided for each magazine we examine. If top ads are provided, we draw some conclusions about what may explain the higher response.
- Lead Quality Analysis – based on comparing the circulation sample the publisher provided with the leads generated by the client’s ads. This is provided, again, to see how the inquiries to the client’s campaign compare with the publisher’s random sample of its circulation.
- Score – a “grade” assigned to the performance. We use a 1-to-5 scale for each of the areas above.
In addition, AIM provides full-size printouts of all the ads referred to in the Comparative Analysis, in alphabetical order, to help the reader see what the analysis refers to in the discussion.
Discussion: Why Study Advertising?
AD AGE, one of the leading magazines in the advertising trade, recently conducted a webinar entitled Half of My Creative is Wasted…I Just Don’t Know Which Half. The title plays on a famous quote attributed to the retailer John Wanamaker, who built one of the largest department stores of his time: “Half the money I spend on advertising is wasted. I just don’t know which half.” Or at least, that is the legend.
The presenter, Gary Getto, who runs Advertising Benchmark Index (http://goo.gl/SYda0), acknowledged that “waste” is the problem, but his firm is trying to get closer to the answer to Wanamaker’s dilemma. He uses Internet panels to “judge” advertising before and after it runs, much as Nielsen uses panels of people to judge television programs and their advertising. The point is that despite all of these efforts, judging advertising is extremely difficult, and this is especially true in the business-to-business marketplace, where a sale can be disrupted at a variety of points along the sales channel.
Getto was talking about testing the creative execution and its results, which are important but do not always correlate to response, as Getto himself admitted in the webinar. In one chart he showed two ads (a Lexus spread ad and a Subway fractional ad), indexed with “100” as the norm in the middle. The chart showed that the Subway ad was judged much more effective than its larger competitor (they compete for eyeballs, or awareness, of course, not for response). In that sense, a client’s ads may perform like the Subway ad (based on results from a readership-studied issue, not inquiries). In one client’s case, for example, their ad scored 16 percentage points higher in awareness than the average ad in an issue of the studied magazine! While we can argue that a spread was studied, we can also argue that this is exactly what a spread should do. And the client’s ad did it!
At the same time, this same spread came in 29% below the top ten ads’ response rates in the magazine in terms of inquiries. So the question we have to keep in mind throughout any analysis is: how do we want to define response? Is it eyeballs, inquiries, or a combination of both?
As each of our clients decides what is and is not response, AIM asks them to consider the total number of magazines examined for the ad campaign being studied. In addition, consideration must be given to:
- Number of print inquiries
- Number of electronic clicks
- $ amount invested in print and digital space
- A calculation of the cost-per-inquiry (see the sketch after this list)
- Determination of the approximate cost to reach an individual with a message
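To show how these items combine, here is a minimal sketch of the cost-per-inquiry and cost-to-reach calculations; every figure is a hypothetical placeholder, not data from an AIM report:

```python
# Hypothetical worked example of the cost metrics listed above.
# None of these figures come from an actual AIM report.

print_inquiries = 310          # number of print inquiries
electronic_clicks = 540        # number of electronic click-throughs
space_cost = 68_000            # $ invested in print and digital space
individuals_reached = 138_800  # estimated exposure (see the Total Exposure sketch)

total_responses = print_inquiries + electronic_clicks
cost_per_inquiry = space_cost / total_responses
cost_per_individual = space_cost / individuals_reached

print(f"Cost per inquiry:            ${cost_per_inquiry:,.2f}")     # $80.00
print(f"Cost to reach an individual: ${cost_per_individual:,.4f}")  # $0.4899
```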
Then we have to factor in one of the best-kept secrets publishers do not want you to know: not everyone in their circulation sees the ad. Assuming a client’s ads across the board scored better than average in terms of awareness, AIM can calculate the percentage of people the advertising stops; by definition, this also gives the number of people who never saw the ad. Combined, we can compare the cost to reach these people through advertising with the cost for a salesperson to reach them. In one case, calling on even 1% of what the ad produced would cost the client well over $9 million (factoring in a $200-per-sales-call figure, which is extremely low).
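As a rough reconstruction of that arithmetic: the $200-per-call figure comes from the text above, while the audience size is a hypothetical placeholder chosen only so the numbers land near the $9 million cited:

```python
# Rough reconstruction of the sales-call comparison, with assumed inputs.
# The $200-per-call figure is from the text (described as extremely low);
# the audience size is a hypothetical placeholder used for illustration.

people_stopped = 4_500_000   # hypothetical: people the advertising "stopped"
cost_per_sales_call = 200    # $/call, per the text

calls_needed = people_stopped * 0.01            # calling on just 1% of them
sales_cost = calls_needed * cost_per_sales_call

print(f"Calls needed: {calls_needed:,.0f}")   # 45,000
print(f"Sales cost:   ${sales_cost:,.0f}")    # $9,000,000
```

Whatever the true audience size, the comparison works the same way: multiply the fraction of stopped readers you would call on by the cost per call, and set that against the advertising’s cost-per-individual.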
If you have a campaign rolling, or are thinking about rolling one out, talk to AIM. Our knowledge about the behavior of advertising is unduplicated in the industry. Thank you for your time.