Chapter 5 – Evaluation

One of the challenges for public relations is that it has a reputation for being a field that operates on intuition and intangibles. That is, too often public relations practitioners leave clients and bosses with the impression that the knowledge and perceptions of stakeholder audiences cannot be sufficiently measured through either quantitative or qualitative methods. Sometimes this impression is reinforced by more senior-level public relations managers, who have several motivations for not pushing for the evaluation component in a public relations campaign. Those motivations could include:

Image: a “Jump to Conclusions” mat. Sound evaluation of a PR campaign can help avoid the pitfalls of making decisions too quickly. “It’s a Jump to Conclusions mat of course” by The Jof is licensed under CC BY-NC-SA 2.0.
  • Not wanting the client/boss to know that they don’t have the skills to do evaluation.
  • Avoiding asking the boss for more money in the budget to do evaluation.
  • Not knowing where to find existing mechanisms that can provide low- or no-cost evaluation.
  • Playing to the client’s/boss’s inclination to focus on activities, not measurements.
  • Wanting to impress on the client/boss that public relations is something mystical that can’t really be thoroughly measured, thereby allowing the public relations manager to avoid a higher level of accountability.

Three broad areas of evaluation

While we can certainly stipulate that self-preoccupied motivations can afflict public relations just as much as any field, that doesn’t mean these rationales should carry the day.  The truth is almost all aspects of a public relations campaign can be measured through a mix of quantitative and qualitative approaches (more on that later in this chapter). To that end, the Evaluation element of RACE focuses on providing assessments of the PR campaign.  There are essentially three areas of evaluation — preparation, implementation, and impact. Broadly, the evaluation element asks these kinds of questions (Broom & Sha, 2013, p. 319):

Preparation

  • How well did we prepare this campaign?
  • How appropriate was the messaging?
  • How technically sound were the messaging and events?

Implementation (or outputs)

  • How many messages were distributed?
  • How many messages actually appeared?
  • How many people were potentially exposed to the messages?
  • How many people actually paid attention to the messages?

Impact

  • How many people learned and/or retained the messages?
  • How many people changed their opinion based on the messages?
  • How many people changed their attitude or inclination based on the messages?
  • How many people acted on the messages?
  • How many people continued this behavior?
  • What long-term societal/cultural changes may be apparent from the campaign?

Let’s stay with the Chipotle example first offered in Chapter 3.  From its research and action planning, recall that, in this hypothetical scenario, Chipotle has three objectives:

Objective Statement #1: By the third quarter of this fiscal year, Chipotle’s total revenue will be up by 10 percent.

Objective Statement #2: By the fourth quarter of this fiscal year, random surveys will reveal that at least 60 percent of respondents list Chipotle as one of their three favorite Mexican restaurants.

Objective Statement #3: By the end of the next calendar year, Chipotle’s stock market valuation will have increased 15 percent.

Next, looking at the three areas of evaluation, Chipotle may choose to establish these kinds of evaluations:

Preparation

  • How well did we identify the key data points that came from our research?  For example, we didn’t get sales data per square foot for each retail outlet; did we need that?
  • Did the statistical data we focused on — sales trends — allow us to make the best decisions about our messaging?
  • Did the slogan we develop for our campaign align well with our research findings and our goals?
  • When we used a YouTube video to help launch our campaign, how well was it edited for optimal viewing?

Implementation (or output)

  • What was the complete count of social messaging posts we put up on Facebook and Instagram?
  • How well did those messages get picked up by local and regional news outlets?
  • What was the potential reach of the messages through news outlets that carried our message?
  • Is our survey data about stakeholder awareness of our message comprehensive, or did we miss some key audiences?

Impact (assessing audience awareness, inclination to act, and/or audience behavior)

  • What do our stakeholder surveys tell us about how positively key audiences now see us?
  • What do we know about what influences people’s viewpoints of us?
  • What do we know about how inclined various stakeholder groups are to visit our restaurants?
  • What are the sales trends since the end of the campaign?
  • What are our competitors doing in reaction to our campaign?

These three aspects of evaluation likely will not be equally important to the client.  For example, the preparation stage is particularly pertinent for the public relations person, and perhaps not as important to the client, because it assesses the “behind the scenes” aspects of campaign development. Therefore, the preparation area of evaluation would more likely be of value to the public relations team, and to associated teams in marketing and advertising, as they take inventory of how well they put together data and message points and conceptualized the campaign rollout, all in advance of implementation.

The client (and also the public relations person’s boss) will likely be more interested in implementation. In fact, it’s not unusual for the client/boss to want, at a minimum, weekly updates on the extent of outputs (e.g., number of news releases sent out, number of interviews done, number of social media posts).  Some bosses/clients put a high premium on outputs because the level of activity reassures them that the campaign is progressing well.  This focus on numbers reflects an experience this book’s author had when he conducted an informational interview with a marketing and communications director at a Fortune 500 company in St. Louis. Recall that this was briefly mentioned in Chapter 2 (Research); here’s more detail about that discussion:

Director: Welcome, Burt.  So, you’d like to find out more about how public relations operates in our field?

St. John: Yes, I’d appreciate knowing more about the top PR priorities for your company.

Director: (moves to sit in a chair with a large window directly behind him; through it, rows of cubicles are visible). Glad to do that. But first tell me about what you do.

St. John: I provide public relations services for two major clients in the Midwest.

Director: Really? Well, tell me how many projects you work on in a given month.

St. John: Well, of course there are small day-to-day things I do for them, like taking media inquiries and writing speeches. But when it comes to larger projects, I do, in an average month, about 5-6 projects across the two clients.

Director: (leans back in his seat and nods toward the window) About 5-6 projects, eh? (He pauses for dramatic effect.) Well, I have people back there working on 60-70 projects each month!

Two things happened next: 1) St. John knew the informational interview was essentially over, though he played it out to its end, and 2) the discussion crystallized for him that those who oversee public relations campaigns, whether bosses or clients, often focus much more on activities performed than on how activities may (or may not) have led to the accomplishment of their goals.

The importance of pairing quantitative evaluation with qualitative findings

A focus beyond outputs to examining what has truly happened — or impact — is something that many clients/bosses are interested in knowing. Therefore, public relations people should report back, at a minimum, on how well the campaign achieved its numerical objectives. Look again at the objectives laid out in the hypothetical Chipotle campaign. For the impact element, the public relations team should answer:

Image: a small-group interview. Evaluation can be qualitative, such as interviews done one-on-one, in small groups, or in focus groups. “Brazil Rio+20 – GEF Evaluation Office Book Launch” by Global Environment Facility (GEF) is licensed under CC BY-NC-ND 2.0.

Objective Statement #1: Did total revenue, by last year’s fiscal third quarter, go up at least 10 percent?

Objective Statement #2: Did random surveys reveal, by last year’s fiscal fourth quarter, that at least 60 percent of respondents listed Chipotle as one of their three favorite Mexican restaurants?

Objective Statement #3: Did Chipotle’s stock market valuation increase by at least 15 percent by the end of the last calendar year?
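
Answering these three questions is straightforward arithmetic against the targets. Here is a minimal sketch in Python of how the checks work; every figure in it is hypothetical and invented purely for illustration:

    # A minimal sketch of the quantitative checks for the three objectives.
    # Every figure below is hypothetical; real numbers would come from
    # finance reports and the survey vendor.

    baseline_revenue = 1_200_000_000     # revenue for the prior comparison period (USD)
    q3_revenue = 1_350_000_000           # revenue by the fiscal third quarter (USD)

    favorable_responses = 1_260          # respondents listing Chipotle in their top three
    total_responses = 2_000              # total survey respondents

    baseline_valuation = 18_000_000_000  # market valuation at campaign start (USD)
    yearend_valuation = 21_000_000_000   # valuation at the end of the calendar year (USD)

    def pct_change(start, end):
        """Percentage change from start to end."""
        return (end - start) / start * 100

    revenue_growth = pct_change(baseline_revenue, q3_revenue)
    favorability = favorable_responses / total_responses * 100
    valuation_growth = pct_change(baseline_valuation, yearend_valuation)

    print(f"Objective 1 (revenue +10%): {revenue_growth:.1f}% -> {'met' if revenue_growth >= 10 else 'not met'}")
    print(f"Objective 2 (favorability 60%): {favorability:.1f}% -> {'met' if favorability >= 60 else 'not met'}")
    print(f"Objective 3 (valuation +15%): {valuation_growth:.1f}% -> {'met' if valuation_growth >= 15 else 'not met'}")

Note that a “met” on all three lines still says nothing about why the numbers moved; that is precisely the gap the qualitative work discussed next is meant to fill.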

It is also vital to include qualitative measurement to complement objective statements (which are always focused on quantitative measurements).  So, for example, these objectives can be accompanied by focus groups, one-on-one interviews with key stakeholder members, and field observations of Chipotle restaurants. Why also include qualitative evaluations? The “by-the-book” answer is that qualitative measurements provide a depth and richness concerning how well the public relations campaign made a difference for the client. That is, simply tracking Chipotle’s revenue numbers may not reveal the fuller impact of the campaign; there could be other factors at work, like the competition going out of business, or new lines of products that have sold particularly well (maybe even better than anticipated).  Having qualitative data about how well the campaign reached and affected stakeholder audiences allows the public relations team to point to where its campaign made a difference, no matter what other factors may have influenced outcomes for Chipotle.  This reason for using qualitative data in evaluations, then, is about demonstrating the soundness of the campaign: it shows that the campaign made its own impacts.

There is, however, another sound reason for using qualitative data: it demonstrates how public relations, in work environments where managers scramble for resources, deserves appropriate support from upper management. Consider this hypothetical example of a discussion in the “C” suite at Chipotle:

CEO: Well, St. John, how well did the PR campaign support our “We’ve Got the Food You Crave” initiative?

St. John: It delivered very well on the objectives. For example, we exceeded our total revenue and stock market valuation goals and we got the 60 percent favorability response on the surveys!

Marketing Manager: (in a loud voice) Well, St. John, I don’t see that you can claim that your PR campaign led to those results. We started up a new marketing campaign that pointed out the range of new products we have launched in this past year.  We put together some slick ads that showed young and hip people biting into our new line of exotic burritos, and we saw revenue spike about three weeks after we started these ads!

CEO: St. John, your thoughts?

St. John: We know that the PR campaign worked well in partnership with those ads.

Marketing Manager: How do you know that?

St. John: Because we did focus groups with several customers in key markets.  When we asked them how much they noticed the ads, they said they did, but they also said they were often drawn in by the special events we held and/or saw the news stories we placed in both traditional media and social media. We also walked up to customers who were waiting for their food in our restaurants, and they said they remembered both of these approaches as well.

Marketing Manager: (in a quieter voice) Oh…

There are some lessons to learn from this exchange:

  • A PR campaign needs both qualitative and quantitative evaluation.
  • The PR people running the campaign need to have both sets of data when talking with senior management.
  • If the PR people don’t have qualitative measurements, others can claim they are responsible for the ostensible successes of the PR campaign.
  • Other staff managers (e.g., those over marketing, human resources, and finance) may make such claims because they want more resources and/or prestige.
  • It’s best to avoid the marketing manager’s mistake: don’t ask a question unless you already know the answer.

A word about AVE

Image: quote from Neil deGrasse Tyson: “Follow the evidence wherever it leads, and question everything.” “Neil deGrasse Tyson Question Everything” by the Environmental Illness Network is licensed under CC BY-NC-ND 2.0.

Over the last 30 years, some public relations practitioners have maintained that there is another quantitative approach to establishing the effectiveness of a PR campaign: Advertising Value Equivalency (AVE). The argument for using AVE is that it allows the PR team to offer a succinct accounting of how the campaign made a difference for the client. Simply put, it measures the amount of publicity attained in various news outlets and tells the client how much it would have cost to purchase a similar amount of space or time via advertising (a sketch of this arithmetic follows the list below).  This kind of measurement is suspect, as it doesn’t directly address what the campaign is trying to accomplish with the client’s stakeholders. However, it is popular among some PR professionals because:

  • It’s a readily accessible way of demonstrating a “big numbers” effect of the campaign.
  • It speaks to clients’ concerns about return on investment in the campaign.
  • It’s a clear way of arguing why public relations, and not advertising, should receive more resources.
  • It allows the practitioner to hold down evaluation costs for the campaign.
  • It’s a way for a PR person with little understanding of evaluation to still deliver some quantitative “results.”
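
To make the mechanics concrete, here is a minimal sketch in Python of how an AVE figure is typically assembled: earned column inches and airtime are multiplied by the outlet’s advertising rates. All outlets, measurements, and rates below are hypothetical and invented purely for illustration:

    # Hypothetical print clips: (outlet, column inches of coverage,
    # ad rate per column inch in USD). All figures are invented.
    print_clips = [
        ("Metro Daily News", 24, 310.00),
        ("Regional Business Journal", 12, 185.00),
        ("Suburban Weekly", 30, 95.00),
    ]

    # Hypothetical broadcast clips: (outlet, seconds of airtime,
    # ad rate per 30-second spot in USD).
    broadcast_clips = [
        ("KXYZ-TV evening news", 90, 2400.00),
        ("Drive-time radio segment", 120, 650.00),
    ]

    print_ave = sum(inches * rate for _, inches, rate in print_clips)
    broadcast_ave = sum((seconds / 30) * spot_rate
                        for _, seconds, spot_rate in broadcast_clips)

    total_ave = print_ave + broadcast_ave
    print(f"Print AVE: ${print_ave:,.2f}")
    print(f"Broadcast AVE: ${broadcast_ave:,.2f}")
    print(f"Claimed 'advertising value': ${total_ave:,.2f}")
    # Note what this one tidy dollar figure ignores: message prominence,
    # tone, share of voice, and stakeholder response -- exactly the
    # critique that follows.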

Several public relations evaluation experts (Broom & Sha, 2013, p. 324), however, maintain that AVE is not an appropriate evaluation, because:

  • Advertising is NOT public relations; therefore, AVE is not measuring PR’s efforts.
  • AVE measures cost, not the value that the campaign is delivering.
  • AVE doesn’t measure the place or position of the messaging.
  • AVE doesn’t measure the prominence of the messaging (is it central to the news piece, or tangential?).
  • AVE doesn’t measure how much “share of voice” (depth of coverage) the client received.
  • AVE doesn’t measure exactly what messages were conveyed.
  • AVE doesn’t account for collaterals to the messaging, like photos, video, and logos.

Rather than AVE, the public relations team needs to gather convincing qualitative and quantitative data that reveal how the campaign did at least one of these three things among the client’s stakeholders: 1) generate awareness about the client and its messages, 2) spark a predisposition that is favorable to the client’s messages, and 3) generate action from the stakeholders that can lead to the client accomplishing its goals.

Ultimately, good evaluation of a PR campaign is like the successful ending of a movie’s storyline: if it finishes in a convincing way, the actors involved may be called upon to do a sequel.  And that’s where the public relations team goes back to the beginning by doing up-front research for the next story!

References

Broom, G., & Sha, B.-L. (2013). Effective public relations (11th ed.). New York, NY: Pearson.

Image Attribution

“It’s a Jump to Conclusions mat of course” by The Jof is licensed under CC BY-NC-SA 2.0.

“Brazil Rio+20 – GEF Evaluation Office Book Launch” by Global Environment Facility (GEF) is licensed under CC BY-NC-ND 2.0.

“Neil deGrasse Tyson Question Everything” by the Environmental Illness Network is licensed under CC BY-NC-ND 2.0.

License


Strategic Public Relations Planning by Burton St. John III is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
