Kurt Hanson: Analyzing the Roaring of Pandora’s Biggest Bear

In the course of Pandora’s three-year history as a public company, it has had pretty strong support from the stock analyst world.  At the moment, for example, Morningstar’s website reports on seven analyst opinions, six of which are “Buy.”

However, consistently during that period there’s been one major bear — Rich Greenfield, of the research firm BTIG. Greenfield has issued approximately 29 analyst reports on Pandora, every one of them negative. (On the other hand, oddly, he seems to like the rest of our industry. He’s written positively about almost every one of Pandora’s major competitors, including iHeartRadio, Slacker, Songza, Spotify, and others — even twice about the now-defunct Turntable.fm!)

(Also oddly [at least to me], he’s not just predicting business problems for Pandora, he’s sometimes actively trying to cause them, as in his September 2012 report, “Congress Should Be Working to Raise Royalty Rates on Pandora, Not Lower Them.”)

Greenfield’s latest report got enough industry press last week that I thought some of the issues he raises deserved a closer look.

Greenfield begins, “Pandora has convinced investors that they are now the top radio station in virtually every major market across the US, with advertisers supposedly set to rapidly shift ad dollars towards Pandora and away from terrestrial radio. The more we dig into Pandora’s claims, it becomes clear that this is more ‘Art’ than ‘Science.’”

But what does he mean by that?

(1) He doesn’t like user registration

He begins with a criticism of the user registration process, showing that when he registered as a Pandora listener and lied about his zip code, he was served ads from advertisers in that (fake) zip code.

I’m not sure that’s a deal-killer, though. After all, since doing this would result in the consumer hearing ads that are less relevant to his or her needs, the average consumer has no real motivation to do it. Furthermore, this is a potential (albeit maybe not meaningful) problem for ALL websites that use registration, including sites Greenfield likes, such as Facebook.

(2) He is not aware of “geo-sniffing”

Switching topics to Triton Digital’s audience measurement and ad delivery platform, Greenfield then argues that if, as a registered Pandora listener, you move, “Triton does not know your location.”

That is not true.  As Triton clients know, it “geo-sniffs” the actual location of each listener.  (Knowing a user’s IP address gives Triton a high probability of calculating the user’s actual geographic location.) As a result, ads can be served to that listener based on either registration data OR geo-sniffed location, depending on the desires of the webcaster and/or advertiser. (I’m not sure that’s part of Pandora’s current ad-serving approach, but it’s technically possible.)
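
To make the mechanics concrete, here’s a minimal sketch of how IP-based geolocation typically works. This is not Triton’s actual (proprietary) implementation; the open-source geoip2 library, the GeoLite2 database file, the function name, and the fallback logic below are all illustrative stand-ins of my own.

```python
# A minimal sketch of IP-based "geo-sniffing," using the open-source
# geoip2 library and a MaxMind GeoLite2 database as illustrative
# stand-ins for whatever Triton actually uses internally.
import geoip2.database
import geoip2.errors

def resolve_listener_location(ip_address, registration_zip):
    """Return a best-guess zip code for ad targeting."""
    try:
        with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
            response = reader.city(ip_address)
            if response.postal.code:
                # A sniffed location trumps possibly-stale registration data.
                return {"zip": response.postal.code, "source": "geo-sniffed"}
    except geoip2.errors.AddressNotFoundError:
        pass  # IP not in the database; fall back to registration data
    return {"zip": registration_zip, "source": "registration"}

# A listener who registered with a Beverly Hills zip code but is
# streaming from an unrecognized IP falls back to registration data.
print(resolve_listener_location("203.0.113.7", "90210"))
```

The larger point stands regardless of the exact stack: an ad server can target against either the registration zip or the sniffed location, so stale registration data is not the only signal available.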

(3) He falsely accuses Pandora of padding its numbers

Regarding Triton’s audience measurement, he writes, “Cached content can also drive up (distort) listening results, even if the song was not actually played.”

Actually, that’s not how the Triton system works; it only measures the cached content that’s actually played.
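
Triton hasn’t published its internal mechanics, but the cached-versus-played distinction is easy to picture: a measurement system can log a listening event at playback time rather than at download time. Here’s a hypothetical sketch (the class and method names are mine, not Triton’s):

```python
# Hypothetical sketch of the cached-vs.-played distinction; the class
# and method names are illustrative, not Triton's actual API.
from datetime import datetime, timezone

class PlaybackMeter:
    """Counts listening only when a track actually starts playing."""

    def __init__(self):
        self.play_events = []

    def on_track_cached(self, track_id):
        # Pre-fetching a song for smooth playback generates NO
        # measurement event; caching alone is never counted.
        pass

    def on_track_played(self, track_id, listener_id):
        # Only an actual playback start is logged for measurement.
        self.play_events.append({
            "track": track_id,
            "listener": listener_id,
            "at": datetime.now(timezone.utc),
        })

meter = PlaybackMeter()
meter.on_track_cached("song-123")            # downloaded ahead of time: not counted
meter.on_track_played("song-123", "user-1")  # actually played: counted
print(len(meter.play_events))                # -> 1
```

Under a model like this, pre-loading songs into a device’s cache is simply invisible to the measurement system.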

Next, Greenfield argues, “It is not clear how Pandora One listeners are treated. Are paying users who receive no advertising excluded from Triton stats? We presume Pandora One users are by far the heaviest listeners to Pandora, yet they generate zero advertising revenue as consumers are paying to remove ads. We estimate 13% of Pandora’s reported listening hours come from ad-free Pandora One listening. Terrestrial radio always has ads.”

Both of those last two conclusions are untrue: (A) As we often explain in RAIN, the monthly Webcast Metrics reports measure only Pandora’s ad-supported streams. His main point, that Pandora’s reported numbers are inflated by 13%, is therefore wrong. (B) And terrestrial radio doesn’t “always” have ads: stations often debut with commercial-free periods lasting days or weeks; many offer long commercial-free music sets, commercial-free hours, and some even commercial-free days; and some rated public and religious stations run no commercials whatsoever. In all those cases, Nielsen Audio rates them in exactly the same way.

(4) He doesn’t understand what accreditation is

Greenfield then writes about Triton’s MRC accreditation, “It is important to realize that not all accreditations are the same.” He goes on to suggest that the accreditation of PPMs is a different and better form of accreditation than Triton’s.

Actually, however, the accreditation he’s talking about for Nielsen Audio covers its meters (the physical devices), not its ratings estimates.  And those PPM ratings are accredited in only about 26 of the top 48 markets. Triton’s procedures, I believe, are accredited in all of its “Webcast Metrics Local” markets.

Greenfield concludes his point by arguing that the MRC doesn’t verify Triton’s “data,” but that’s not what the MRC does.  They verify processes.

(5) He misrepresents the capabilities of MediaOcean and Strata

On the subject of “Buying Dashboard(s),” Greenfield argues that when MediaOcean or Strata (planning and buying software used in ad agencies) displays a ranker that includes Pandora, the only available data for Pandora is AQH, not reach. He argues that this could lead to Pandora delivering many more spots per listener than the advertiser intends.

Again, that’s not true: As webcasters know, Triton produces both AQH and cume (a/k/a, in some reports, unique IP addresses) estimates for almost every conceivable daypart for all of its Webcast Metrics clients, and that data is available in the MediaOcean and Strata buying dashboards at the touch of a button.

(6) He doesn’t understand how ratings are derived

Greenfield writes, “This Strata dashboard relies on AQH measurement, which does not have a reach component.  The same consumers constantly streaming Pandora can lead to a very high AQH, without reach factored into those stats.”

That’s simply not how ratings work: AQH absolutely has a reach component! Statistically, AQH (the number of people listening at the average moment) is a product of two factors: (A) How many unique individuals listen to a station during a given time period (i.e., reach), and (B) how long each individual listens during that time period (i.e., TSL).

Saying AQH does not have a reach component is like saying the area of a rectangle does not have a height component. (Of course it does. It has to!)
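
To make the arithmetic concrete, here’s a tiny worked example. The formula is the standard ratings relationship; all the audience figures are invented for illustration:

```python
# Worked example of the AQH/cume/TSL relationship described above.
# Standard ratings arithmetic: AQH = (cume x TSL) / quarter-hours in
# the daypart. All audience figures here are invented.

cume = 1_200_000         # unique listeners (reach) during the daypart
tsl_quarter_hours = 28   # time spent listening: 28 QH = 7 hours/week
daypart_quarter_hours = 6 * 18 * 4  # M-Sa 6a-midnight: 6 days x 18 hrs x 4 QH

aqh = cume * tsl_quarter_hours / daypart_quarter_hours
print(f"AQH persons: {aqh:,.0f}")  # -> AQH persons: 77,778

# Double the reach (cume) while holding TSL constant and AQH doubles
# too, which is exactly why AQH cannot exist "without reach."
print(f"{2 * cume * tsl_quarter_hours / daypart_quarter_hours:,.0f}")  # -> 155,556
```

Set cume to zero and AQH is zero no matter how long anyone listens; reach isn’t just related to AQH, it’s one of its two factors.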

(7) He denies valid claims of #1-ness

Moving to Pandora’s claim of being the #1 “station” in most markets (and using New York City as an example), Greenfield asks, “Why would anyone compare all of Pandora to Z-100?”

The answer, I believe, is that (1) they are both consumer brands of radio, (2) you can compare their AQHs, and (3) you can buy all or part of each station’s audience. (In Z-100’s case, you can buy particular features, days, or dayparts; in Pandora’s case, you can buy particular days or dayparts and/or specific age, gender, and geographic subgroups.)

Admittedly, there’s a confusing terminology issue involving the use of the word “station”: In the Pandora world, listeners create their own “stations” (and some listeners create hundreds of them). So is Pandora a “station” itself, or does it contain within its internal databases hundreds of millions of user-created “stations”?

But as shown in a Strata screenshot in his report, Pandora’s AQH audience size in NYC among P18-34s seems to be about as big as the AQH of the top three broadcast stations (Z-100, Mega 97.9, and WLTW) combined. Parsing the language as I just did, there’s nothing inaccurate about that statement.

Of course, as RAIN’s Jennifer Lane points out, it’s also true that you can choose to look at this at a corporate level, not a station level. In that analysis, Pandora’s NYC AQH audience is smaller than Clear Channel’s or CBS Radio’s.  “If Pandora gets to aggregate all of their listening in a market, then Clear Channel or CBS should be able to do the same,” she argues.

My response is: Sure, but as long as one defines one’s terms carefully, comparisons like this are accurate and fair game, and have been part of radio sales’ history since time immemorial.

This is analogous to the fact that one radio station can say it’s #1 in AQH, another (usually the news station) can say it’s #1 in cume, a third can say it’s #1 in F18-34s, and a fourth can say its three-station cluster is #1 under some criteria. As long as all terms are carefully defined, all four can be telling the truth.

(8) He quibbles over valid alternative data sources

Greenfield devotes another large section of his analyst report to debunking Pandora’s use of data from The Media Audit to make its claim of being the #1 station in most major markets.

Here, he has a couple of minor but valid points:  (A) The Pandora sales piece he reproduces has a footnote saying that the claim is based on “radio usage” without specifying whether that’s AQH or cume. (It’s actually cume. The Media Audit is a qualitative service; as such, it doesn’t measure AQH.)  (B) The Media Audit’s questionnaire, from which a cume ranker is derived, merges together aided awareness for the online radio brands of Pandora and iHeart with unaided awareness for the AM/FM stations; I’m not sure it would have changed the results, but I would have footnoted that.

But this is quibbling: Although Pandora can’t quote Nielsen Audio numbers in its sales pieces unless it is allowed to become a subscriber, almost everyone in the industry would use those AQH estimates as the primary data source for AM/FM audience sizes, and everyone who has access to those ratings (i.e., everyone who reads the trades and owns a calculator to do some simple math) can see that Pandora’s main point is correct.

(9) He ignores Internet radio’s primary value propositions

Greenfield’s final section, “So What is Pandora Advertising Good For?,” argues that Pandora is useless as an interactive buy because “it operates on partially flawed data” and useless as a traditional radio buy because its reach claims are “apples and oranges” (which, as noted above, I don’t believe is true).

I had a lot of meetings with advertisers last year: sometimes as the opening speaker at Interactive Advertising Bureau (IAB) events, sometimes on sales calls on behalf of just AccuRadio, and sometimes on joint sales calls with Pandora. I can assure you that planners and buyers at agencies are intrigued and excited by the opportunities online radio offers, and that they believe the medium is good for quite a bit.

Advertisers like the benefits of short stopsets, of accompanying visuals, of new types of creative ad units, and of precise targeting. That’s why they’ll spend about $1 billion on Pandora this year, and tens or maybe even hundreds of millions of dollars on other brands (e.g., iHeartRadio) as well.

Whether you’re an investor, an advertiser, or a broadcaster looking for growth opportunities for your company, online radio is an exciting opportunity for you to explore.  And I would not let one individual’s bearish opinion slow you down too much!

Kurt Hanson

3 Comments

  1. Kurt, even if Pandora did use geo-sniffing, they’d still be way off since so many corporations use headquarters city as their default IP address. Pandora has verified (at RAIN Summit LV 2013) that they do not update any registration data, so the zips really are pretty old and a huge percentage of them are outdated by 2 or more years. You’re also ignoring what Pandora declares in their SEC documents — that they know they have multiple users who have multiple registrations and use them regularly (probably to avoid the skip caps), so Pandora says they have no idea how many UNIQUE users they have each month or ever. They freely admit that. Makes it difficult to determine reach and frequency, something everything else on Strata and MediaOcean can actually provide.
    No matter how good Pandora’s product is (and it is a good product), there are definitely problems with its data and its ability to validate its data. It simply is not as good as Pandora touts it to be (or, for that matter, as good as any digital product touts its numbers to be, given the problems with incorrect/outdated registration data, click fraud, impression fraud, etc.). I am not at all implying that Pandora is deliberately trying to defraud anyone, just that they tend to inflate or obfuscate the viability of their deliverables. I believe that is what Richard Greenfield continues to point out and contest. I believe he also stated that a company that pays its CEO $23+ million for 4 months of performance on a stock that still cannot produce a positive P/E ratio makes Les Moonves look underpaid. Hard to argue with that logic. And it makes it difficult to recommend the stock as a “buy”.

  2. I was disappointed by the Greenfield blog post, in that I believe he could have done a better job of enumerating attributes and challenges for both valuable media – i.e., radio and online radio. Many of Greenfield’s complaints about Pandora are things that, as a media planner with 40+ years of expertise, I find to be attributes. Also, an uninformed reader of his post could easily get the erroneous idea that PPMs have been perfected as a ratings tool. And … what’s up with his comparing Pandora to Google, Facebook and Twitter? Isn’t that somewhat like comparing TV to a billboard?
