from the fix-it-so-it's-just-normal-broke-again dept
Earlier this week, when I explained how basically all audience metrics are garbage both online and off, I trimmed the specifics on several platforms since knocking them all down in detail would have made that already-hefty post even longer. But, recent revelations about Facebook's long-running inflation of a key video metric call for a deeper look at the world of Facebook video content and why, yet again, nobody has any idea how many people really see something (and this time, advertisers are unhappy):
Several weeks ago, Facebook disclosed in a post on its "Advertiser Help Center" that its metric for the average time users spent watching videos was artificially inflated because it was only factoring in video views of more than three seconds. The company said it was introducing a new metric to fix the problem.
... Ad buying agency Publicis Media was told by Facebook that the earlier counting method likely overestimated average time spent watching videos by between 60% and 80%, according to a late August letter Publicis Media sent to clients that was reviewed by The Wall Street Journal. ... Publicis was responsible for purchasing roughly $77 billion in ads on behalf of marketers around the world in 2015, according to estimates from research firm Recma.
What happened here is actually pretty subtle, so bear with me. Facebook distinguishes "plays" from "views" -- the former being every single play of the video, including the autoplays that you scroll straight past and never even look at, and the latter being only plays where someone actually watched the video for three seconds or longer. Of course, there are still a million ways in which this metric is itself broken (I've certainly let plenty of videos play for more than three seconds, or even all the way through, while reading a post above or below them) but the distinction is a good one. All of the more detailed stats are based on either plays or views (mostly views) and are clearly labeled, but the one metric at issue was the "Average Duration of Video Viewed." This metric could be fairly calculated either as the total watch time from all plays divided by the total number of plays, or as the same ratio based only on the time and number of views -- but instead, it was erroneously being calculated as total play time divided by total number of views. In other words, all the second-or-two autoplays from idle newsfeed scrollers were being totaled up, and that time was being distributed among the smaller number of people who stayed on the video for more than three seconds, leading to across-the-board inflation of that figure.
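To make the arithmetic concrete, here's a minimal sketch with made-up numbers (nothing here reflects real Facebook data) showing how dividing all-play watch time by the view count alone inflates the average:

```python
# Hypothetical watch times, in seconds -- invented purely for illustration.
# "Plays" are every start of the video, including idle-scroll autoplays;
# "views" are only the plays that lasted three seconds or longer.
autoplay_seconds = [2] * 40          # 40 autoplays of ~2 seconds each
view_seconds = [10, 20, 30, 40]      # 4 genuine views

total_play_time = sum(autoplay_seconds) + sum(view_seconds)  # 180 seconds
plays = len(autoplay_seconds) + len(view_seconds)            # 44
views = len(view_seconds)                                    # 4

# Either of these would be a fair average:
avg_per_play = total_play_time / plays       # time and count both over plays
avg_per_view = sum(view_seconds) / views     # time and count both over views

# The broken metric mixed the two: all-play time over the view count only.
broken_avg = total_play_time / views

print(avg_per_view)   # 25.0 seconds
print(broken_avg)     # 45.0 seconds -- inflated by 80%
```

With these invented numbers the inflation happens to land at 80%, the top of the range Facebook reportedly gave Publicis; the actual inflation on any given video depends on how much watch time idle autoplays contribute.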
Now, in some ways this error is minor: it had no impact on billing for promoted videos, since average view time is not a factor there, and all the other more-detailed metrics about view duration were accurate, including a per-second graph of viewer drop-off for each video. But, indirectly, it's a pretty big deal, because average view duration is a top-line metric for publishers figuring out which content is the most engaging. Beyond that, it's the key metric for determining whether Facebook Video is truly engaging as a whole, and given the massive explosion of both publishers and advertisers putting all their focus on video recently, it's worrying to think they might have been doing so at least in part based on a broken, inflated metric.
Of course, none of that changes the fact that even when the metrics are working properly, they most likely still suck. Much of the Facebook video boom has been in the live streaming arena, where publishers like BuzzFeed have been, well, buzzing about "peak concurrent viewer" numbers that rivaled the ratings of major cable networks. But television ratings measure average viewership over a time period, something entirely different from this momentary "peak" figure, and an average-based measurement would likely peg these streams' audiences at much closer to zero. But then again, what we're talking about here is Nielsen ratings, and I don't need to reiterate just how many problems those have. All we're doing is comparing a bunch of vague, hard-to-support, impossible-to-confirm numbers with each other, and ending up with almost no new insight into the reality of audiences.
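The gap between the two measurement styles is easy to see with toy numbers (again, invented for illustration): a stream whose viewership briefly spikes can post a huge peak while its TV-style average stays tiny:

```python
# Concurrent-viewer counts sampled once per minute over a 60-minute live
# stream -- made-up numbers chosen to show a short spike.
samples = [100] * 29 + [5000] * 2 + [100] * 29   # two-minute spike to 5,000

peak = max(samples)                    # the number the press release quotes
average = sum(samples) / len(samples)  # closer to how TV ratings are computed

print(peak)     # 5000
print(average)  # ~263 -- the "cable-network-rivaling" stream, averaged out
```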
Still, if the system is going to run on bullshit, it should at least be internally consistent bullshit -- so it's good that this latest Facebook error has been caught and fixed.