One of the many phrases that has become annoyingly popular over the last few years is “dark patterns.” These are, we’re told, sneaky, ethically dubious ways in which companies — usually “big tech” — trick users into doing what the companies want. I’m not saying that companies don’t do sketchy stuff to try to make money. Lots of companies do. Indeed, we’ve spent decades calling out some pretty sketchy behavior by companies trying to get your money. But the phrase “dark patterns” carries a sinister connotation, and it is now used in cases that, um, don’t even seem that bad (and, yes, I’ve used the term myself, once, but that was in describing specific behavior that was pretty clearly fraudulent).
Of course, the NY Times is among the media orgs that have really popularized the phrase. It wrote one of the earliest popular articles about the concept, and has called it out multiple times when talking about tech companies. That last one is particularly notable because it was written by a member of the NY Times editorial board, Greg Bensinger. He really, really doesn’t like “dark patterns” that manipulate users into… say, “signing up for things.”
These are examples of “dark patterns,” the techniques that companies use online to get consumers to sign up for things, keep subscriptions they might otherwise cancel or turn over more personal data. They come in countless variations: giant blinking sign-up buttons, hidden unsubscribe links, red X’s that actually open new pages, countdown timers and pre-checked options for marketing spam. Think of them as the digital equivalent of trying to cancel a gym membership.
He’s pretty sure we need legislation to take down dark patterns.
Companies can’t be expected to reform themselves; they use dark patterns because they work. And while no laws will be able to anticipate or prevent every type of dark pattern, lawmakers can begin to chip away at the imbalance between consumers and corporations by cracking down on these clearly deceptive practices.
Which makes it interesting that a data scientist at the NY Times recently published a writeup explaining how the paper’s own machine-learning-powered paywall, the “Dynamic Meter,” decides when to stop giving you free articles:
The company’s paywall strategy revolves around the concept of the subscription funnel (Figure 1). At the top of the funnel are unregistered users who do not yet have an account with The Times. Once they hit the meter limit for their unregistered status, they are shown a registration wall that blocks access and asks them to make an account with us, or to log in if they already have an account. Doing this gives them access to more free content and, since their activity is now linked to their registration ID, it allows us to better understand their current appetite for Times content. This user information is valuable for any machine learning application and powers the Dynamic Meter as well. Once registered users hit their meter limit, they are served a paywall with a subscription offer. It is this moment that the Dynamic Meter model controls. The model learns from the first-party engagement data of registered users and determines the appropriate meter limit in order to optimize for one or more business K.P.I.s (Key Performance Indicators).
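The writeup doesn’t share the actual model, but the basic shape of the idea can be sketched. Here’s a toy, purely hypothetical version in Python; the scoring function, candidate limits, and all the weights are invented for illustration, not anything the Times has published:

```python
# Hypothetical sketch of a "dynamic meter": pick, per registered user,
# the free-article limit that maximizes a toy expected-KPI score.
# The scoring function and weights below are invented for illustration;
# the Times' real model is learned from first-party engagement data.

def expected_kpi(meter_limit, engagement):
    # Toy trade-off: a lower limit shows the paywall sooner (more
    # subscription opportunities) but risks driving readers away.
    subscribe_propensity = min(1.0, engagement / (meter_limit + 5))
    retention = meter_limit / (meter_limit + engagement + 1)
    return 0.7 * subscribe_propensity + 0.3 * retention

def choose_meter_limit(engagement, candidates=(1, 3, 5, 10, 20)):
    # Serve each user the candidate limit with the best expected score.
    return max(candidates, key=lambda m: expected_kpi(m, engagement))

print(choose_meter_limit(2))   # a light reader
print(choose_meter_limit(50))  # a heavy reader
```

The real system presumably weighs far more signals and real business KPIs, but the core loop — score each candidate meter limit, serve the argmax — is the shape the writeup describes.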
Cool. (And, yes, this is legitimately cool to see how the company handles the meter in a more dynamic way — more companies should be more dynamic like that).
But, um, also, isn’t that… a dark pattern? I mean, it’s not clear to the end user. It’s designed to — and I’ll quote here — “get consumers to sign up for things, keep subscriptions they might otherwise cancel or turn over more personal data.”
The writeup by the data scientist is pretty clear what they’re trying to do here:
Thus, the model must take an action that will affect a user’s behavior and influence the outcome, such as their subscription propensity and engagement with Times content.
I mean, this is all kind of interesting, but… it sure sounds like what the NY Times editorial side is complaining about as a dark pattern.
And that’s where some of the problem with the term comes into play. There’s a spectrum of behavior — some of which is just smart business and tech practice, and some of which is more nefarious. But using the term “dark patterns” to broadly describe anything we don’t understand, or can’t see, that is designed to get you to do something… becomes problematic pretty quickly. I don’t have a problem with the way the NY Times runs its paywall. It’s trying to make the paywall work in a reasonable way that converts more users into subscribers.
Clearly, this is not as nefarious as services that make it impossible to, say, cancel your subscription without first talking to a human (oh wait, the NY Times does that too?). Um, okay, it’s not as nefarious as literally tricking people into signing up for recurring payments rather than a one-time purchase. That’s even worse.
But, it is still a spectrum. And when we refer to any optimization efforts as “dark patterns” then the problematic parts of “dark patterns” lose their meaning. It becomes way too easy to smear perfectly reasonable efforts as somehow nefarious. It’s fine to talk about sketchy things that companies do, but we should be specific about what they are and why they’re sketchy, rather than just assuming anything that is designed to drive conversions is inherently problematic.
The fact that the NY Times’ tech/business side uses exactly the techniques its editorial side condemns, in order to pay that editorial side’s salaries, should at least inform the framing of some of this discussion.
Over the years, we at Techdirt have tended to resist the kinds of “audience growth strategies” that many other news publications have taken — perhaps to our own detriment. I remember when Digg was the new hotness and generating lots of traffic for news sites. Someone approached us about getting our stories highly promoted on Digg and I told them I didn’t want to game the system, and would rather let people find us organically. I know plenty of other news sites did play plenty of games. The same thing happened once everyone (and more) left Digg for Reddit. Reddit did drive a lot of traffic to us for a few years, though it’s tapered off in the past few years. And, obviously, over the last couple of years, all the publications have been talking about Facebook and how it drives so much traffic.
A year or so ago, I was at an event chatting with a guy from another news site who nonchalantly tossed off the claim that “well, every news site these days now knows how to game Facebook for an extra 10 to 20 million views…” and I thought “huh, actually, I have no idea how to do that.” All of this might make me very bad at running a media site (I certainly know of some other news sites that leveraged gaming social media into massive acquisition offers from legacy media companies). But, to me, it meant being able to focus on actually creating good content, rather than figuring out how to game the system or who I should be sucking up to for traffic. I’ll admit to struggling with this issue at times — sometimes wondering if we’re missing out on readers who would like our stuff. And, every once in a while, we’ll do little things here or there to “optimize” our site for this or that source of traffic. But it’s never been a huge focus.
As mentioned above, much of this is because focusing on creating good content takes quite a bit of time, and is much more interesting to me than figuring out how to game this or that algorithm. Part of it is because I think this also tends to build a more loyal — if potentially smaller — core audience. People come to Techdirt because they like Techdirt (well, for some of you, because you hate it) not because someone gamed an algorithm to get you here. Some of this is because I’ve always been a bit wary of relying too heavily on any third party who could suddenly rip the rug out from under you.
And that seems to be happening with Facebook and some news sites. Back in June, Facebook announced a big change to its newsfeed, suggesting it would start downplaying “news” and promoting more stuff from your family and friends. And the latest reports suggest that many media sites took a massive traffic hit in July as a result of those changes. This has some in the media pulling out their hair over what to do, but really, it’s kind of what you get for chasing someone else’s algorithm. As some have noted, the only really important lesson here may just be that people who use Facebook actually prefer interacting with friends’ baby pictures over cheap clickbait.
Indeed — I certainly don’t go to Facebook for news. And over the last few months, I’ve noticed that I’m gravitating more and more to Snapchat as a preferred social media platform for personal stuff, as it just feels more comfortable there. A great column by Farhad Manjoo at the NY Times does a pretty good job explaining why this is and also explaining why Facebook-owned Instagram recently launched something of a Snapchat clone. The short version is:
But when you open Instagram or Snapchat, Mr. Trump all but disappears. While Facebook and Twitter have lately become relentlessly consumed with news, on these picture-based services Mr. Trump is barely a presence; he (and his Democratic rival) are about as forgotten as GoTrump.com, Mr. Trump’s failed travel search engine.
FWIW, if you followed Manjoo on Snapchat (as I do), you would have seen him make this point — that there’s very little Donald Trump on Snapchat — earlier, before this column appeared. But it’s true that something like Snapchat feels more actually social and less “news” based. And part of that is the fleeting nature of Snapchat:
The differences are instructive. On Facebook, my friends will post about their promotions; on Snapchat, they tell you about their anxieties at work. On Facebook, they show off smiling photos of their perfect kids on some perfect vacation. On Snapchat, they show pictures of their kids in the midst of some disastrous tantrum, throwing food all over the floor, peeing in the tub, covered in mud and paint and food, because hey, that’s life, O.K.?
But, of course, nowadays, all I keep hearing about is how media organizations need to “have a Snapchat strategy.” And Snapchat itself is promoting this rhetoric as well. Lots of news organizations have jumped on board Snapchat in a big way, and we’ve heard that some are having great success with it. But as cool as I find Snapchat, I’m probably going to continue to stick with my general strategy of trying to create good content and hope that you continue to find it worthwhile. I’ll leave the “gaming” of social media to everyone else.
On Monday we wrote about T-Mobile flat out lying about the nature of its BingeOn mobile video service — and after a couple of days of silence, the company has come out swinging, by lying some more and weirdly attacking the people who have accurately portrayed the problems with the service. As a quick reminder, the company launched the service a few months ago, claiming two things (though without making it entirely clear how separate these two things were): (1) that it would not count data for streaming video from certain “partner” companies and (2) that it would be “optimizing” video for all users (though, through a convoluted process, you could opt out).
There were a bunch of problems with this, starting with the fact that favoring some partner traffic over others by exempting it from a cap (i.e., zero rating) is a sketchy way to backdoor in net neutrality violations. But the bigger issue was that almost everything about T-Mobile’s announcement implied that only “partner” video was being “optimized,” while the reality was that the company was doing it to any video it could find (even downloaded, not streamed). The biggest problem of all, however, was that the video was not being “optimized” at all; it was simply being throttled.
Once the throttling was called out, T-Mobile went on a weird PR campaign, flat out lying, and saying that what they were doing was “optimizing” not throttling and that it would make videos stream faster and save users data. However, as we pointed out, that’s blatantly false. Videos from YouTube, for example, were encrypted, meaning that T-Mobile had no way to “optimize” it, and tests from EFF proved pretty conclusively that the only thing T-Mobile was doing was slowing connection speeds down to 1.5 Mbps when it sensed video downloads of any kind (so not even streaming), and that actually meant that the full amount of data was going through in many cases, rather than an “optimized” file. EFF even got T-Mobile to admit that this was all they were doing.
So that makes the response of T-Mobile execs yesterday and today totally baffling, because rather than actually respond to the charges, they’ve doubled down on the blatant lying, suggesting that either its executives have no idea what the company is actually doing, or that they are purposely lying to their users, which isn’t exactly the “uncarrier” way the company likes to promote itself.
We’ll start with the big cheese himself, CEO John Legere, whose claim to fame is how “edgy” he is as a big company CEO. He’s now released a statement and a video that are in typical Legere outspoken fashion — but it’s full of blatant lies.
The video and the typed statement are fairly similar, but Legere adds some extra color in the video version.
Let’s parse some of the statements. I’ll mostly be using the ones from the written statement as they’re easier to cut and paste, rather than transcribe, but a few from the video are worth calling out directly.
I’ve seen and heard enough comments and headlines this week about our Binge On video service that it’s time to set the record straight. There are groups out there confusing consumers and questioning the choices that we fight so hard to give our customers. Clearly we have very different views of how customers get to make their choices — or even if they’re allowed to have choices at all! It’s bewildering… so I want to talk about this.
Of course, this is a nice but misleading attempt to frame the conversation. No one is complaining about “giving choices to consumers.” They’re complaining about (1) misleading consumers and (2) providing a worse overall experience by throttling, which (3) directly violates the FCC’s prohibition on throttling. The next part I’m taking from the video itself, rather than the printed statement, because Legere goes much further in the video, including the curses, which magically don’t show up in the printed version:
There are people out there saying we’re “throttling.” That’s a game of semantics and it’s bullshit! That’s not what we’re doing. Really! What throttling is is slowing down data and removing customer control. Let me be clear. BingeOn is neither of those things.
This is flat out wrong and suggests Legere doesn’t even know the details of his own service. As the EFF’s tests proved (and the fact that YouTube videos are encrypted should make clear) T-Mobile is absolutely slowing down data. In fact, EFF got T-Mobile to confirm this, so Legere claiming it’s “bullshit” is… well… bullshit!
But he’s playing some tricky word games here, claiming that throttling is not just slowing down data, but also removing customer control. That’s (1) not true and (2) also misleading. For all of Legere and T-Mobile’s talk about “giving more options to consumers” or whatever, they’re totally leaving out the fact that they automatically turned this on for all users without a clear explanation as to what was happening, leading to multiple consumer complaints about how their streaming videos were no longer functioning properly — even for users on unlimited data plans.
Customer choice? Sure, they could “opt out” afterwards, through a convoluted process that many did not understand. But T-Mobile made the choice for all its users, rather than providing a choice for its customers to make.
Mobile customers don’t always want or need giant heavy data files. So we built technology to optimize for mobile screens and stream at a bitrate designed to stretch your mobile data consumption. You get the same quality of video as watching a DVD, but use only 1/3 as much data (or, of course, NO data used when it’s a Binge On content provider!). That’s not throttling. That’s a huge benefit.
Again, this is both wrong and misleading. There is no optimization. Legere is lying. They are 100% slowing down the throughput on video when they sense it. The EFF’s tests prove as much. Yes, for some video providers when they sense lower bandwidth, they will downgrade the resolution, but that’s the video provider optimizing, not T-Mobile. T-Mobile is 100% throttling, and hoping that the video provider downgrades the video.
But in cases where that doesn’t happen then it doesn’t save any data at all (the EFF test confirmed that the full video file still comes through, just slower).
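Some back-of-the-envelope math makes that distinction concrete. This little Python sketch uses illustrative numbers — a hypothetical 300 MB video file and a 20 Mbps uncapped connection are my assumptions; only the 1.5 Mbps figure comes from EFF’s test:

```python
# Throttling vs. "optimization," in numbers: capping throughput at
# 1.5 Mbps doesn't shrink the file unless someone re-encodes it.
# The same bytes just arrive more slowly.

def transfer_seconds(file_megabytes, link_mbps):
    bits = file_megabytes * 8 * 1e6      # file size in bits
    return bits / (link_mbps * 1e6)      # link speed in bits per second

VIDEO_MB = 300                              # hypothetical video file
uncapped = transfer_seconds(VIDEO_MB, 20)   # illustrative LTE speed
capped = transfer_seconds(VIDEO_MB, 1.5)    # the cap EFF measured

# Data consumed is identical (300 MB) either way; only the wait grows.
print(f"uncapped: {uncapped:.0f}s, capped: {capped:.0f}s")
```

Same data bill, roughly thirteen times the wait: that is throttling, not optimization.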
Also, note the play on words: “You get the same quality of video as watching a DVD.” At first you think he’s saying you get the same video quality overall, but he’s not. He’s saying the same quality as a DVD, i.e. 480p, which is far lower than the 1080p that many HD videos are offered at. And that’s what many people are complaining about: they’d like to watch videos at the full 1080p, but T-Mobile decided that they can’t do that unless they go through a convoluted process to turn this off.
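For what it’s worth, the “1/3 as much data” arithmetic only works if a video is actually re-encoded at a lower bitrate. A quick sketch, using assumed ballpark bitrates of my own choosing (roughly 1.5 Mbps for DVD-like 480p and 4.5 Mbps for 1080p; real services vary):

```python
# If (and only if) the provider re-encodes at the lower bitrate, the
# savings claim checks out. Bitrates here are illustrative assumptions,
# not T-Mobile's published numbers.
MBPS_480P = 1.5    # assumed average bitrate for DVD-like 480p
MBPS_1080P = 4.5   # assumed average bitrate for 1080p HD

minutes = 10
mb_480 = MBPS_480P * minutes * 60 / 8    # megabytes for a 10-min video
mb_1080 = MBPS_1080P * minutes * 60 / 8

print(f"480p: {mb_480} MB, 1080p: {mb_1080} MB")
```

Under those assumptions, 112.5 MB vs 337.5 MB. But when the provider doesn’t re-encode (as with encrypted YouTube traffic), the full-size file still comes through and nothing is saved.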
Rather than respond to any of this, Legere then claims that “special interest groups” and Google are doing this… “to get headlines.”
So why are special interest groups — and even Google! — offended by this? Why are they trying to characterize this as a bad thing? I think they may be using Net Neutrality as a platform to get into the news.
Wait, what? Google — the same Google that absolutely refused to say anything publicly at all about net neutrality for years during the debate suddenly wants to get into the news by jumping on the net neutrality bandwagon? Does Legere have any idea how ridiculous that sounds? And it’s not like Google has a problem getting into the news. And what about EFF and others? Does he really think they need to get extra news coverage?
But note the facts here: at no point does Legere respond to the actual charges leveled against the company. He then concludes by yelling at everyone for daring to complain about this:
At T-Mobile we’re giving you more video. More choice. And a powerful new choice in how you want your video delivered. What’s not to love? We give customers more choices and these jerks are complaining, who the hell do they think they are? What gives them the right to dictate what my customers, or any wireless consumer can choose for themselves?
Nice. I’m part of the contingent complaining about this and I’m also a T-Mobile customer… and the CEO just called me a jerk while telling me he’s fighting for his customers? Really now?
And again, this whole statement is blatantly misleading. The “choice” was made by T-Mobile for all users, and getting out of it involves a convoluted process that most don’t understand, and none of this was made clear to end users up front. Beyond violating the FCC’s “no throttling” rule, I wonder if it also violates the FCC’s transparency rules, under which carriers are required to be much more upfront about how data is being treated.
Also, the statement above is from the video, where we’re described as “jerks.” The written version leaves out the “jerks” claim, but it does include the following bit mocking YouTube for letting users choose to change the resolution on videos:
YouTube complained about Binge On, yet at the same time they claim they provide choice to customers on the resolution of their video. So it’s ok for THEM to give customers choice but not for US to give our customers a choice? Hmmm. I seriously don’t get it.
But that’s bullshit also. YouTube’s choice is a clear pulldown on every video shown, so that a user just needs to click on the video they’re watching and set the resolution. T-Mobile’s is a process that’s not clear at all, with some users reporting they had to call in and get T-Mobile customer service to turn BingeOn off for their account. To compare the two situations is completely bonkers.
As far as I can tell, Legere either doesn’t understand what his own company is doing technically, or knows and is purposely misrepresenting it. Neither of those looks good, and both go against the entire “uncarrier” concept the company keeps pitching. As a T-Mobile customer, I’d expect better than being told that I’m a “jerk” for pointing this out.
And it appears he’s not the only senior exec at T-Mobile who still doesn’t realize what the company is doing. On Wednesday, at a Citigroup conference, T-Mobile’s Chief Operating Officer Mike Sievert spewed some more nonsense suggesting he, too, has no idea what his own company is doing:
At a Citigroup investor conference Wednesday, T-Mobile executives shot back, saying YouTube’s stance is “absurd.” YouTube is owned by Alphabet Inc. “We are kind of dumbfounded that a company like YouTube would think that adding this choice would somehow be a bad thing,” said T-Mobile Chief Operating Officer Mike Sievert. He said YouTube hasn’t “done the work yet to become part of the free service.”
Taken at face value, that comment makes no sense. If YouTube hasn’t done the work yet to become part of the free service, then why the fuck is T-Mobile slowing down its videos? YouTube wasn’t complaining about “adding this choice.” YouTube was complaining about direct throttling of video content by T-Mobile, in clear violation of the FCC’s prohibition on throttling.
Sievert and Legere both seem not to understand (1) what YouTube and users are complaining about or (2) what their own company is doing. That’s… troubling, given that these are the CEO and COO of the company. T-Mobile’s execs might want to spend some time talking to their own tech team, and learn that the only thing T-Mobile is doing to video is throttling it down to 1.5 Mbps, rather than any actual “optimization,” before spewing more nonsense and calling their own customers “jerks.” And they might want to realize that their claim that this is all “bullshit” is itself complete bullshit. And that their bullshit may very well violate the FCC’s rules.
If you haven’t heard about quantum computing, it’s an alternative to “classical computing” that relies on some strange properties of quantum physics. Sure, the computer (or whatever device) you’re reading this on also relies on physics a bit, but it stores information digitally as ones and zeroes — not as qubits that can exist in a superposition of both states at once. A few organizations are working on quantum computers (e.g., Google, NASA, D-Wave, Cambridge Quantum Computing, Yale Quantum Institute, Microsoft, IBM, etc.), but the true potential is still just slightly out of reach (for now).
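To make the “superposition” bit slightly less hand-wavy: a qubit’s state is a pair of amplitudes rather than a hard 0 or 1, and quantum gates rotate that pair. A minimal sketch in plain Python (no quantum SDK assumed):

```python
import math

# A qubit as a 2-entry state vector of amplitudes: [amp_for_0, amp_for_1].
state = [1.0, 0.0]  # starts as a definite |0>

def hadamard(s):
    # The Hadamard gate puts a definite state into an equal superposition.
    a, b = s
    h = 1 / math.sqrt(2)
    return [h * (a + b), h * (a - b)]

state = hadamard(state)

# Born rule: the measurement probability of each outcome is the squared
# magnitude of its amplitude. After the Hadamard, 0 and 1 are equally likely.
probs = [abs(amp) ** 2 for amp in state]
print(probs)
```

A classical bit would have to be one or the other; the qubit genuinely carries both amplitudes until measured, which is where the (still mostly theoretical) speedups come from.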
Some folks say romance is dead, but maybe it’s just in a deep meditative trance. In any case, it’s almost that time of year again when chocolates and flowers and possibly awkward marriage proposals are being considered. For the romantic souls, this is an annual tradition that shouldn’t be missed. For the more cynical, it’s just another commercial holiday to boost sales of various pink-heart themed items.
Queuing theory is a subset of applied math that looks into the behavior of waiting in line and algorithms that optimize various aspects of this particular kind of resource allocation. Retailers of all kinds are interested in this kind of math because it can improve customer satisfaction and get more products out the door. Apple reduces long cashier lines with employees who can accept payments anywhere in its stores. Fry’s Electronics has the giant single line that feeds into a massive array of cashiers (aka the serpentine line). There are self-checkout lanes at the grocery store, but there’s no silver bullet to eliminate waiting in lines. Here are just a few more links on this problem of civilization.
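The serpentine-line result is one of the few in queuing theory you can check with a few lines of code, using the standard M/M/1 and M/M/c (Erlang C) formulas. The numbers below are illustrative, not from any retailer: 1.8 customers arriving per minute and two cashiers who each serve 1 customer per minute.

```python
from math import factorial

def mm1_wait(lam, mu):
    # Mean wait in queue for one line feeding one server (M/M/1).
    rho = lam / mu
    return rho / (mu - lam)

def mmc_wait(lam, mu, c):
    # Mean wait for one shared line feeding c servers (M/M/c),
    # via the Erlang C probability that an arrival has to wait.
    a = lam / mu                     # offered load
    rho = a / c                      # per-server utilization
    num = (a ** c / factorial(c)) / (1 - rho)
    den = sum(a ** k / factorial(k) for k in range(c)) + num
    return (num / den) / (c * mu - lam)

lam, mu, c = 1.8, 1.0, 2
split = mm1_wait(lam / c, mu)   # two independent lines
pooled = mmc_wait(lam, mu, c)   # one serpentine line feeding both cashiers
print(f"split: {split:.1f} min, pooled: {pooled:.1f} min")
```

Same arrival rate, same cashiers, but pooling the line cuts the average wait from about 9 minutes to about 4.3 under these assumptions, because no cashier ever sits idle while someone waits in the wrong line. That's why the single giant line keeps showing up.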
There are some heavy details in all of this, many of them at least somewhat technical, so let’s dispense with the typical introductions and get right to the meat of this GPU industry sandwich. It’s no secret to anyone paying attention to the video game industry that the graphics processor war has long been waged primarily between rivals Nvidia and AMD. What you may not realize is just how involved those two companies are with the developers that use their cards and tools. It makes sense, of course, that the two primary players in PC GPUs would want to get involved with game developers to make sure their code is optimized for the systems on which they’ll be played. That way, gamers end up with games that run well on the cards in their systems, buy more games, buy more GPUs, and everyone is happy. According to AMD, however, Nvidia is attempting to lock out AMD’s ability to get involved with developers who use the Nvidia GameWorks toolset, and the results can already be seen in the hottest game of the season thus far.
Some as-brief-as-possible background to get things started. First, the GameWorks platform appears to be immensely helpful to developers creating graphically impressive games.
Developers license these proprietary Nvidia technologies like TXAA and ShadowWorks to deliver a wide range of realistic graphical enhancements to things like smoke, lighting, and textures. Nvidia engineers typically work closely with the developers on the best execution of their final code. Recent examples of Nvidia GameWorks titles include Batman: Arkham Origins, Assassin’s Creed IV: Black Flag, and this week’s highly anticipated Watch Dogs.
Now, while this is and should be a licensing-revenue win for Nvidia, aspects of the agreement in using GameWorks may actually seek to extend that win into a realm that threatens the larger gaming ecosystem. As mentioned previously, both Nvidia and AMD have traditionally worked extremely closely with developers, even going so far as assisting them in optimizing the game code itself to offer the best experience on their respective cards. How? Well, I’ll let PR lead for AMD, Robert Hallock, chime in.
“Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products,” Hallock told me in an email conversation over the weekend. But wait, it stands to reason that AMD would be miffed over a competitor having the edge when it comes to graphical fidelity and features, right? Hallock explains that the core problem is deeper: “Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization. The code obfuscation makes it difficult to perform our own after-the-fact driver optimizations, as the characteristics of the game are hidden behind many layers of circuitous and non-obvious routines,” Hallock continues. “This change coincides with NVIDIA’s decision to remove all public Direct3D code samples from their site in favor of a ‘contact us for licensing’ page. AMD does not engage in, support, or condone such activities.”
In other words, the dual symbiotic relationships that have always existed between developers and both Nvidia and AMD become one-sided, with AMD locked out of the process in some very important ways. It means that an essential information repository, and the communications lines for development and game code optimization, nearly become proprietary in favor of Nvidia. And, lest you think one shouldn’t simply take the word of a rival PR flack on this kind of thing, other tech journalists not only appear to agree, but predicted this exact outcome nearly a year ago, when the GameWorks program was first rolled out.
“AMD is no longer in control of its own performance. While GameWorks doesn’t technically lock vendors into Nvidia solutions, a developer that wanted to support both companies equally would have to work with AMD and Nvidia from the beginning of the development cycle to create a vendor-specific code path. It’s impossible for AMD to provide a quick after-launch fix. This kind of maneuver ultimately hurts developers in the guise of helping them.”
Forbes’ Jason Evangelho then digs into the title du jour, Watch Dogs, an Ubisoft production developed within the GameWorks platform. When a tech journalist is this surprised by how stark the difference in performance is between two rival GPU manufacturers, it’s worth taking him seriously.
I’ve been testing it over the weekend on a variety of newer AMD and Nvidia graphics cards, and the results have been simultaneously fascinating and frustrating. It’s evident that Watch Dogs is optimized for Nvidia hardware, but it’s staggering just how un-optimized it is on AMD hardware. I guarantee that when the game gets released, a swarm of upset gamers are going to point fingers at AMD for the sub-par performance. Their anger would be misplaced.
The graphic above may not appear all that staggering at first, until you understand the cards involved and what it actually represents. The two cards in question aren’t remotely in the same category of power and cost when compared to one another. That AMD card that is barely keeping up with the Nvidia card is a $500 workhorse, while the Nvidia card is a mid-range $300 staple of their lineup. Both cards were updated with the latest drivers for Watch Dogs prior to testing. The problem, as suggested above, is that the level of optimization done for the Nvidia cards far outpaces what’s been done on AMD’s end, and that is thanks to the way the GameWorks platform is licensed and controlled. Games outside of that platform, with the exact same cards being tested, tell a far different story.
To further put this in perspective, AMD’s 290x graphics card performs 51% better than Nvidia’s 770 on one of the most demanding PC titles around, Metro: Last Light — which also happens to be an Nvidia optimized title. As you would expect given their respective prices, AMD’s flagship 290x can and should blow past Nvidia’s 770 and compete with Nvidia’s 780Ti on most titles. To really drive the point home, my Radeon 290x can hit 60fps on Metro: Last Light with High quality settings and 4x anti-aliasing, at a higher resolution of 1440p.
There’s some history here, with Nvidia having a reputation for being more proprietary than AMD, which has always been seen as more of an open-source, open-dialogue, open-competition company. Indeed, Nvidia even has some history of trying to hide collusion with competitors behind trade secret law. But if it’s allowed to simply lock up the open dialogue that everyone agrees makes for the best gaming ecosystem all around, the results could be quite poor for the PC gaming community as a whole. Particularly if upset AMD GPU owners who aren’t aware of the background end up pointing fingers at their co-victims of Nvidia, rather than at the villain itself.
Back in April, in writing about Union Square Ventures’ Hacking Society event, I discussed the importance of measuring the unmeasurable, in noting that we all too often seem to be evaluating information-era economics using industrial-era metrics. That’s a problem. Nick Grossman, who organized that Hacking Society session, has a great post discussing this same concept and highlighting Tim O’Reilly’s discussion about this topic, in which he describes the clothesline paradox, which actually seems to come from a discussion in the early 1970s, and highlights how metrics can mislead. You can think of the clothesline paradox like this:
If you take down your clothes line and buy an electric clothes dryer the electric consumption of the nation rises slightly. If you go in the other direction and remove the electric clothes dryer and install a clothesline the consumption of electricity drops slightly, but there is no credit given anywhere on the charts and graphs to solar energy which is now drying the clothes.
In my mind, there are two “problems” associated with this, and while I think there is interest in attacking the first one, the second problem is often ignored. The first problem is that we notice that important information is measured with the wrong metrics. We see this all the time in the internet era. People talk about “the collapse” of the music industry, but miss the fact that more music has been produced, recorded and released in the last decade than in any previous decade. In fact, some of the evidence suggests more music was produced and recorded in the last decade than all other decades combined. Of course, that’s an example of a metric that can be determined, but not all such metrics are that easy to pin down. For example, we talked about how Craigslist almost certainly helped contribute to the challenge that many newspapers are facing, because it undercut the cash cow that supported many of them: the classified advertising business. And if you used traditional metrics, you’d bizarrely and incorrectly suggest that Craigslist somehow “destroyed” value. But that’s because no one takes into account all the value that Craigslist created, not for itself, but for its users. But how do you measure the fact that I can now find someone to take my old couch away for free? There’s value in that transaction, but no one “measures” it. What about the fact that I can more efficiently rent out an apartment – without having to pay the local newspaper? Again, there’s value, but it’s not properly measured.
The second problem is a little trickier to understand. It’s that when we have things that we can measure, we instinctively gravitate towards using those metrics, even if they’re the wrong metrics! I was thinking about this as I read Paul Graham’s excellent thoughts on “black swan farming,” which is all about the counter-intuitive process involved in funding startups. There’s a ton of tremendously thought-provoking lines in that piece, but I’m going to concentrate on one, which was really more of an aside, unrelated to the larger article (which you should go read), because it helped clarify my thinking on this point. Graham talks about not bothering to measure how many of the YCombinator companies he funds and trains later go on to raise more money after their initial fundraising efforts, noting:
I deliberately avoid calculating that number, because if you start measuring something you start optimizing it, and I know it’s the wrong thing to optimize.
And here’s where the problem of using the wrong metrics becomes compounded. Even if you know something is the wrong metric, just having the number almost forces you to optimize for it. So rather than looking at, say, what’s best for the overall culture of music, we look at “revenue for the record labels” and decide we need to “fix” that. Or, we look at the patent system as a proxy number for “innovation” and then the focus becomes solely on increasing the number of patents we issue, rather than on actually maximizing innovation.
When you have the wrong metrics, not only do you have bad or incomplete information, but even when you know it, it’s almost impossible not to optimize for those metrics, because you don’t have anything else to work towards.
There is a lot of new interest in quantifying all sorts of new data — and one benefit of the information age is that it also helps to create new data that can be quantified. But not all quantified data is actually that useful, and unfortunately, we often get so focused on the fact that we have a number that we ignore the possibility that the number is not telling us anything useful.
I was recently reminded of Shelby Bonnie’s opinion piece from three years ago about why we need to kill the CPM as a metric for advertising (for those who don’t know, CPM — or “cost per thousand” impressions — is how most banner ads are sold). He noted, quite accurately, that even those with the best of intentions to get away from “CPM-based” advertising seem to end up there in the end anyway. Because we have that number. And it becomes what people optimize around, just because it’s there.
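Since CPM pricing comes up above, here is a minimal sketch of the arithmetic behind it — the impression counts and rate below are made up purely for illustration:

```python
def campaign_cost(impressions, cpm_rate):
    """Cost of a banner-ad buy priced by CPM (cost per thousand impressions)."""
    return impressions / 1000 * cpm_rate

# Hypothetical buy: 2.5 million impressions at a $4.00 CPM
print(campaign_cost(2_500_000, 4.00))  # 10000.0
```

Because the formula rewards only raw impression volume, a publisher optimizing against it has every incentive to generate more (and cheaper) pageviews — which is exactly the dynamic Bonnie describes.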
All campaigns start with the best of intentions: “let’s do something creative, engaging, and unique!” But unless someone really senior from the agency or client side intervenes, the road for a campaign always leads to the media buyer and the dreaded spreadsheet, where the two most important columns are impressions and cost. Ironically, there’s usually some good stuff in campaigns, but they are thrown in for free as “value adds.” At some point, publishers decide that if all clients care about is impressions, then OK, we’ll give them impressions. The output is an industry that overproduces shallow, superficial, commoditized impressions. Why do we have so many bad sites that republish the same junky content–content that’s often made by machines or $1-per-post contractors? Why do sites intentionally try to get us to turn lots of pages with tons of top 10 lists, photo galleries, or single-paragraph summaries of someone else’s story?
The more I spend time thinking about these issues, the more I think these combined problems — both not having the right data and then optimizing for the wrong data — are the keys to many of the issues that we’re regularly discussing around here. Figuring out ways to get beyond that, and to find the right data, and break our habits of relying on bad data are going to be increasingly important.
Having talked with a bunch of music execs recently, as well as a few different companies that do analytics in the music space, one thing became clear: unlike most other industries, record label execs tend not to be particularly data or analytics-driven. Let’s just say they didn’t get into the recording industry because they were good at math. There are a few exceptions, obviously, but getting many industry execs to think logically and examine data isn’t particularly easy. This isn’t that surprising, given how many examples there are of actions by big record label execs that make little to no sense when examined analytically.
Yet another study has come out suggesting that the industry has its pricing all wrong, pointing out that dropping the price of music would increase sales enough to increase overall profits. And yet what has the industry been trying to do? That’s right: trying to raise the price. The study suggested that the “optimal” price for music might be closer to $0.60 per track. That still seems way too high to me when you look at how people flocked to services like Allofmp3.com, but in general I think the basic concept makes sense. You can maximize revenue by dropping prices, but it doesn’t seem like many record industry execs have realized that.
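The underlying economics can be sketched with a toy demand curve. Everything here is hypothetical — the study didn’t publish a demand function, and the baseline sales number and elasticity value below are invented — but it shows why, when demand is elastic enough, cutting the price raises total revenue:

```python
def units_sold(price, base_units=100, elasticity=2.0, base_price=0.99):
    """Toy constant-elasticity demand curve: sales rise as price falls.

    With elasticity > 1, the gain in units sold more than offsets
    the lower price, so revenue goes up as the price comes down.
    All parameter values are made up for illustration.
    """
    return base_units * (base_price / price) ** elasticity

for price in (0.99, 0.60, 0.30):
    revenue = price * units_sold(price)
    print(f"${price:.2f}/track -> revenue ${revenue:.2f}")
```

With these assumed numbers, the $0.60 price point yields roughly $163 in revenue against $99 at $0.99 — the direction of the effect the study describes, even though the real-world demand curve (and where revenue actually peaks) is an empirical question.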