Why Tech Might Actually Be The Solution To Capitalism's Addiction Problem
from the problem-not-a-problem dept
Source: The Atlantic
Maya MacGuineas, the president of the Committee for a Responsible Federal Budget, published a frightening article about technology and capitalism in the April edition of The Atlantic magazine. MacGuineas contends that the tech companies are manipulating us into using their products, addicting our children to potentially harmful devices, and stealing our extremely valuable data in exchange for “free” services.
MacGuineas warns us of “habit-forming” products and the “Orwellian art of manipulating the masses”:
Many technology companies engineer their products to be habit-forming. A generation of Silicon Valley executives trained at the Stanford Behavior Design Lab in the Orwellian art of manipulating the masses. The lab’s founder, the experimental psychologist B. J. Fogg, has isolated the elements necessary to keep users of an app, a game, or a social network coming back for more. One former student, Nir Eyal, distilled the discipline in Hooked: How to Build Habit-Forming Products, an influential manual for developers. In it, he describes the benefits of enticements such as “variable rewards”—think of the rush of anticipation you experience as you wait for your Twitter feed to refresh, hoping to discover new likes and replies. Introducing such rewards to an app or a game, Eyal writes approvingly, “suppresses the areas of the brain associated with judgment and reason while activating the parts associated with wanting and desire.”
Except the masses aren’t so easy to manipulate. One experiment on online shopping behavior found that the whales in the market are not much influenced by advertising at all: “More frequent users whose purchasing behavior is not influenced by ads account for most of the advertising expenses, resulting in average returns that are negative.” Arguably the most famous line in the history of the advertising industry comes from nineteenth-century retailer John Wanamaker: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”
Even with the advent of microtargeting based on behavioral and contextual data, there is still a debate within the industry about whether advertising is worth the cost. A 2014 piece in The Atlantic by Derek Thompson is simply titled, “A Dangerous Question: Does Internet Advertising Work at All?” Thompson comes to a bleak conclusion: “The more we learn which half of advertising is working, the more we realize we’re wasting way more than half.”
There’s also reason to believe that advertising and other persuasive techniques are less effective in the internet age than they used to be. We have moved from an environment of information scarcity — in which companies had some amount of control over their brand image — to one of information abundance. As Thompson put it,
Think about how much you can learn about products today before seeing an ad. Comments, user reviews, friends’ opinions, price-comparison tools: These things aren’t advertising (although they’re just as ubiquitous). In fact, they’re much more powerful than advertising because we consider them information rather than marketing. The difference is enormous: We seek information, so we’re more likely to trust it; marketing seeks us, so we’re more likely to distrust it.
Even if targeted advertising is largely ineffective at influencing consumer decisions, maybe cutting-edge machine learning algorithms — the ones used to recommend content in social media feeds — can still make a big impact on user behavior. Consider this recent story from the NYT (emphasis added):
Google Brain’s researchers wondered if they could keep YouTube users engaged for longer by steering them into different parts of YouTube, rather than feeding their existing interests. And they began testing a new algorithm that incorporated a different type of A.I., called reinforcement learning.
The new A.I., known as Reinforce, was a kind of long-term addiction machine. It was designed to maximize users’ engagement over time by predicting which recommendations would expand their tastes and get them to watch not just one more video but many more.
Reinforce was a huge success. In a talk at an A.I. conference in February, Minmin Chen, a Google Brain researcher, said it was YouTube’s most successful launch in two years. Sitewide views increased by nearly 1 percent, she said — a gain that, at YouTube’s scale, could amount to millions more hours of daily watch time and millions more dollars in advertising revenue per year.
YouTube’s “most successful launch in two years” netted the platform an increase in views of less than 1 percent. Spread across a billion users, that is a significant aggregate achievement. But from the perspective of any individual YouTube user, the change is barely noticeable. Some people seem to believe that humans are sheep and tech companies are shepherds who can guide them wherever the profit motive leads. But the data contradicts this thesis at every step.
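The scale effect above is easy to see with some back-of-envelope arithmetic. The user count, average watch time, and exact lift below are illustrative assumptions, not YouTube’s actual figures; the point is only that a sub-1% lift is invisible per user yet huge in aggregate:

```python
# Illustrative assumptions (not actual YouTube figures):
daily_users = 1_000_000_000     # assumed active users per day
avg_daily_minutes = 40          # assumed average watch time per user
lift = 0.008                    # "nearly 1 percent" sitewide increase

# Per-user effect: a fraction of a minute per day.
extra_minutes_per_user = avg_daily_minutes * lift

# Aggregate effect: millions of extra hours of daily watch time.
extra_hours_sitewide = daily_users * extra_minutes_per_user / 60

print(f"Per user: ~{extra_minutes_per_user:.2f} extra minutes/day")
print(f"Sitewide: ~{extra_hours_sitewide / 1e6:.1f} million extra hours/day")
```

Under these assumptions, each user watches roughly a third of a minute more per day, while the platform gains more than five million hours of daily watch time — consistent with the NYT’s “millions more hours” framing without implying any meaningful change to an individual’s behavior.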
MacGuineas also leaves out some crucial context about Eyal’s book Hooked. As he told Ezra Klein in an interview for Vox, Eyal wrote the book not only to explain how Big Tech was trying to influence us but also to democratize these tools for small- and medium-sized businesses. The hope was that once small competitors had the same tools and strategies as the tech giants, there would be a more level playing field in the market. The techniques described in Hooked are now common knowledge across many industries, so it is unlikely that consumer decisions between product A and product B are distorted on the margin.
It’s also worth noting that Eyal recently wrote another book called Indistractable: How to Control Your Attention and Choose Your Life. The goal of the book is to provide readers with tips, strategies, and advice for aligning their short-term behavior with their long-term goals. People should be accountable for the decisions they make about how to spend their time and money and books like Indistractable are useful in helping individuals make the best choices for their self-interest in the long run.
MacGuineas also chooses to cite an odd example of the harms caused by the tech industry:
And [the tech companies] do, in fact, manipulate our behavior. As Harvard Business School’s Shoshana Zuboff has noted, the ultimate goal of what she calls “surveillance capitalism” is to turn people into marionettes. In a recent New York Times essay, Zuboff pointed to the wild success of Pokémon Go. Ostensibly a harmless game in which players use smartphones to stalk their neighborhoods for the eponymous cartoon creatures, the app relies on a system of rewards and punishments to herd players to McDonald’s, Starbucks, and other stores that pay its developers for foot traffic. In the addiction economy, sellers can induce us to show up at their doorstep, whether they sell their wares from a website or a brick-and-mortar store. And if we’re not quite in the mood to make a purchase? Well, they can manipulate that, too. As Zuboff noted in her essay, Facebook has boasted of its ability to subliminally alter our moods.
Pokémon Go is ostensibly and actually harmless. When an augmented reality video game nudges you to walk by a Starbucks, you do not suffer any tangible consumer injury. You still retain your autonomy and the research shows that tiny nudges like this have almost no effect on your ultimate choices. It would be unsurprising if in the near future companies pulled their spending from Pokémon Go because it proved to be ineffective, like so much of the rest of the advertising industry.
According to Common Sense Media, “US teens spend an average of more than seven hours per day on screen media for entertainment, and tweens spend nearly five hours.” MacGuineas finds this usage alarming, calling tech products “addictive” and “potentially harmful”:
American society has long treated habit-forming products differently from non-habit-forming ones. The government restricts the age at which people can buy cigarettes and alcohol, and dictates places where they can be consumed. Until recently, gambling was illegal in most places, and closely regulated. But Big Tech has largely been left alone to insinuate addictive, potentially harmful products into the daily lives of millions of Americans, including children, by giving them away for free and even posturing as if they are a social good. The most addictive new devices and apps may need to be put behind the counter, as it were—packaged with a stern warning about the dangers inherent in their use, and sold only to customers of age.
This much screen time sounds excessive, and maybe it is. But while the use of technology by teenagers (e.g., smartphones, social media, video games) has been trending up over the last 20 years, risky behaviors (e.g., drugs, alcohol, cigarettes, sex) have been trending down in almost every category:
Source: Washington Post
Which of these is “capitalism’s addiction problem”? Given how many risky behaviors are on the decline, tech products may be capitalism’s addiction solution rather than its problem.
Now, as Jonathan Haidt has shown, there is some valid concern about the effect social media has on certain subgroups, in particular pre-teen girls. The rate of non-fatal self-harm in this group nearly tripled between 2000 and 2015. But does this mean we need government regulators to ban these products for everyone?
Not quite. Haidt recommends simple advice for parents to protect their kids: “I am on a campaign to encourage parents to adopt 3 norms: 1) all screens out of bedroom 30 min before bedtime; 2) no social media until high school; 3) time limits on total daily device use (such as 2 hrs or less).” Given the evidence, these kinds of limits seem reasonable for mitigating the harms of letting children use technology at too young an age.
Lastly, MacGuineas also thinks regulators should require people to pay for Facebook:
Perhaps the most immediate and important change we can make is to introduce transparency—and thus, trust—to exchanges in the technological realm. At present, many of the products and services with the greatest power to manipulate us are “free,” in the sense that we don’t pay to use them. But we are paying, in the form of giving up private data that we have not learned to properly value and that will be used in ways we don’t fully understand. We should start paying for platforms like Facebook with our dollars, not our data.
The logic here is: (1) your data is more valuable than you realize; (2) therefore, you should be forced to pay Big Tech companies to access services that are currently free. It also betrays a certain level of privilege to ignore the fact that many people, especially those in the developing world, cannot afford to pay for these digital services. And while it may feel different in our own solipsistic worlds, the sad truth is that our personal data is not worth nearly as much as MacGuineas and others claim.
The prices from the data broker market are startlingly low:
“General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”
“Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”
“For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”
“[T]he sum total for most individuals often is less than a dollar.”
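Adding up the per-person prices quoted above makes the point concrete. (This is an illustrative sum; the quoted prices come from separate list types, so even stacking them together overstates what a typical individual’s bundled data would fetch.)

```python
# Per-person prices from the data broker figures quoted above.
prices = {
    "basic demographics (age, gender, location)": 0.0005,
    "second-trimester pregnancy flag": 0.11,
    "health condition / prescription list entry": 0.26,
}

total = sum(prices.values())
print(f"Combined price: ${total:.4f} per person")  # well under a dollar
```

Even the combined total — about 37 cents — is consistent with the observation that “the sum total for most individuals often is less than a dollar.”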
Given the market’s actual valuation of our personal data, we should just take the free services.
The tropes in this article are nothing new for those who have been following this debate over the last few years. The false narrative that tech is especially addictive and harmful has been on the rise for quite some time now. Unfortunately, that doesn’t make it any more true.
Alec Stapp is the Director of Technology Policy at the Progressive Policy Institute