alec.stapp's Techdirt Profile


Posted on Techdirt - 20 May 2020 @ 03:37pm

The Case For Contact Tracing Apps Built On Apple And Google's Exposure Notification System

Apple and Google have now released updates to their mobile operating systems that add a new capability for COVID-19 exposure notification. This new technology, which will support contact tracing apps developed by public health agencies, is technically impressive: it enables notifications of possible contact with COVID-positive individuals without leaking any sensitive personal data. The only data exchanged by users are rotating random keys (i.e., unique 128-bit strings of 0s and 1s) and encrypted metadata (i.e., the protocol version in use and transmitted power levels). Keys of infected individuals, but not their identities or their locations, are shared with the network upon a positive test with the approval of a government-sanctioned public health app.

Despite being a useful tool in the pandemic arsenal and adopting state-of-the-art techniques to protect privacy, the Apple-Google system has drawn criticism from several quarters. Privacy advocates are dreaming up ways the system could be abused. Anti-tech campaigners are decrying “tech solutionism.” None of these critiques stands up to scrutiny.

How the exposure notification API works

To get a sense for how the Apple-Google exposure notification system works, it is useful to consider a hypothetical system involving raffle tickets instead of Bluetooth beacons. Imagine you were given a roll of two-part raffle tickets to carry around with you wherever you go. Each ticket has two copies of a randomly-generated 128-digit number (with no relationship to your identity, your location, or any other ticket; there is no central record of ticket numbers). As you go about your normal life, if you happen to come within six feet of another person, you exchange a raffle ticket, keeping both the ticket they gave you and the copy of the one you gave them. You do this regularly and keep all the tickets you’ve exchanged for the most recent two weeks.

If you get infected with the virus, you notify the public health authority and share only the copies of the tickets you’ve given out; the public health officials never see the raffle tickets you’ve received. Each night, on every TV and radio station, a public health official reads the numbers of the raffle tickets it has collected from infected patients (it is a very long broadcast). Everyone listening to the broadcast checks the tickets they’ve received in the last two weeks to see if they’ve “won.” Upon confirming a match, an individual has the choice of doing nothing or seeking out a diagnostic test. If they test positive, then the copies of the tickets they’ve given out are announced in the broadcast the next night. The more people who collect and hand out raffle tickets everywhere they go, and the more people who voluntarily announce themselves after hearing a match in the broadcast, the better the system works for tracking, tracing, and isolating the virus.
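
The raffle-ticket scheme can be sketched in a few lines of Python. This is a toy illustration of the analogy, not the actual protocol: real identifiers are derived cryptographically, and all names below are invented for the sketch.

```python
import secrets

def new_ticket():
    """A random 128-bit ticket with no link to identity or location."""
    return secrets.token_bytes(16)  # 16 bytes = 128 bits

# Each person keeps the tickets they hand out and the tickets they receive.
alice_given, alice_received = [], []
bob_given, bob_received = [], []

# Alice and Bob come within six feet and exchange tickets.
t_alice, t_bob = new_ticket(), new_ticket()
alice_given.append(t_alice); bob_received.append(t_alice)
bob_given.append(t_bob); alice_received.append(t_bob)

# Alice tests positive and shares only the tickets she handed out --
# never the ones she received.
broadcast = set(alice_given)

# Everyone checks their *received* tickets against the broadcast.
bob_exposed = any(t in broadcast for t in bob_received)
print(bob_exposed)  # True: Bob learns he was exposed, but not by whom
```

Note that the broadcast reveals nothing about Bob: his received tickets never leave his hands unless he too tests positive and shares the tickets he gave out.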

The Apple-Google exposure notification system works similarly, but instead of raffle tickets, it uses low-power Bluetooth signals. Every modern phone comes with a Bluetooth radio that is capable of transmitting and receiving data over short distances, typically up to around 30 feet. Under the design agreed to by Apple and Google, iOS and Android phones updated to the new OS, that have their Bluetooth radios on, and that have a public health contact tracing app installed will broadcast a randomized number that changes every 10 minutes. In addition, phones with contact tracing apps installed on them will record any keys they encounter that meet criteria set by app developers (public health agencies) on exposure time and signal strength (say, a signal strength correlating with a distance up to around six feet away). These parameters can change with new versions of the app to reflect growing understanding of COVID-19 and the levels of exposure that will generate the most value to the network. All of the keys that are broadcast or received and retained are stored on the device in a secure database.
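
The two mechanics in that paragraph — identifier rotation every 10 minutes, and recording only encounters that meet app-defined thresholds — can be sketched as follows. This is a simplified illustration: the real protocol derives identifiers from a daily key rather than generating fresh random values, and the threshold numbers below are assumptions, not values any real app uses.

```python
import secrets

ROTATION_SECONDS = 600  # a new identifier every 10 minutes

class Broadcaster:
    """Sketch of identifier rotation: each 10-minute interval gets a
    fresh random 128-bit value (no cryptographic fidelity intended)."""
    def __init__(self):
        self._keys_by_interval = {}

    def key_for(self, timestamp):
        interval = int(timestamp) // ROTATION_SECONDS
        if interval not in self._keys_by_interval:
            self._keys_by_interval[interval] = secrets.token_bytes(16)
        return self._keys_by_interval[interval]

# Receivers retain a key only if exposure parameters set by the public
# health app are met. Illustrative assumptions only:
MIN_EXPOSURE_SECONDS = 300  # assumed: at least five minutes of contact
MIN_RSSI = -70              # assumed: signal strength roughly within six feet

def should_record(exposure_seconds, rssi):
    return exposure_seconds >= MIN_EXPOSURE_SECONDS and rssi >= MIN_RSSI

b = Broadcaster()
print(b.key_for(0) == b.key_for(599))  # True: same 10-minute interval
print(b.key_for(0) == b.key_for(600))  # False: the key has rotated
print(should_record(600, -60))         # True: long, close contact
print(should_record(30, -90))          # False: brief, distant contact
```

Because the thresholds live in the app rather than the OS, public health agencies can tune them in later versions as understanding of COVID-19 transmission improves, exactly as the paragraph describes.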

When an individual receives a positive COVID-19 diagnosis, she can alert the network to her positive status. Using the app provided by the public health authority, and with the authority?s approval, she broadcasts her recent keys to the network. Phones download the list of positive keys and check to see if they have any of them in their on-device databases. If so, they display a notification to the user of possible COVID-19 exposure, reported in five-minute intervals up to 30 minutes. The notified user, who still does not know the name or any other data about the person who may have exposed her to COVID-19, can then decide whether or not to get tested or self-isolate. No data about the notified user leaves the phone, and authorities are unable to force her to take any follow-up action.
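
The five-minute reporting granularity mentioned above can be sketched like this; rounding down is an assumption for illustration, as the source does not specify the exact rounding behavior.

```python
def reported_exposure_minutes(actual_minutes):
    """Report exposure duration in five-minute intervals, capped at
    30 minutes, as described in the text. Rounding down is assumed."""
    capped = min(actual_minutes, 30)
    return (capped // 5) * 5

print(reported_exposure_minutes(12))  # 10
print(reported_exposure_minutes(45))  # 30
```

Coarsening the duration this way is itself a privacy measure: the notified user learns enough to judge her risk, but not enough to pinpoint a specific encounter.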

Risks to privacy and abuse are extremely low

As global companies, Google and Apple have to operate in nearly every country around the world, and they need to set policies that are robust to the worst civil liberties environments. This decentralized notification system is exactly what you would design if you needed to implement a contact tracing system but were concerned about adversarial behavior from authoritarian governments. No sensitive data ever leaves the phone without the user’s express permission. The broadcast keys themselves are worthless, and cannot be tied back to a user’s identity or location unless the user declares herself COVID-positive through the public health app.

Some European governments think Apple and Google?s approach goes too far in preserving user privacy, saying they need more data and control. For example, France has indicated that it will not use Apple and Google?s API and has asked Apple to disable other OS-level privacy protections to let the French contact tracing app be more invasive (Apple has refused). The UK has also said it will not use Apple and Google?s exposure notification solution. The French and British approach creates a single point of failure ripe for exploitation by bad actors. Furthermore, when the government has access to all that data, it is much more likely to be tempted to use it for law enforcement or other non-public health-related purposes, risking civil liberties and uptake of the app.

Despite the tremendous effort the tech companies exerted to bake privacy into their API as a fundamental value, it is not enough for some privacy advocates. At Wired, Ashkan Soltani speculates about a hypothetical avenue for abuse. Suppose someone set up a video camera to record the faces of people who passed by, while also running a rooted phone (one where the user has circumvented controls installed by the manufacturer) that gave the perpetrator direct access to the keys involved. Then, argues Soltani, when a COVID-positive key was broadcast over the network, the snoop could correlate it with the face of a person captured on camera and use that to identify the COVID-positive individual.

While it is appropriate for security researchers like Soltani to think about such hypothetical attacks, the real-world damage from such an inefficient possible exploit seems dubious. Is a privacy attacker going to place cameras and rooted iPhones every 30 feet? And how accurate would this attack even be in crowded areas? In a piece for the Brookings Institution with Ryan Calo and Carl Bergstrom, Soltani doubles down, pointing out that “this ‘decentralized’ architecture isn’t completely free of privacy and security concerns” and “opens apps based on these APIs to new and different classes of privacy and security vulnerabilities.”

Yet if “completely free of privacy and security concerns” is the standard, then any form of contact tracing is impossible. Traditional physical contact tracing involves public health officials interviewing infected patients and their recent contacts, collecting that information in centralized government databases, and connecting real identities to contacts. The Google-Apple exposure notification system clearly outperforms traditional approaches on privacy grounds. Soltani and his collaborators raise specious problems and offer no solution other than privacy fundamentalism.

Skeptics of the Apple-Google exposure notification system point to a recent poll by the Washington Post that found “nearly 3 in 5 Americans say they are either unable or unwilling to use the infection-alert system.” About 20% of Americans don’t own a smartphone, and of those who do, around 50% said they definitely or probably would not use the system. While it’s too early to know how much each component of coronavirus response contributes to suppression, evidence from Singapore and South Korea suggests that technology can augment the traditional public health toolbox (even with low adoption rates). In addition, there are other surveys with contradictory results. According to a survey by Harris Poll, “71% of Americans would be willing to share their own mobile location data with authorities to receive alerts about their potential exposure to the virus.” Notably, cell phone location data is much more sensitive than the encrypted Bluetooth tokens in the Apple-Google exposure notification system.

Any reasonable assessment of the tradeoff between privacy and effectiveness for contact tracing apps will conclude that if the apps are at all effective, they are overwhelmingly beneficial. For cost-benefit analysis of regulations, the Environmental Protection Agency has established a benchmark of about $9.5 million per life saved (other government agencies use similar values). By comparison, the value of privacy varies depending on context, but the range is orders of magnitude lower than the value of saving a life, according to a literature review by Will Rinehart.
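
The orders-of-magnitude gap described above can be made concrete with a back-of-envelope calculation. The per-life figure comes from the text; the per-person privacy valuation and the number of affected users are assumed values chosen purely for illustration.

```python
# From the text: EPA benchmark of roughly $9.5 million per life saved.
VALUE_PER_LIFE_SAVED = 9_500_000

# Assumed for illustration: a generous per-person privacy valuation
# spread across a large user base.
ASSUMED_PRIVACY_COST_PER_USER = 10  # dollars (assumption)
ASSUMED_AFFECTED_USERS = 100_000    # assumption

benefit_of_one_life = VALUE_PER_LIFE_SAVED
total_privacy_cost = ASSUMED_PRIVACY_COST_PER_USER * ASSUMED_AFFECTED_USERS

print(benefit_of_one_life > total_privacy_cost)  # True: $9.5M vs. $1M
```

Even under these deliberately generous privacy assumptions, a single life saved outweighs the aggregate privacy cost by nearly an order of magnitude.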

If we have any privacy-related criticism of the tech companies’ exposure notification API, it is that it requires the user to opt in by downloading a public health contact tracing app before it starts exchanging keys with other users. This is a mistake for two reasons. First, it signals that there is a privacy cost to the mere exchange of keys, which there is not. Even the wildest scenarios concocted by security researchers entail privacy risks from the API only when a user declares herself COVID-positive. Second, it means that the value of the entire contact tracing system is dependent on uptake of the app at all points in time. If the keys were exchanged all along, then even gradual uptake of the app would unlock value in the network that had built up even before users installed the app.

The exposure notification API is part of a portfolio of responses to the pandemic

Soltani, Calo, and Bergstrom raise other problems with contact tracing apps. They will result in false positives (notifications about exposures that didn’t result in transmission of the disease) and false negatives (failures to notify about exposure because not everyone has a phone or will install the app). If poorly designed (without verification from the public health authority), apps could allow individuals who are not COVID-positive to “cry wolf” and frighten a bunch of innocent people, a practice known in the security community as “griefing.” They want their readers to understand that the rollout of a contact tracing app using this API will not magically solve the coronavirus crisis.

Well, no shit. No one is claiming that these apps are a panacea. Rather, the apps are part of a portfolio of responses that can together reduce the spread of COVID and potentially avoid the need for rolling lockdowns until a cure or vaccine is found (think of how many more false negatives there would be in a world without any contact tracing apps). We will still need to wear masks, supplement phone-based tracing methods with traditional contact tracing, and continue some level of distancing until the virus is brought fully under control. (For a point-by-point rebuttal of the Brookings article, see here from Joshua B. Miller).

The exposure notification API developed by Google and Apple is a genuine achievement: it will enable the most privacy-respecting approach to contact tracing in history. It was developed astonishingly quickly at a time when the world is in desperate need of additional tools to address a rapidly spreading disease. The engineers at Google and Apple who developed this API deserve our applause, not armchair second-guessing from unpleasable privacy activists.

Under ordinary circumstances, we might have the luxury of interminable debates as developers and engineers tweaked the system to respond to every objection. However, in a pandemic, the tradeoff between speed and perfection shifts radically. In a viral video in March, Dr. Michael J. Ryan, the executive director of the WHO Health Emergencies Programme, was asked what he’s learned from previous epidemics and he left no doubt with his answer:

Be fast, have no regrets. You must be the first mover. The virus will always get you if you don’t move quickly. […] If you need to be right before you move, you will never win. Perfection is the enemy of the good when it comes to emergency management. Speed trumps perfection. And the problem in society we have at the moment is that everyone is afraid of making a mistake. Everyone is afraid of the consequence of error. But the greatest error is not to move. The greatest error is to be paralysed by the fear of failure.

We must move forward. We should not be paralyzed by the fear that somewhere someone might lose an iota of privacy.

Posted on Techdirt - 12 March 2020 @ 10:45am

Why Tech Might Actually Be The Solution To Capitalism's Addiction Problem

Source: The Atlantic

Maya MacGuineas, the president of the Committee for a Responsible Federal Budget, published a frightening article about technology and capitalism in the April edition of The Atlantic magazine. MacGuineas contends that the tech companies are manipulating us into using their products, addicting our children to potentially harmful devices, and stealing our extremely valuable data in exchange for “free” services.

The Masses Are Not So Easily Manipulated

MacGuineas warns us of “habit-forming” products and the “Orwellian art of manipulating the masses”:

Many technology companies engineer their products to be habit-forming. A generation of Silicon Valley executives trained at the Stanford Behavior Design Lab in the Orwellian art of manipulating the masses. The lab’s founder, the experimental psychologist B. J. Fogg, has isolated the elements necessary to keep users of an app, a game, or a social network coming back for more. One former student, Nir Eyal, distilled the discipline in Hooked: How to Build Habit-Forming Products, an influential manual for developers. In it, he describes the benefits of enticements such as “variable rewards”—think of the rush of anticipation you experience as you wait for your Twitter feed to refresh, hoping to discover new likes and replies. Introducing such rewards to an app or a game, Eyal writes approvingly, “suppresses the areas of the brain associated with judgment and reason while activating the parts associated with wanting and desire.”

Except the masses aren’t so easy to manipulate. One experiment on online shopping behavior found that the whales in the market are not influenced much at all by advertising: “More frequent users whose purchasing behavior is not influenced by ads account for most of the advertising expenses, resulting in average returns that are negative.” Arguably the most famous line in the history of the advertising industry comes from nineteenth-century retailer John Wanamaker: “Half the money I spend on advertising is wasted, the trouble is I don’t know which half.”

Even with the advent of microtargeting based on behavioral and contextual data, there is still a debate within the industry about whether advertising is worth the cost. A 2014 piece in The Atlantic by Derek Thompson is simply titled, “A Dangerous Question: Does Internet Advertising Work at All?” Thompson comes to a bleak conclusion: “The more we learn which half of advertising is working, the more we realize we’re wasting way more than half.”

There’s also reason to believe that advertising and other persuasive techniques are less effective in the internet age than they used to be. We have moved from an environment of information scarcity — in which companies had some amount of control over their brand image — to one of information abundance. As Thompson put it,

Think about how much you can learn about products today before seeing an ad. Comments, user reviews, friends’ opinions, price-comparison tools: These things aren’t advertising (although they’re just as ubiquitous). In fact, they’re much more powerful than advertising because we consider them information rather than marketing. The difference is enormous: We seek information, so we’re more likely to trust it; marketing seeks us, so we’re more likely to distrust it.

Even if targeted advertising is very ineffective at influencing consumer decisions, maybe cutting-edge machine learning algorithms — the ones used to recommend content in social media feeds — can still make a big impact on user behavior. Consider this recent story from the NYT (emphasis added):

Google Brain’s researchers wondered if they could keep YouTube users engaged for longer by steering them into different parts of YouTube, rather than feeding their existing interests. And they began testing a new algorithm that incorporated a different type of A.I., called reinforcement learning.

The new A.I., known as Reinforce, was a kind of long-term addiction machine. It was designed to maximize users’ engagement over time by predicting which recommendations would expand their tastes and get them to watch not just one more video but many more.

Reinforce was a huge success. In a talk at an A.I. conference in February, Minmin Chen, a Google Brain researcher, said it was YouTube’s most successful launch in two years. Sitewide views increased by nearly 1 percent, she said — a gain that, at YouTube’s scale, could amount to millions more hours of daily watch time and millions more dollars in advertising revenue per year.

YouTube’s “most successful launch in two years” netted the platform an increase in views of less than 1 percent. Across a billion users this is a significant achievement. But from the perspective of the individual YouTube user, this change is barely noticeable. Some people seem to believe that humans are sheep and tech companies are shepherds that can guide them wherever the profit motive leads. But the data contradicts this thesis at every step.

MacGuineas also leaves out some crucial context about Eyal’s book Hooked. As he told Ezra Klein in an interview for Vox, Eyal wrote the book not only to explain how Big Tech was trying to influence us but also to democratize these tools for small- and medium-sized businesses. The hope was that once small competitors had the same tools and strategies as the tech giants, there would be a more level playing field in the market. The techniques described in Hooked are now common knowledge across many industries and therefore it is unlikely that consumer decisions between product A and product B are distorted on the margin.

It’s also worth noting that Eyal recently wrote another book called Indistractable: How to Control Your Attention and Choose Your Life. The goal of the book is to provide readers with tips, strategies, and advice for aligning their short-term behavior with their long-term goals. People should be accountable for the decisions they make about how to spend their time and money and books like Indistractable are useful in helping individuals make the best choices for their self-interest in the long run.

MacGuineas also chooses to cite an odd example of the harms caused by the tech industry:

And [the tech companies] do, in fact, manipulate our behavior. As Harvard Business School’s Shoshana Zuboff has noted, the ultimate goal of what she calls “surveillance capitalism” is to turn people into marionettes. In a recent New York Times essay, Zuboff pointed to the wild success of Pokémon Go. Ostensibly a harmless game in which players use smartphones to stalk their neighborhoods for the eponymous cartoon creatures, the app relies on a system of rewards and punishments to herd players to McDonald’s, Starbucks, and other stores that pay its developers for foot traffic. In the addiction economy, sellers can induce us to show up at their doorstep, whether they sell their wares from a website or a brick-and-mortar store. And if we’re not quite in the mood to make a purchase? Well, they can manipulate that, too. As Zuboff noted in her essay, Facebook has boasted of its ability to subliminally alter our moods.

Pokémon Go is ostensibly and actually harmless. When an augmented reality video game nudges you to walk by a Starbucks, you do not suffer any tangible consumer injury. You still retain your autonomy and the research shows that tiny nudges like this have almost no effect on your ultimate choices. It would be unsurprising if in the near future companies pulled their spending from Pokémon Go because it proved to be ineffective, like so much of the rest of the advertising industry.

Teens Are Addicted to Their Screens — and Not Much Else

According to Common Sense Media, “US teens spend an average of more than seven hours per day on screen media for entertainment, and tweens spend nearly five hours.” MacGuineas finds this usage alarming, calling tech products “addictive” and “potentially harmful”:

American society has long treated habit-forming products differently from non-habit-forming ones. The government restricts the age at which people can buy cigarettes and alcohol, and dictates places where they can be consumed. Until recently, gambling was illegal in most places, and closely regulated. But Big Tech has largely been left alone to insinuate addictive, potentially harmful products into the daily lives of millions of Americans, including children, by giving them away for free and even posturing as if they are a social good. The most addictive new devices and apps may need to be put behind the counter, as it were—packaged with a stern warning about the dangers inherent in their use, and sold only to customers of age.

This much screen time sounds excessive, and maybe it is. But while the use of technology by teenagers (e.g., smartphones, social media, video games) has been trending up over the last 20 years, risky behavior (e.g., drugs, alcohol, cigarettes, sex) has been trending down for almost every category:

Source: Washington Post

Which of these is “capitalism’s addiction problem”? Given how many risky behaviors are on the decline, tech products may be capitalism’s addiction solution rather than its problem.

Now, as Jonathan Haidt has shown, there is some valid concern about the effect social media has on certain subgroups, in particular pre-teen girls. The rate of non-fatal self harm in this group nearly tripled between 2000 and 2015. But does this mean we need government regulators to ban these products for everyone?

Not quite. Haidt recommends simple advice for parents to protect their kids: “I am on a campaign to encourage parents to adopt 3 norms: 1) all screens out of bedroom 30 min before bedtime; 2) no social media until high school; 3) time limits on total daily device use (such as 2 hrs or less).” Given the evidence, these kinds of limits seem reasonable for mitigating the harms of letting children use technology at too young an age.

Pay for Facebook? Who, me?

Lastly, MacGuineas also thinks regulators should require people to pay for Facebook:

Perhaps the most immediate and important change we can make is to introduce transparency—and thus, trust—to exchanges in the technological realm. At present, many of the products and services with the greatest power to manipulate us are “free,” in the sense that we don’t pay to use them. But we are paying, in the form of giving up private data that we have not learned to properly value and that will be used in ways we don’t fully understand. We should start paying for platforms like Facebook with our dollars, not our data.

The logic here is: 1. Your data is more valuable than you realize. 2. Therefore, you should be forced to pay Big Tech companies to access services that are currently free. It also betrays a certain level of privilege to ignore the fact that many people, especially those in the developing world, cannot afford to pay for these digital services. And while it may feel different in our own solipsistic worlds, the sad truth is that our personal data is not worth nearly as much as MacGuineas and others claim.

The prices from the data broker market are startlingly low:

  • “General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”

  • “Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”

  • “For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”

  • “[T]he sum total for most individuals often is less than a dollar.”
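
Summing the quoted per-person figures shows how little even a fairly detailed profile fetches. Treating the prices as additive is a simplifying assumption for illustration.

```python
# Per-person data broker prices quoted above, in dollars.
prices = {
    "age, gender, and location": 0.0005,
    "second-trimester pregnancy": 0.11,
    "health condition or prescription list": 0.26,
}

total = sum(prices.values())
print(f"${total:.4f}")  # $0.3705 -- well under a dollar, as the article notes
```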

Given the reality of the market valuation of our personal data, we should just take the free services.

The tropes in this article are nothing new for those who have been following this debate over the last few years. The false narrative that tech is especially addictive and harmful has been on the rise for quite some time now. Unfortunately that doesn’t make it any more true.

Alec Stapp is the Director of Technology Policy at the Progressive Policy Institute

Posted on Techdirt - 25 October 2019 @ 09:18am

Google And Facebook Didn't Kill Newspapers: The Internet Did

There is an infamous chart in media circles. It shows newspaper advertising revenue steadily rising until about the year 2000. A few years later, it drops off a cliff. Superimposed on this chart is the exponential growth of Google and Facebook:

Source: Thomas Baekdal

The obvious implication, at least to those who work in journalism, is that Google and Facebook killed their industry. That’s certainly the conclusion Matt Stoller comes to in a recent op-ed for the New York Times:

The collapse of journalism and democracy in the face of the internet is not inevitable. To save democracy and the free press, we must eliminate Google and Facebook’s control over the information commons. That means decentralizing these markets and splitting information utilities from one another so that search, mapping, YouTube and other Google subsidiaries are separate companies, and Instagram, WhatsApp and Facebook once again compete.

First, it’s important to note that newspaper advertising revenue peaked a few years before the rise of Google (and many years before Facebook). That’s one hint a broader phenomenon is at work. But more generally, Stoller frames the history of advertising and journalism in a fundamentally incorrect way. He argues as if the ad-based business model for local journalism was the natural state of the world — upended only by Facebook’s and Google’s so-called “monopolies.” In reality, print newspapers had a monopoly on local information distribution due to the prevailing technologies of their time. These regional monopolies were slowly eroded by the introduction, first, of radio and then television (see Lorain Journal Co. v. United States). But, ultimately, newspapers were disrupted by the internet.

Source: Matthew Ball

So, why did newspapers have a local monopoly in the first place? Mostly due to the high fixed costs (e.g., printing presses, warehouses, reporters, delivery trucks) and low marginal costs (i.e., paper and ink) of newspaper production and distribution. Therefore, it was very easy for newspapers to dominate the local market with one bundled product, which included everything from political news and opinion to sports and classifieds. The monopoly profits were used to fund, among other things, investigative journalism (which would lose money as a standalone business but provides value and prestige as part of a bundle).

The internet blew this arrangement to pieces. No longer was owning printing presses and delivery trucks sufficient to charge advertisers and readers whatever you wanted. The newspaper was unbundled by many internet companies, large and small. The infographic below shows how different digital services peeled off a piece of the newspaper value proposition:

Source: Haseeb Qureshi

For example, Craigslist, eBay and other free or low-cost digital classified ad services out-competed print classified ads:

Source: Business Insider

Google and Facebook entered the ad market with more efficient self-service platforms for advertisers and quickly gained market share. Stoller attributes their success not to superior efficiency but to anti-competitive acquisitions:

Enabled by a loose merger policy, there was a roll-up of the internet space. From 2004 to 2014, Google spent at least $23 billion buying 145 companies, including the advertising giant DoubleClick. And since 2004, Facebook has spent a similar amount buying 66 companies, including key acquisitions allowing it to attain dominance in mobile social networking. None of these acquisitions were blocked as anti-competitive.

What this merger analysis omits, however, is that the vast majority of these were vertical mergers, meaning the acquired company wasn’t a direct competitor with the acquirer. In other words, Google is not dominant in search today because it engaged in killer acquisitions of rival search engines. Of the 66 Facebook acquisitions, only Instagram is a plausible case of horizontal integration between social media companies (and, even then, Facebook invested heavily post-acquisition).

But Stoller also exaggerates the extent of their market power, calling them “global monopolies sitting astride public discourse.” In 2018, the global ad market was about $540 billion. Google’s revenue from advertising was $116 billion; Facebook’s was $55 billion. That’s a combined 32 percent market share. Not quite a monopoly (or duopoly, technically).
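
The share figure can be checked directly from the revenue numbers quoted above:

```python
# 2018 figures cited in the text, in dollars.
global_ad_market = 540e9
google_ad_revenue = 116e9
facebook_ad_revenue = 55e9

combined_share = (google_ad_revenue + facebook_ad_revenue) / global_ad_market
print(f"{combined_share:.0%}")  # 32%
```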

Source: Benedict Evans

And if you expand the market definition to include both advertising and marketing, as Benedict Evans shows in the chart below, then Google and Facebook’s share gets cut in half.

Source: Benedict Evans

Stoller concludes by saying, “Advertising revenue should once again flow to journalism and art.” But even if we break up all the big tech companies, your local newspaper will not magically have a profitable ad business again. The money currently flowing to consolidated Facebook would go to the newly independent Facebook, Instagram, and WhatsApp (or other digital services if you curtail advertising on social media). Internet platforms are simply more efficient at matching advertisers with customers than are traditional newspapers.

If we want to “fix” journalism, it will require a new path forward (i.e., innovative business models). We’ve tried radical protectionism before: In 1970, Nixon signed the Newspaper Preservation Act, which gave newspapers a special carve-out from the antitrust laws. According to the NYT (in 1999!), it had no effect on the end result:

But in the world of 1999, does the loss of a newspaper matter as much as it did in 1970, when the scores of cable channels and hundreds of Web sites did not exist?

The new media landscape and the growing competition for advertising dollars make it harder for a weak newspaper to survive, and make its survival less urgent, Mr. Lacy believes. “I’m not sure I’d call the Newspaper Preservation Act a failure,” he said. “Just a nonsuccess. People back then did not realize that this would not make a difference in the long run.”

It’s possible policymakers could construct another special protection for journalism. But it would entail massive state control of media that no one would be satisfied with. Resurrecting the regional monopolies once enjoyed by local newspapers is both undesirable and unrealistic.
