Zach Graves's Techdirt Profile


Posted on Techdirt - 23 February 2021 @ 01:40pm

Is Mandated Sideloading The Answer To App Store Deplatforming?

Smartphone app store policies have come into focus recently, following a series of conflicts between app makers and app store operators (principally Apple and Google). These include the removal of conservative-oriented social media platforms Parler and Gab, and the ensuing debate about balancing free speech and harmful content. There have also been numerous conflicts over monetization, including disputes over transaction fees for digital goods and services (e.g. Spotify and Epic Games), and privacy changes that affect third party advertisers (e.g. Facebook).

With scrutiny of the tech industry at an all-time high, the otherwise niche issue of app store policies has become an increasingly salient part of the broader debate over digital market competition, raising the specter of new government regulation. But what is the optimal level of openness in a competitive app ecosystem, and how does public policy help achieve it? These are harder questions to answer than they seem—involving deep technical, economic, and legal issues.

A Tale of Two Smartphone Operating Systems

According to Statcounter, the global mobile operating system market is dominated by Google’s Android operating system (72% market share), followed by Apple’s iOS (27% market share). Despite having a substantially smaller user base, the Apple App Store earns substantially more direct revenue than the Google Play Store. But this comparison is misleading at first glance.

First, there are important demographic differences. iPhone owners are more concentrated in developed nations, and even in those countries tend to be more affluent and spend more on apps. The companies’ business models are also different. Unlike Apple, which has limited advertising offerings, Google earns substantial revenues through mobile advertising, and even pays Apple billions each year for the privilege of being its default search engine, expanding the revenues it can capture. The two systems are also designed in fundamentally different ways. Whereas iOS is a proprietary closed system, Android is (mostly) open source. Notably, there are versions of Android without Google Play or other Google services, particularly in mainland China, where Google doesn’t operate. Apple, on the other hand, operates the App Store on all iOS devices; and unlike Google, does business in the lucrative mainland China market.

As a result of these different architectures, a conspicuous difference between Android and iOS is that the former allows the installation of apps outside of its Play Store. This can be either through a pre-installed third party app store that ships with the device (e.g. Samsung’s Galaxy Store or the Amazon Appstore), or direct installation of apps or even other app stores, called “sideloading.” Circumventing the Play Store also means that developers can take payments without cutting Google in on its typical 30% commission. Meanwhile, Apple requires users to go through its App Store to download apps, where it takes a similar cut.

Policymakers Respond

Grasping onto this difference, and facing pressure from lobbyists, policymakers in multiple states have proposed new legislation that would force Apple to redesign its operating system to allow circumventing both the App Store and In-App Purchase system (see similar bills in GA, ND, HI, AZ). Notably, a similar provision also exists in the European Commission’s proposed Digital Markets Act.

In theory, this sounds like a good idea. In the wake of recent controversies, many in Silicon Valley have been looking towards decentralization as the answer. Indeed, systems with more openness and interoperability tend to foster innovation and competition, and give users more freedom. The ability to install apps directly could also be an essential workaround when companies remove controversial apps, particularly where they are pressured to do so by activists or governments.

However, there are some good reasons to be wary of rushing to pass such a mandate, both as a substantial fix for digital market competition, and as a precedent for local governments dictating or overseeing software designs—something they’re not known to be particularly competent in.

Trade Offs of a Sideloading Mandate: Cybersecurity and Privacy

Suddenly forcing iOS to allow unvetted apps could introduce a flood of serious cybersecurity vulnerabilities, facilitating everything from spyware to ransomware to identity theft. Such an unanticipated requirement could pose a serious challenge to developers, potentially necessitating years of new work and investment.

A 2019 threat intelligence report from Nokia observed that Android devices were fifty times more likely to be infected than iOS, with the “vast majority” of malware distributed through trojanized sideloaded applications. Because of this risk, Android takes measures to discourage sideloading through user interface mechanisms. Google’s Advanced Protection Program also blocks sideloaded apps for this reason.

Because Android is a more open system than iOS, its privacy and security features are constructed differently. While both operating systems have some form of automated threat detection, app containerization, and other features to limit an app’s access to sensitive systems, these are architected based on different assumptions.

For Apple, a closed-system approach is at the heart of its strategy for iOS. If Apple engineers could no longer count on vetting during the app review process, they may be forced to build new redundancies from scratch, or even redesign major parts of the operating system. Because iOS isn’t open source like Android, it’s hard to tell how much of an architectural challenge this would be.

Apple’s preference for closed systems can be traced to Steve Jobs’ philosophy of end-to-end control of hardware and software, and lack of patience for consumer tinkering, going all the way back to the first Macintosh computer. In 2007, around the launch of the first iPhone, Steve Jobs described applying this thinking to iOS (then “iPhone OS”) in an interview with the New York Times:

You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore….These are devices that need to work, and you can’t do that if you load any software on them…That doesn’t mean there’s not going to be software to buy….but it means it has to be more of a controlled environment.

Apple may not give you every option you might want, but it may be a worthwhile tradeoff if your priority is security and privacy, or a seamlessly integrated ecosystem. In recent years Apple’s marketing department has leaned into this as a competitive advantage, and it’s what customers have come to identify with the brand.

There are also ways out of Apple’s walled garden. The simplest workaround is to access applications directly from a mobile web browser. For instance, if you really want to use Gab, you can create a home screen icon from Safari and access it like an app. Similarly, you can make purchases there without Apple taking a cut. There are, of course, limitations to what you can do in a mobile browser (notably third party browsers are required to use Apple’s WebKit rendering engine and, as with other parts of iOS, Apple reserves some private API functions for itself).

In the US, determined users can legally jailbreak iOS devices to sideload apps without requiring too much technical skill (here’s a handy guide). This works on most Apple devices, after which users can install a range of unauthorized apps and even app stores. But caveat emptor. Unauthorized app stores don’t do much of anything to combat malware. There are other downsides of jailbreaking, including making it much harder to update software, having certain apps break, and potentially voiding your warranty. Notably, Apple has also argued for making jailbreaking illegal.

For those that don’t want to jailbreak their device, there’s also the option to sideload apps from your computer to iOS directly through a known exploit, or through developer environments like Xcode and Testflight. With this approach you can still access third party app stores, such as AltStore or AppValley, albeit with more limitations than jailbreaking. Importantly, installing unauthorized apps through these methods can still expose you to malware.

In short, it’s not that hard to circumvent Apple’s restrictions on unauthorized apps if you really want to, particularly if you’re doing something simple like accessing an alternative to Twitter that isn’t in the App Store. But if you decide to go all the way and jailbreak your phone, you might be wise to use your banking app on a different device.

Good Reasons to Limit Local Government Control of Digital Markets

There are good reasons to be wary of governments dictating and implementing software design requirements—particularly at the state and local level. As I’ve discussed at length elsewhere, both Congress and federal agencies face serious capacity gaps for in-house policy expertise—particularly for science and technology issues. Yet, relative to states, they have a wealth of competence.

According to the National Conference of State Legislatures, only 4-10 states have legislative bodies that can be considered full-time, well-paid, and sufficiently staffed. Many states have part-time legislatures where lawmakers work other jobs and are supported by a skeleton crew of staff. Whereas Congress is assisted by thousands of support staff at legislative agencies—including the Government Accountability Office, Congressional Research Service, and Congressional Budget Office—legislative support agencies in the states vary widely in staffing, resources, and services offered, and generally pale in comparison. For instance, while CRS has over 600 staff with expertise in different policy areas, Arizona’s service agency has only five staff, and is also in charge of fixing the computers. State regulatory agencies likewise vary in quality, staffing, and technical competence.

Given the cross-jurisdictional nature of digital commerce, it’s less than ideal to have a patchwork of state regulations, or to allow a single jurisdiction to dictate policies for everyone (as we’ve seen with California’s costly and error-filled privacy laws). As such, if we’re truly set on creating and implementing a mandate for app store interoperability, it would be best to leave this to Congress and federal regulators.


Questions of interoperability policy are tricky, involving a range of tradeoffs and technical challenges. As policymakers approach these issues, regulatory humility is warranted. While iOS is almost certainly below the optimal level of openness, it’s also worth remembering that Android phones are readily available and consumers are free to choose them.

Furthermore, it’s unclear that a sideloading mandate would dramatically change the competitive landscape. Even on Android, few users in the US take advantage of sideloading. Nor has the availability of this option pushed down Google’s ~30% Play Store transaction fees. Even in the market for PC software, where users can download anything from the Internet, popular stores like Steam and GOG still charge app developers around 30%. Although some are lower, like Epic Games (12%) and Microsoft (15%), large stores clearly add value (such as through vetting and aggregation) and are not just exploiting a captive market.

Enacting a sideloading mandate to restore access to Parler or Gab, as some Republican policymakers may want, also isn’t a compelling case. These sites don’t require complex API access, and it’s easy enough to access them through a mobile browser. But that’s not to say the underlying concern about speech restrictions on closed platforms isn’t legitimate in some circumstances.

Our system of government’s respect for free speech and the rule of law means US policymakers have limited ability to coerce companies like Apple and Google to take down apps. But this isn’t true everywhere. And this debate isn’t just about US consumers. For instance, Google’s transparency report indicates it complied with removal requests in Russia and Thailand for apps engaged in “government criticism.” Similarly, Apple’s transparency report shows governments, including China, have pressured or required the company to remove numerous apps. And the mobile browser workaround is no safe harbor in these places. In some parts of the world, product design choices have implications for human rights, and for helping empower people to resist oppressive governments.

Going back to the US, it’s not clear the sideloading mandate some states have proposed makes sense, either in theory or in how it would likely turn out in practice. Dramatic interventions in the market—such as dictating and overseeing software designs—should meet a substantial burden of proof to demonstrate their necessity and consistency with American principles of governance. It’s not clear that the proponents of these proposals have met this burden.

But there’s also a normative question: Should Apple voluntarily embrace interoperability for iOS and allow third party app stores, alternative payment systems, and sideloading?

First, we have to consider the potential downsides for Apple. It could lose out on revenue from big apps like Fortnite that can leverage alternative distribution channels, it would likely have to invest in architectural changes to its operating system, and openness could weaken its reputation for security and reliability (e.g. devices your grandparents can use without accidentally downloading a virus).

But smartphones have made a lot of progress since Steve Jobs expressed concerns about reliability and user experience in making the first iPhone in 2007. While sideloading still poses serious security risks, Android has demonstrated that it can be implemented as an option for advanced users without compromising reliability for everyone else. Despite Android being more open, the Play Store still brings in a lot of revenue for Google, even without factoring in advertising. If Apple were to move iOS towards being more open, it could also help defuse criticism of the company, particularly as it expands its business in China and other repressive countries.

Today our phones are handling increasingly sensitive information—including our banking, identification, and health records. This makes them a valuable target for bad actors, and so it’s easy to see why many people would choose security over openness. But this can be a false dichotomy. If products are built with the right assumptions, we can have a high degree of both. This doesn’t mean risks go away; merely that users are allowed to make an informed decision to cross the guard rails and take them on.

Those interested in constructive ways to support a more open app ecosystem should also look to Cory Doctorow’s writings on “adversarial interoperability” at the Electronic Frontier Foundation. This concept outlines a series of mechanisms that support permissionless competition through reforming overbearing laws like software patents, the Digital Millennium Copyright Act (which governs jailbreaking), and the Computer Fraud and Abuse Act. These changes have the advantage of improving the entire ecosystem by rolling back protectionist policies, rather than targeting one company. Steve Jobs, who first teamed up with Steve Wozniak in the 1970s to sell illegal phone phreaking gear, might even approve.

Posted on Techdirt - 22 February 2019 @ 09:39am

Does Twitter Have An Anti-Conservative Bias, Or Just An Anti-Nazi Bias?

In an article for Quillette titled, “It Isn’t Your Imagination: Twitter Treats Conservatives More Harshly Than Liberals,” Columbia University research fellow Richard Hanania offers us proof, once and for all, that social media companies are biased against conservatives. Either that, or it’s the latest in a growing list of bogus, exaggerated or otherwise dubious anti-conservative bias claims (I’ll let you judge for yourself).

“Until now, conservatives have had to rely on anecdotes to make their case,” Hanania writes, adding that “[m]y results make it difficult to take [social media platforms’] claims of political neutrality seriously.” The data he collected (with the help of two research assistants, no less) looks at “prominent, politically active” people suspended from Twitter since the company’s launch in 2006.

Accounts included in the data set were selected from individuals and organizations whose suspension was covered in a “mainstream” news outlet, and who expressed a preference for either Donald Trump or Hillary Clinton in the 2016 presidential election.

Out of 22 (!!!) accounts in the data set that met these criteria, 21 (or 95%) were Trump supporters. Despite the small sample size, the author argues this is compelling evidence for Twitter’s anti-conservative bias. Even if conservatives are more likely to break Twitter’s rules, he argues, it “doesn’t seem credible” the disparity would be so wide.

But let’s look a little more closely at this. These are the 22 accounts that make up the data set:

  1. Rose McGowan (the list’s lone Clinton supporter)
  2. Azealia Banks
  3. Tila Tequila
  4. James O’Keefe
  5. Richard Spencer
  6. Baked Alaska
  7. Roger Stone
  8. Gavin McInnes
  9. Candace Owens
  10. Alex Jones
  11. Chuck Johnson
  12. Robert Stacy McCain
  13. Milo Yiannopoulos
  14. Radix Journal
  15. National Policy Institute
  16. Craig R. Brittain
  17. David Duke
  18. American Nazi Party
  19. James Allsup
  20. American Renaissance
  21. Jared Taylor
  22. Laura Loomer

Scanning the list, you probably noticed the “American Nazi Party.” This is not an anomaly. The bulk of the list is a who’s who of outspoken or accused white nationalists, neo-Confederates, Holocaust deniers, conspiracy peddlers, professional trolls, and other alt-right or fringe personalities (go ahead, pick a couple and Google them). It does not include any mainstream conservatives, unless, I suppose, you count recently-indicted Trump campaign advisor and “dirty trickster” Roger Stone.

Reasons listed for banning these individuals in Hanania’s own data sheet include “violent threats,” “harassment,” “inciting violence,” “targeted abuse,” “doxxing,” “pro-Nazi tweets,” and “racist slurs.” Additionally, about a quarter of the accounts listed are still active and no longer suspended.

Kicking off a bunch of Nazis and trolls isn’t very compelling evidence that your average conservative is getting unfair treatment on Twitter. The majority of the “victims” here seem to have been engaged in abuse, and it’s reasonable for a private company like Twitter to kick off people who are undermining the quality of their platform by harassing or threatening other users.

Considering the alt-right’s propensity to scream and yell about getting “deplatformed,” these 22 accounts probably aren’t that representative of Twitter’s 67 million U.S. monthly active users. Nor does their small number (despite the author having two research assistants) indicate a broad, systemic problem.
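The small-sample point can be made concrete with a quick back-of-the-envelope calculation. As a hedged illustration (the 0.9 skew below is my own assumed figure, not anything from Hanania’s data), the probability of a lopsided 21-of-22 split depends entirely on the make-up of the pool of accounts that draw press-covered suspensions in the first place:

```python
from math import comb

def prob_at_least(k, n, p):
    """Probability of at least k successes in n binomial trials with success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If the pool of suspendable, press-covered accounts were evenly split
# politically (p = 0.5), a 21-of-22 split would indeed be a freak result.
print(prob_at_least(21, 22, 0.5))

# But if that pool skews heavily toward one side (p = 0.9 is an assumed,
# illustrative figure), the same split becomes unremarkable.
print(prob_at_least(21, 22, 0.9))
```

In other words, the lopsided split is evidence about who populates the suspended-and-newsworthy pool, not about how the average conservative user is treated.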

Of course, social media companies may not be perfectly neutral when it comes to politics. The Bay Area, where many of these companies are based, is a very liberal place. In 2016, only 9.4% of San Francisco County voted for Donald Trump. It’s entirely plausible that this disposition affects their products and policies in subtle ways. Yet, to date there has not been compelling evidence of systemic bias or a grand conspiracy to silence conservatives (despite this becoming a standard trope in congressional hearings and conservative conferences).

But social media platforms aren’t bastions of free speech, either. Their evolving norms and policies around content moderation raise a host of concerns and issues. At minimum, platforms could do a lot better at being transparent in their enforcement and governance decisions.

For conservatives, as I’ve argued before, crying wolf about censorship is a self-defeating strategy that will only make people not listen when it actually happens. Nazis, while sometimes useful in edge cases around free speech or references to Godwin’s law, are not stand-ins for the median conservative American. Targeted abuse or incitements to violence are also not the same thing as free speech. Let’s not get these things mixed up.

Posted on Techdirt - 18 October 2018 @ 11:55am

The Decline Of Congressional Expertise Explained In 10 Charts

When Mark Zuckerberg was called to testify earlier this year, the world was shocked by Congress’s evident lack of basic technological literacy. For many, this performance illustrates the institution’s incompetence. After all, if our elected representatives have trouble understanding how Facebook works, how capable are they of understanding the complexities of the federal government, or crafting legislation across a range of technical subjects?

For those of us who live and work in the “swamp,” the Zuckerberg hearings were no great surprise. Just this year, we’ve seen Congress struggle with technology issues such as quantum computing, cryptocurrencies, and the governance of online platforms. Indeed, it seems effectively incapable of tackling major technology policy issues such as the debate over online privacy, election cybersecurity, or artificial intelligence.

This state of affairs is the product of decades of institutional deterioration, sometimes referred to as the “big lobotomy.” While scholars of American government may offer various books or white papers chronicling this decline, the pattern is evident from a few trends that this post will highlight.

The decline of congressional support agencies

Members of Congress typically come from professional backgrounds in business, law or finance rather than science or technology (for instance, there are currently twice as many talk radio hosts as scientists). To help them understand technical policy issues, Members of Congress and their staff rely on expert advisors in legislative branch support agencies such as the Congressional Research Service (CRS), the Government Accountability Office (GAO), and formerly the Office of Technology Assessment (OTA).

Of the congressional support agencies, CRS is the primary analytical workhorse that supports day-to-day operations, producing digestible reports and timely memos at the request of congressional offices. Unfortunately, the capacity of CRS has declined precipitously in recent decades. From 1979 to 2015, CRS’s staff shrank by 28% – a loss of 238 positions.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-8.

While CRS serves Congress with responsive memos and digestible reports, it also used to have an agency that conducted deep authoritative technical research. This agency was the Office of Technology Assessment, which for over two decades helped Congress understand the nuances of complex science and technology issues. In 1995, Congress eliminated funding for OTA, creating a gap that has not since been filled.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-8, and Future Congress Wiki.

In addition to needing analysis related to the nuances and tradeoffs of particular regulatory policies, Congress also needs help understanding its $4 trillion in annual federal spending and the sprawling administrative state. To help rein in waste, fraud and abuse, Congress relies on the Government Accountability Office – which is empowered to conduct audits and investigations in the federal government. GAO boasts a savings of “$112 for every dollar invested.” Yet, from 1979 to 2015, its staff has been cut by 44% – a loss of 2,314 positions.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-8.

The decline of congressional committees

A critical source of policy expertise in Congress lies within congressional committees. Yet, like support agencies, committee staffing levels have declined significantly over time. From 1979 to 2015, the number of full-time standing committee staff has shrunk by 38% – a loss of 1,361 positions.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-1.

Key committees for technology policy reflect a similar trend. For instance, from 1981 to 2015 (note: 1979 data for House committees was unavailable), the House Energy and Commerce Committee went from 151 to 83 full-time staff. From 1979 to 2015, its Senate counterpart, the Committee on Commerce, Science, and Transportation, went from 96 to 64 staff. Similarly, from 1981 to 2015, the House Judiciary Committee went from 75 to 61 full-time staff. From 1979 to 2015, its Senate counterpart went from 223 to 91 staff.

With the decline in staffing, committees and subcommittees have also spent much less time conducting hearings, deliberating on policy, and developing legislation. From the 96th Congress (1979-1980) to the 114th Congress (2015-2016), the aggregate number of committee and subcommittee meetings across both chambers decreased by 66%.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 6-1 and 6-2.

Additionally, as shown by Casey Burgat and Charles Hunt, committees are also increasingly shifting resources to communications positions over policy roles.

Personal office staff resources are shifting to constituent services

With the rise of new digital tools and a growing population, Congress has been bombarded with a torrent of new communications from constituents and advocacy groups. Per a Congressional Management Foundation study, Congress received four times as many communications in 2004 as in 1995. Responding to this influx, more staff have shifted from policy to constituent relations and communications roles. Legislative staff may also be called more often to assist with constituent work.

This trend can be seen in the percentage of personal office staff based in district and state offices. From 1979 to 2016, the percentage of personal office staff based in district offices in the House of Representatives went from 35% to 47%.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-3.

In the same period in the Senate, the percentage of personal office staff based in state offices has gone from 24% to 43%. Since overall legislative branch staffing and budgets have declined over this period, this trend means fewer resources for retaining policy experts.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-4.

From 1979 to 2015, the total number of personal office staff has gone from 10,660 to 9,947. Senate numbers have remained relatively stable, since Senate office budgets are tied to population and distance. In the House of Representatives, the total number of personal office staff has declined by 15% – a loss of 1,037 positions.

Source: Original chart based on data from the Brookings Institution, Vital Statistics on Congress, Table 5-1.

The decline of legislative branch compensation

In congressional offices, legislative analysis and other policy work falls on a variety of different staff positions. While titles and roles vary by office, these include “legislative correspondent,” “legislative assistant,” “legislative director,” and “chief of staff.” To varying extents, these roles are involved in other activities, such as constituent services, administrative work, and communications.

While cost of living in DC has gone up in recent decades (making it one of the most expensive cities in the country), the overall inflation-adjusted compensation for congressional policy staff has declined.

Source: Original chart based on data from the Congressional Research Service.

The median salary for a lawyer in the House of Representatives in 2015 was $56,000. In the private sector in DC, lawyers can easily earn several times that (an attractive exit for many congressional staff). Congressional salaries also fall significantly short of their executive branch counterparts, contributing to an expanding compensation gap. In short, compensation for working in Congress is far below the level needed to attract top talent.

Congressional staff do not believe they have access to sufficient resources or expertise

In a Congressional Management Foundation survey, a group of senior congressional staff were asked about their perspectives on institutional capacity issues. Therein, they rated a range of different areas as either “very important”/”very satisfied” or “somewhat important”/”somewhat satisfied.”

In one question, 81 percent said that access to policy expertise was “very important,” but only 24 percent said they were “very satisfied” with the status quo – a gap of 57 percentage points.

Source: Original chart based on survey data from the Congressional Management Foundation.

In another question, 67 percent said having adequate time and resources for Members of Congress to consider and deliberate on policy was “very important.” However, only 6 percent reported that they were “very satisfied” with the status quo.

While congressional capacity has declined, the need for it has increased

The Constitution sets up Congress as the first among three equal branches of government, intending it to lead on policy and provide a check on the executive branch’s potential for waste, fraud and abuse. Unfortunately, Congress has ceded much of its policymaking power and oversight responsibility to the administrative state. As Congress has shrunk over the past few decades, the size and scope of the federal government overall has expanded significantly. For instance, between 1979 and 2014, the U.S. Code of Federal Regulations grew from 98,032 pages to 175,268. Over the same period, inflation-adjusted federal discretionary spending grew from $810 billion to $1,220 billion (in 2017 dollars).

Most of our timeline data begins in 1979, just a year after the first computers were installed in the White House. It would still be several years before the introduction of 3 ½-inch floppy disks – which people today know only as the save icon. And it would still be over a decade before the launch of the World Wide Web.

Needless to say, since the late 20th century, the number and complexity of science and technology policy challenges have increased at an accelerating rate. These include issues such as infrastructure cybersecurity, election hacking, artificial intelligence, cryptocurrencies, CRISPR, data privacy, and more. If we’re to maintain America’s lead in innovation and meet the policy challenges of the 21st century, we’ll need to rebuild a capable and expert legislature.

If you’re interested in working on the solution, check out the Future Congress project. This is a new coalition and resource hub working to improve science and technology expertise in the legislative branch.

Zach Graves is head of policy at Lincoln Network. Daniel Schuman is policy director at Demand Progress.

Posted on Techdirt - 28 August 2018 @ 11:49am

Conservatives: Stop Crying Wolf On Tech Bias Or No One Will Ever Take You Seriously

In an article picked up by Drudge Report and then tweeted by President Donald Trump himself, PJ Media editor Paula Bolyard makes the shocking claim that Google deliberately manipulates its search results to favor left-wing views and undermine the President.

In supporting this allegation, she goes to Google and looks through the first hundred listings on the search engine results page. Therein, she finds that 96 percent of results for “Trump” are from liberal media outlets. Bolyard remarks:

I was not prepared for the blatant prioritization of left-leaning and anti-Trump media outlets. Looking at the first page of search results, I discovered that CNN was the big winner, scoring two of the first ten results. Other left-leaning sites that appeared on the first page were CBS, The Atlantic, CNBC, The New Yorker, Politico, Reuters, and USA Today

She adds that other than Fox News and the Wall Street Journal, traditional right-leaning outlets didn’t make the cut:

PJ Media did not appear in the first 100 results, nor did National Review, The Weekly Standard, Breitbart, The Blaze, The Daily Wire, Hot Air, Townhall, Red State, or any other conservative-leaning sites except the two listed above.

Aha! A big tech company caught red handed pushing its progressive agenda. Well…not so fast. Rather than uncovering compelling evidence of bias, this article’s author and its promoters merely reveal their ignorance of how search engines work.

First, the author seems to conflate Google Search and Google News, two products which use different algorithms and serve different functions. Google News is a searchable news aggregator and app (with some overt editorial functions), whereas Google Search tries to give users the most useful and relevant information in response to a query.

In order to determine what constitutes a relevant and useful result, search engines use complex algorithms to rank the quality of different pages based on a variety of signals such as keywords, authoritativeness, freshness or site architecture. A big part of this quality determination is based on outside links to a site – an idea going back to Larry Page and Sergey Brin’s work at Stanford in the late 1990s that culminated in the creation of the PageRank algorithm.

Page and Brin realized that incoming links to a site served as a proxy for quality markers like authoritativeness, trustworthiness and popularity. Today, Google Search is much more sophisticated, utilizing machine-learning functions like RankBrain and an evolving set of algorithms with names like Hummingbird, Panda, Penguin and Pigeon. However, incoming links are still a key factor. Additionally, while Google uses manual quality raters to test new algorithm changes, they do not use them on live search results.
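The core link-as-endorsement idea can be sketched in a few lines of Python. This is a toy power-iteration illustration only – the graph, damping factor, and iteration count are invented for the example, and Google's production systems are vastly more elaborate – but it shows why heavily linked-to outlets rise to the top regardless of their politics:

```python
# Toy power-iteration PageRank: a page's score is assembled from the
# scores of the pages that link to it. (Illustrative sketch only; real
# search ranking layers hundreds of additional signals on top.)

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly across all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# A site that many others link to outranks one that few link to,
# regardless of either site's editorial slant.
graph = {
    "big_outlet": ["small_blog"],
    "small_blog": ["big_outlet"],
    "site_a": ["big_outlet"],
    "site_b": ["big_outlet"],
}
scores = pagerank(graph)
```

Running this, `big_outlet` ends up with the highest score simply because three other pages point to it – no human editor, left-leaning or otherwise, is involved.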

Google News’ approach to ranking results is also driven by algorithms that use a number of the same signals (you can get an idea from their patent), with a couple exceptions where manual input is used for editorial features, major events, and cross-over results from Google Search for particular topics.

With this in mind, it should be no great surprise that outlets like the New York Times, CNN, and Washington Post trounce outlets like PJ Media, National Review, and the Weekly Standard in organic search. The sites in the latter group don’t have metrics that support them rising to the top of the search algorithm. Of course, PJ Media found Fox and WSJ weren’t affected by this “bias” because their numbers are actually comparable to the former group of “left-wing” outlets.

This approach to ranking quality isn’t just a Google thing. If you look at competitors like DuckDuckGo or Bing (which PJ Media didn’t seem to bother doing), you’re going to see pretty similar results. Maybe this says something about the media landscape. But it’s not a good reason to storm Mountain View with pitchforks.

PJ Media’s conspiracy-mongering is based on an avoidable misunderstanding that could throw gasoline on the techlash and lead to policies that chill American innovation (although at least for now, conservatives still think a Fairness Doctrine for the Internet is a dumb idea).

It’s worth saying that libertarians and conservatives aren’t totally unreasonable in wanting to investigate whether they’re getting fair treatment by tech companies. After all, Silicon Valley is a very liberal place that doesn’t always reflect their norms or values (I also say this as someone with generally right-leaning views who has worked for organizations like the Cato Institute and R Street). That being said, if you’re going to make an allegation that there’s a big conspiracy, you should do your due diligence. This means taking time to understand the underlying technology before jumping to conclusions.

On Google’s part, given all of the tensions around bias lately, they would probably be wise to be more transparent about how their news algorithm works and do more proactive outreach to avoid future misunderstandings.

Zach Graves is Head of Policy for Lincoln Network

Posted on Techdirt - 9 February 2018 @ 11:55am

Washington's Growing AI Anxiety

Most people don’t understand the nuances of artificial intelligence (AI), but at some level they comprehend that it’ll be big, transformative and cause disruptions across multiple sectors. And even if AI proliferation won’t lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods.

Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of “artificial intelligence” in proposed legislation and in the Congressional Record than ever before.

While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government’s expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI; and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI.

This latter bill, the “FUTURE of Artificial Intelligence Act” (S. 2217/H.R. 4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill’s sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it’s not clear that the proposed advisory committee would be particularly effective at all it sets out to do.

One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it’s hard to articulate precisely what we mean when we talk about AI. The term “AI” can describe a sophisticated program like Apple’s Siri, but it can also refer to Microsoft’s Clippy, or pretty much any kind of computer software.

It turns out that AI is a difficult thing to define, even for experts. Some even argue that it’s a meaningless buzzword. While this is a fine debate to have in the academy, prematurely enshrining a definition in a statute, as this bill does, is likely to make it the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. This provision also seems unnecessary, since the committee is empowered to change the definition for its own use.

The committee’s stated goals are also overly ambitious. In the course of a year and a half, it would set out to “study and assess” over a dozen different technical issues, from economic investment, to worker displacement, to privacy, to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to adequately deal with these subjects is likely beyond the capabilities of 19 voting members of the committee, which includes only five academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate.

Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions. While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight.

Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation.

If Sen. Cantwell’s advisory committee-focused proposal lacks robustness, Sen. Schatz’s call for creating a new “independent federal commission” with a mission to “ensure that AI is adopted in the best interests of the public” could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies real challenges with government use of AI, such as those posed by criminal justice applications, and in coordinating between different agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (like similar proposals in the past), making it a difficult proposal to move forward.

Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress’ Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn’t a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. Indeed, there’s good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage.

Lawmakers are right to characterize AI as a big deal. Indeed, there are trillions of dollars in potential economic benefits at stake. While the instincts to build expertise and understanding first make for a commendable approach, policymakers will need to do it the right way, across multiple facets of government, to successfully shape the future of AI without hindering its transformative potential.

Posted on Techdirt - 18 September 2017 @ 10:42am

Yes, You Can Believe In Internet Freedom Without Being A Shill

You may have noticed lately that there’s an increasing (and increasingly coordinated) effort to paint today’s biggest and most successful companies as some kind of systemic social threat that needs to be reined in. As veteran tech journalist John Battelle put it, tech companies frequently are assumed these days to be Public Enemy No. 1, and those of us who defend the digital world in which we now find ourselves are presumptively marked as shills for corporate tech interests.

But a deeper historical understanding of how we got to today’s internet shows that the leading NGOs and nonprofit advocacy organizations that defend today’s internet-freedom framework actually predate the very existence of their presumed corporate masters.

To get a taste of the current policy debate surrounding Google and other internet companies, consider the movie I Am Jane Doe, which documents the legal battle waged by anti-sex-trafficking groups and trafficking victims against the website Backpage.com. The film, which premiered this February with a congressional screening, also tracks a two-year investigation and report by the Senate Subcommittee on Investigations into the site’s symbiotic relationship with traffickers.

The documentary is powerful and powerfully effective. It has managed to accomplish what few works of art can: encourage Congress to fast-track legislative action. Last month, a powerful group of 27 bipartisan cosponsors introduced new legislation targeting Backpage.com, titled the Stop Enabling Sex Traffickers Act, or SESTA. While there were rumors the bill would be attached to the upcoming “must-pass” defense authorization bill, it now appears it will move through regular order, with a hearing in the Senate Commerce Committee scheduled for Sept. 19.

Some documentarians strive to be perceived as neutral chroniclers, but I Am Jane Doe producer Mary Mazzio has lobbied aggressively on behalf of the bill. The film’s official website and social media accounts have also jumped into the fight, publishing legislative guides and lobbying materials, as well as rallying a coalition to go after the bill’s opponents.

Here’s our problem with Mazzio’s blunderbuss approach: since the bill’s introduction, internet-freedom advocates (including a letter by R Street, the Copia Institute and others) as well as legal academics have raised alarm bells. In particular, the bill’s overly broad provisions would gut key protections for free expression and digital commerce by amending a foundational law undergirding today’s internet ? Section 230 of the Communications Decency Act.

If you love even parts of what the internet has to offer, you likely owe thanks in some way or other to Section 230. We don’t view any statute as immune from any criticism, but we do insist that any effort to chisel away at a law expressly crafted to protect and promote freedom of speech on the internet deserves a great deal of scrutiny. The problems posed by the proposed legislation are both expansive and complex, and internet freedom groups have the expertise to highlight these complexities.

Mazzio isn’t one for complexity, as her film makes it a point to smear internet-freedom groups rather than address their arguments on the merits. The producers do interview experts from the Electronic Frontier Foundation (EFF) and the Center for Democracy and Technology (CDT), but ultimately paint those experts as shills for big tech companies. They allege advocates of online free speech and expression callously oppose commonsense efforts to curb trafficking simply because they would hurt big tech’s bottom line.

This kind of rhetoric has continued throughout the advocacy campaign to pass SESTA.

But the film’s promoters would be well-served to pay closer attention to the facts. Defenders of Section 230 aren’t “supporting Backpage,” any more than advocates of Fifth Amendment rights support criminals or oppose police. They also should look closer at the history.

While it may be easy to paint Section 230 proponents as shills for big tech because some of them sometimes receive funding from tech companies, the reality is that organizations like CDT and EFF supported these policies before today’s Big Tech even existed. And other nonprofits, like the foundation that hosts the immensely valuable free resource Wikipedia, don’t depend on corporate funding (they’re primarily funded by individual donations), yet insist that Section 230 is what made it possible for them to exist.

While Google was founded in 1998 and Backpage launched in 2004, both CDT and EFF, who are mischaracterized in the film as ersatz public-interest advocates, were deeply engaged in the debate way back in 1995 over the Communications Decency Act. There’s perhaps no greater evidence of how relatively un-slick and un-corporate those organizations were than their self-representations in the ASCII art newsletters of this pre-Google period supporting the bill that would later become Section 230 of the CDA.

Both organizations opposed almost all the language in the CDA and both spearheaded legal efforts that led to the CDA mostly being struck down by the U.S. Supreme Court in 1997. But both also supported the Cox/Wyden amendment that would later become Section 230 of the act, which also created legal protections for “good Samaritan” blocking of offensive material. The Cox/Wyden amendment was added to the Telecommunications Act in the Senate in 1995, and signed into law by President Bill Clinton on Feb. 8, 1996.

It’s not just that Google and Backpage weren’t around when Section 230 became law. Facebook wasn’t founded until 2004, YouTube wasn’t founded until 2005 and Twitter wasn’t founded until 2006. This isn’t just a coincidence. Our vibrant online ecosystem exists because of Section 230 and the liability protections it affords to online platforms. It is the law that made today’s internet possible.

The Internet Association, a tech trade association that has helped lead industry opposition to SESTA, is mostly made up of tech companies (like Google, Facebook, Twitter, Airbnb, Yelp, Snap and Pinterest) that found success after the CDA and that rely, in one form or another, on its intermediary liability protections for user generated content. And keep in mind that it’s not just tech giants that oppose SESTA’s language amending Section 230. It’s dozens of startups and medium-sized companies, too.

Indeed, as outlined in this letter from internet-freedom advocates, there are good reasons to think SESTA’s proposed changes are hastily conceived and ill-suited to address the problems they purport to solve. Sex trafficking is a horrible crime, but Section 230 already does not protect sites like Backpage if they deliberately facilitate criminal acts. The limited immunity afforded to online platforms by Section 230 does not apply to any federal criminal law, nor should it apply to state criminal law if platforms are acting in bad faith. Furthermore, a 2015 amendment to sections of the federal code governing sex trafficking should make it even easier for federal prosecutors to go after sites that host ads for trafficking, although we still need time for the courts to interpret how it is applied.

The DOJ already has the power under current law?even without SESTA?to prosecute Backpage and its founders. Indeed, lawyers for Backpage acknowledged that "indictments may issue anytime" from a federal grand jury in Arizona. If they don’t, it’s the proper role of Congress to hold a hearing and ask Attorney General Jeff Sessions why they aren’t prosecuting this case or those like it.

If we need additional resources for the FBI or the DOJ’s criminal and civil rights divisions to investigate and prosecute these cases, that’s a conversation worth having. It also bears examining whether Congress should clarify the standards for platforms that contribute to the development of user content, given the different interpretations among the circuits.

But what we’re seeing in the “I Am Jane Doe” advocacy campaign is that SESTA’s proponents don’t want to have substantive conversations about the law. Instead, they want to create their own “fake news” version of what the issues are and rush their bill to passage, no matter the consequences.

Both the intended and unintended consequences of SESTA could be catastrophic. In effect, the law threatens to undermine all of Section 230’s benefits to the global internet ecosystem in order to make it easier to prosecute Backpage and its founders, who seem likely to end up in jail no matter what. While today’s tech giants will likely have the resources to navigate this in some form, the barriers it sets up could mean the next wave of internet platforms never comes, and the ones that we have left are further incentivized to restrict speech. Rather than open a dialogue about current cases and the state of the law and how to refine Section 230’s protections, SESTA proponents want to rush in with a legislative chainsaw to carve out vast new liabilities for online platforms, the same platforms that provide us with the internet we love and upon which we all now rely.

If Congress rushes to pass SESTA without listening to the substantive arguments of the bill’s critics, it will be making a catastrophic mistake.

Mike Godwin is a senior fellow with the R Street Institute who worked extensively on the CDA at EFF in the mid-1990s. Godwin later worked for the Center for Democracy and Technology as well. Zach Graves is technology policy director at the R Street Institute.

Posted on Techdirt - 8 June 2017 @ 03:36am

Theresa May's Plan To Regulate The Internet Won't Stop Terrorism; It Might Make Things Worse

In the wake of Saturday’s horrific attack on London—the third high-profile terrorist incident in the United Kingdom in the past three months—British policymakers were left scrambling for better ways to combat violent extremism. Prime Minister Theresa May called for new global efforts to “regulate cyberspace to prevent the spread of extremism and terrorist planning,” charging that the internet cannot be allowed to be a “safe space” for terrorists.

While May’s desire for a strong response is easy to understand, her call for more expansive internet regulation and censorship is wrongheaded and will make it harder to win the war against violent extremism.

May didn’t specify the details of her proposal, but to many observers it was clear that she’s asking for sweeping new powers to compel tech companies to help spy on citizens and censor online content. Unfortunately, this isn’t simply a knee-jerk response to horrible circumstances, but reflects a longstanding ambition of May’s Conservative Party to impose draconian controls on cyberspace.

As home secretary, May introduced and oversaw passage of the Investigatory Powers Act, legislation that civil-liberties advocates have called the worst surveillance bill of any western democracy. Following last month’s attack in Manchester, May’s government purportedly briefed newspapers of its intent to invoke the law to compel internet companies to “break their own security so that messages can be read by intelligence agencies.” David Cameron, May’s predecessor, argued for internet companies to be compelled to create backdoors in their software so that there would be no digital communications “we cannot read.”

Even if the U.K. government got the expansive new powers it seems to want, there’s no reason to think it would stop terrorism in its tracks. Researchers have found that suicide attacks are a social phenomenon involving support networks that radicalize the perpetrators. Most people in these networks aren’t themselves terrorists. Allowing them to operate openly makes it easier both for moderating voices to intervene and for intelligence agencies to track them. If the communities are forced underground and offline, they’ll be harder to infiltrate and monitor.

Moreover, there’s no way to create communications backdoors that only apply to bad guys. While committed terrorists could easily adapt to open source or analog means of communication in response to a government-mandated backdoor, law-abiding civilians would be exposed to new cybersecurity risks and have their economic and civil liberties compromised. Experience has shown that backdoors inevitably will be hacked, making everyone less safe. As the U.S. House Homeland Security Committee noted in its report on the topic, all of the proposed solutions to access encrypted information “come with significant trade-offs, and provide little guarantee of successfully addressing the issue.”

The policy also would have serious consequences for the United Kingdom’s global competitiveness. As the MIT report “Keys Under Doormats” notes, mandating architectures that allow access to encrypted communications “risks the real economic, geopolitical, and strategic benefits of an open and secure internet for law enforcement gains that are at best minor and tactical.” One of the factors behind the West’s dominance in technology and innovation is that its apps are not government-sanctioned, as they are in China or Russia. After all, what consumer would want to buy an app or device that had a built-in backdoor?

All this isn’t to say that governments should stand back and do nothing to stop terrorist activity online. It’s illegal almost everywhere in the world to provide material support to terrorist activities, not to mention the obvious crimes of murder and conspiracy. But terrorists don’t have free rein in cyberspace. In addition to the United Kingdom’s comparatively robust domestic snooping powers, the nation’s Counter Terrorism Internet Referral Unit (CTIRU) already coordinates flagging and removing unlawful terrorist-related content. Since its launch in 2010, it has worked with online service providers to remove a quarter million pieces of terrorist material.

There also are already international agreements to help authorities uncover and track people engaged in these activities and to exchange intelligence about them across borders. For instance, Mutual Legal Assistance Treaties (MLATs) allow the cross-border flow of data about criminal matters between investigative bodies. While the current MLAT agreements can be slow and cumbersome, efforts are underway to create a new process and allow U.K. authorities to go directly to U.S.-based online service providers, upon meeting certain conditions.

The United Kingdom also is already a key part of the national security data-sharing arrangements between the “Five Eyes,” under which intelligence from Canada, Australia, New Zealand and, of course, the United States is shared almost in real time. While the details are classified, there is evidence that this intelligence sharing has prevented numerous attacks.

To her credit, May emphasized the importance of improving these sorts of international agreements in her speech about fighting terrorism. This is an area where we can and should make positive steps toward reform, increasing the capacity for intelligence sharing in real time and improving cooperation, while ensuring that the right checks and balances are in place.

Combating violent extremism online doesn’t have to be a Pyrrhic victory for democratic societies. Certain risks are unavoidable, and no level of internet regulation will stop the most determined attackers. But there are real steps policymakers can take now to enhance our tools without sacrificing our security, liberty or global competitiveness in the process.

Zach Graves is tech policy director and Arthur Rizer is national security and justice policy director for the R Street Institute.

Posted on Techdirt - 9 August 2016 @ 11:48am

No, A New Study Does Not Say Uber Has No Effect On Drunk Driving

The first rule of science journalism is to read the study before you write about it. Alas, that hasn’t stopped media outlets from routinely misreporting, exaggerating or exercising insufficient skepticism about scientific research, particularly in the service of clickbait headlines and extra views.

A recent study from the American Journal of Epidemiology on whether the introduction of ridesharing has had an effect on alcohol-related crash fatalities was the latest victim of this kind of sloppy reporting. The Washington Post announced: “Is Uber reducing drunk driving? New study says no.” CNN declared: “Uber doesn’t decrease drunk driving, study says.” Fortune writes: “A New Study Says Uber Has Had No Impact on Drunk Driving.” Other outlets published similar stories.

But alcohol-related fatalities are not the same thing as drunk driving rates. According to the National Highway Traffic Safety Administration, nearly 10,000 Americans die each year in crashes involving a drunken driver; about two-thirds of that total are the drunken drivers themselves. But according to the FBI‘s Uniform Crime Reporting Program, there are annually about 1.1 million arrests for driving under the influence, which itself is just a fraction of the Centers for Disease Control and Prevention’s estimate of 121 million incidents each year in which intoxicated drivers aren’t caught. Astoundingly, according to one analysis, drunk drivers average just one arrest per 27,000 miles driven while intoxicated.
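The gulf between these three measures is easy to see with a quick back-of-the-envelope calculation (a sketch using the rounded figures cited above, not precise official statistics):

```python
# Back-of-the-envelope ratios using the rounded figures cited above
# (NHTSA fatality counts, FBI UCR arrest counts, CDC incident estimates).

fatalities = 10_000        # annual deaths in crashes involving a drunk driver
arrests = 1_100_000        # annual DUI arrests
incidents = 121_000_000    # estimated annual drunk-driving episodes

arrest_rate = arrests / incidents       # share of episodes ending in arrest
fatality_rate = fatalities / incidents  # share of episodes ending in a death

print(f"Roughly {arrest_rate:.1%} of drunk-driving episodes end in arrest,")
print(f"and about {fatality_rate:.3%} end in a fatality.")
```

Under one percent of drunk-driving episodes produce an arrest, and a far smaller sliver produce a death, so a study of fatalities alone says very little about drunk driving overall.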

Ideally, society would like each of these three numbers to fall, but first, we must be able to tell them apart. The AJE study’s authors make clear that they “did not examine Uber’s association with other traffic outcomes, including drunk driving incidences and nonfatal crashes.” This leads one to the conclusion that these journalists (or at least those writing the headlines) may not have actually read the study at all.

When it comes to whether services like Uber and Lyft reduce drunk driving overall, logic suggests that more available and convenient transportation options likely would make it easier for many to plan a night out without getting behind the wheel, and reduce the incentives to drive under the influence. The CDC already lists taking a taxi as an important preventative measure and ridesharing options are usually cheaper and very often more convenient than getting a taxi. As these services increase in popularity ? particularly among millennials, who both use ridesharing more and have a greater propensity to drive drunk ? one would expect a corresponding decline in the number of DUI arrests and alcohol-related fatalities.

There isn’t much research on the subject, but most observations to date seem to support the supposition. A 2015 study published by Temple University’s Fox School of Business concluded the introduction of UberX in California led to a reduction in the rate of motor-vehicle homicides per quarter of between 3.5 and 5.6 percent. Another study by Mothers Against Drunk Driving, in partnership with Uber, also looked at the introduction of UberX in California and found that alcohol-related crashes by drivers under age 30 fell 6.5 percent, or 59.21 fewer crashes per month.

In June 2016, Providence College published a study which found that “DUIs are 15 to 62 percent lower after the entry of Uber” and the introduction of the service “is associated with a 6 percent decline in the fatal accident rate.” More recently, when Uber and Lyft were pushed out of Austin, Texas, DUI arrests spiked by 7.5 percent.

Given that background literature, it’s important to note some significant limitations in the approach used by the AJE study’s authors. They looked at data from 2005 to 2014 for the top 100 metropolitan statistical areas (MSAs) in which Uber has entered the market. Of course, in many of those MSAs, the company may be operating in the largest city or cities, but not across the whole metropolitan area. Also notable is that in most of the MSAs the study examines, Uber was introduced at some point in 2014, the same year the authors’ data ends.

Additionally, many of these jurisdictions also did not have friendly regulatory climates for ridesharing in the period the authors examined. Aside from California and Colorado, where state-level pre-emption laws were passed, most ridesharing regulation through 2014 was done at the city level. It was fairly common at the time for transportation network companies to have uncertain legal status and for jurisdictions to impose hostile regulations, issue cease and desist orders or hold sting operations to block Uber and Lyft from operating. Additionally, carpool services like UberPOOL and Lyft Line, which are significantly cheaper, had not yet become widely available. Today, ridesharing is cheaper, more popular and fully legal in most major cities.

It also may not be that surprising that the AJE study didn’t line up with results from other research that focused on California. Uber was founded in San Francisco and launched there in 2009. Lyft launched in 2012. TNCs have been legal statewide in California since the California Public Utilities Commission’s initial rulemaking in 2013. California is the oldest and probably strongest ridesharing market. If ridesharing has an effect on alcohol-related fatalities or drunk driving more generally, it would show up there first.

In much of the rest of the country, ridesharing is not as well-established. According to Pew, as of December 2015, only 15 percent of U.S. adults had used a ridesharing service. Of those, only 17 percent reported they use it more than once or twice a month. In short, outside of millennials in major urban centers, ridesharing hasn’t yet caught on in a big way.

More research looking at more recent data is needed to better understand the effects of ridesharing on drunk driving rates. And with each new report, whatever its conclusion, one hopes science journalists will bring more care and a healthy skepticism to the table. In the meantime, this study alone isn’t a compelling reason to dismiss other evidence supporting the positive effects of ridesharing on reducing drunk driving.

Zach Graves is a senior fellow at the R Street Institute, a free market think tank based in Washington, DC.

Posted on Techdirt - 29 June 2016 @ 11:54am

Lessons From The Downfall Of A $150M Crowdfunded Experiment In Decentralized Governance

Hype around blockchain has risen to an all-time high. A technology once perceived to be the realm of crypto-anarchists and drug dealers has gained increasing popular recognition for its revolutionary potential, drawing billions in venture-capital investment by the world’s leading financial institutions and technology companies.

Regulators, rather than treating blockchain platforms (such as Bitcoin or Ethereum) and other “distributed ledgers” merely as tools of illicit dark markets, are beginning to look at frameworks to regulate and incorporate this important technology into traditional commerce.

That progress was challenged recently, when more than $54 million was stolen from The DAO (short for “decentralized autonomous organization”) — an experimental and unregulated investment fund built on the blockchain platform Ethereum. As people realized The DAO was being drained, the ensuing panic also crashed the price of Ether (or ETH), Ethereum’s cryptocurrency.

Beyond potentially making a lot of people poorer — who probably should have known better than to invest in an experimental “robotic corporation” — the theft has created a massive political rift within the blockchain community, and threatens to undermine trust in a technology described as the “trust machine”. In addition, this event raises serious questions about the cybersecurity risks of distributed applications, the (lack of) enforcement of existing securities laws and the potential for increased scrutiny by regulators looking to protect unwary investors.

Prior to last week, The DAO was widely considered a phenomenal success. It enjoyed the largest crowdfunding in history, raising the equivalent of more than $150 million, or about a tenth of the value of the Ethereum blockchain platform on which it was built. While you could conceivably build a DAO for anything, since it was a piece of software, The DAO was created for the purpose of developing the Ethereum platform and other decentralized software projects. According to its “manifesto” on

The goal of The DAO is to diligently use the ETH it controls to support projects that will:

• Provide a return on investment or benefit to the DAO and its members.
• Benefit the decentralized ecosystem as a whole.

In short, it was developed as a venture-capital fund and, importantly, its investors expected returns.

What is a DAO, anyway? And how does it work? Christoph Jentzsch — founder of, the German company which helped create The DAO — explained the concept in his white paper as “organizations in which (1) participants maintain direct real-time control of contributed funds and (2) governance rules are formalized, automated and enforced using software.”

As American Banker’s Tanaya Macheel writes, DAOs and the smart contracts on which they are built could have a lot to offer traditional financial institutions:

In theory, distributed autonomous organizations (of which the DAO is one of the first examples) are a hardcoded solution to the age-old principal-agent problem. Simply put, backers shouldn’t have to worry about a third party mismanaging their funds when that third party is a computer program that no one party controls.

At a time when the financial services industry is trying to automate old processes to cut costs, errors and friction, DAOs represent perhaps the most extreme attempt to take people out of the picture.

DAOs can be deployed on the distributed global computer of the Ethereum platform or other suitable blockchains, including private ones. One mechanism to fund them is through a “crowdsale” of DAO tokens that act like shares of stock, which is what The DAO did. Token-holders can vote on new proposals (weighted by the number of tokens a user controls) to change the structure of the DAO and alter its code. Tokens also can be traded and have an exchange-value. As The DAO’s “official website” describes it:

The DAO is borne from immutable, unstoppable, and irrefutable computer code, operated entirely by its members.
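The token-weighted voting mechanism described above can be sketched in a few lines of Python. This is a loose illustration, not Ethereum contract code; the class and member names are hypothetical:

```python
# Minimal sketch of token-weighted DAO voting (illustrative only; a real
# DAO implements this logic in an on-chain smart contract).

class ToyDAO:
    def __init__(self):
        self.balances = {}   # token holdings per member
        self.votes = {}      # proposal_id -> {"yes": weight, "no": weight}

    def issue_tokens(self, member, amount):
        self.balances[member] = self.balances.get(member, 0) + amount

    def vote(self, member, proposal_id, approve):
        # Votes are weighted by the number of tokens a member controls.
        tally = self.votes.setdefault(proposal_id, {"yes": 0, "no": 0})
        tally["yes" if approve else "no"] += self.balances.get(member, 0)

    def passes(self, proposal_id):
        tally = self.votes.get(proposal_id, {"yes": 0, "no": 0})
        return tally["yes"] > tally["no"]

dao = ToyDAO()
dao.issue_tokens("alice", 100)
dao.issue_tokens("bob", 40)
dao.vote("alice", "proposal-1", approve=True)
dao.vote("bob", "proposal-1", approve=False)
print(dao.passes("proposal-1"))  # alice's larger stake carries the vote
```

The key property, for better or worse, is that influence is proportional to tokens held, not one-member-one-vote.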

How exactly does an immutable decentralized computer get “hacked”? According to DAO developer Felix Albert, it wasn’t. Unlike the failed bitcoin exchange Mt. Gox — where nearly $500 million of bitcoins were lost due to a combination of breach and fraud — the theft exploited a bug that previously had been undiscovered (or more accurately, hadn’t been fixed) in its code.

A quirk of robotic corporations is that they take their bylaws literally. Like Asimov’s robots, DAOs are built with rules to govern their behavior that cannot easily be revised or overwritten once they are set in motion. Inevitably, these sometimes conflict with our preconceived ideas of how they ought to operate.

Technical analysis of the DAO theft revealed the attacker exploited a function originally designed to protect users:

The attack [on The DAO] is a recursive calling vulnerability, where an attacker called the “split” function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.

It wasn’t really a hack at all. It was human error. Making matters worse, The DAO’s promoters (in this case, Chief Operating Officer Stephan Tual) had said this kind of bug wouldn’t be an issue just a few days before the theft (whoops).
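The recursive-calling pattern can be simulated in plain Python to show why it drains funds. This is an analogy to, not a copy of, The DAO's Solidity code: the vulnerable "contract" pays out before zeroing the caller's balance, so a hostile callback can withdraw repeatedly against the same deposit. All names here are invented for illustration:

```python
# Simulation of a recursive-calling (reentrancy) bug: the "contract" pays
# out before updating its books, so a hostile callback can withdraw again.

class VulnerableFund:
    def __init__(self, balances):
        self.balances = dict(balances)  # member -> deposited amount

    def split(self, member, receive_callback):
        amount = self.balances.get(member, 0)
        if amount > 0:
            # BUG: funds are sent (triggering attacker code) *before*
            # the balance is zeroed, so the attacker can re-enter split().
            receive_callback(amount)
            self.balances[member] = 0

class Attacker:
    def __init__(self, fund):
        self.fund = fund
        self.stolen = 0
        self.reentries = 0

    def receive(self, amount):
        self.stolen += amount
        if self.reentries < 2:          # re-enter a few times, then stop
            self.reentries += 1
            self.fund.split("mallory", self.receive)

fund = VulnerableFund({"mallory": 100})
mallory = Attacker(fund)
fund.split("mallory", mallory.receive)
print(mallory.stolen)  # 300 -- three payouts against a single 100 deposit
```

The standard fix, in any language, is to zero the balance before sending funds, so a re-entrant call finds nothing left to withdraw.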

Lots of potential vulnerabilities for The DAO had been discussed and it was even suggested to place a moratorium on proposals. Meanwhile, its promoters confidently asserted everything was fine:

We are assuming that the base contract is secure. This assumption is justified due to the community verification and a private security audit.

Additionally,’s blog claimed that the generic DAO framework code had been audited by a leading security firm:

We’re pleased to announce that one of the world’s leading security audit companies, Deja Vu Security, has performed a security review of the generic DAO framework smart contracts.

On close inspection, the only report they linked in their blog was three pages long. It’s unclear whether a rigorous formal audit had ever been conducted. After the attack, people started asking for the audit report and wondering why hadn’t shared it. The security firm, Deja Vu, even responded on Reddit.

Hi Everyone, Adam Cecchetti CEO of Deja vu Security here. For legal and professional reasons Deja vu Security does not discuss details of any customer interaction, engagement, or audit without written consent from said customer. Please contact representatives from for additional details.

Whoever was in charge of auditing the code screwed up big-time. As former Ethereum release coordinator Vinay Gupta explained on YouTube, The DAO was an experiment that was never built to handle this much risk:

We all knew as we watched this happening that this was an emperor’s clothes scenario … there was no way that that smart contract had undergone an appropriate amount of scrutiny for something that was a container for $160 million.

Sure, everyone involved should have stopped it from getting carried away. But what are the actual consequences when a decentralized extralegal robot corporation doesn’t do what it’s expected to? Is anyone really “in charge” of making sure it works? Is anyone on the hook if the whole thing goes down the tubes because of its creators’ (or proposal authors’) lack of due diligence?

For one thing, as Coin Center’s Peter Van Valkenburgh explains, DAOs are likely to run afoul of existing securities law — potentially implicating their developers, promoters and investors:

The Securities Act intentionally defines “promoter” broadly: “any person that, alone or together with others, directly or indirectly, takes initiative in founding the business or enterprise of the issuer.” Given the breadth of this language, developers should carefully weigh the risks of being visibly associated with the release and sale of [DAO] tokens.

Individuals deemed to be promoters of a [DAO] may be found to be in violation of Section 5(a) and 5(c) of the Securities Act. Under these sections it is unlawful to directly or indirectly offer to sell or buy unregistered securities, or to “carry” for sale or delivery after the sale an unregistered security or a prospectus detailing that security. Even if a [DAO] is deemed to be an unregistered security, it remains very unclear how promoting that [DAO] would or would not equate to these unlawful activities, and who — if anyone — would be found to have violated the law. Nonetheless, broad interpretation of these laws may potentially implicate any participant or visibly affiliated developer or advocate.

So DAO evangelists could soon be in hot water, regardless of any disclaimers they put up.

To the Securities and Exchange Commission’s credit, they have thus far been relatively open to innovations like crowdfunding, as well as the potential for blockchain technology. As SEC Chairwoman Mary Jo White recently said in an address at Stanford University:

Blockchain technology has the potential to modernize, simplify, or even potentially replace, current trading and clearing and settlement operations … We are closely monitoring the proliferation of this technology and already addressing it in certain contexts … One key regulatory issue is whether blockchain applications require registration under existing Commission regulatory regimes, such as those for transfer agents or clearing agencies. We are actively exploring these issues and their implications.

Beyond financial regulation, the broader legal treatment of DAOs is a murky subject. With applications running on Ethereum, it’s not always clear what the point of enforcement is. You can’t exactly sue a DAO in court and then seize its assets. And, while The DAO’s creators were in the public eye, that doesn’t necessarily have to be the case; it could be deployed anonymously.

Even if DAOs are created without a formal legal status, governments may impose legal status on them. As business lawyer Stephen Palley writes at CoinDesk:

If you don’t formalize a legal structure for a human-created entity, courts will impose one for you. As most lawyers will tell you: a general partnership, unless properly formalized or a deliberately created structure, is a Very Bad Thing … [T]he members of a general partnership can end up jointly and severally liable on a personal basis for partnership obligations.

Even if the SEC or other government entity decides to crack down on DAOs, it might be easier said than done. Because they operate on pseudonymous distributed computers, those parties may not be easy to track down (notably, we still don’t know who Satoshi Nakamoto is). Even if you did, they might not have any control over it or know what it was doing. Its code also may have been radically altered from its original programming/intent.

But as far as The DAO is concerned, are we in for a slew of lawsuits or calls for SEC action by disgruntled investors? Not so fast. Investors in The DAO may yet be able to recover their losses.

Various prominent stakeholders in the Ethereum community, from Ethereum inventor Vitalik Buterin to’s Christoph Jentzsch, have suggested that the only sensible solution is to create a “fork” of the Ethereum network that could freeze the attacker’s stolen funds and shut down The DAO, with the option to create a “hard fork” to fully reverse the theft and return investors’ funds. Some have criticized this approach as a “bailout” or “asserting centralized control.” But it’s worth noting that it would require a plurality of miners to adopt it voluntarily; whether they will remains to be seen.

Either way, Ethereum’s credibility may be adversely affected. On the one hand, people need to trust that smart-contracts do what they are supposed to — particularly where millions of dollars are on the line. On the other hand, the credibility of the platform is also tied to its immutability. If developers and miners collude to reverse transactions they don’t like, that sets a bad precedent.

Additionally, if the community decides The DAO’s investors need to take a haircut, it could open up a Pandora’s box of legal troubles for its developers and promoters (and maybe even miners and investors), potentially stifling advancement of this important technology.

But wait a minute. Why didn’t the attacker see this coming? Surely if he was sufficiently sophisticated to find a “recursive call” bug, he would have known that split funds would be locked away for 27 days — giving the community time to get wise to his activities and find a solution like the fork.

As previously mentioned, The DAO theft also crashed ETH prices. Savvy readers will note that a DAO vulnerability doesn’t mean the Ethereum platform itself was compromised (any more than a nasty bug in Photoshop means that everyone with Windows 10 is at risk).

Was it possible this whole event was a ruse to pull off a “big short”, as one user suggests on Reddit? As of now, there’s no proof of that, but it’s an interesting theory.

But was this even a theft at all? As’s representative said, “code is law!” If the code doesn’t do what you think it does — that’s your fault. At least, that’s the theory behind an anonymous letter uploaded to Pastebin and purportedly authored by The DAO’s attacker:

I have carefully examined the code of The DAO and decided to participate after finding the feature where splitting is rewarded with additional ether. I have made use of this feature and have rightfully claimed 3,641,694 ether, and would like to thank the DAO for this reward. It is my understanding that the DAO code contains this feature to promote decentralization and encourage the creation of “child DAOs”.

I am disappointed by those who are characterizing the use of this intentional feature as “theft”. I am making use of this explicitly coded feature as per the smart contract terms and my law firm has advised me that my action is fully compliant with United States criminal and tort law.

Adding that:

I reserve all rights to take any and all legal action against any accomplices of illegitimate theft, freezing, or seizure of my legitimate ether, and am actively working with my law firm. Those accomplices will be receiving Cease and Desist notices in the mail shortly.

If the fork moves forward to freeze or seize the attacker’s digital assets, could that open up the broader Ethereum community and its miners to legal liability? We’ll have to wait and see what happens.

Regardless of how The DAO “theft” is resolved, regulators shouldn’t be in a rush to impose stricter regulations on Ethereum, which is just a platform, on DAOs in general, or even on The DAO specifically, should it be reincarnated with better security practices.

While The DAO attack raises serious questions about the viability of creating this “DAO 2.0”, that doesn’t mean we should stop it from happening. Whether or not you believe all the hype about Ethereum being as important as the invention of the internet, it’s an exciting technology that’s worth giving the opportunity to grow.

Unlike Bitcoin, which has been around for eight years, Ethereum is only a year old. It officially launched in July 2015, but is already the second-largest cryptocurrency by market capitalization. It’s vastly more complex than Bitcoin and still in its infancy; it will have inevitable growing pains on the way to maturity.

Just as the internet wasn’t built in a day, smart-contract technology won’t come to fruition overnight. It needs a permissive regulatory environment in which to grow, much like the one the Clinton administration’s Framework for Global Electronic Commerce created for the internet.

Certainly, vetting DAO code (particularly new proposals) is a big problem. More fundamentally, smart-contract security is an emerging area where people are rightly starting to pivot, following the lessons of The DAO attack. As Ethereum developer Peter Borah writes:

In his response to the bug, Slock’s COO expressed shock, referring to it as “unthinkable”, and pointing to the “thousands of pairs of eyes” that somehow missed this. It’s certainly hard to blame anyone for being shaken by the sudden disappearance of tens of millions of dollars. However, this natural reaction hides the simple truth that anyone who has dabbled in programming knows: bugs in programs are far from unthinkable — they are inevitable.

Making code open-source is not enough. We need mechanisms to create smarter (i.e., fault-tolerant) smart contracts. This could mean more rigorous independent testing, strategies to implement better development practices or, at least, more time to develop through trial-and-error in a lower-risk context. Stakeholder interests also must be aligned to make sure appropriate vetting happens, particularly where voting on code alterations is involved and particularly if we want to develop more complex autonomous programs.

The DAO is an instance of people getting carried away with an exciting new technology, while not effectively managing the new cybersecurity risks that come with it. But just because a group of people screwed up The DAO, it doesn’t mean all DAOs are DOA.

While there’s an overabundance of utopian thinking in this space, blockchain-based experiments in decentralized governance and peer-to-peer commerce could have immense benefits that offer truly revolutionary potential. Regulators should continue to take a wait-and-see approach and not use this as an invitation to try to shut them down or impose harsh new regulations.

Posted on Techdirt - 28 April 2016 @ 11:39am

Lessons From Prince's Legacy And Struggle With Digital Music Markets

Undeniably, Prince’s death last week marked the loss of a true musical genius and maverick. In his life, he was known for being a talented musical innovator with flamboyant clothes and a contrarian streak. He was adept at a range of instruments, and worked across multiple genres of music including funk, jazz, pop, rock, and R&B.

As broadly gifted an artist as he was, Prince never quite found the right approach when it came to licensing his music for distribution — in spite of the fact that he sold over 100 million records, placing him among the best-selling artists of all time. He won an Oscar, a Golden Globe, and seven Grammys, among other accolades. His massive discography includes 50 albums, 104 singles, and 136 music videos, among other creative works. And yet his fans were left in the odd position, on the news of his death, of being frequently unable to provide links to Prince’s massive oeuvre.

Like David Bowie, who died only a few months earlier this year, Prince was constantly reinventing himself throughout his career. But one key reason for his reinvention — at different times, he was known by “Prince,” “Jamie Starr,” an unpronounceable glyph, and perhaps most notoriously, as “The Artist Formerly Known as Prince” — was his unhappiness with his record labels, and later with digital/Internet distribution.

And even now, if you’re looking to listen to your favorite Prince tracks on popular digital music services like Spotify or Apple Music, you’re out of luck. While you can find some live performances on YouTube, and a couple exceptions like his single “Stare” on Spotify, the streaming rights to his music are licensed exclusively through Tidal — a niche subscription-only service owned by Jay Z.

You can see why Prince may have been attracted to Tidal as a service. Since its launch in late 2014, a number of major artists have embraced it, offering exclusive releases and touting the service’s better deal for artists. Indeed, Tidal purports to “pay the highest percentage of royalties to artists, songwriters and producers of any music streaming service.”

But it’s hard to see how it would make business sense to exclusively license with them, as Prince did. For one thing, it’s not entirely clear that Tidal’s rates are that much better than Spotify’s. They claim to pay out 75% and 70% of their revenues to rights holders, respectively. Yet, Tidal has also claimed that they pay out four times Spotify’s royalty rate.

Vania Schlogel, then an executive at Tidal, clarified their rates in an interview with the Hollywood Reporter:

There was some confusion on the Internet about whether “royalty rate” was a percentage of Tidal’s total revenue. According to Schlogel, it is. The industry standard royalty rate, she says, is 70% (roughly 60% to record labels, roughly 10% to artists via publishers). Tidal pays 62.5% and 12.5% (which equals the 75% Jay Z is referring to).

This makes their base royalty rate going to artists 25% higher than Spotify’s. But Tidal also has about 45% of its subscribers on a $19.99 per month premium tier, which would make the share of revenue going to artists around 80% higher.

That’s a lot more! Artists should all be switching to exclusive deals with them, right? Well…not so fast. Spotify alone has 30 million paying subscribers. 100 million if you include ad-supported free tier listeners. Apple Music has another 11 million paid subscribers. Compare that with Tidal’s relatively paltry 3 million. Not to mention commercial distribution to YouTube’s 1 billion active users, or the dozen other streaming services out there.

Assuming those subscribers have comparable activity profiles, it wouldn’t make business sense even if they paid ten times the royalty rate — at which point it would be more than total revenue. Although, artists can do whatever they want. It’s a free market (sort of).
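Assuming the figures cited above (and that Tidal's premium tier doubles a $9.99 base price, which is an assumption), the back-of-the-envelope arithmetic can be checked directly:

```python
# Back-of-the-envelope check of the royalty comparison. Figures come from
# the text; the $9.99 base-tier price is an assumption for illustration.

spotify_artist_share = 0.10    # industry-standard ~10% of revenue to artists
tidal_artist_share   = 0.125   # Tidal's claimed 12.5% to artists

base_rate_premium = tidal_artist_share / spotify_artist_share - 1
print(f"{base_rate_premium:.0%}")   # 25% higher base royalty rate

# Average revenue per subscriber, with 45% of Tidal users on the $19.99 tier.
tidal_arpu   = 0.45 * 19.99 + 0.55 * 9.99
spotify_arpu = 9.99

artist_dollars_premium = (tidal_artist_share * tidal_arpu) / (
    spotify_artist_share * spotify_arpu) - 1
print(f"{artist_dollars_premium:.0%}")  # roughly 80% more artist dollars per subscriber

# But Spotify's paid base is ~10x Tidal's (30M vs. 3M), so the total pool
# of artist dollars still favors the larger service by a wide margin.
pool_ratio = (30e6 * spotify_artist_share * spotify_arpu) / (
    3e6 * tidal_artist_share * tidal_arpu)
print(f"{pool_ratio:.1f}x")  # Spotify's total artist payout pool vs. Tidal's
```

Per subscriber, Tidal's deal really is much better; across all subscribers, the larger service still pays out several times more in total.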

But for Prince, his embrace of Tidal may not have been just about royalty rates. Rather, it may have been a reflection of his proclivity to assert tight control of his brand. As Vox’s Constance Grady writes:

It’s classic Prince: Tidal is the best program not only because it pays better, but because it gives him the most control over his music and his persona. And Prince never let someone else control his persona if he could help it.

This was fully consistent with the character of a man who preferred to play small, intimate venues even when he could have been selling out stadiums.

But making music less accessible poses serious challenges for artists and consumers alike. For one thing, as English singer/songwriter Lily Allen explains, it will reinvigorate incentives for piracy (notably, she has also had an interesting relationship with Techdirt):

I love Jay Z so much, but Tidal is (so) expensive compared to other perfectly good streaming services, he’s taken the biggest artists … Made them exclusive to Tidal (am I right in thinking this?), people are going to swarm back to pirate sites in droves … Sending traffic to torrent sites.

Perhaps unsurprisingly, when Kanye West decided to release his album The Life of Pablo exclusively on Tidal, it was pirated over 500,000 times in its first day alone — drawing fire for reinvigorating online music piracy.

A recent study by Columbia University (among other research including the Copia Institute’s “The Carrot Or The Stick?”) confirms that providing access to good legal alternatives is effective at reducing online piracy — particularly among young people. To take another example, the rise of Spotify in Sweden was followed by a major decline in music sharing on the Pirate Bay. According to Copia’s study, “a similar move was not seen in the file sharing of TV shows and movies…until Netflix opened its doors in Sweden.”

During his career, Prince also flirted with various album release strategies, and explored ways to cut out the middleman by going fully independent.

Prince’s strategy was visionary, but ahead of its time. A solution that’s just now coming of age is blockchain-driven smart contracts for digital music consumption. If they catch on, they could cut out the middleman and transparently distribute revenues directly to artists behind a given work, according to pre-arranged terms. Prototype service Ujo is already doing it with artist Imogen Heap’s single “Tiny Human.” So, in actuality, perhaps Jay Z should be more worried about blockchain than Spotify.
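As a rough illustration of the idea (prearranged terms applied automatically to each payment, with no label in the middle), here is a toy revenue split in Python. The shares and participant names are hypothetical; a real service like Ujo encodes such terms in Ethereum smart contracts:

```python
# Toy sketch of a smart-contract-style royalty split: pre-arranged shares
# are applied automatically to each payment. Shares and names are
# hypothetical, for illustration only.

from decimal import Decimal

SPLITS = {                      # pre-arranged terms, fixed at "deploy" time
    "artist":   Decimal("0.70"),
    "producer": Decimal("0.20"),
    "writer":   Decimal("0.10"),
}

def distribute(payment_cents: int) -> dict:
    """Split a payment (in cents) per the agreed shares, transparently."""
    assert sum(SPLITS.values()) == 1
    payout = {who: int(payment_cents * share) for who, share in SPLITS.items()}
    payout["artist"] += payment_cents - sum(payout.values())  # rounding remainder
    return payout

print(distribute(999))  # split a hypothetical $9.99 payment
```

The appeal is transparency: every party can verify the terms and the payouts, and no intermediary can quietly change either.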

Indeed, as streaming becomes the dominant revenue source in the music market, and consumers continue to shift away from physical media and digital downloads, the pressure from artists will only increase as they seek more transparency, and a stronger ability to renegotiate their share of revenues from all sides (but particularly from labels).

On Twitter, Allen echoed this sentiment, writing that rather than demonizing streaming services, artists should look towards the hefty cut of revenue taken by labels.

For Prince, online streaming services were just the latest challenge in his complex relationship with evolving digital markets. Like Bowie, Prince was a digital pioneer — among the first to embrace the Internet’s potential to create a direct relationship with his fans. In 2001, he opened one of the first music subscription services, NPG Music Club, which was open for 5 years. In 2009, this was succeeded by As the Wall Street Journal describes it: resembled a galactic aquarium, featuring doodads like a rotating orb that played videos. The promise: fans who ponied up $77 for a year-long membership would receive the three new albums, plus an ensuing flow of exclusive content, like unreleased tracks and archival videos.

It met with a mixed reception and, a year after its launch, went dark.

Ultimately, as the Internet came of age, Prince met it with increasing resistance. Likely, he saw his ability to assert control slipping away. He wasn’t a fan of people repurposing his work in the analog era, so why should we expect him to embrace a digital one — where it’s far easier to remix, edit, dub and repurpose? As Mike Masnick explains, Prince became a militant enforcer of his intellectual property, who played fast and loose with the law in his litigiousness:

He’s also gone legal a bunch of times, suing a bunch of websites, threatening fan sites for posting photos and album covers on their sites, suing musicians for creating a tribute album for his birthday, issuing DMCA takedowns for videos that have his barely audible music playing in the background and 6-second Vine clips that are clearly fair use.

At one point, he even declared that the Internet is a fad, rebelling against a model that wouldn’t work on his terms:

The internet’s completely over. I don’t see why I should give my new music to iTunes or anyone else. They won’t pay me an advance for it and then they get angry when they can’t get it.

(At this point he could have styled himself “The Prince of Denial.” He even deleted his Facebook and Twitter accounts.)

Prince, via Universal Music, was responsible for the infamous “dancing baby” DMCA takedown over a video featuring Prince’s “Let’s Go Crazy” playing faintly in the background of a short clip as a toddler danced*. Ultimately, our friends at EFF, who were representing Stephanie Lenz, prevailed on their fair use claim. In 2013, EFF awarded him their “Raspberry Beret Lifetime Aggrievement Award” for “extraordinary abuses of the takedown process in the name of silencing speech.”

Despite all the digital-copyright agitation Prince managed to generate in the steps he took to express his unhappiness with Internet distribution channels — and despite his insistence, it doesn’t seem as if the Internet is “over” quite yet — he will of course be remembered primarily for his genius as a songwriter, performer, and producer. And, also, as a visionary. Although he passed away just before the rise of virtual reality and mixed reality technologies, one can only imagine him as someone who would have embraced it. Even if imperfectly.

Ironically, given his virtuosity and lasting impact on pop music, the limits Prince placed on digital distribution, and on his fans’ ability to find new creative uses for his work, make it orders of magnitude more difficult to bring his music to new generations of listeners, who may never know what all the fuss was about. And that’s a shame.

* Post updated to reflect that while Prince/Universal sent the initial DMCA takedown, it was Lenz and EFF who brought the lawsuit for that takedown.

More posts from Zach Graves >>