Parker Higgins's Techdirt Profile

Parker Higgins

About Parker Higgins

Posted on Techdirt - 13 February 2017 @ 01:20pm

With So Much Public Interest In Our Judicial System, It's Time To Free Up Access To Court Documents

Like hundreds of thousands of Americans, I am closely following the “airport cases” around the country. In order to keep abreast of the latest developments in one of the fastest-moving cases, Washington v. Trump, I built a Twitter bot that scrapes the public docket mirror hosted by the Ninth Circuit and tweets about new documents and links as soon as they’re added.
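The core of a docket watcher like that can be surprisingly small. Below is a minimal Python sketch of the idea; the link pattern, page layout, and stubbed tweet step are all assumptions for illustration, not the actual bot's code.

```python
# Sketch of a docket-watching bot (assumptions: the mirror is a plain
# HTML page of links to PDF filings; the real page layout and any
# Twitter API client are omitted).
import re

def extract_documents(html):
    """Pull (href, title) pairs for PDF links out of a docket page."""
    return re.findall(r'<a href="([^"]+\.pdf)">([^<]+)</a>', html)

def new_documents(current, seen):
    """Return entries whose links haven't been announced yet."""
    return [doc for doc in current if doc[0] not in seen]

def tweet(title, href):
    # Stub: a real bot would call a Twitter API client here.
    print(f"New filing: {title} {href}")

if __name__ == "__main__":
    page = '<a href="/docs/17-35105/motion.pdf">Emergency Motion</a>'
    seen = set()  # a real bot would persist this between polls
    for href, title in new_documents(extract_documents(page), seen):
        tweet(title, href)
        seen.add(href)
```

A real version would poll on a timer, persist the set of seen links to disk, and use a proper HTML parser rather than a regex.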

This case leads a legal push that has attracted incredible amounts of public attention. There have been tens of thousands of protestors, dozens of organizations and companies that submitted amicus briefs (including Techdirt’s think-tank arm, the Copia Institute), and over 135,000 people who tuned into the audio-only livestream of the Ninth Circuit oral arguments (which was also broadcast live on multiple news channels).

Those numbers reveal a public demand to be informed and to participate in the law. But they also show the limitations on the kind of transparency that can satisfy that demand. Most notably, any attempt to make court proceedings more accessible to the public has to contend with the expense and overhead of dealing with PACER. My bot is only possible because the Ninth Circuit provides a public docket mirror for individual “cases of interest,” essentially duplicating the existing system outside the paywall. Those mirrors are manually updated, which means they are labor-intensive, error-prone, and not always up to date.

By contrast, look at the @big_cases bot run by USA Today reporter Brad Heath. It monitors a set of district court cases, selected by hand, and posts new documents as they get filed. These district court cases don’t have public docket mirrors, so @big_cases accesses PACER directly — and for that, it needs user credentials and ultimately to pay for the documents it downloads. For a journalist whose job is reporting on legal developments, paying these costs makes sense — and sharing the documents further is a valuable public service. Without institutional backing, though, it’s hard to justify the PACER expenses.

The costs go beyond the financial. These bots represent an experiment in meeting members of the public where they are, and those efforts are less likely if they come with a price tag. Worse, it means these experiments will be limited to cases of widespread general interest. To pick a trivial example: Techdirt readers might be interested in a bot that tweets updates from privacy or copyright dockets. If those public documents were freely accessible, anybody could build a tool like that without worrying about subsidizing the ongoing PACER costs.

At a time when the president and his press secretary are calling into question the legitimacy of factual news reporting, an informed public needs access to primary sources more than ever. Moreover, people need to be confident in the integrity of those sources. Journalists reporting on court proceedings increasingly post the original source documents. Without a free and public government source for those files, though, most readers can’t see the context of the case, and they have to trust that they’re getting the full and unmodified documents in question.

The procedural posture of Washington v. Trump is unclear. A Ninth Circuit judge has requested briefing from both sides on whether a larger panel should re-hear the question. The White House has issued conflicting reports about whether or not it will appeal Thursday’s order to the Supreme Court. And the District Court has indicated that a new briefing schedule might be appropriate. These paths offer various levels of transparency, and it’s frustrating to know my bot may not be able to keep up with, say, district court proceedings simply because of the antiquated PACER system.

Meanwhile, the issue continues to attract attention from lawmakers. The House Judiciary Committee will hold a hearing on Judicial Transparency and Ethics on Tuesday, February 14, and is expected to include testimony on PACER. Hopefully, the Committee uses this to recognize that a truly transparent judiciary requires rethinking how PACER functions.

Posted on Techdirt - 12 May 2016 @ 08:30am

Stakes Are High In Oracle v. Google, But The Public Has Already Lost Big

Attorneys for Oracle and Google presented opening statements this week in a high-stakes copyright case about the use of application programming interfaces, or APIs. As Oracle eagerly noted, there are potentially billions of dollars on the line; accordingly, each side has brought “world-class attorneys,” as Judge William Alsup noted to the jury. And while each company would prefer to spend its money elsewhere, these are businesses that can afford to spend years and untold resources in the courtroom.

Unfortunately, the same can’t be said for the overwhelming majority of developers in the computer industry, whether they’re hobbyist free software creators or even large companies. Regardless of the outcome of this fair use case, the fact that it proceeded to this stage at all casts a long legal shadow over the entire world of software development.

At issue is Google’s use in its Android mobile operating system of Java API labels — a category of code Google (and EFF) previously argued was not eligible for copyright. Judge Alsup, who demonstrated some proficiency with programming Java in the first leg of the case, came to the same conclusion. But then the Federal Circuit reversed that position two years ago, and when the Supreme Court declined to hear the issue, there was nowhere left to appeal. With this new decision on copyrightability handed down from above, Google and Oracle now proceed to litigate the question of whether Android’s inclusion of the labels is a fair use.

If Google wins at this stage, it’s tempting to declare the nightmare of that Federal Circuit opinion behind us. After all, fair use is a right — and even if API labels are subject to copyright restrictions, those restrictions are not absolute. Google prevailing on fair use grounds would set a good precedent for the next developer of API-compatible software to argue that their use too is fair.

Tempting, but not quite right. After all, there is a real cost to defending fair use. It takes time, money, and lawyers, and thanks to the outrageous penalties associated with copyright infringement, it comes with substantial risk. Beyond all those known costs, wedging a layer of copyright permissions culture into API compatibility comes with serious unknowable costs, too: how many developers will abandon ideas for competitive software because the legal risks are too great?

There’s a reason people say that if you love fair use, you should give it a day off once in a while. Even the vital doctrine of fair use shouldn’t be the only outlet for free speech. In many areas, an absence of copyright, or the use of permissive public licenses, can foster more creativity than fair use alone could. Sadly for now, in the world of software development it’s the paradigm we have.

Reposted from the Electronic Frontier Foundation’s Deeplinks Blog

Posted on Techdirt - 19 February 2016 @ 12:49pm

NYPD Says It Has No Record Of Asking Disney To Use Copyright To Shut Down Times Square Characters, Despite Public Admission

It’s hard to imagine in the depths of this frigid New York winter, but last summer the city seemed to be in the grips of a Times Square Problem. Costumed characters — and the relative newcomers, painted topless women — were declared a public enemy, begriming the otherwise idyllic tourist mecca of midtown. But the NYPD, tasked with enforcing this mandate, had a problem: with only about one crime reported per day in Times Square, there’s not a lot to actually enforce.

So, as Techdirt and others reported at the time, the Department tried to get IP law to step in. Possessed of a legal theory that wouldn’t have survived much scrutiny, Commissioner Bill Bratton (or one of his employees) approached Disney and Marvel, asking them to sue the costumed performers for copyright infringement. The companies declined. We know all this because Bratton confirmed it to media outlets last August. In CNN:

The NYPD confirmed to CNNMoney that Commissioner Bill Bratton asked Disney and Marvel to sue for copyright infringement. But according to the NYPD, the companies aren’t biting.

And in the Daily News:

The city’s top cop said Thursday they got the cold shoulder from Disney and Marvel when they tried to enlist them in the fight against the costumed characters preying on tourists in Times Square.

The NYPD specifically asked the companies if they wanted to charge the hustlers who wear Mickey Mouse, Spider-Man and other well-known costumes with copyright infringement.

Lest you think, “But how would this all fit with the First Amendment?”, later in the same interview Bratton gives a remarkable quote that shows exactly how much respect he holds for freedom of speech:

Bratton also took a shot at Times Square artists like Andy Golub, whose specialty is using naked people as his canvas.

“What he is effectively doing is flaunting the first amendment,” he said. “Well, it may be an artistic expression, but it repulses the average person, and this is what we’re dealing with.”

Given that stance, the copyright requests of Disney and Marvel seemed like they might be extraordinary. So just after these news reports were published, late last August I made a public records request for the communications — or records of the communications — under New York’s Freedom of Information Law (FOIL).

I should not have expected much. After all, NYPD was given an “F” grade for its FOIL compliance by then-Public Advocate Bill de Blasio in a 2013 survey of New York agencies. At least at that time, nearly a third of FOIL requests to NYPD simply went unanswered. Over a quarter took more than 60 days to process.

And when my request was acknowledged, I was told a review for the records would take 90 business days. Already, that’s an outrageously long time for an agency to take. FOIL allows some flexibility in response times, but 90 business days is about four months — too long for the records to be useful to a follow-up story, and far too long to allow me to refine my request and send in a follow-up for more information.

Alas, this wasn’t a top priority for me, and I accepted the 90-day timeline without appeal. So a little more than three months later, I got the word: “this unit is unable to locate records responsive to your request.” In other words, no responsive documents; I get nothing.

What could this mean? As I see it, there are just a few possibilities.

  • First, maybe I phrased my request in a way that doesn’t describe the records they do have. I think I was pretty accurate with my description, but you be the judge:

    Communications and/or records of communications from January 1, 2012 to August 28, 2015 between the New York City Police Department and representatives of Disney, Marvel, The Jim Henson Company, Sesame Workshop, Sanrio, Viacom, Nickelodeon, DC Comics, Warner Brothers, Lucasfilm, or Nintendo of America, pertaining to or addressing the use of costumed characters in or around Times Square. Such communications, in the form of encouragement to Disney and Marvel to initiate copyright litigation, have been acknowledged by the NYPD to CNN in an August 28 story titled, “NYPD to Disney and Marvel: Get Minnie Mouse and Spider Man out of Times Square”

  • Maybe there are no communications at all, despite Bratton and Disney confirming there were to the media. I don’t really see what anybody has to gain by doing this.

  • Maybe the department has responsive records and chose not to give them to me. The agency didn’t cite an exemption or provide any indication that this was the case, but we’re talking about an agency that made up its own bogus “classification” scheme. It may be in violation of FOIL, but it doesn’t seem beyond the pale for the NYPD.

  • Finally, it’s possible NYPD conducted these communications in a way that did not generate any records. This is just speculation, but if this is the case, it’s hard to imagine this happening by accident. The NYPD asking Disney and Marvel to bring lawsuits, and there’s no paper trail at all?

Unfortunately, the nature of frustrated transparency efforts is that we don’t really have the answers. If the NYPD had promptly responded that it had no such records or would be withholding them according to a particular exemption, or even if it had given me a limited set, we could close this case. As it stands, we don’t really know anything more about the NYPD’s bizarre efforts to jam its “quality-of-life” issues into an ill-fitting copyright enforcement box.

Posted on Techdirt - 4 November 2015 @ 12:49pm

Copyright Terms And How Historical Journalism Is Disappearing

The National Endowment for the Humanities announced last Wednesday the “Chronicling America” contest to create projects out of historical newspaper data. The contest is supposed to showcase the history of the United States through the lens of a popular (and somewhat ephemeral) news format. But looking at the limits of the archival data, another story emerges: the dark cloud of copyright’s legal uncertainty is threatening the ability of amateur and even professional historians to explore the last century as they might explore the ones before it.

Consider that the National Digital Newspaper Program holds the history of American newspapers only up until 1922. (It originally focused on material from 1900-1910 and gradually expanded outwards to cover material from as early as 1836.) Those years may seem arbitrary — and it makes sense that there would be some cut-off date for a historical archive — but for copyright nerds 1922 rings some bells: it’s the latest date from which people can confidently declare a published work is in the public domain. Thanks to the arcane and byzantine rules created by 11 copyright term extensions in the years between 1962 and 1998, determining the status of any later work requires consulting a flow chart from hell — the simple version of which, published by the Samuelson Clinic last year, runs to 50 pages.
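To see why 1922 is the one bright line, here is a deliberately toy sketch: the pre-1923 rule is the only branch of the real analysis that fits in a few lines, and everything later depends on renewals, notice formalities, and other facts this version omits entirely.

```python
# Toy version of the only easy branch in U.S. copyright term analysis
# (as of this writing): published in 1922 or earlier means public
# domain. Every later year requires the full flow chart.
def term_status(pub_year):
    if pub_year <= 1922:
        return "public domain"
    return "consult the 50-page flow chart"

print(term_status(1910))  # prints "public domain"
print(term_status(1950))
```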

The result is what’s been dubbed “The Missing 20th Century,” after it was brought to light by the striking research of Paul Heald, which shows copyright restrictions are tightly correlated with the lack of commercial availability of books. He analyzed the titles available in Amazon’s warehouses to find a steep drop-off in titles first published after 1923, which carries through until just the last few years. As Heald’s research shows, the number of books available from the 1850s is double the number available from 1950.

Despite what advocates of copyright term extensions like to say, the data suggests that after the first few years of a book’s publication, publishers as a group are much less willing to print a text that’s under copyright than one in the public domain.

The situation with newspapers is worse. After all, while books may see their value to readers taper off in the years after publication, for newspapers that same tapering happens in just days. Today’s newspaper issue may be incredibly valuable in the right hands, but yesterday’s is more likely to line bird cages or wrap fish than to end up preserved for posterity.

The big players keep their own archives. The New York Times, for example, makes articles available dating back to 1851. But that’s an incomplete solution for two major reasons. For one thing, it sets up a single point of failure that could allow catastrophic losses. Just last month, flooding threatened a priceless collection of photos in the New York Times archive; had those images been digitized and widely copied, no single flood or fire would pose a risk. But also, even a robust archive from a major publication like the Times can’t provide the kinds of insights that come from looking at a diverse collection from multiple different sources.

In the world of media journalism, we talk a lot about the future. But we can’t have a coherent conversation about that without thinking about the past and the present. And those thoughts, in turn, rely on access to the history that we’ve allowed to be locked up under effectively unlimited copyright restrictions or as orphan works.

And this issue is bigger than the entries in a particular contest, or the way today’s history students can explore the past. The Atlantic documented last month the near-total disappearance of a groundbreaking series of investigative journalism from just eight years back. If copyright continues to jeopardize the unrestricted ability of archivists and researchers to preserve and contextualize our history, how much will we lose?

Posted on Techdirt - 31 July 2015 @ 12:21pm

Taylor Swift's Streaming Rant Nearly Identical To Garth Brooks' Used CD Rant

The music business tends to repeat itself. Conversations that seem completely intertwined with new technologies mirror those over earlier developments. Read Adrian Johns’ Piracy, for example, and see how closely the file-sharing debate followed the one about sheet music a century earlier.

Even with that background, the parallels between Taylor Swift’s widely discussed comments about Apple Music earlier this year and Garth Brooks’ outspoken stance on used CD sales are striking. It’s hard to argue with Swift — she is, after all, a shrewd businesswoman, and who knows what the future holds — but the fact that Brooks’ fears proved so unfounded takes some of the wind out of her sails. We may be at the end of history, and today’s problems might be totally unlike the ones we faced before, but probably not.

Here’s an excerpt of what Swift said about Apple’s free trial:

I’m sure you are aware that Apple Music will be offering a free 3 month trial to anyone who signs up for the service. I’m not sure you know that Apple Music will not be paying writers, producers, or artists for those three months. I find it to be shocking, disappointing, and completely unlike this historically progressive and generous company.

This is not about me. Thankfully I am on my fifth album and can support myself, my band, crew, and entire management team by playing live shows. This is about the new artist or band that has just released their first single and will not be paid for its success. This is about the young songwriter who just got his or her first cut and thought that the royalties from that would get them out of debt. This is about the producer who works tirelessly to innovate and create, just like the innovators and creators at Apple are pioneering in their field — but will not get paid for a quarter of a year’s worth of plays on his or her songs.

And here’s a journalist from the Seattle Post-Intelligencer paraphrasing Brooks’ comments at his sold-out arena concert, a few months after announcing he would only be selling his new record at stores that did not carry used CDs.

Brooks said that because no royalties are paid on the sale of used CDs, writers, labels, publishers and artists were being cheated. He said he would only supply chains that sell used CDs with his cassettes, and hinted that he might be working on another “format” to thwart such sales.

Brooks said he does not need any money, but lesser-known artists could suffer if secondhand CD sales take off. If used CD sales were to go into massive retail, he said, it would severely affect people in the recording industry, creating a sales loop that would profit only stores but not the creators, publishers and artists.

CD retailers, meanwhile, have argued that the cost of new CDs is too high for young buyers, and that selling used CDs exposes an artist’s music to different audiences.

For both Swift and Brooks — each among the best-selling acts of their generation — an emerging marketplace that makes music more accessible — but less well-compensated — was worth speaking out about. They both note that it’s not about them, but about the principle, and that the unpaid exposure would hurt new musicians. Both point to the middleman’s profits as an obvious evil.

To my mind, both artists are mistaken about the value of exposure and discoverability. Tim O’Reilly’s observation that obscurity is a greater threat to the emerging artist than piracy remains true; it’s also true that obscurity is a greater threat than used record sales, free trials, and most everything else.

But on the other counts, too, Garth Brooks was wrong. Used CD sales didn’t undermine the music industry and they didn’t keep new artists from finding audiences.

We know this because his plan to sell only through certain CD stores failed, amidst anti-trust investigations into his record label.

Taylor Swift was, at least narrowly, right. Apple Music should’ve been paying royalties for its free trials all along. But elsewhere, her skepticism about streaming and business models that include “free” might not be well placed. Unfortunately, because music licensing in this space is fundamentally more of a permissions culture than selling plastic discs was, we may never find out.

Reposted from parker higgins dot net

Posted on Techdirt - 27 April 2015 @ 01:39pm

The US Government Should Release These 7,584 Fruit Paintings

The federal government is sitting on 7,584 historical agricultural watercolor paintings that it should make freely available to the public today. Currently, people have access only to low-quality previews of the images; the United States Department of Agriculture, where the archive is held, should serve the public interest by making the entire collection of high quality scans free for all.

The USDA’s National Agricultural Library hosts the Pomological Watercolor Collection, which contains images of different varieties of fruits and nuts, commissioned between 1886 and 1942.

They’re remarkable as art, and also have serious scientific importance: they are some of the only documentation, for example, of thousands of apple types that no longer exist. The USDA has called the Pomological Watercolor Collection “Perhaps the most attractive as well as historically important of NAL’s treasures,” and it was cited just this week in a Washington State University article about apple preservation efforts.

The public should have access to these images, and that access should be automatic and unrestricted. Fortunately, that is technically possible: the USDA, through a grant from an environmental non-profit called The Ceres Trust, went through a multi-year digitization effort and now has high-quality scans of every image. However, members of the public can currently only view low-resolution versions online, can only request up to three high-quality scans free of charge, and must pay $10 per file beyond that.

And though the order page touts the fact that a portion of proceeds will go to conservation efforts, the numbers just don’t add up. I suspected that conservation costs are orders of magnitude higher than reproduction revenues, so I asked. Through a FOIA request to the USDA, I obtained the digitization project report, as well as a breakdown of the last three and a half years of revenues that the collection has generated.

Digitizing the images cost $288,442. Since the collection went online in 2011, members of the public have ordered just 81 images, for a total of $565. That relatively tiny amount simply cannot justify the cost to the public of keeping these images behind a paywall.
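The back-of-the-envelope math on those FOIA numbers makes the point starkly:

```python
# Figures from the FOIA response reported above.
digitization_cost = 288_442   # dollars, total scanning project
revenue = 565                 # dollars from 81 image orders since 2011
orders = 81

print(f"Recouped: {revenue / digitization_cost:.2%} of the scanning cost")
print(f"Average per ordered image: ${revenue / orders:.2f}")
```

Sales have recovered a fraction of one percent of what digitization cost, which is hard to square with keeping the scans behind a paywall.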

There’s no question that these paintings, if made more available, could be creating value for the public. High quality images could be used in printed teaching materials, which can spur conservation efforts and spark agricultural research interests in students. They could illustrate relevant articles on Wikipedia, providing historical context from over a hundred years of agriculture. The high quality scans could be examined closely by independent researchers to turn up new information.

The collection could even expand if it is accessible enough, as the National Agricultural Library described in its own report: one researcher, on hearing about the digitization project, contributed seven contemporaneous paintings of blueberries that had been stored in his lab.

Again, here’s the USDA’s own words on the importance of public access to the collection:

With today’s growing interest in heirloom varieties and others that are no longer commonly grown, the collection is an invaluable storehouse of fruit knowledge and history.

That knowledge is better served if the public has access to the scans, and it’s possible to do that today. If the cost of hosting and bandwidth is an issue, the Internet Archive and Wikimedia Commons would almost certainly be willing to host even the highest resolution scans.

Reposted from parker higgins dot net

Posted on Techdirt - 25 March 2015 @ 12:41pm

New York Times Turns Ads Off On 'Sensitive' Stories

I was looking at the HTML source of a recent New York Times story about a tragic plane accident — 150 people feared dead — and noticed this meta tag in its head:

<meta property="ad_sensitivity" content="noads" />

There are no Google results for the tag, so it looks like it hasn’t been documented, but it seems like a pretty low-tech way to keep possibly insensitive ads off a very sensitive story — an admirable effort. It’s interesting in part because it’s almost an acknowledgement that ads are invasive and uncomfortable; they cross over into the intolerable range when we’re emotionally vulnerable from a tragic story. Advertisers know this too, and the New York Times might stipulate in contracts that it will try to keep ads off sensitive pages.

If I had to guess, I’d say this is probably a manual switch in their CMS. It would be interesting to see what sorts of stories get dubbed unfit for ads, though scraping enough article pages to get that information might raise some eyebrows on that side of the paywall.

(This information, by the way, doesn’t have to be exposed publicly for the Times to keep ads off the pages it serves. But it could help with debugging, and it could certainly be useful for syndication, and maybe even for display in official apps.)
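For anyone curious, checking a page for the tag takes only a few lines. This sketch uses Python's standard-library HTML parser and assumes the tag appears exactly as quoted above; it is an illustration, not anything the Times publishes.

```python
# Sketch: detect the (apparently undocumented) ad_sensitivity tag.
from html.parser import HTMLParser

class AdSensitivityParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.no_ads = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser routes self-closing tags like <meta ... /> here too.
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property") == "ad_sensitivity":
            self.no_ads = attrs.get("content") == "noads"

def page_suppresses_ads(html):
    parser = AdSensitivityParser()
    parser.feed(html)
    return parser.no_ads
```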

This isn’t the first example of companies declining to advertise against tragedies. Five years ago a user documented that Gmail doesn’t show ads on emails that contain words from a certain blacklist, at a certain density — one sensitive word per 167 “normal” words.
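That reported heuristic is easy to sketch. The one-per-167 threshold comes from that user's report; the word list below is purely illustrative, since the real blacklist was never published.

```python
# Sketch of the reported Gmail heuristic: suppress ads when
# "sensitive" words exceed one per 167 words of the message.
SENSITIVE_WORDS = {"funeral", "tragedy", "murder"}  # hypothetical list

def show_ads(text, threshold=1 / 167):
    words = text.lower().split()
    if not words:
        return True
    hits = sum(word.strip(".,!?") in SENSITIVE_WORDS for word in words)
    return hits / len(words) < threshold
```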

Reposted from parker higgins dot net

Posted on Techdirt - 18 September 2014 @ 09:04am

New Company Transparency Reports Help Quantify DMCA Abuse

It’s a sign of the times that online companies’ transparency reports are starting to include a new section: the Hall of Shame. Automattic, the company behind WordPress, is the latest to do so, highlighting examples of copyright and trademark overreach by prominent figures like Janet Jackson, as well as more local businesses, organizations, and individuals attempting to silence criticism and other noninfringing speech. It even highlighted one example we’ve written about — and even dedicated a short video to — in which a baked goods company misused trademark to go after bloggers talking about derby pie, a common regional dessert in the Southern U.S. And WordPress is only the latest company to name-and-shame takedown abusers — the Wikimedia Foundation made a major splash last month when it highlighted the copyright saga behind a notorious monkey selfie.

We’ve kept up a Takedown Hall of Shame of our own for years. But these cases of egregious abuse tell only part of the story, and transparency reports also help call attention to a more subtle issue: a large percentage of takedown requests that do not result in content removal. That is to say, services routinely receive large numbers of bogus takedown demands.

There’s a real trend here. According to the latest numbers, Twitter does not comply with nearly 1 in 4 takedown notices it receives; Wikimedia complies with less than half; and WordPress complies with less than two-thirds. Each organization explains in its report that the notices with which they don’t comply are either incomplete or abusive.

When companies choose not to take down content because the notice is abusive, that’s a way of standing with their users, and it’s a significant decision. The bargain in the DMCA is straightforward: as long as services comply with takedown notices that meet the statutory requirements, they’re granted a “safe harbor” from any legal liability for copyright infringement that might otherwise arise from their hosting of user content. This has led some companies to take the short-sighted approach of removing all content for which they receive a takedown request, even if the request is defective or the content is obviously non-infringing. Since the law was enacted a decade and a half ago, some people have used the takedown mechanism as a censorship tool — sending careless or fraudulent notices in an attempt to silence lawful speech, and hoping that online services will comply just to stay in that safe harbor. And although the DMCA includes a mechanism to punish certain fraudulent takedown requests, the provision has proven difficult to enforce.

In other words, there’s a lopsided legal incentive that frequently results in services taking down non-infringing speech. The companies that stand up to bogus requests deserve kudos for doing so, and transparency reports are a good place to highlight that user-friendly behavior while also providing data about how often people are trying to abuse the DMCA.

The data from the transparency reports also supports the common understanding that users send counter-notices in only a relatively tiny number of cases. For example, Automattic reports that it got only 44 counter-notices for the 3,630 takedown notices that it received. After a short waiting period, a company can restore content for which it has received a valid counter-notice without losing its safe harbor protection. This is an important way for users to restore their non-infringing speech to public view.
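For scale, the counter-notice rate implied by Automattic's figures works out like this:

```python
# Rates implied by the transparency-report numbers quoted above.
takedown_notices = 3_630   # received by Automattic
counter_notices = 44       # filed by users in response

rate = counter_notices / takedown_notices
print(f"Counter-notices followed roughly {rate:.1%} of takedown notices")
```

That is on the order of one counter-notice per hundred takedowns, which frames the argument in the next paragraph.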

Supporters of the status quo argue that the low rate of counter-notice means that most notices legitimately target infringement. But that suggestion doesn’t take into account how confusing and difficult the counter-notice process can be, and the fact that many users are intimidated by the requirement that they agree to be sued in federal court in case the rightsholder wants to claim copyright infringement (even though this is already true for users who are subject to the jurisdiction of U.S. federal courts). Users also fear the massively disproportionate statutory damages available to copyright claimants and the significant expense of defending even a winning copyright case, and allow themselves to be silenced rather than facing the expense and risk of vindicating their speech in courts.

The notice-and-takedown process is supposed to balance the interests of rightsholders, online platforms, and the general public, and transparency reports are an important mechanism to verify that’s happening. The numbers paint a troubling picture. Across the Web, we’ve seen report after report that the number of takedown notices sent to online services is skyrocketing. These three latest transparency reports support that notion, with Twitter in particular reporting a nearly 40% increase in just six months.

Taken together with the number of bogus takedowns and the rarity of counter-notices, it’s clear that the task of defending free speech is increasingly falling on online services.  The notice-and-takedown system unfortunately provides yet another example of how aggressive mechanisms of copyright enforcement are abused to censor legitimate content. We applaud those service providers who stand up to this abuse on behalf of their users.

Cross-posted from Electronic Frontier Foundation’s Deeplinks blog.

Posted on Techdirt - 25 July 2014 @ 07:46am

Epiphany: Rep. John Conyers Realizes Mid-Hearing That His Copyright Position Contradicts His Stand Against Overcriminalization

It’s hard to imagine looking at the absurdly excessive copyright penalties on the books and thinking, “Hey, maybe these should be a bit higher.” But Congress has shown itself to be exceedingly imaginative when it comes to cranking up copyright, so perhaps it is no surprise that in yesterday’s hearing on those penalties (covering statutory damages and criminal sanctions), a number of witnesses and Representatives alike seemed to think that those remedies are insufficient.

More surprising, though, was an unexpected moment of clarity from Michigan’s Rep. John Conyers, a staple of the Judiciary Committee’s reform hearing process and a reliable supporter of ratcheting up copyright enforcement capabilities. Conyers broke the first rule of copyright exceptionalism club by actually talking about the fact that this discussion would seem pretty unreasonable, even by Congressional standards, in areas outside of copyright.

Specifically, Conyers referred to the very real problem of overcriminalization, which absolutely afflicts copyright policy. This, after all, is the area of law that has made us an “Infringement Nation,” routinely racking up millions of dollars in hypothetical damages over the course of an average day. Conyers generally pushes back against this overcriminalization, but here he is arguing for misdemeanors to be made into felonies. What gives?

If you can’t see that, here’s the key clip, though it helps to watch the video:

Conyers: Mr. Assistant Attorney General, what else can we do besides addressing the felony streaming issue? It seems like… uh… once we get that going… uh… {long pause}… Well, it seems to me like there’s an underprosecution. Normally, I… {pause} come to the committee complaining about overcriminalization. {Looks around} And now I find myself in the awkward position of saying… uh… let’s make a felony of somebody’s misdemeanors. Can you give me some comfort in some way? {awkward smirk}

David Bitkower, the witness from the Department of Justice, basically says that from the DOJ’s perspective there is no overcriminalization problem, which is unsurprising. Then Nancy Wolff, a witness from the law firm of Cowan, DeBaets, Abrahams & Sheppard, adds that the ridiculously high damages help plaintiffs force defendants to settle. Finally, Public Knowledge’s Sherwin Siy notes that Conyers’s question was spot on: our current excessive penalties do encourage certain plaintiffs to pursue non-meritorious claims, and that’s something to be concerned about.

You can see on Conyers’s face that he was looking for some resolution to his cognitive dissonance, but he couldn’t find it. Copyright exceptionalism is simply inconsistent with fact-based policy, so when it comes time to reconcile the two, you’re going to have a bad time.

Let’s hope this moment was a lawmaker beginning to see the light. As EFF lays out in our brand new copyright whitepaper, “Collateral Damages”, excessive and unpredictable penalties can chill free speech and stifle innovation. On such an important issue, it’s encouraging to see lawmakers breaking from the standard script.

Posted on Techdirt - 28 May 2014 @ 08:55am

Accepting Amazon's DRM Makes It Impossible To Challenge Its Monopoly

Amazon was the target of some well-deserved criticism this past week for making the anti-customer move of suspending sales of books published by Hachette, reportedly as a hardball tactic in its ongoing negotiations over ebook revenue splits. In an excellent article, Mathew Ingram connects this with other recent bad behavior by Internet giants leveraging their monopolies. Others have made the connection between this move and a similar one in 2010, when Amazon pulled Macmillan books off its digital shelves.

That dispute took place a little over four years ago, and ended with Amazon giving in and issuing a statement that people found a bit strange. Here’s a quote:

We want you to know that ultimately, however, we will have to capitulate and accept Macmillan’s terms because Macmillan has a monopoly over their own titles, and we will want to offer them to you even at prices we believe are needlessly high for e-books.

“Monopoly” was a funny choice of words there. The author John Scalzi, whose piece decrying Amazon’s actions at the time is still very much worth reading, memorably took issue:

And not only a forum comment, but a mystifyingly silly one: the bit in the comment about Amazon having no choice but to back down in the fight because “Macmillan has a monopoly over their own titles” was roundly mocked by authors, some of whom immediately started agitating against Amazon’s “monopoly” of the Kindle, or noted how terrible it was that Nabisco had a “monopoly” on Oreos.

Monopoly, of course, is economically the correct term. Publishers of books that are restricted by copyright have a set of exclusive rights granted to them by law. Their monopoly looks distinct from Amazon’s near-monopoly bookseller position, though, because it’s one agreed to in public policy. In a sense it is also more absolute, and less vulnerable to challenge, because it’s a legal monopoly, and not just a market monopoly.

To the extent Amazon has a monopoly on selling paper books, then, it could be challenged not just by legal action (such as an antitrust investigation) but by competition from other businesses. There would be some extreme logistical difficulties, and disparities created by economies of scale that might be impossible to overcome, but in principle other businesses can compete for Amazon’s market position on physical books.

Copyright behaves differently: when it comes to Macmillan or Hachette’s books, nobody may undercut prices by making production more efficient, or design prettier covers, or edit the text into a more compelling presentation. Where that’s a good thing, it’s because we’ve reached it by public policy. We’ve granted copyright holders an inviolable (if limited) legal monopoly because we as a society like the results.1

A very real danger, though, is if Amazon can take the challengeable market monopoly it has put together, and ratchet it into an unchallengeable legal monopoly. That is exactly what DRM does.

By putting DRM on its digital products—ebooks and audio books—Amazon gets the legal backing of the Digital Millennium Copyright Act’s anti-circumvention restrictions on its products. This isn’t for the advancement of public policy goals, either; Amazon gets to create the private law it wants to be enforced. Thanks to DRM, Kindle users are no longer free to take their business elsewhere—if you want a Kindle book, you must purchase it from Amazon.

Fortunately, Kindle software can, for now, read other non-restricted formats. But that functionality is limited, and not guaranteed to stick around. And it’s a one-way street: other software and hardware may not read ebooks in the Kindle format. Customers who amass a Kindle library will find no compatible non-Amazon reader. The fact that individual users can usually circumvent the DRM doesn’t help businesses trying to compete in that space, either.

Amazon has a lot of fans, and they tend to attribute its rise as a bookseller to its aggressively pro-customer stance. If it drops that stance, even major fans would probably agree that it no longer deserves the throne. Unfortunately, DRM takes the conditional monopoly that customers like (you get to be the largest bookseller so long as you’re good to your customers) and replaces it with an unconditional one (you once achieved a monopoly, and that is now permanent).

Last week’s sketchy move against Hachette looks like a willingness to throw customers under the bus in the name of better business deals. If publishers continue to insist on DRM, and if customers continue to allow it, we lose our ability to object.

  1. Of course, that is only as true as copyright policy reflects the will of the public, which it doesn’t, but it’s something to aspire to.

Republished from parker higgins dot net
