elliot.harmon's Techdirt Profile


Posted on Techdirt - 18 April 2019 @ 12:00pm

Don't Force Web Platforms To Silence Innocent People

The U.S. House Judiciary Committee held a hearing last week to discuss the spread of white nationalism, online and offline. The hearing tackled hard questions about how online platforms respond to extremism online and what role, if any, lawmakers should play. The desire for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly in the wake of the recent massacre in New Zealand. But unfortunately, looking to Silicon Valley to be the speech police may do more harm than good.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors. Similarly, when Congress considers regulating speech on online platforms, it must consider both the First Amendment implications and how its regulations might unintentionally encourage platforms to silence innocent people.

Again and again, we've seen attempts to more aggressively stamp out hate and extremism online backfire in colossal ways. We've seen state actors abuse flagging systems in order to silence their political enemies. We've seen platforms inadvertently censor the work of journalists and activists attempting to document human rights atrocities.

But there's a lot platforms can do right now, starting with more transparency and visibility into platforms' moderation policies. Platforms ought to tell the public what types of unwanted content they are attempting to screen, how they do that screening, and what safeguards are in place to make sure that innocent people (especially those trying to document or respond to violence) aren't also censored. Rep. Pramila Jayapal urged the witnesses from Google and Facebook to share not just better reports of content removals, but also internal policies and training materials for moderators.

Better transparency is not only crucial for helping to minimize the number of people silenced unintentionally; it's also essential for those working to study and fight hate groups. As the Anti-Defamation League's Eileen Hershenov noted:

To the tech companies, I would say that there is no definition of methodologies and measures and the impact. [...] We don't have enough information and they don't share the data [we need] to go against this radicalization and to counter it.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines to help align platform moderation practices to human rights and civil liberties principles. The Principles ask platforms

  • to be honest with the public about how many posts and accounts they remove,
  • to give notice to users whose content has been removed, explaining what was removed and under what rule, and
  • to give those users a meaningful opportunity to appeal the decision.

Hershenov also cautioned lawmakers about the dangers of heavy-handed platform moderation, pointing out that social media offers a useful view for civil society and the public into how and where hate groups organize: "We do have to be careful about whether in taking stuff off of the web where we can find it, we push things underground where neither law enforcement nor civil society can prevent and deradicalize."

Before they try to pass laws to remove hate speech from the Internet, members of Congress should tread carefully. Such laws risk pushing platforms toward a more highly filtered Internet, silencing far more people than intended. As Supreme Court Justice Anthony Kennedy wrote in Matal v. Tam (PDF) in 2017, "A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all."

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 12 April 2019 @ 01:31pm

Platform Liability Doesn't — And Shouldn't — Depend On Content Moderation Practices

In April 2018, House Republicans held a hearing on the "Filtering Practices of Social Media Platforms" that focused on misguided claims that Internet platforms like Google, Twitter, and Facebook actively discriminate against conservative political viewpoints. Now, a year later, Senator Ted Cruz is taking the Senate down the same path: he led a hearing earlier this week on "Stifling Free Speech: Technological Censorship and the Public Discourse."

While we certainly agree that online platforms have created content moderation systems that remove speech, we don't see evidence of systemic political bias against conservatives. In fact, the voices that are silenced more often belong to already marginalized or less-powerful people.

Given the lack of evidence of intentional partisan bias, it seems likely that this hearing is intended to serve a different purpose: to build a case for making existing platform liability exemptions dependent on "politically neutral" content moderation practices. Indeed, Senator Cruz seems to think that's already the law. Questioning Facebook CEO Mark Zuckerberg last year, Cruz asserted that in order to enjoy important legal protections for free speech, online platforms must adhere to a standard of political neutrality in their moderation decisions. Fortunately for Internet users of all political persuasions, he's wrong.

Section 230, the law that protects online forums from many types of liability for their users' speech, does not go away when a platform decides to remove a piece of content, whether or not that choice is "politically neutral." In fact, Congress specifically intended to protect platforms' right to moderate content without fear of taking on undue liability for their users' posts. Under the First Amendment, platforms have the right to moderate their sites however they like, and under Section 230, they're additionally shielded from some types of liability for their users' activity. It's not one or the other. It's both.

In recent months, Sen. Cruz and a few of his colleagues have suggested that the rules should change, and that platforms should lose Section 230 protections if they aren't politically neutral. While such proposals might seem well-intentioned, it's easy to see how they would backfire. Faced with the impossible task of proving perfect neutrality, many platforms (especially those without the resources of Facebook or Google to defend themselves against litigation) would simply choose to curb potentially controversial discussion altogether and even refuse to host online communities devoted to minority views. We have already seen the impact FOSTA has had in eliminating online platforms where vulnerable people could connect with each other.

To be clear, Internet platforms do have a problem with over-censoring certain voices online. These choices can have a big impact on already marginalized communities in the U.S., as well as in countries such as Myanmar and China that don't enjoy First Amendment protections and where the ability to speak out against the government is often quashed. EFF and others have called for Internet companies to provide the public with real transparency about whose posts they're taking down and why. For example, platforms should provide users with real information about what they are taking down and a meaningful opportunity to appeal those decisions. Users need to know why some language is allowed in one post while the same language in a different post isn't. These and other suggestions are contained in the Santa Clara Principles, a proposal endorsed by more than 75 public interest groups around the world. Adopting these Principles would make a real difference in protecting people's right to speak online, and we hope at least some of the hearing's witnesses made that point.

Reposted from the EFF Deeplinks blog

Posted on Techdirt - 27 December 2017 @ 01:31pm

Diego Gomez Is Safe, But His Legal Battle Demonstrates How Copyright Policy Creates Dangers To Research

In 2011, Colombian graduate student Diego Gómez did something that hundreds of people do every day: he shared another student's Master's thesis with colleagues over the Internet. He didn't know that this simple, common act could put him in prison for years on a charge of criminal copyright infringement.

After a very long ordeal, we can breathe a sigh of relief: a Colombian appeals court has affirmed the lower court’s acquittal of Diego.

Diego’s case is a reminder of the dangers of overly restrictive copyright laws. While Diego is finally in the clear, extreme criminal penalties for copyright infringement continue to chill research, innovation, and creativity all over the world, especially in countries that don’t have broad exemptions and limitations to copyright, or the same protections for fair use that we have in the United States.

In another sense, though, the case is a sad indictment of copyright law and policy decisions in the U.S. Diego’s story is a reminder of the far-reaching, worldwide implications of the United States government’s copyright law and policy. We failed Diego.

How did we get to the point where a student can go to prison for eight years for sharing a paper on the Internet? The answer is pretty simple: Colombia has severe copyright penalties because the United States told its government to introduce them. The law Diego was tried under came with a sentencing requirement that was set in order to comply with a trade agreement with the U.S.

International trade agreements are almost never good news for people who think that copyright’s scope and duration should be limited. By establishing minimum requirements that all countries must meet in protecting copyrighted works, they effectively create a floor for copyright law. It’s easy for signing countries to enact more restrictive laws than the agreement prescribes, but difficult to create less restrictive law.

Those agreements almost never carry requirements that participating nations honor limitations on copyright like fair use or fair dealing rights. Just this week, a coalition of 25 conservative groups sent a letter to the U.S. Trade Representative (USTR) arguing against the inclusion of any provision in the North American Free Trade Agreement (NAFTA) that would require countries to include balanced copyright limitations and exceptions such as fair use, as EFF and other groups have suggested. Countries like Colombia essentially get the worst of both worlds: strong protection for large rights-holders and weak protection for their citizens’ rights.

As we've pointed out before, it's depressing that someone can risk prison time for sharing academic research anywhere in the world. If open access were the standard for scientific research, Diego would not have gotten in trouble at all. And once again, it's the actions of countries like the United States that are to blame. The U.S. government is one of the largest funders of scientific research in the world. If the United States were to adopt a gold open access standard for all of the research it funds (that is, if it required that research outputs be made available to the public immediately upon publication, with no embargo period), then academic publishers would be forced to adapt immediately, essentially setting open access as the worldwide default.

EFF is delighted that Diego can rest easy and focus on his research, but unfortunately, the global conditions exist to put researchers all over the world in similar situations. No one should face years in prison for the act of sharing academic research. Making the changes in law and policy to prevent stories like Diego’s from happening again is a goal we should all share.

Republished from EFF’s Deeplinks blog.

Posted on Techdirt - 8 December 2017 @ 11:56am

Internet Censorship Bills Won't Help Catch Sex Traffickers

In the most illuminating part of last week's House subcommittee hearing on the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865), Tennessee Bureau of Investigation special agent Russ Winkler explained how he uses online platforms, particularly Backpage, to fight online sex trafficking. Winkler painted a fascinating picture of agents on his team posing as johns, gaining trust with traffickers, and apprehending them. His testimony demonstrated how, with proper training and resources, law enforcement officers can navigate the online platforms where sex work takes place to find and stop traffickers, especially those trafficking children.

It was a rare moment of clarity in the debate over FOSTA and its sibling bill, the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). Since these bills were introduced, there’s been little discussion of how law enforcement officers use the online platforms that the bills would threaten and how SESTA and FOSTA would make it more difficult for law enforcement to do its work. Winkler made it crystal clear how heavily his work relies on online platforms: “We’ve conducted operations and investigations involving numerous perpetrators and victims. The one constant we encounter in our investigations is use of online platforms like Backpage.com by buyers and sellers of underage sex.”

There are some differences between SESTA and FOSTA, but their impact on the Internet would be the same. A website or other online platform could be liable under both civil and criminal law, at both the state and federal levels, for the sex trafficking activities of its users. Since it can be very difficult to determine whether a given posting online is in aid of sex trafficking, the bills would almost certainly force websites to become significantly more restrictive in what sorts of content they allow. Many victims of trafficking would likely be pushed off the Internet entirely, as well as sex workers who weren’t being trafficked.

Winkler didn't show much interest in the idea of targeting online intermediaries, and neither did fellow witness Derri Smith of End Slavery Tennessee. Understandably, their focus isn't on holding Internet companies liable for user-generated content; it's on prosecuting the traffickers themselves and getting trafficking victims out of horrific situations.

When Rep. Marsha Blackburn asked both Tennessee panelists what they need to successfully fight trafficking, neither panelist mentioned proposals like SESTA and FOSTA at all. They discussed more important measures aimed at finding and stopping traffickers and supporting survivors. Winkler referenced changes in state law “to make it more punishable for both buyers and sellers of sex acts with juveniles.”

Winkler isn’t the only person who’s tried to explain to Congress how law enforcement relies on online platforms to find and arrest sex traffickers. Numerous experts in trafficking have pointed out that the visibility of online platforms can both aid law enforcement in apprehending traffickers and provide safety to trafficking victims. Trafficking expert Alexandra Levy notes that the online platforms that FOSTA could undermine are the very platforms that law enforcement agencies rely on to fight trafficking:

While more visibility invites more business, it also increases the possibility that victims will be discovered by law enforcement, or anyone else looking for them. By extension, it also makes it more likely that the trafficker himself will be apprehended: exposure to customers necessarily means exposure to law enforcement.

Levy submitted a letter to the House Energy and Commerce Committee, Subcommittee on Communications and Technology, in advance of last week’s hearing, urging the Subcommittee not to go forward with a bill (.pdf) that would make it harder to apprehend traffickers and expose trafficking victims to more danger.

Freedom Network USA, the nation's largest network of frontline organizations working to reduce trafficking, agrees (.pdf): "Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims."

Four months after SESTA was introduced in Congress, and with SESTA and FOSTA's lists of cosponsors growing by the day, lawmakers continue to flock to these bills without questioning whether they provide a real solution to sex trafficking. These bills would do nothing to stop traffickers but would push marginalized voices off of the Internet, including those of trafficking victims themselves.

Reposted from EFF’s Deeplinks blog

Posted on Techdirt - 30 November 2017 @ 01:33pm

House Internet Censorship Bill Is Just Like The Senate Bill, Except Worse

SESTA and FOSTA Are Cut from the Same Cloth. Both Would Be Disastrous for Online Communities

There are two bills racing through Congress that would undermine your right to free expression online and threaten the online communities that we all rely on. The Stop Enabling Sex Traffickers Act (SESTA, S. 1693) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) might sound noble, but they would do nothing to fight sex traffickers. What they would do is force online web platforms to police their users’ activity much more stringently than ever before, silencing a lot of innocent voices in the process.

We’ve already written extensively about SESTA and the dangers it would pose to online communities, but as the House of Representatives considers moving on FOSTA, it’s time to reiterate that all of the major flaws in SESTA are in FOSTA too.

Section 230 Protects Online Communities. Don’t Weaken It.

Like SESTA, FOSTA would erode a law referred to as Section 230. Passed in 1996, Section 230 says that online platforms can’t be held liable for their users’ speech, except in certain circumstances. Without Section 230, it would be extremely risky to host other people’s speech online: one lawsuit could destroy your company. Most social media sites wouldn’t exist, or they’d look very different from the ones we enjoy today.

Section 230 strikes an important balance for when and how online platforms can be held liable for their users’ speech. Contrary to SESTA’s supporters’ claims, Section 230 does nothing to protect platforms that are directly involved with breaking federal criminal law. If an Internet company is directly contributing to unlawful activity, the Department of Justice can and should prosecute it.

Under FOSTA, a site would be on the hook if a court simply found that someone had used it for sex trafficking purposes. The law would force platforms to become much more restrictive in their moderation policies, which is likely to disproportionately silence marginalized groups.

FOSTA carves an even bigger hole out of Section 230 than SESTA does. It defines the state law exemption to Section 230 more broadly, applying it to “any State criminal statute” related to sex trafficking. State sex trafficking laws are notoriously inconsistent: in Alaska and Massachusetts, for example, statutes define trafficking so broadly that they don’t require any indication that someone was forced or coerced into sex work. FOSTA could open the door to litigation far beyond the sex trafficking activities it’s intended to target.

Broad Criminal Law Would Hurt Legitimate Communities

Like SESTA, FOSTA expands federal sex trafficking law to sweep in third parties that unknowingly facilitate sex trafficking (like web platforms), but FOSTA defines those third parties even more broadly than SESTA does, criminalizing conduct by "any person or entity and by any means that furthers or in any way aids or abets" sex trafficking. It even goes a step further by explicitly making it a crime to be a provider of an Internet service that was used for sex trafficking purposes, provided that you acted in "reckless disregard" of the possibility that your service could be used for trafficking (we've written already about the dangers of applying the "reckless disregard" standard to online intermediaries).

Remember, Congress already made it a federal crime to "advertise" sex trafficking online via the SAVE Act of 2015. No new law is necessary to prosecute platforms that knowingly facilitate sex trafficking ads. If the Department of Justice has failed to prosecute platforms that violate the SAVE Act, then lawmakers should demand an explanation. In the meantime, Congress shouldn't pass laws threatening every other online community.

Bottom Line: These Bills Go After the Wrong Targets

We’ve talked a lot about the damage that SESTA and FOSTA would do to speech and communities online. Just as important is what they would not do: fight sex trafficking.

SESTA and FOSTA are perfect examples of Congress choosing an easy target rather than the right target. It's easy to prosecute Internet companies, but Congress must do the serious work of understanding trafficking (its causes, its perpetrators, and the online tools law enforcement can use to fight it) and of crafting better solutions to find and punish traffickers.

Since SESTA and FOSTA were first introduced, many experts in sex trafficking have stepped forward to explain that these bills are the wrong solution: they would put victims of sex trafficking in much worse predicaments, moving them from the safety of the Internet to a dangerous street, where they are much less likely to get help.

It’s not pleasant to confront the dark realities of sex trafficking, but Congress must. Otherwise, it risks passing a bill that would harm the very victims it’s trying to help.

Reposted from EFF’s Deeplinks blog

Posted on Techdirt - 25 September 2017 @ 11:54am

Google Will Survive SESTA. Your Startup Might Not.

There was a shocking moment in this week's Senate Commerce Committee hearing on the Stop Enabling Sex Traffickers Act (SESTA). Prof. Eric Goldman had just pointed out that members of Congress should consider how the bill might affect hundreds of small Internet startups, not just giant companies like Google and Facebook. Will every startup have the resources to police its users' activity with the level of scrutiny that the new law would demand of them? "There is a large number of smaller players who don't have the same kind of infrastructure. And for them, they have to make the choice: can I afford to do the work that you're hoping they will do?"

Goldman was right: the greatest innovations in Internet services don't come from Google and Facebook; they come from small, fast-moving startups. SESTA would necessitate a huge investment in staff to filter users' activity as a company's user base grows, something that most startups in their early stages simply can't afford. That would severely hamper anyone's ability to launch a competitor to the big Internet players, giving users a lot less choice.

Sen. Richard Blumenthal's stunning response: "I believe that those outliers, and they are outliers, will be successfully prosecuted, civilly and criminally under this law."

Blumenthal is one of 30 cosponsors, and one of the loudest champions, of SESTA, a bill that would threaten online speech by forcing web platforms to police their members' messages more stringently than ever before. Normally, SESTA's proponents vastly understate the impact that the bill would have on online communities. But in that unusual moment of candor, Sen. Blumenthal seemed to lay bare his opinions about Internet startups: he thinks of them as unimportant outliers and would prefer that the new law put them out of business.

Let's make something clear: Google will survive SESTA. Much of the SESTA fight's media coverage has portrayed it as a battle between Google and Congress, which sadly misses the point. Large Internet companies may have the legal budgets to survive the massive increase in litigation and liability that SESTA would bring. They probably also have the budgets to implement a mix of automated filters and staff censors to comply with the law. Small startups are a different story.

Indeed, lawmakers should ask themselves whether SESTA would unintentionally reinforce large incumbent companies' advantages. Without the strong protections that allowed today's large Internet players to rise to prominence, startups would have a strong disincentive to grow. As soon as your user base grows beyond what your staff can directly police, your company becomes a huge liability.

But ultimately, the biggest casualty of SESTA won't be Google or startups; it will be the people pushed offline.

Many of SESTA's supporters suggest that it would be easy for web platforms of all sizes to implement automated filtering technologies they can trust to separate legitimate voices from criminal ones. But it's impossible to do that with anywhere near 100% accuracy. Given the extreme penalties for under-filtering, platforms would err in the opposite direction, removing legitimate voices from the Internet. As EFF Executive Director Cindy Cohn put it, "Again and again, when platforms clamp down on their users' speech, marginalized voices are the first to disappear."
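The arithmetic behind that concern is worth spelling out. Here is a quick back-of-the-envelope sketch in Python (every number is invented for illustration, not a measurement of any real platform or filter): when unlawful posts are rare, even a filter that is "99% accurate" removes far more innocent posts than guilty ones, simply because the pool of legitimate speech is so much larger.

```python
# Hypothetical illustration of the base-rate problem with automated
# filtering. All numbers below are invented for the example.

def filter_outcomes(total_posts, unlawful_rate, sensitivity, false_positive_rate):
    """Return (unlawful posts caught, innocent posts removed) for a filter."""
    unlawful = total_posts * unlawful_rate
    legitimate = total_posts - unlawful
    caught = unlawful * sensitivity                      # true positives
    wrongly_removed = legitimate * false_positive_rate   # false positives
    return caught, wrongly_removed

# Suppose 1 post in 10,000 is actually unlawful, and the filter is
# "99% accurate" both ways (99% sensitivity, 1% false-positive rate).
caught, wrongly_removed = filter_outcomes(
    total_posts=1_000_000,
    unlawful_rate=0.0001,
    sensitivity=0.99,
    false_positive_rate=0.01,
)
print(f"unlawful posts caught:  {caught:,.0f}")           # ~99
print(f"innocent posts removed: {wrongly_removed:,.0f}")  # ~10,000
```

In this toy scenario, roughly a hundred innocent posts are removed for every unlawful one caught, and cranking up the filter's sensitivity to avoid liability only makes that ratio worse.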

The sad irony of SESTA is that while its supporters claim that it will fight sex trafficking, trafficking victims are likely to be among the first people it would silence. And that silence could be deadly. According to Freedom Network USA, the largest network of anti-trafficking advocate organizations in the country (PDF), "Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims." Congress should think long and hard before passing a bill that would incentivize web platforms to silence those victims.

Internet startups would take a much greater hit from SESTA than large Internet firms would, but ultimately, those most impacted would be users themselves. As online platforms ratcheted up their patrolling of their users' speech, some voices would begin to disappear from the Internet. Tragically, some of those voices belong to the people most in need of the safety of online communities.

Republished from EFF’s Deeplinks blog

Posted on Techdirt - 1 June 2017 @ 03:42pm

Stupid Patent Of The Month: Ford Patents A Windshield

The Supreme Court's recent decision in Impression Products v. Lexmark International was a big win for individuals' right to repair and modify the products they own. While we're delighted by this decision, we expect manufacturers to attempt other methods of controlling the market for resale and repair. That's one reason we're giving this month's Stupid Patent of the Month award to Ford's patent on a vehicle windshield design.

D786,157 is a design patent assigned to a subsidiary of Ford Motor Company. While utility patents are issued for new and useful inventions, design patents cover non-functional, ornamental aspects of a product.

Unlike utility patents, design patents have only one claim and usually have little or no written description. The patent only covers the non-functional design of a certain product. But design and utility patents are alike in an important way: both are intended to reward novelty. According to U.S. law, the Patent Office should issue design patents only for sufficiently new and original designs. By that test alone, it's easy to see that the windshield patent should never have been issued.

Why did Ford apply for the patent on its windshield design? One possible reason is that it's the automotive industry's latest attempt to control the market for repair. If the shape of your windshield is patented by Ford, then no one else can replace it without risking costly patent litigation.

Ford has a troublesome history with independent repair shops: in 2015, it sued the manufacturer of an independent diagnostics tool under Section 1201 of the Digital Millennium Copyright Act, the infamous law that makes it illegal to circumvent digital locks on products you own. Later in 2015, the Librarian of Congress granted an exemption to 1201 for some forms of auto repair, but manufacturers have continued to seek out creative ways to close off the market, whether through copyright, contract clauses, or patents.

In the Supreme Court's Lexmark opinion, Chief Justice John Roberts specifically noted the danger of automobile manufacturers shutting out competition in the repair space:

Take a shop that restores and sells used cars. The business works because the shop can rest assured that, so long as those bringing in the cars own them, the shop is free to repair and resell those vehicles. That smooth flow of commerce would sputter if companies that make the thousands of parts that go into a vehicle could keep their patent rights after the first sale. Those companies might, for instance, restrict resale rights and sue the shop owner for patent infringement. And even if they refrained from imposing such restrictions, the very threat of patent liability would force the shop to invest in efforts to protect itself from hidden lawsuits.

If the Patent Office continues to issue stupid design patents like Ford’s windshield patent, it risks giving manufacturers carte blanche to decide who can repair their products. And customers will pay the price.

Republished from the EFF’s Stupid Patent of the Month series.

Posted on Techdirt - 31 August 2016 @ 04:13pm

Stupid Patent Of The Month: Elsevier Patents Online Peer Review

On August 30, 2016, the Patent Office issued U.S. Patent No. 9,430,468, titled "Online peer review and method." The owner of this patent is none other than Elsevier, the giant academic publisher. When it first applied for the patent, Elsevier sought very broad claims that could have covered a wide range of online peer review. Fortunately, by the time the patent actually issued, its claims had been narrowed significantly. So, as a practical matter, the patent will be difficult to enforce. But we still think the patent is stupid, invalid, and an indictment of the system.

Before discussing the patent, it is worth considering why Elsevier might want a government-granted monopoly on methods of peer review. Elsevier owns more than 2,000 academic journals. It charges huge fees and sometimes imposes bundling requirements whereby universities that want certain high-profile journals must buy a package including other publications. Universities, libraries, and researchers are increasingly questioning whether this model makes sense. After all, universities usually pay the salaries of both the researchers who write the papers and the referees who conduct peer review. Elsevier's business model has been compared to a restaurant where the customers bring the ingredients, do all the cooking, and then get hit with a $10,000 bill.

The rise in wariness of Elsevier’s business model correlates with the rise in popularity and acceptance of open access publishing. Dozens of universities have adopted open access policies mandating or recommending that researchers make their papers available to the public, either by publishing them in open access journals or by archiving them after publication in institutional repositories. In 2013, President Obama mandated that federally funded research be made available to the public no later than a year after publication, and it’s likely that Congress will lock that policy into law.

Facing an evolving landscape, Elsevier has sought other ways to reinforce its control of publishing. The company has tried to stop researchers from sharing their own papers in institutional repositories, and entered an endless legal battle with rogue repositories Sci-Hub and LibGen. Again and again, when confronted with the changing face of academic publishing, Elsevier resorts to takedowns and litigation rather than reevaluating or modernizing its business model.

Elsevier recently acquired SSRN, the beloved preprints repository for the social sciences and humanities. There are early signs that it will be a poor steward of SSRN. Together, the SSRN acquisition and this month’s stupid patent present a troubling vision of Elsevier’s new strategy: if you can’t control the content anymore, then assert control over the infrastructures of scholarly publishing itself.

Elsevier filed its patent application on June 28, 2012. The description of the invention is lengthy, but it is essentially a description of the ordinary process of peer review, performed on a computer. For example, it includes a detailed discussion of setting up user accounts, requiring new users to pass a CAPTCHA test, checking to see if a new user's email address is already associated with an account, receiving submissions, reviewing submissions, sending submissions back for corrections, and so on.

The patent departs slightly from typical peer review in its discussion of what it calls a “waterfall process.” This is “the transfer of submitted articles from one journal to another journal.” In other words, authors who are rejected by one journal are given an opportunity to immediately submit somewhere else. The text of the patent suggests that Elsevier believed that this waterfall process was its novel contribution. But the waterfall idea was not new in 2012. The process had been written about since at least 2009 and is often referred to as “cascading review.”
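For a sense of how generic the idea is, here is a minimal sketch of cascading review in Python. This is our own illustration of the concept described above, not code from the patent or from Elsevier; the journal names and the toy review function are hypothetical.

```python
# A deliberately generic sketch of "waterfall" (cascading) review:
# a rejected submission is offered to the next journal in line.
# Illustration only; names and the review stand-in are hypothetical.

JOURNALS = ["Journal A", "Journal B", "Journal C"]  # most to least selective

def submit_with_waterfall(manuscript, journals, review):
    """Offer the manuscript to each journal in turn until one accepts it."""
    for journal in journals:
        if review(manuscript, journal) == "accept":
            return journal
        # Rejected here, so the submission "waterfalls" to the next journal.
    return None  # rejected everywhere

def toy_review(manuscript, journal):
    # Stand-in for editors and referees; only the least selective accepts.
    return "accept" if journal == "Journal C" else "reject"

print(submit_with_waterfall({"title": "Example paper"}, JOURNALS, toy_review))
# -> Journal C
```

That the whole "invention" fits comfortably in a dozen lines of ordinary code is a good indication of how little about it was new in 2012.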

The patent examiner rejected Elsevier’s application three times. But, taking advantage of the patent system’s unlimited do-overs, Elsevier amended its claims by adding new limitations and narrowing the scope of its patent. Eventually, the examiner granted the application. The issued claims include many steps. Some of these steps, like “receive an author-submitted article,” would be quite hard to avoid. Others are less essential. For example, the claims require automatically comparing a submission to previously published articles and using that data to recommend a particular journal as the best place to send the submission. So it would be an exaggeration to suggest the patent locks up all online peer review.
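To make that limitation concrete, here is a hedged sketch of what such a compare-and-recommend step could look like, using plain bag-of-words cosine similarity. It is a generic reconstruction under our own assumptions, not the method the patent actually describes, and the journal names and corpus are made up.

```python
# Generic sketch of "compare a submission to published articles and
# recommend a journal." An assumption-laden reconstruction, not the
# patent's actual method; corpus contents are hypothetical.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend_journal(submission: str, corpus: dict) -> str:
    """Recommend the journal whose published articles look most similar."""
    sub_words = Counter(submission.lower().split())
    best_journal, best_score = None, -1.0
    for journal, articles in corpus.items():
        score = max(cosine(sub_words, Counter(a.lower().split())) for a in articles)
        if score > best_score:
            best_journal, best_score = journal, score
    return best_journal

corpus = {
    "Journal of Peer Review Studies": ["peer review and editorial workflow"],
    "Computational Linguistics Letters": ["parsing corpora with neural models"],
}
print(recommend_journal("a study of editorial peer review", corpus))
# -> Journal of Peer Review Studies
```

A system that simply skips this recommendation step would likely fall outside the issued claims entirely, which is part of why the narrowed patent is so hard to enforce.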

We hope that Elsevier will not be aggressive in its own interpretation of the patent’s scope. Unfortunately, its early statements suggest it does take an expansive view of the patent. For example, an Elsevier representative tweeted: “There is no need for concern regarding the patent. It’s simply meant to protect our own proprietary waterfall system from being copied.” But the waterfall system, aka cascading peer review, was known years before Elsevier filed its patent application. It cannot claim to own that process.

Ultimately, even though the patent was narrowed, it is still a very bad patent. It is similar to Amazon's patent on white-background photography, where narrowed but still obvious claims were allowed. Further, Elsevier's patent would face a significant challenge under Alice v. CLS Bank, where the Supreme Court ruled that abstract ideas do not become eligible for a patent simply because they are implemented on a generic computer. To our dismay, the Patent Office did not even raise Alice v. CLS Bank, even though that case was handed down more than two years before this patent issued. Elsevier's patent is another illustration of why we still need fundamental patent reform.

Reposted from the EFF’s Stupid Patent of the Month series.
