Security Researcher: Recent CFAA Changes Won’t Keep Researchers From Being Prosecuted
from the thanks-for-your-help,-they-prosecuted dept
The people who are here to help are still in harm’s way. The Supreme Court may have mitigated a bit of this damage in its 2021 Van Buren decision, but its narrowed reading of the Computer Fraud and Abuse Act’s (CFAA) language means more on paper than it does in real life. The decision merely suggested CFAA cases should only target criminal hacking efforts, while leaving the definition of “criminal” wide open. That allows the law to remain a tool of abuse for private companies that refuse to fix problems but feel justified in suing security researchers for exposing unfixed security flaws.
The DOJ has also recently narrowed its interpretation of the CFAA in hopes of punishing fewer security researchers and more actual criminals. But this policy change allows the DOJ to exercise its discretion when it comes to pursuing criminal charges. Even if the DOJ shows restraint, its internal change doesn’t affect the private sector, which can still sue researchers in court over perceived damages related to their exposure of security flaws or other unexpected uses of their services.
Security researcher Riana Pfefferkorn, who has written for Techdirt occasionally, recently published a paper [PDF] detailing how recent events (the Supreme Court decision, in particular) haven’t really eliminated the threat posed to researchers who work to make the internet safe for everyone.
The paper shows the messenger can still be shot with alarming frequency, despite DOJ policy changes and the Supreme Court’s ruling. All anyone needs to do is describe reported breaches and flaws as a “loss.” And that will allow private entities to bring CFAA lawsuits and perhaps encourage the DOJ to get involved, despite its promise to steer clear of cases that don’t appear to involve malicious hacking.
Van Buren did not fully dissipate the legal risks the CFAA has long posed to a particular community: people who engage in good-faith cybersecurity research. Discovering and reporting security vulnerabilities in software and hardware risks legal action from vendors displeased with unflattering revelations about their products’ flaws. Research activities have even led to criminal investigations at times. Although Van Buren narrowed the CFAA’s scope and prompted reforms in federal criminal charging policy, researchers continue to face some legal exposure. The CFAA still lets litigious vendors “shoot the messenger” by suing over security research that did them no harm. Spending just $5,000 addressing a vulnerability is sufficient to allow the vendor to sue the researcher who reported it, because such remediation costs qualify as “loss” even in courts that read that term narrowly.
$5,000 is nothing when it comes to fixing security flaws. That amount could be consumed by corporate lawyers trying to compose a press release in response to disclosed vulnerabilities. For services with thousands of users, the tech equivalent of Hollywood accounting could be deployed to portray a momentary inconvenience as a catastrophic hit to a company’s profitability. $5,000 is a rounding error masquerading as a cause of action.
Whatever the DOJ does voluntarily doesn’t restrain the private sector. Unfortunately, thanks to the low bar for claimed damages, neither does the 2021 Supreme Court decision. The law itself remains unchanged, and its vague wording means pain can still be inflicted on people who are just trying to do the right thing.
The law is so broad that it can be read to prohibit not just malicious computer intrusions and destruction, but also research that aims in good faith to improve the state of computer security by finding digital security vulnerabilities and reporting them to the product vendors.
The CFAA is a weapon, even if the legislators who wrote it never intended it to be one. In practice, it allows companies to get litigious when they’re not the ones discovering vulnerabilities in their own products and services.
For a vendor that finds and patches its own bugs, there is nobody to sue; repairs are part of the cost of doing business. Yet, if a vulnerability is found and reported by an outsider rather than an insider, the CFAA lets a vendor externalize its remediation costs onto the outsider, even where the outsider has done no damage to the vendor’s computer systems.
This turns the CFAA into a tool of revenge. Companies embarrassed by security breaches or exposed as being unwilling to address concerns responsibly reported to them by researchers turn to the courts to extract their pound of flesh from researchers who did nothing but alert them to existing problems.
As Pfefferkorn points out, the courts can’t protect researchers because the law can be read as defining researchers’ work as criminal violations. The DOJ can’t protect researchers because its internal changes only suggest the DOJ steer clear of prosecuting researchers who have acted in “good faith.” Good faith is in the eye of the beholder, and there’s no reason to believe a company with an effective set of lobbyists couldn’t talk the DOJ into going after good-faith efforts.
Bug bounty programs offer little solace. Much like the DOJ’s internal policy alterations, bug bounty programs are still considered a form of largesse. If companies don’t like how a researcher found or reported a bug, the bounty program becomes a ploughshare hammered into a sword.
The advent of VDPs [vulnerability disclosure programs] and bug bounties has in some respects only perpetuated the problem of researchers bearing liability by enabling vendors to control outside research into their products while providing little legal assurance to the researcher in return. The terms of these programs are often poorly drafted, voluminous, and impose onerous requirements on researchers, making compliance difficult. At the same time, these terms often do not contain strong contractual protections from liability for researchers, and indeed tend to allocate legal risk to the participant.
All of this simply makes things worse… for everybody.
As a result of this hostile legal environment, good-faith researchers have been scared to undertake research projects that might expose them to liability. This is bad news for the rest of us.
The DOJ’s change is welcome. But its effectiveness is undercut by a whole bunch of things, including the fact that its CFAA focus does nothing to deter bullshit lawsuits and prosecutions under state computer crime laws.
The DOJ’s policy is undeniably an important step forward in restoring trust between the security community and the authorities charged with protecting the public. Nevertheless, it cannot fully assuage researchers’ fears. For one thing, this is a non-binding policy, not a law. Even if charging good-faith researchers is disfavored, a prosecutor would still have the discretion to do so. Additionally, the policy does not forbid investigating researchers over their work. Nor could it: after all, a determination that particular research counts as good faith (and so the researcher should be let off the hook) will surely require some amount of government scrutiny. Researchers may reasonably wonder how intrusive that process might be. Finally, the DOJ policy has no effect on prosecutions under state-level anti-hacking laws. State laws remain a source of potential criminal liability for security research.
Case in point: the Missouri governor’s decision to pursue criminal charges against a journalist who did nothing more than point out a security flaw in a government website.
So… how does this get fixed? Pfefferkorn’s paper offers several suggestions.
First, there needs to be a clear legal definition of “good faith,” as well as safe harbor protections for researchers who believe they have acted in good faith. This would involve giving affected entities time to respond to reported breaches, as well as providing a legal shield researchers could immediately raise, whether facing civil or criminal charges. This disclosure delay would not be indefinite: companies should be obliged to fix problems as soon as possible, rather than slow-walk a response in hopes of pursuing a lawsuit when the (still-unfixed) security flaw is made public. Researchers Pfefferkorn spoke to suggest a 24-48 hour waiting period.
Safe harbor presents its own problems. Read too narrowly, it will result in more shootings of messengers. Read too broadly, it may invite black hat hackers to invoke the shield when defending themselves against CFAA charges. The perfect middle ground is likely impossible to achieve. But just because the solution is not immediately apparent doesn’t mean nothing should change until it presents itself. Stasis is not acceptable. That has been the standard M.O. for too long, and all it has caused is pain.
The perfect is the enemy of the good enough. The foregoing proposals show longstanding agreement that something more must be done to exempt security researchers from legal liability, along with simultaneous disagreement about what exactly to do. Between Bambauer and Day’s proposal and the DOJ’s new policy, a dozen years elapsed.
The best fix for the moment may be to fix the law itself, expanding on the limited scope of the Supreme Court’s Van Buren decision.
To foreclose “shooting the messenger” civil lawsuits against good faith security researchers under the CFAA, this Article proposes amending the law so that the cost to remediate a vulnerability, standing alone, cannot satisfy the statute’s $5,000 jurisdictional threshold. Tightening up the “loss” calculus would stymie retaliatory litigation against socially beneficial (or at least benign) security research. At the same time, it would preserve victims’ ability to seek redress in cases where well-intended research activities (or instances of intentional malice) do cause harm.
Another suggestion is to do away with the private cause of action altogether, leaving CFAA legal actions entirely in the hands of the DOJ. While this does seem like a good way to dissuade the filing of bullshit, retaliatory lawsuits, it also places more pressure on the DOJ to ignore its own guidelines. When it comes to prosecutorial discretion, the less the system relies on it, the better. This “solution” simply changes who’s inflicting the pain.
Another option is fee-shifting. Like an anti-SLAPP law, but for security research: researchers sued for bogus reasons would be allowed to seek recovery of fees if the court finds in their favor. This could deter frivolous litigation solely meant to cause financial harm to responsible researchers. Unfortunately, unlike anti-SLAPP motions, this may not be something defendants can use to exit bogus litigation before too much of their own money is spent. Still, it would be better than the current state of affairs, where the prevailing party is most often required to bear its own costs.
What is clear is that something needs to be done:
The time is ripe to make the legal landscape safer for security researchers. Van Buren’s “loss” dicta points to an encouraging direction for CFAA reform, and the DOJ’s surprise policy shift indicates that such reforms are feasible and timely. For Congress to tighten up the statutory standing requirements and add a fee-shifting option in civil CFAA cases would help further the project of protecting security research that the executive and judicial branches have begun. Those who responsibly disclose security vulnerabilities are not like those who choose to exploit them. Federal computer trespass law should acknowledge the difference.
The status quo — even with the incremental improvements of Van Buren and the DOJ’s policy changes — isn’t acceptable. People trying to make the internet safer for everyone are still at risk of having their good deeds punished.