Joe Mullin's Techdirt Profile

Posted on Techdirt - 18 December 2025 @ 01:47pm

Thousands Tell The Patent Office: Don’t Hide Bad Patents From Review

We filed our own comment with the USPTO regarding their attempt to weaken the important inter partes review (IPR) process that has been hugely helpful in getting rid of bad patents. Over at EFF, Joe Mullin wrote up an analysis of some of the comments to the USPTO, which we’re running here as well.

A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.

EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. Those supporter comments made up more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.

The Public Doesn’t Want To Bury Patent Challenges

These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment. 

The comments opposing the rulemaking came from many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also came from computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy.

The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.

EFF’s Comment To USPTO

In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.

Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward. 

A Broad Coalition Supports IPR

A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.

Open Source and Developer Communities 

The Linux Foundation submitted comments warning that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users who rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work.

Patent Law Scholars

A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.” 

Patient Advocates

Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown, “cardiovascular medications have fallen 97% in price, cancer drugs [have dropped] 80-98%, and treatments for opioid addiction [have become] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.”

Small Businesses 

Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court. 

What Happens Next

The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.

Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters. 

We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 30 July 2025 @ 01:14pm

Canada’s Bill C-2 Opens The Floodgates To U.S. Surveillance

The Canadian government is preparing to give away Canadians’ digital lives—to U.S. police, to the Donald Trump administration, and possibly to foreign spy agencies.

Bill C-2, the so-called Strong Borders Act, is a sprawling surveillance bill with multiple privacy-invasive provisions. But the thrust is clear: it’s a roadmap to aligning Canadian surveillance with U.S. demands. 

It’s also a giveaway of Canadian constitutional rights in the name of “border security.” If passed, it will shatter privacy protections that Canadians have spent decades building. This will affect anyone using Canadian internet services, including email, cloud storage, VPNs, and messaging apps. 

A joint letter, signed by dozens of Canadian civil liberties groups and more than a hundred Canadian legal experts and academics, puts it clearly: Bill C-2 is “a multi-pronged assault on the basic human rights and freedoms Canada holds dear,” and “an enormous and unjustified expansion of power for police and CSIS to access the data, mail, and communication patterns of people across Canada.”

Setting The Stage For Cross-Border Surveillance 

Bill C-2 isn’t just a domestic surveillance bill. It’s a Trojan horse for U.S. law enforcement—quietly building the pipes to ship Canadians’ private data straight to Washington.

If Bill C-2 passes, Canadian police and spy agencies will be able to demand information about people’s online activities based on the low threshold of “reasonable suspicion.” Companies holding such information would have only five days to challenge an order, and would receive blanket immunity from lawsuits if they hand over data.

Police and CSIS, the Canadian intelligence service, will be able to find out whether you have an online account with any organization or service in Canada. They can demand to know how long you’ve had it, where you’ve logged in from, and which other services you’ve interacted with, all without a warrant.

The bill will also allow for the introduction of encryption backdoors. Forcing companies to surveil their customers is allowed under the law (see part 15), as long as these mandates don’t introduce a “systemic vulnerability”—a term the bill doesn’t even bother to define. 

The information gathered under these new powers is likely to be shared with the United States. Canada and the U.S. are currently negotiating a misguided agreement to share law enforcement information under the US CLOUD Act. 

The U.S. and U.K. put a CLOUD Act deal in place in 2020, and it hasn’t been good for users. Earlier this year, the U.K. Home Office ordered Apple to let it spy on users’ encrypted accounts. That security risk caused Apple to stop offering U.K. users certain advanced encryption features, and lawmakers and officials in the United States have raised concerns that the U.K.’s demands might have been designed to leverage its expanded CLOUD Act powers.

If Canada moves forward with Bill C-2 and a CLOUD Act deal, American law enforcement could demand data from Canadian tech companies in secrecy—no notice to users would be required. Companies could also expect gag orders preventing them from even mentioning they have been forced to share information with US agencies.

This isn’t speculation. Earlier this month, a Canadian government official told Politico that this surveillance regime would give Canadian police “the same kind of toolkit” that their U.S. counterparts have under the PATRIOT Act and FISA. The bill allows for “technical capability orders.” Those orders mean the government can force Canadian tech companies, VPNs, cloud providers, and app developers—regardless of where in the world they are based—to build surveillance tools into their products.

Under U.S. law, non-U.S. persons have little protection from foreign surveillance. If U.S. cops want information on abortion access, gender-affirming care, or political protests happening in Canada, they’re going to get it. The data-sharing won’t necessarily be limited to the U.S., either. There’s nothing to stop authoritarian states from demanding this new trove of Canadians’ private data that will be secretly doled out by Canada’s law enforcement agencies.

EFF joins the Canadian Civil Liberties Association, OpenMedia, researchers at Citizen Lab, and dozens of other Canadian organizations and experts in asking the Canadian federal government to withdraw Bill C-2. 

Originally posted to the EFF’s Deeplinks blog.

Posted on Techdirt - 16 May 2025 @ 01:02pm

The Kids Online Safety Act Will Make The Internet Worse For Everyone

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself favors certain viewpoints over others. Plus, liability under KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

Reposted from the EFF’s Deeplinks blog.

Posted on Techdirt - 7 April 2025 @ 01:47pm

Site-Blocking Legislation Is Back. It’s Still A Terrible Idea.

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was immediate and massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved. 

Thirteen years later, as institutional memory fades and appetite for opposition wanes, members of Congress in both parties are ready to try this again. 

The Foreign Anti-Digital Piracy Act (FADPA), along with at least one other bill still in draft form, would revive this reckless strategy. These new proposals would let rights holders get federal court orders forcing ISPs and DNS providers to block entire websites based on accusations of infringing copyright. Lawmakers claim they’re targeting “pirate” sites—but what they’re really doing is building an internet kill switch.

These bills are an unequivocal and serious threat to a free and open internet. EFF and our supporters are going to fight back against them. 

Site-Blocking Doesn’t Work—And Never Will 

Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Russia, and the US.

Site-blocking is both dangerously blunt and trivially easy to evade. Determined evaders can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online. 
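
To make that evasion concrete, here is a minimal sketch of resolver-switching in Python, using the third-party dnspython library. The resolver addresses and the blocked domain are hypothetical placeholders; the point is that a DNS-level block only affects queries sent to the resolver that enforces it.

    # Minimal sketch (assumes `pip install dnspython`): send the same DNS
    # lookup to two different resolvers. A DNS-level block only affects
    # queries that pass through the blocking resolver.
    import dns.exception
    import dns.resolver

    DOMAIN = "blocked-site.example"  # hypothetical blocked domain

    RESOLVERS = [
        ("192.0.2.53", "ISP resolver (enforces the block)"),  # placeholder address
        ("9.9.9.9", "third-party resolver (no block)"),
    ]

    for address, label in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)  # ignore system DNS settings
        resolver.nameservers = [address]
        try:
            answers = resolver.resolve(DOMAIN, "A")
            print(f"{label}: {[a.address for a in answers]}")
        except dns.exception.DNSException as exc:
            # NXDOMAIN, a refusal, or a timeout: to a user, the site just looks broken
            print(f"{label}: lookup failed ({type(exc).__name__})")

Changing which resolver a device uses is a one-line settings change on most operating systems, which is why DNS-level blocks mostly inconvenience people who never learn why a site stopped working.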

These workarounds aren’t just popular—they’re essential tools in countries that suppress dissent. It’s shocking that Congress is on the verge of forcing Americans to rely on the same workarounds that internet users in authoritarian regimes depend on, just to reach content wrongly labeled as infringing. It will push Americans onto riskier, less trustworthy online services.

Site-Blocking Silences Speech Without a Defense

The First Amendment should not take a back seat because giant media companies want the ability to shut down websites faster. But these bills wrongly treat broad takedowns as a routine legal process. Most cases would be decided in ex parte proceedings, with no one there to defend the site being blocked. This is more than a shortcut; it skips due process entirely.

Users affected by a block often have no idea what happened. A blocked site may just look broken, like a glitch or an outage. Law-abiding publishers and users lose access, and diagnosing the problem is difficult. Site-blocking techniques are the bluntest of instruments, and they almost always punish innocent bystanders. 

The copyright industries pushing these bills know that site-blocking is not a narrowly tailored fix for a piracy epidemic. The entertainment industry is booming right now, blowing past its pre-COVID projections. Site-blocking legislation is an attempt to build a new American censorship system by letting private actors get dangerous infrastructure-level control over internet access. 

EFF and the Public Will Push Back

FADPA is already on the table. More bills are coming. The question is whether lawmakers remember what happened the last time they tried to mess with the foundations of the open web. 

If they don’t, they’re going to find out the hard way. Again. 

Site-blocking laws are dangerous, unnecessary, and ineffective. Lawmakers need to hear—loud and clear—that Americans don’t support government-mandated internet censorship. Not for copyright enforcement. Not for anything.

Reposted from the EFF’s Deeplinks blog.

Posted on Techdirt - 27 March 2025 @ 01:28pm

New USPTO Memo Makes Fighting Patent Trolls Even Harder

The U.S. Patent and Trademark Office (USPTO) just made a move that will protect bad patents at the expense of everyone else. In a memo released February 28, the USPTO further restricted access to inter partes review, or IPR—the process Congress created to let the public challenge invalid patents without having to wage million-dollar court battles.

If left unchecked, this decision will shield bad patents from scrutiny, embolden patent trolls, and make it even easier for hedge funds and large corporations to weaponize weak patents against small businesses and developers.

IPR Exists Because the Patent Office Makes Mistakes

The USPTO grants over 300,000 patents a year, but many of them should not have been issued in the first place. Patent examiners spend, on average, around 20 hours per patent, often missing key prior art or granting patents that are overly broad or vague. That’s how bogus patents on basic ideas—like podcasting, online shopping carts, or watching ads online—have ended up in court.

Congress created IPR in 2012 to fix this problem. IPR allows anyone to challenge a patent’s validity based on prior art, and it’s done before specialized judges at the USPTO, where experts can re-evaluate whether a patent was properly granted. It’s faster, cheaper, and often fairer than fighting it out in federal court.

The USPTO is Blocking Patent Challenges—Again

Instead of defending IPR, the USPTO is working to sabotage it. The February 28 memo reinstates a rule that allows for widespread use of “discretionary denials.” That’s when the Patent Trial and Appeal Board (PTAB) refuses to hear an IPR case for procedural reasons—even if the patent is likely invalid. 

Specifically, the memo reinstates the Apple v. Fintiv rule, under which the USPTO often rejected IPR petitions whenever there was an ongoing district court case about the same patent. This is backwards. If anything, an active lawsuit is proof that a patent’s validity needs to be reviewed—not an excuse to dodge the issue.

In 2022, former USPTO Director Kathi Vidal issued a memo making clear that the PTAB should hear patent challenges when “a petition presents compelling evidence of unpatentability,” even if there is parallel court litigation. 

That 2022 guidance essentially saved the IPR system. Once PTAB judges were told to consider all petitions that showed “compelling evidence,” the procedural denials dropped to almost nothing. This February 28 memo signals that the USPTO will once again use discretionary denials to sharply limit access to IPR—effectively making patent challenges harder across the board.  

Discretionary Denials Let Patent Trolls Rig the System

The top beneficiary of this decision will be patent trolls, shell companies formed expressly for the purpose of filing patent lawsuits. Often patent trolls seek to extract a quick settlement before a patent can be challenged. With IPR becoming increasingly unavailable, that will be easier than ever. 

Patent owners know that discretionary denials will block IPRs if they file a lawsuit first. That’s why trolls flock to specific courts, like the Western District of Texas, where judges move cases quickly and rarely rule against patent owners.

By filing lawsuits in these troll-friendly courts, patent owners can game the system—forcing companies to pay up rather than risk millions in litigation costs.

The recent USPTO memo makes this problem even worse. Instead of stopping the abuse of discretionary denials, the USPTO is doubling down—undermining one of the most effective ways businesses, developers, and consumers can fight back against bad patents.

Congress Created IPR to Protect the Public—Not Just Patent Owners

The USPTO doesn’t get to rewrite the law. Congress passed IPR to ensure that weak patents don’t become weapons for extortionary lawsuits. By reinforcing discretionary denials with minimal restrictions, and, as a result, blocking access to IPRs, the USPTO is directly undermining what Congress intended.

Leaders at the USPTO should immediately revoke the February 28 memo. If they refuse, as we pointed out the last time IPR denials spiraled out of control, it’s time for Congress to step in and fix this. They must ensure that IPR remains a fast, affordable way to challenge bad patents—not just a tool for the largest corporations. Patent quality matters—because when bad patents stand, we all pay the price.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 24 March 2025 @ 12:21pm

A Win For Encryption: France Rejects Backdoor Mandate

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wish list disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

This victory was not inevitable. It came after sustained public pressure, expert input, and tireless advocacy from civil society. It shows that pushing back works. But for the foreseeable future, misguided lobbyists for police and national security agencies will continue to push similar proposals—perhaps repackaged, or rushed through quieter legislative moments.

Supporters of privacy should celebrate this win today. Tomorrow, we will continue to keep watch.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 18 March 2025 @ 03:46pm

California’s A.B. 412: A Bill That Could Crush Startups And Cement A Big Tech AI Monopoly

California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.

A Burden That Small Developers Can’t Bear

The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones; there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche.

This bill demands that creators of any AI model—even a two-person company or a hobbyist tinkering with a small software build—identify copyrighted materials used in training. That requirement will be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best—neither machine-readable nor accessible, more like a card catalog than a database—and it doesn’t offer information sufficient to identify all authors of a work, much less help developers reliably match works in a training set to works in the system.

Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, imposing such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.

AI Training Is Like Reading, And It’s Very Likely Fair Use

A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use. 

It’s now a bedrock principle of internet law that some forms of copying content online are transformative, and thus legal fair use. That includes reproducing thumbnail images for image search, or snippets of text to search books.

The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.

Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux. 

A.B. 412 Is A Gift to Big Tech

The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.

If lawmakers are able to iron out some of the practical problems with A.B. 412 and pass some version of it, they may be able to force programmers to research—and effectively, pay off—copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition will be so limited, that profits for big incumbent companies will be locked in for a generation. 

If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.

Originally published to the EFF’s Deeplinks blog.

Posted on Techdirt - 7 November 2024 @ 11:48am

Judge’s Investigation Into Patent Troll Results In Criminal Referrals

In 2022, three companies with strange names and no clear business purpose beyond patent litigation filed dozens of lawsuits in Delaware federal court, accusing businesses of all sizes of patent infringement. Some of these complaints claimed patent rights over basic aspects of modern life; one, for example, involved a patent that pertains to the process of clocking in to work through an app.

These companies – named Mellaconic IP, Backertop Licensing, and Nimitz Technologies – seemed to be typical examples of “patent trolls,” companies whose primary business is suing others over patents or demanding licensing fees rather than providing actual products or services. 

However, the cases soon took an unusual turn. The Delaware federal judge overseeing the cases, U.S. District Judge Colm Connolly, sought more information about the patents and their ownership. One of the alleged owners was a food-truck operator who had been promised “passive income,” but was entitled to only a small portion of any revenue generated from the lawsuits. Another owner was the spouse of an attorney at IP Edge, the patent-assertion company linked to all three LLCs. 

Following an extensive investigation, the judge determined that attorneys associated with these shell companies had violated legal ethics rules. He pointed out that the attorneys may have misled Hau Bui, the food-truck owner, about his potential liability in the case. Judge Connolly wrote: 

[T]he disparity in legal sophistication between Mr. Bui and the IP Edge and Mavexar actors who dealt with him underscore that counsel’s failures to comply with the Model Rules of Professional Conduct while representing Mr. Bui and his LLC in the Mellaconic cases are not merely technical or academic.

Judge Connolly also concluded that IP Edge, the patent-assertion company behind hundreds of patent lawsuits and linked to the three LLCs, was the “de facto owner” of the patents asserted in his court, but that it attempted to hide its involvement. He wrote, “IP Edge, however, has gone to great lengths to hide the ‘we’ from the world,” with “we” referring to IP Edge. Connolly further noted, “IP Edge arranged for the patents to be assigned to LLCs it formed under the names of relatively unsophisticated individuals recruited by [IP Edge office manager] Linh Deitz.” 

The judge referred three IP Edge attorneys to the Supreme Court of Texas’ Unauthorized Practice of Law Committee for engaging in “unauthorized practices of law in Texas.” Judge Connolly also sent a letter to the Department of Justice, suggesting an investigation into “individuals associated with IP Edge LLC and its affiliate Mavexar LLC.”

Patent Trolls Tried To Shut Down This Investigation

The attorneys involved in this wild patent trolling scheme challenged Judge Connolly’s authority to proceed with his investigation. However, because transparency in federal courts is essential and applicable to all parties, including patent assertion entities, EFF and two other patent reform groups filed a brief in support of the judge’s investigation. The brief argued that “[t]he public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts.” Companies targeted by the patent trolls, as well as the Chamber of Commerce, filed their own briefs supporting the investigation. 

The appeals court sided with us, upholding Judge Connolly’s authority to proceed, which led to the referral of the involved attorneys to the disciplinary counsel of their respective bar associations.

After this damning ruling, one of the patent troll companies and its alleged owner made a final effort at appealing this outcome. In July of this year, the U.S. Court of Appeals for the Federal Circuit ruled that investigating Backertop Licensing LLC and ordering its alleged owner to testify was “an appropriate means to investigate potential misconduct involving Backertop.” 

In EFF’s view, these types of investigations into the murky world of patent trolling are not only appropriate but should happen more often. Now that the appeals court has ruled, let’s take a look at what we learned about the patent trolls in this case. 

Patent Troll Entities Linked To French Government

One of the patent trolling entities, Nimitz Technologies LLC, asserted a single patent, U.S. Patent No. 7,848,328, against 11 companies. When the judge required Nimitz’s supposed owner, a man named Mark Hall, to testify in court, Hall could not describe anything about the patent or explain how Nimitz acquired it. He didn’t even know the name of the patent (“Broadcast Content Encapsulation”). When asked what technology was covered by the patent, he said, “I haven’t reviewed it enough to know,” and when asked how he paid for the patent, Hall replied, “no money exchanged hands.” 

The exchange between Hall and Judge Connolly went as follows: 

Q. So how do you come to own something if you never paid for it with money?

A. I wouldn’t be able to explain it very well. That would be a better question for Mavexar.

Q. Well, you’re the owner?

A. Correct.

Q. How do you know you’re the owner if you didn’t pay anything for the patent?

A. Because I have the paperwork that says I’m the owner.

(Nov. 27, 2023 Opinion, pages 8-9.) 

The Nimitz patent originated from the Finnish cell phone company Nokia, which later assigned it and several other patents to France Brevets, a French sovereign investment fund, in 2013. France Brevets, in turn, assigned the patent to a US company called Burley Licensing LLC, an entity linked to IP Edge, in 2021. Hau Bui (the food truck owner) signed on behalf of Burley, and Didier Patry, then the CEO of France Brevets, signed on behalf of the French fund. 

France Brevets was an investment fund formed in 2009 with €100 million in seed money from the French government to manage intellectual property. France Brevets was set to receive 35% of any revenue related to “monetizing and enforcement” of the patent, with Burley agreeing to file at least one patent infringement lawsuit within a year, and collect a “total minimum Gross Revenue of US $100,000” within 24 months, or the patent rights would be given back to France Brevets. 

Burley Licensing LLC, run by IP Edge personnel, then created Nimitz Technologies LLC—a company with no assets except for the single patent. They obtained a mailing address for it from a Staples in Frisco, Texas, and assigned the patent to the LLC in August 2021, while the obligations to France Brevets remained unchanged until the fund shut down in 2022.

The Bigger Picture

It’s troubling that patent lawsuits are often funded by entities with no genuine interest in innovation, such as private equity firms. However, it’s even more concerning when foreign government-backed organizations like France Brevets manipulate the US patent system for profit. In this case, a Finnish company sold its patents to a French government fund, which used US-based IP lawyers to file baseless lawsuits against American companies, including well-known establishments like Reddit and Bloomberg, as well as smaller ones like Tastemade and Skillshare.

Judges should enforce rules requiring transparency about third-party funding in patent lawsuits. When ownership is unclear, it’s appropriate to insist that the real owners show up and testify—before dragging dozens of companies into court over dubious software patents. 

Reposted from the EFF’s Deeplinks blog.

Posted on Techdirt - 20 February 2024 @ 10:52am

Don’t Fall For The Latest Changes To The Dangerous Kids Online Safety Act 

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version last week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction—either their own, or that of their friends, families, and neighbors—will not have the resources they need to do so. 
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.

KOSA is still a censorship bill and it will still harm a large number of minors

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their products. These include self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship of otherwise legal, important topics for everyone, as we’ve explained before.

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform.

Design features include, but are not limited to:

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and features that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear how offering basic features that many users rely on, such as notifications, by itself creates a harm. But that points to the fundamental problem of this provision: KOSA is essentially trying to use the features of a service as a proxy to create liability for online speech that the bill’s authors do not like. The list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. It’s that these lawmakers do not like some lawful content that is accessible via an online service’s features.

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities.

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors, then under this language it could bring an enforcement action against a platform that allowed users to send such messages.

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to limit an online newspaper from using an infinite scroll feature or auto-playing videos, that case would be struck down. KOSA’s latest variant is no different.

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on claims that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of a service.

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission nominated by the president. This still allows a small group of federal officials appointed by the President to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.  

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 27 December 2023 @ 01:03pm

Stupid Patent of the Month: Selfie Contests

Patents are supposed to be an incentive to invent. Too often, they end up being a way to try to claim “ownership” of what should be basic building blocks of human activity, culture, and knowledge. This is especially true of software patents, an area EFF has been speaking out about for more than 20 years now. 

This month’s Stupid Patent, No. 8,655,715, continues the tradition of trying to use software language to capture a monopoly on a basic human cultural activity — in this case, contests. 

A company called Opus One, which does business under the name “Contest Factory,” claims this patent and a related one cover a huge array of online contests. So far, they’ve filed five lawsuits against other companies that help build online contests, and even threatened a small photo company that organizes mostly non-commercial contests online. 

The patents held by Contest Factory are a good illustration of why EFF has been concerned about out-of-control software patents. It’s not just that wrongly issued patents impose a vast tax on the U.S. economy (although they do—one study estimated $29 billion in annual direct costs). The worst software patents also harm people’s rights to express themselves and participate in online culture. Just as we’re free in the physical world to sign documents, sort photos, store and label information, clock in to work, find people to date, or teach foreign languages, without paying extortionate fees to others, we must also be free to do so online.

Patenting Contests

Claim 1 of the ‘715 patent recites steps for:

  • Receiving, storing, and accessing data on a computer; 
  • Sorting it and generating “contest data”; 
  • Tabulating votes and picking a winner.

The patent also uses other terms for common activities of general purpose computers, such as “transmitting” and “displaying” data. 

In other words, the patent describes everyday use of computers, plus the idea of users participating in a contest. This is a classic abstract idea, and it never should have been eligible for a patent. 
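
To see just how generic those steps are, here is a toy sketch in Python of a “contest” that receives entries, stores them, tabulates votes, and picks a winner. Every name and entry in it is hypothetical; the point is that the claimed steps describe code any beginner could write.

    # A toy "online contest": receive and store entries, tabulate votes,
    # and pick a winner. This is the everyday computing the claim language recites.
    from collections import Counter

    entries: dict[str, str] = {}   # "receiving" and "storing" content data
    votes: Counter = Counter()     # "tabulating" votes

    def submit(user: str, photo_url: str) -> None:
        entries[user] = photo_url

    def vote(for_user: str) -> None:
        if for_user in entries:
            votes[for_user] += 1

    # Hypothetical entries and ballots
    submit("alice", "https://example.com/selfie-a.jpg")
    submit("bob", "https://example.com/selfie-b.jpg")
    for ballot in ["alice", "bob", "alice"]:
        vote(ballot)

    # "Sorting" the contest data and picking a winner
    winner, count = votes.most_common(1)[0]
    print(f"Winner: {winner} with {count} votes ({entries[winner]})")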

In a 2017 article in CIO Review, the company acknowledges how incredibly broad its claims are. Contest Factory claims it patented “voting in online contests long before TV contest shows with public voting components made their appearance,” and that it holds patents “associated with online contests and integrating online voting with virtually any type of contest.” 

Lawsuit Over Radio Station Contest 

In its most recent lawsuit, Contest Factory says that a Minneapolis radio station’s “Mother’s Day Giveaway” for a mother/daughter spa day infringed its patent. The radio station asked people to post mother-daughter selfies online and share their entry to collect votes. 

Contest Factory sued Pancake Labs (complaint), the company that helped the radio station put the contest online. Contest Factory also claimed a PBS contest in which viewers created short films and voted on them was an example of infringement. 

For the “Mother’s Day Giveaway” contest, the patent infringement accusation reads in part that, “the executable instructions … cause the generation of a contest and the transmission of the first and second content data to at least one user to view and rate the content.” 

Contest Factory has sued over quite a few internet contests, dating back more than a decade. Its 2016 lawsuits, based on the ‘715 patent and two earlier related patents, were filed against three small online marketing firms: Vancouver-based Strutta, Florida-based Elettro, and California-based Votigo, for contests that go back to 2011. We don’t know how many more companies or online communities have been threatened in all. 

Sharing user-generated content like photos—cooperatively or competitively—is the kind of sharing that the digital world is ideal for. When patent owners demand a toll for these activities, it doesn’t matter whether they’re patent “trolls” or operating companies seeking to extract settlements from competitors. They threaten our freedoms in unacceptable ways. 

The government shouldn’t be issuing patents like these, and it certainly shouldn’t be making them harder to challenge.

  • Opus One d/b/a Contest Factory v. Pancake Labs complaint
  • Opus One d/b/a Contest Factory v. Telescope complaint 
  • Opus One d/b/a Contest Factory v. Elletro complaint 
  • Opus One d/b/a Contest Factory v. Votigo complaint 
  • Opus One d/b/a Contest Factory v. Strutta complaint

Originally posted to the EFF’s Stupid Patent of the Month Series.