matthew.lane's Techdirt Profile

Posted on Techdirt - 26 July 2024 @ 01:38pm

The Kids Online Safety Act And The Tyranny Of Laziness

There is some confusion about whether the Kids Online Safety Act (KOSA) regulates content or design on digital platforms like Instagram or TikTok. It’s easy to see why that is, because the bill’s authors claim they are attempting to make the bill about design. This is a good move on their part, as regulations on design can allow us to stop bad behavior from tech companies without endangering speech.

Unfortunately, KOSA didn’t nail it.

That’s because KOSA is trying to regulate the design of content recommendation systems, i.e., the digital chutes that all our online speech filters through, which are unique to each online platform. In KOSA’s case, it’s proven impossible so far to separate the design of content recommendation systems from the speech itself. The duty of care and the insistence that it covers “personal recommendation systems” mean the bill will inevitably impact the speech itself.

The reason is pretty simple: tech companies are inherently lazy — those with decision making authority will want to comply with regulations in the cheapest and easiest way possible. This means they will take shortcuts wherever possible, including building censorship systems to simply make difficult-to-manage content go away. That will almost certainly include politically targeted content, like speech related to LGBTQ+ communities, abortion, and guns. And they will conduct this censorship with a lazily broad brush, likely sweeping up more nuanced content that would help minors with problems like eating disorders or suicide.

The difference between the aspirations of KOSA and its inevitable impacts works like this: KOSA wants systems engineers to design algorithms that put safety first, not user engagement. While some companies are already pivoting away from purely engagement-focused algorithms, doing so can be really hard and expensive because algorithms aren’t that smart. Purely engagement-focused algorithms only need to answer one question — did the user engage? By asking that one question, and testing different inferences, the algorithms can get very good at delivering content to a user that they will engage with.
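
To make the contrast concrete, here is a minimal, purely hypothetical sketch of what an engagement-only ranker boils down to. Nothing below is any platform’s actual code; the names and numbers are invented for illustration.

```python
# A minimal sketch of a purely engagement-focused ranker.
# Post, predicted_engagement, and the sample numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    clicks: int       # how often users engaged with this post when shown
    impressions: int  # how often it was shown

def predicted_engagement(post: Post) -> float:
    """Answer the one question an engagement-only system cares about:
    how likely is the user to engage?"""
    if post.impressions == 0:
        return 0.0
    return post.clicks / post.impressions

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement; nothing here models safety,
    # well-being, or the meaning of the content being ranked.
    return sorted(candidates, key=predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("a", clicks=40, impressions=100),
        Post("b", clicks=5, impressions=100),
        Post("c", clicks=90, impressions=100),
    ])
    print([p.post_id for p in feed])  # ['c', 'a', 'b']
```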

But when it comes to multi-purpose algorithms, like those that want to serve only positive content and avoid harmful content, the task is much harder and the algorithms are unreliable. Algorithms don’t understand the content they are ranking or excluding, or how it will impact the mental health and well-being of the user. Even human beings can struggle to predict what content will cause the kinds of harm described by KOSA.

To comply with KOSA, tech companies will have to show that they are taking reasonable steps to make sure their personalized recommendation systems aren’t causing harm to minors’ mental health and well-being. The only real way to do that is to test the algorithms to see if they are serving “harmful” content. But what is “harmful” content? KOSA leans on the FTC and a government-created Kids Online Safety Council to signal what that content might be. This means that Congress will have significant influence over categorizing harmful speech and platforms will use those categories to implement keywords, user tags, and algorithmically-estimated tags to flag this “harmful” content when it appears in personal recommendation feeds and results. This opens the door to government censorship.

But it gets even worse. The easiest and cheapest way to make sure a personal recommendation system doesn’t return “harmful” content is to simply exclude any content that resembles the “harmful” content. This means adding an additional content moderation layer that deranks or delists content that has certain keywords or tags, a practice popularly known as “shadowbanning” in online culture.
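
What might that lazy compliance layer look like in practice? A rough, purely hypothetical sketch follows; the keyword list and the demotion math are invented, not anything KOSA prescribes or any platform actually uses.

```python
# A sketch of the "lazy" compliance layer described above: derank anything
# that matches a keyword list supplied downstream of a regulator.
FLAGGED_KEYWORDS = {"eating disorder", "self-harm", "suicide"}  # hypothetical inputs

def compliance_penalty(text: str) -> float:
    """Return a demotion factor based on crude keyword matching."""
    text = text.lower()
    hits = sum(1 for kw in FLAGGED_KEYWORDS if kw in text)
    # Each hit halves the post's ranking score, regardless of whether the
    # post is harmful content or a recovery resource. That bluntness is
    # exactly the over-removal problem described above.
    return 0.5 ** hits

def adjusted_score(base_score: float, text: str) -> float:
    return base_score * compliance_penalty(text)

# A supportive recovery post gets swept up just like harmful content:
print(adjusted_score(1.0, "Resources that helped me recover from my eating disorder"))
# 0.5
```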

There are three major problems with this. The first is obviously that the covered platforms will have created a censorship machine that accepts inputs from the government. A rogue FTC could use KOSA explicitly for censorship, by claiming that any politically targeted content leads to the harms described in the bill. We cannot depend on big tech to fight back against this, because being targeted by an administration comes with a cost and making an administration happy might come with some benefits.

Big tech may even eventually benefit from this relationship because content moderation is impossible to do well: too often there are nuanced decisions where content moderators simply have to make the choice they estimate to be the least harmful. In some ways, KOSA allows tech companies to push the responsibility for these decisions onto the FTC and the Kids Online Safety Council. Additionally, tech companies are likely to over-correct and over-censor anything they think the government may take action against, and take zero responsibility for their laziness, just like they did after SESTA-FOSTA.

The second problem is that these systems will leak across the Internet. While they are targeted at minors, the only way to tell if a user is a minor is to use expensive and intrusive age verification systems. Covered platforms will want to err on the side of compliance unless they have explicit safe harbors, which aren’t exactly in the bill. So users may accidentally get flagged as minors when they aren’t. Worse, even the accounts that users proactively choose to “follow” aren’t safe from censorship under KOSA because the definition of “personal recommendation system” includes those that “suggest, promote, or rank content” based on personal data. Almost all feeds of content a user is following are ranked based on the algorithm’s estimation of how engaging that content will be to them. A user is less likely to read a post that says “I ate a cookie today” than one that says “I got married today” because users don’t like scrolling through boring content. And so even much of what we want to see online will more than likely be wrapped into the KOSA censorship system that tech companies create to comply.

The third problem is that these sorting systems will not be perfect and will lead to mistakes on both sides: “harmful” content will get through and “harmless” content will be shadowbanned. In some ways this could make the problems KOSA is explicitly attempting to solve worse. For example, imagine the scenario in which a young user posts a cry for help. This content could easily get flagged as suicide or other harmful content, and therefore get deranked across the feeds of all those that follow the person and care for them. No one may see it.

This example shows how challenging content moderation is. But KOSA is creating incentives to avoid solving these tricky issues, and instead do whatever the government says is legally safe. Again, KOSA will remove some degree of culpability for the tech companies by giving them someone else to blame so long as they are “complying” when bad things happen.

Content moderation is hard, and where KOSA touches it the bill delivers the worst of all worlds: it neither tries to understand what it is regulating nor gives tech companies clear guidance that would do no harm. Congress would be better off stripping out the content moderation parts of the bill and leaving them for the future, after it has engaged in a good deal of fact finding and discussions with experts.

There are problems here that Congress assumes can be solved simply by telling tech companies to do better, when I’ve never met a content moderation professional who could tell me how they could actually solve these problems. We shouldn’t allow Big Tech to pass the buck to government boards at the expense of online speech — the next generation deserves a better internet than the one KOSA creates.

Matt Lane is Senior Policy Counsel at Fight for the Future.

Posted on Techdirt - 29 November 2023 @ 12:31pm

Let’s Not Flip Sides On IP Maximalism Because Of AI

Copyright policy is a sticky tricky thing, and there are battles that have been fought for decades among public and corporate interests. Typically, it’s the corporate interests that win — especially the content industry. We’ve seen power, and copyrights, collect among a small group of content companies because of this. But there is one significant win that the public interest has been able to defend all these years: Fair Use.

Fair use’s importance has only grown over the years. Put simply, fair use allows people limited use of copyrighted material without permission. Fair use’s foundations are in commentary, criticism, and parody. However, fair use has arguably filled in important gaps to allow us to basically exist on social media. That’s because there are open questions about what is and isn’t copyright infringement, and things as simple as retweeting or linking could theoretically get us in trouble. Fair use also allows a lot of art to exist, because a lot of art critiques or comments on older art. On the flip side, when fair use was ruled not to cover music sampling, it basically killed a lot of creative sampling in hip hop music. Now popular sample-based music is relatively tame and tends to use the same library of samples.

Fair use (probably) also protects the creator industry. Many people make a living streaming video games or making content around playing video games. All of that could violate copyright laws. We don’t know the extent of the risk here, because it hasn’t been fully tested, but we do know that videogame makers have claimed videogame streaming content as copyrighted material. We also know that in Japan, which doesn’t have fair use, a streamer got two years in jail for posting Let’s Play videos. A lot of creators also make “react” content, which also relies on fair use protection.

Blowing up Fair Use

Considering the importance of fair use, and the historically bad behavior of the content industry towards ordinary people, it’s surprising that a lot of public interest advocates want to blow it up to hurt AI companies. This is unfortunate, but not particularly surprising. Content industry lobbying has inflated copyright protections into a pretty big sledgehammer, and when you really want to smash something you often look for a sledgehammer. For example, copyright and right of publicity (a somewhat related state-level IP regime) were the first tools people turned to to protect victims when revenge porn first became a big problem.

Similarly, some public interest advocates are turning to copyright to stop AI from being trained on content without permission. However, that use is almost certainly a fair use (if it’s copyright infringement at all) and that’s a good thing. The ability of people to use computers to analyze content without permission is extremely useful, and it would be bad to weaken or destroy fair use just to stop companies from doing that in a socially problematic way. The best way to stop bad things is with policy purposefully made to address the whole problem. And these uses of copyright law often play into the hands of powerful interests — the copyright industry would love the chance to turn the public interest advocacy community against itself in order to kill fair use.

I’m not saying that there aren’t issues with AI that need to be addressed, especially worker exploitation. AI art generators can be especially infuriating for artists: they use a lot while giving back little. In fact, these generators are arguably being built to replace artists rather than to provide artists with new tools. It can be attractive to throw anything in the way to slow it down. But copyright, especially copyright maximalism, has done a terrible job of preventing artist exploitation.

Porting “on a computer” to copyright

One of the biggest public interest fights in patent law has been against “on a computer” software patents that clogged up the system and led to a number of patent infringement suits against small businesses for silly claimed inventions. The basic problem is this: patent applicants were initially allowed to claim, as an invention, doing something that was already known, but on a computer. These “on a computer” patents have been greatly restricted through Supreme Court rulings (which special interests would like to overturn). However, the bad effects of software patents still exist today, as do patent trolls seeking to exploit them.

This current fight over copyright in training data reminds me of the same problem. For example, if a writer wanted to study romance novels to find out what is popular, it would be perfectly acceptable under copyright policy for them to read and analyze a lot of popular romance novels and to use that analysis to take the most successful parts of those novels to create a new novel. It is also perfectly acceptable under copyright law for an artist to study a particular artist and replicate that artist’s style in their own works. But using an AI to do that analysis, doing it “on a computer,” is now suspect.

This is short sighted for a number of reasons, but one I’d like to highlight is how this shrinking of fair use is difficult to contain. We are talking about an area in which the question of whether loading files into RAM is “copying” under copyright law (and therefore needs permission or is a violation) is an actual policy debate that public interest advocates have to fight. If using content as training data becomes a copyright violation, what’s the limiting principle? What kinds of computer analysis would no longer be protected under fair use?

I should also point out that IP maximalism is the easiest way to build oligopolies. Big companies will be able to figure out how to navigate the maze of rights necessary to build a model, and existing models will likely be grandfathered in (with a few lawsuits to get through). However, it will be impossible for any new company or new open source model to be created. Dealing with rights at scale is a problem so significant that even the rightsholder industry has trouble tracking them. And information about rights has been withheld to leverage better deals due to the risk (and high costs) of accidentally infringing someone’s rights.

Matthew Lane is a public interest advocate in DC focusing on tech and IP policy. This post was originally published to his Substack.

Posted on Techdirt - 5 October 2023 @ 11:29am

KOSA Won’t Make The Internet Safer For Kids. So What Will?

I’ve been asked a few times now what to do about online safety if the Kids Online Safety Act is no good. I will take it as a given that not enough is being done to make the Internet safe, especially for children. I think there is enough evidence to show that while the Internet can be a positive for many young people, especially marginalized youth that find support online, there are also significant negatives that correlate to real world harms that lead to suffering.

As I see it, there are three separate but related problems:

  1. Most Internet companies make money off engagement, and so there can be misaligned incentives especially when some toxic things can drive engagement.
  2. Trust & Safety is the linchpin of efforts to improve online safety, but it represents a significant cost to companies without a direct connection to profit.
  3. The tools used by Trust & Safety, like content moderation, have become a culture war football and many – including political leaders – are trying to work the refs.

I think #1 tends to be overstated, but X/Twitter is a natural experiment on whether this model is successful in the long run so we may soon have a better answer. I think #2 is understated, but it’s a bit hard to find government solutions here – especially those that don’t run into First Amendment concerns. And #3 is a bit of a confounding problem that taints all proposed solutions. There is a tendency to want to use “online safety” as an excuse to win culture wars, or at least tack culture war battles onto legitimate attempts to make the Internet safer. These efforts run headfirst into the First Amendment, because they are almost exclusively about regulating speech.

KOSA’s main gambit is to discourage #1 and maybe even incentivize #2 by creating a sort of nebulous duty of care that basically says if companies don’t have users’ best interests at heart in six described areas then they can be sued by the FTC and State AGs. The problem is that the duty of care is largely directed at whether minors are being exposed to certain kinds of content, and this invites problem #3 in a big way. In fact, we’ve already seen politically connected anti-LGBTQ organizations like Heritage openly call for KOSA to be used against LGBTQ content and Senator Blackburn, a KOSA co-author, connected the bill with protecting “minor children from the transgender.” This also means that this part of KOSA is likely to eventually fall to the First Amendment, as the California Age Appropriate Design Code (a bill KOSA borrows from) did.

So what can be done? I honestly don’t think we have enough information yet to really solve many online safety problems. But that doesn’t mean we have to sit around doing nothing. Here are some ideas of things that can be done today to make the Internet safer or prepare for better solutions in the future:

Ideas for Solving Problem #1

  • Stronger Privacy: Having a strong baseline of privacy protections for all users is good for many reasons. One of them is breaking the ability of platforms to use information gathered about you to keep you on the platform longer. Many of the recommendation engines that set people down a bad path are algorithms powered by personal information and tuned to increase engagement. These algorithms don’t really care about how their recommendations affect you, and can send you in directions you don’t want to go but have trouble turning away from. I experienced some of this myself when using YouTube to get into shape during the pandemic. I was eventually recommended videos that body shamed and recommended pretty severe diets to “show off” your muscles. I was able to reorient the algorithm towards more positive and health-centered videos, but it took some degree of effort and understanding how things worked. If the algorithm wasn’t powered by my entire history, and instead had to be more user directed, I don’t think I’d be offered the same content. And if I did look for that content, I’d be able to do so more deliberately and carefully. Strong privacy controls would force companies to redesign in that way.
  • An FTC 6(b) study: The FTC has the authority to conduct wide-ranging industry studies that don’t need a specific law enforcement purpose. In fact, they’ve used their 6(b) authority to study industries and produce reports that help Congress legislate. This 6(b) authority includes subpoena power to get information that independent researchers currently can’t. KOSA has a section that allows independent researchers to better study harms related to the design of online platforms, and I think that’s a pretty good idea, but the FTC can start this work now. A 6(b) study doesn’t need Congressional action to start, which is good considering the House is tied up at the moment. They can examine how companies work through safety concerns in product design, look for hot docs that show they made certain design decisions despite known risks, or look for mid docs that show they refused to look into safety concerns.
  • Enhance FTC Section 5 Authority: The FTC has already successfully obtained a settlement based on the argument that certain harmful design choices violate Section 5’s prohibition of “unfair or deceptive” business practices. The settlement required Epic to turn off voice and text chat in the game Fortnite for children and teens by default. Congress could enhance this power by clarifying that Section 5 includes dangerous online product design more generally and require the FTC to create a division for enforcement in this area (and also increase the FTC’s budget for such staffing). A 6(b) study would also lay the groundwork for the FTC to take more actions in this area. However, any legislation should be drafted in a way that does not undercut the FTC’s argument that it already has much of this authority, as doing so would discourage the FTC from pursuing more actions on its own. This is another option that likely does not need Congressional action, but budget allocations and an affirmative directive to address this area would certainly help.
  • NIH/other agency studies: Another way to help the FTC to pursue Section 5 complaints against dangerous design, and improve the conversation generally, is to invest in studies from medical and psychological health experts on how various design choices impact mental health. This can set a baseline of good practices from which any significant deviation could be pursued by the FTC as a Section 5 violation. It could also help policy discussions coalesce around rules concerning actual product design rather than content. The NTIA’s current request for information on Kids Online Health might be a start to that. KOSA’s section on creating a Kids Online Safety Council is another decent way of accomplishing this goal. Although, the Biden administration could simply create such a Council without Congressional action, and that might be a better path considering the current troubles in the House. I should also point out that this option is ripe for special interest capture, and that any efforts to study these problems should include experts and voices from marginalized and politically targeted communities.
  • Better User Tools: I’ve written before on concerns I had with an earlier draft of KOSA’s parental tools requirements. I think that section of the bill is in a much better place now. Generally, I think it’s good to improve the resources parents have to work with their kids to build a positive online social environment. It would also be good to have tools for users to help them have a say in what content they are served and how the service interacts with them (i.e. turning off nudges). That might come from a law establishing a baseline for user tools. It might also come from an agency hosting discussions on and fostering the development of best practices for such tools. I will again caution though that not all parents have their kids’ best interests at heart, and kids are entitled to privacy and First Amendment rights. Any work on this should keep that in mind, and some minors may need tools to protect themselves from their parents.
  • Interoperability: One of the biggest problems for users who want to abandon a social media platform is how hard it is to rebuild their network elsewhere. X/Twitter is a good example of this, and I know many people that want to leave but have trouble rebuilding the same engagement elsewhere. Bluesky and Mastodon are examples of newer services that offer some degree of interoperability and portability of your social graph. The advantages of that are obvious, creating more competition and user choice. This is again something the government could support by encouraging standards or requiring interoperability. However, as Bluesky and Mastodon have shown, there has been a problem with interoperable platforms and content moderation because it’s a large cost not directly related to profit. This remains a problem to be solved. Ideally a strong market for effective third party content moderation should be created, but this is not something the government can be involved in because of the obvious First Amendment problems.

Ideas for Solving Problem #2

  • Information sharing: When I went to TrustCon this year the number one thing I heard was that T&S professionals need better information sharing – especially between platforms. This makes perfect sense: it lowers the cost of enforcement and improves the quality of enforcement. The kinds of information we are talking about are emerging threats and the most effective ways of dealing with them. For example, the coded language bad actors adopt to get around filters meant to catch sexual predation on platforms with minors. There are ways that the government can foster this information sharing at the agency level by, for example, hosting workshops, roundtables, and conferences geared towards T&S professionals on online safety. It would also be helpful for agencies to encourage “open source” information for T&S teams to make it easier for smaller companies.
  • Best Practices: Related to other solutions above, a government agency could engage the industry and foster the development of best practices (as long as they are content-agnostic), and a significant departure from those best practices could be challenged as a violation of Section 5 of the FTC Act. Those best practices should include some kind of minimum for T&S investment and capabilities. I think this could be done under existing authority (like the Fortnite case), although that authority will almost certainly be challenged at some point. It might be better for Congress to affirmatively task agencies with this duty and allocate appropriate funding for them to succeed.

Ideas for Solving Problem #3

  • Keeping the focus on product design: Problem #3 is never going away, but the best way to minimize its impacts AND lower the risk of efforts getting tossed on First Amendment grounds is to keep every public action on online safety firmly grounded in product design. That means every study, every proposed rulemaking, and every introduced bill needs to be first examined with a basic question: “does this directly or indirectly create requirements based on speech, or suggest the adoption of practices that will impact speech?” Having a good answer to this question is important, because the industry will challenge laws and regulations on First Amendment grounds, so any laws and regulations must be able to survive those challenges.
  • Don’t Undermine Section 230: Section 230 is what enables content moderation work at scale, and online safety is mostly a content moderation problem. Without Section 230 companies won’t be able to experiment with different approaches to content moderation to see what works. This is obviously a problem because we want them to adopt better approaches. I mention this here because some political leaders have been threatening Section 230 specifically as part of their attempts to work the refs and get social media companies to change their content moderation policies to suit their own political goals.

Matthew Lane is a Senior Director at InSight Public Affairs.

Posted on Techdirt - 2 June 2023 @ 10:47am

AI Will Never Fit Into A Licensing Regime

Yes I’m aware that Nvidia and Adobe have announced they will license training data. I don’t know what those agreements will look like, but I can’t imagine that they make any sense in terms of traditional licensing arrangements. Rather, I’m guessing they just brute forced things to build goodwill among artist communities and perhaps to distinguish themselves from other AI companies. I sincerely doubt these arrangements will help artists though, and I fear these licensing conversations will distract from better conversations on how to balance interests. To explain my thoughts on this, I first have to start from the beginning.

How AI Training Works

AI training is very complicated, but it can be explained pretty simply using the lie-to-children model. There’s an older video by CGP Grey (and a footnote video) that does a beautiful job, but I will try to summarize. AI training is the solution to the problem of trying to make a computer good at something complicated. Let’s say you want to teach a computer to play chess (or Go). The simplest, least efficient way to start is to give the computer explicit rules: if the human moves this pawn, move that pawn, and so on. Writing the program like this would take forever, and the computer would be bad at repeated games of chess because a human could quickly figure out what the computer is doing. So the programmer has to find a more efficient set of rules that also has better success at adapting to and beating human players. But there is a limit: the program keeps getting more complex, and great chess players will still find the difficulty lacking. A big part of the problem is that humans and computers just think differently, and it’s really hard to write a program to get a computer to act like humans do when it comes to things that humans are really good at.

Fortunately, there are several methods of getting computers to figure it out on their own. While these methods function differently, they generally involve giving the computer a goal, a lot of data, a way to test itself, and a tremendous amount of processing power. The computer keeps iterating until it passes the test, and the human keeps tweaking the goal, the data, and the test based on what kind of results they are getting. After a while this learning starts to resemble how humans learn, but done in a way that only computers can do. The result of this learning is often called a “black box,” because we don’t know exactly how the computer uses the model it creates to solve the problems we give it.
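
As a toy illustration of that loop (a goal, data, a self-test, and iteration), here is a tiny sketch that “trains” a one-number model on some made-up data. Real systems have billions of parameters, but the shape of the process is the same.

```python
# A toy version of the loop described above: a goal, some data, a way to
# test itself, and iteration until the test passes. The data points are
# made up; the "model" is a single number instead of billions of them.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (input, target) pairs

def loss(w: float) -> float:
    """Self-test: how far are the model's predictions (w * x) from the targets?"""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0              # start out knowing nothing
learning_rate = 0.01
while loss(w) > 0.05:                   # keep iterating until the test passes
    # Nudge the parameter in the direction that reduces the loss
    # (a crude numerical gradient step).
    grad = (loss(w + 1e-4) - loss(w - 1e-4)) / 2e-4
    w -= learning_rate * grad

print(w)  # ends up close to 2: the model has "learned" that y is roughly 2x
```

What survives is the tuned parameter, not the data it was tuned on, which is the point about ingestion made just below.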

AI image generation training, at its simplest, is giving the model an image and text pair and telling it that some of the words describe a style and some of the words describe objects within the picture. The goal set for the model is to understand and be able to reproduce the style or the objects (or both) based on the words. Give it a picture of The Starry Night and a description, and the model will start to learn the style concepts based on the words “Van Gogh” and “post-impressionism.” It will also start to understand the objects of city, moon, stars, and how they would look at night. It takes a lot of images to train towards a functional understanding of these concepts, and after each image is ingested into the model it’s basically trash. It isn’t stored in the model; only the concepts are. And those concepts should ideally not be tied to a single image (see overfitting).

This brute-force learning is not that different from how humans learn art. A human has to learn the practical techniques of how to make art, but they also should be looking at other artists’ work. When I was learning pottery, there was a point where my instructor said that now that I could produce a pot, it was time to figure out my style. That involved looking at lots of pictures of pottery. For computers, that’s the only step that really matters. Teaching a human to create art in this way would be like locking someone in a room with every Monet and a set of paints and not letting them out until they have produced impressionist art. Importantly, current AI simply reproduces learned styles. It would not create post-impressionism in protest if given the same task.

The war between impressionism and post-impressionism

How Licensing Works (Traditionally)

Licensing can be a pretty complex process in which copyright law, a lot of lawyers, mutual business interests, and the threat of lawsuit get together to produce a contract that everyone thinks should have been better for them. There is also usually art involved.

To continue oversimplifying things, let’s just say that every time a song is streamed someone, somewhere, moves a bead on an abacus. At the end of the month an amount is paid based on whatever the abacus says. An ad campaign uses a photo? Another abacus bead moves based on where it’s displayed, how many people see it, etc. A song gets used in a tire commercial? More abacuses. The important thing is that an artist’s work is getting reproduced, in whole or in part, in a quantifiable way. That quantity relates back to agreed-upon terms and the artist is paid based on those terms.
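
In code, the abacus metaphor is just metering multiplied by agreed terms. A quick sketch with invented rates and counts (no real contract looks exactly like this):

```python
# Traditional licensing in miniature: count each quantifiable use and pay
# out according to agreed terms. All rates and counts here are invented.
RATES = {"stream": 0.003, "ad_photo": 50.00, "tv_sync": 10_000.00}  # hypothetical terms
usage = {"stream": 1_250_000, "ad_photo": 2, "tv_sync": 1}          # this month's abacus beads

payout = sum(RATES[use] * count for use, count in usage.items())
print(f"${payout:,.2f}")  # $13,850.00
```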

How AI Licensing Doesn’t Work for Artists

Let’s ignore the fact that AI training is likely fair use. Let’s ignore that the audience for the works is a computer. Let’s ignore that the works are only used to teach the computer concepts. Let’s ignore the fact that those works are never (ideally) reproduced or displayed to users of the AI. Even ignoring all that, there is still the problem that the number of times each work is used is one (ideally). Stable Diffusion’s first release ingested 2.3 billion images. So in abacus terms each work moves one lonely bead to make up 1/2.3 billionth (and counting) of a share in a theoretical license fee.
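
The arithmetic of that dilution is worth spelling out. Assuming, purely for illustration, a $10 million training license pool spread evenly across the 2.3 billion images cited above:

```python
# Hypothetical dilution math: one ingestion per work, shared across the
# ~2.3 billion images in Stable Diffusion's first training run. The $10M
# license pool is an invented number used only to show the scale.
TOTAL_IMAGES = 2_300_000_000
hypothetical_pool = 10_000_000.00

per_work_share = hypothetical_pool / TOTAL_IMAGES
print(f"${per_work_share:.6f} per work")  # about $0.004348 per work, well under half a cent
```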

The next problem is what the share is in. The Copyright Office has so far said that AI-generated elements of a work can’t be copyrighted. Grimes recently tweeted that she would split 50/50 any royalties from songs using her voice. But royalties are based on licensing, which is based on copyright. Theoretically streaming companies and other distributors could pay royalties on a copyright-less song, but would they? And would listeners pay for a song in the public domain? Maybe. The more likely answer is that works produced by AI have no value on their own in any meaningful sense that artists could get a piece of.

OK, but what about giving artists a share in some right for their contribution to the model? I don’t think this works for a few reasons, but there are two main theories behind this proposal that we have to take in turn. The first theory is related to the open question of what claims an artist has for a song that sounds like them, but isn’t based on any song in their catalog. Answering that question the wrong way could matter for music generally. Creating a rule that gives an artist rights in songs that sound like them, but otherwise aren’t infringing, would mean Led Zeppelin would have a claim over Greta Van Fleet’s catalog. We currently don’t give rights for even heavy influence, or for impersonation. The same is true in other arts as well.

The second theory is based on the fact that AI can be used to infringe copyright in a traditional sense. While output is ideally original, sometimes overfitting occurs and the model will output works that are close enough to works it trained on to be considered infringing under current copyright law. There are many parallels here to the Betamax case, where the Supreme Court ruled that distributing recordable media and recording devices did not result in contributory infringement even if they were sometimes used for infringement. Congress reacted to that ruling with the Audio Home Recording Act, which created a generic royalty for each device or piece of recordable media sold. I don’t think that’s the answer here, because overfitting is a bug, and AI developers are (and should be) working towards fixing that bug.

Either way there isn’t even a good way to divvy up what money might be available. AI training requires a lot of data, so any particular artist’s share would be extremely diluted. One might argue that shares should be paid out based on importance to the model, but that is likely impossible to figure out in any meaningful way. AI functions as a black box, and it’s hard to quantify how much of each work is in any particular concept it invokes when responding to a prompt. That holds true even when artists’ names are used in prompts. Yes, the artist might have greater influence on the output (as intended), but all the other works were still necessary to teach the model what a cat is and what a bicycle is and how a cat might ride a bicycle through Dali’s The Persistence of Memory.

Regardless, none of this solves the problem artists are facing: that they are competing with a computer that is faster, cheaper, dumber, weirder, more chaotic, and less open to feedback. A residual of five cents a month is not going to fix that, even if the artist can collect. For example, Stable Diffusion has released their AI model as open source, and many people (myself included) run it on their own computers for free. There’s no way to put the genie back in the bottle.

Other Solutions

None of this dismisses the fact that there should be conversations on how to navigate this new space in a way that preserves art and artists. The simplest answer might be that artists are smart and talented people and they are already figuring it out. A new survey shows that 60% of musicians are using AI to create. Special effects artists Corridor Crew have figured out a workflow to basically do AI rotoscoping. Professional photographer Aaron Nace (owner of Phlearn) has a video teaching how to integrate AI into creative photography projects. Disney animator Aaron Blaise has a video encouraging artists to embrace new AI technology. Artists will adapt to using AI as a tool and will produce better work than people who don’t understand art and have no idea what they are doing. And their use of AI will likely be a small enough part of their works that they will still be able to rely on copyright if that is their model.

The next simplest answer is that denying copyright to low effort uses of AI is probably the best way to protect artists long term. One of the biggest threats artists face from AI is that powerful studios, labels, etc., will use AI to cut them out of the process. If that work can’t be copyrighted, because no human contributed anything of artistic merit, then the copyright industry won’t be able to turn AI into a cash cow by cutting their artist costs.

Finally, yes, maybe we should have a conversation about whether there is a line to draw when it comes to training models heavily on the style of living artists. But these kinds of threats are way more present among “hobbyist” communities (NSFW warning for like half the content here) that play with the technology for their own interests. Larger AI developers all seem to be training away from that, towards more general models that are easier to use and based on simple prompts.

Matthew Lane is a Senior Director at InSight Public Affairs. Originally posted to Substack. Republished here with permission.

Posted on Techdirt - 5 May 2023 @ 12:12pm

How Bluesky’s Invite Tree Could Become A Tool To Create Better Social Norms Online

At this moment, Bluesky has caught lightning in a bottle. It’s already an exciting platform that’s fun and allows vulnerable communities to exist. This sense of safety has allowed folks to cut loose, and people are calling it a “throwback to an earlier internet era.” I think that’s right, and in some respects that retro design is what is driving its success. In fact, one aspect of its design was used pretty extensively to protect some of the internet’s early underground communities.

As an Old, I realize I need to tell a story from the internet of yore to give context. Before streaming, there was rampant downloading of unlicensed music. This was roughly divided into two camps: those that just didn’t want to pay for music, and those that wanted to discover independent music. I’d argue the first camp were not true music fans since they just refused to pay artists. The other camp was more likely to have run out of discretionary income because of their love for artists. Music discovery was simply not something that could be done on the cheap before streaming because you only had radio, MTV (for a bit), and friends’ collections to hear new music. My strategy was to find a cool record shop and ask what they were listening to. I’d also vibe-check the album art and take a chance (something I still do). Even then it wasn’t enough, and I wasted a lot of money. Enter OiNK.

OiNK filled a unique niche in internet culture around music fandom. It would expressly discourage (and sometimes ban) popular artists. It also encouraged community discovery and discussion. At any given moment you could grab something from the top 10 list and know it was the coolest of the cool in independent music (even though you’ve probably never heard of the band). It was probably where hipsters started to get annoying. We were like Navi from Legend of Zelda to our friends: “Hey, Listen!” Trent Reznor called it “the world’s greatest record store.”

OiNK also had a problem. Even though many independent and up-and-coming artists liked – and even profited from – the discovery these sites and forums enabled, it was still something the industry as a whole was bringing the hammer down on. OiNK’s solution to this problem was to be invite only. Not only was it invite only, but if you invited someone who was trash you would be punished for it. Invites were earned by participating in the community in positive ways, and your standing was lowered if your invitee was not great. A leech, if you will. This somehow just worked.

The invite system was brutal, but it created a sense of community and community policing that made the place great. Importantly, these community standards existed with anonymity – something many try to argue is not possible. The person who gave me an invite had me promise I would read the rules and follow them, and they would check in on me. By being a bad user I wouldn’t just let myself down, I would let them down.

Bluesky, intentional or not, uses its invite system in a similar way. Currently invites are scarce and public. That’s created a huge incentive to only invite people that will make Bluesky more enjoyable. It also increases the stakes when someone crosses the line. When things go wrong, I’ve seen those that have invited the people responsible want to be part of the solution. I’ve also seen people who crossed the line want to rejoin and agree to norms for the sake of a positive Bluesky community. People seem to have a real stake in making Bluesky the good place. As someone who used to manage a small online community, I cannot express how cool that is if it continues at scale.

That isn’t to say this system is without flaws. There has always been a problem in every community about what to do with righteous anger. I’ll refer to this as the Nazi-punching problem. Punching Nazis might be a social good generally, but specifically it’s never that simple. There really is no way to sort the righteous from the cruel, especially at scale, and real people are rarely cartoonishly evil. But there is still an inclination in communities of a certain size to engage in what is perceived as justifiable conflict, which can escalate quite rapidly. That creates a moderation problem compounded by the sophistication of trolls in working the refs and compounded again by the consequences of any actions echoing up invite chains. When the repercussions of conflicts are felt by both sides, it’s often the marginalized communities that feel it greater. Edgelords targeting individuals while hiding behind decorum is something they try to do on every platform ever.

Fortunately, this problem might be solved by another feature of Bluesky. While the invite system encourages people to build communities with a stake in the project, the AT Protocol allows users to build the moderation tools they need to then protect their own communities. Unfortunately, these tools aren’t online yet and we don’t know how they will work. I think we will soon see things like ban lists that people could subscribe to, cutting out toxicity root and branch. That would be so much easier than #blocktheblue, which is very much a pain in practice. Beyond that there will probably be custom algorithms that are weighted towards certain communities and content that people can switch between.
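
To make the subscribable-ban-list idea concrete, here is a purely hypothetical sketch. It does not reflect any actual AT Protocol or Bluesky API (those tools did not exist when this was written); it only shows why subscribing to a shared list beats blocking accounts one at a time.

```python
# Hypothetical illustration of a subscribable block list; the class, the
# handles, and the feed format are invented, not real AT Protocol code.
from typing import Iterable

class BlockListSubscription:
    def __init__(self) -> None:
        self.blocked: set[str] = set()

    def subscribe(self, published_list: Iterable[str]) -> None:
        # One subscription pulls in an entire community-maintained list,
        # instead of every user repeating the same blocking work by hand.
        self.blocked.update(published_list)

    def filter_feed(self, posts: list[dict]) -> list[dict]:
        return [p for p in posts if p["author"] not in self.blocked]

subs = BlockListSubscription()
subs.subscribe(["troll.example.social", "spam.example.social"])  # a shared list someone publishes
feed = [
    {"author": "friend.example.social", "text": "hello"},
    {"author": "troll.example.social", "text": "bait"},
]
print(subs.filter_feed(feed))  # only the friend's post remains
```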

There is a part of me that is slightly uncomfortable at the power some of these tool providers will have. It will probably lead to fragmentation of Bluesky into more distinct communities that can, at their option, venture out into more troubled waters. But at the same time, there was something good about the days when communities were small enough that people could grow inside them. Maybe we shouldn’t be forced to interact with people that specifically want to annoy us. Maybe having a stake in the community you are in, at a size you can appreciate, is good actually. And having a choice in algorithms is infinitely better than being forced to read what people who pay $8 have to say.

Matthew Lane is a Senior Director at InSight Public Affairs.

Posted on Techdirt - 6 December 2022 @ 12:25pm

How KOSA’s ‘Parental Tools’ Mandate Will Almost Certainly Lead To Abuse

There is a serious problem in the way many tech-focused bills are drafted these days. Whether it’s a lack of trust or simply a desire to punish, those working on tech bills are not talking to the right industry people about how things actually work in practice. This leads to simple mistakes, like requiring something that seems like a good idea but runs counter to how systems are designed and how they function. When these mistakes are bad enough, they can result in serious security and safety problems.

I previously wrote a post explaining how issues in the drafting of the Kids Online Safety Act will likely result in harm to LGBTQ and other communities if State Attorneys General seek to exploit it for political purposes (something already being encouraged). In going back through the text, I found another significant problem that could impact the security and safety of users. There is a section that requires covered platforms to develop parental tools that would apply to minor accounts, and to enable those tools by default if the platform believes a user is a child. The problem is that there seems to have been little thought given to how this would actually be implemented.

This might make sense if we were talking about devices. The assumption is that a parent buys the device and assists in the setup. The device can offer the option to parents at that time and explain the process in the manual. If someone is being unfairly restricted by the device (i.e. not a minor) they most likely can buy their own device. It may also make sense for most paid services, like Netflix or Amazon Prime. There, the subscriber can set up a primary account with controls over sub-accounts. Again, an unfairly restricted person could hopefully get a separate primary account.

However, there are a lot of services where there is no initial contact point with the presumptive parent. Email, for example, is generally a free service that anyone can sign up for. People can then sign up for many other services for free using no more than an email address. Most social media platforms are free, as are most messaging applications and community forums. Most video game distribution platforms, like Epic Games, are free, and many video games, like Fortnite, run on a freemium model. This creates a lot of problems. How do these platforms identify parents and offer them parental tools? How will minors’ accounts be flagged? How will those reaching the age of majority be able to end parental control? How will platforms prevent abuse of these tools by bad actors? How will they retroactively activate parental tools on existing accounts?

Parental tools are by necessity something that enables one user to control another user. This control has to work against the wishes of the minor user when necessary. It also often comes with some degree of surveillance so that the parent knows when the minor may be in trouble. The parental tools mandated in KOSA fit this description. The tools have to allow the parent to control privacy and account settings, restrict purchases, and track time on the platform, and they must include other “control options that allow parents to address the harms described in section 3(b).” This last requirement likely has to include some surveillance, because the harms described in section 3(b) are things like bullying, harassment, and sexual exploitation that are most likely to occur in communications, including private communications.

Needless to say, access to these parental tools comes with a good deal of power over accounts that may be lifetime accounts that become very important to users. This puts covered platforms in a difficult situation, because they are required to do something that, if done incorrectly, puts people at risk. Here are some plausible scenarios that could cause problems:

  • A minor’s parents are divorced. One is abusive and has lost custody rights. The abusive parent requests or otherwise obtains parental control over the minor’s account.
  • A trans teen leaves an unsupportive home at the age of majority. The abusive parents submit a claim to the company stating the teen is still a minor and requesting control over their account.
  • An abusive ex of a custodial parent submits a claim to a platform that they are a custodial step-parent of a minor and requests control as a way to continue their abuse of either the parent or the minor.
  • An abusive ex (or any other kind of creep) fabricates information that a user is a minor and claims to be a custodial parent, asking for control over the account as a means to stalk the user.
  • A bad actor, such as a groomer or a hacker, improperly gains control of an account through the parental tools and uses that control to advance their nefarious activities.

Each of these scenarios creates a tough situation for companies. How do they comply with the law when the information needed to do so (i.e. custody rights) is difficult to obtain and sorting through these issues is time intensive and expensive? Platforms rarely require proof of identity and age, and ignoring the privacy concerns, requiring platforms to gather this information still doesn’t answer the question of custody rights.

One answer might be to just create the tools and put them in an account setting so that a knowledgeable parent could log on with their child and set everything up, but otherwise they would go unused. But the law seems to prevent that. First, there is a requirement that the parental tools be enabled by default if the platform reasonably believes a user is a child. Second, there is a no-dark-patterns section that would probably apply because the requirement is for “readily-accessible and easy-to-use” parental tools. The law appears to require either an affirmative offer or a way of gaining parental control over a minor’s account even if it is against the minor’s wishes. For example, maybe a parent wants to restrict a minor’s Fortnite time, but the minor refuses to allow the parent access to their account, either logging out when the parent is around or playing at a friend’s instead.

The problem all comes down to a simple question: how are platforms supposed to offer the parental tools? Everything flows from that. Let’s assume an account gets flagged as a child user, and the most restrictive required parental tools are enabled by default. What happens next? Is the account locked until a parent is present? How does the platform know that it’s a parent? How does the platform know it’s a parent with custody rights? Will the “parent” have to walk through a tool setup before the account works again? How will mistakenly flagged accounts be cleared? Will they have to submit a driver’s license? Pay stub? Mail? How will the platform know those documents aren’t forged? Which way does the platform need to err in order to be reasonable under the law? In favor of enabling control over users to prevent harm to children, or in favor of not allowing control to prevent harm by bad actors? And do the tools let the parent log in to the child’s full account, and therefore have access to everything, or is it a separate login that displays only the parental tools and thereby allows the child some degree of privacy?

Offering parental tools creates a cascade of tough questions that need to be thoroughly thought through so that such systems can be safely designed. Some covered platforms are structured in a way where this can be done relatively easily. For others it may be impossible. KOSA is completely blind to that, and seems to be drafted under the assumption that this is all quite doable by a wide range of internet connected platforms and services. KOSA is well-intentioned, but it’s simply not drafted with these security and safety concerns in mind. It shouldn’t pass.

Matthew Lane is a Senior Director at InSight Public Affairs.

Posted on Techdirt - 8 September 2022 @ 01:33pm

Congress’ Kids Online Safety Act Puts Kids At Risk Through Fuzzy Language

The Kids Online Safety Act (KOSA) was voted out of committee with a long list of amendments. Advocates had been warning about some severe unintended consequences that could arise out of this bill, the most concerning of which was forcing tech companies to out LGBTQ+ minors to their parents — potentially against their wishes. The amendments were supposed to fix these issues and more. But did they?

The short answer is there was an honest attempt but I believe it falls short, and I think it falls short for a specific reason.

The background of the bill

In order to understand why this bill has significant problems, we first have to cover some basics and separate the intended and unintended harms from the bill. 

Let’s start with what the bill wants to do, which is set a floor of protections for minors. It does that by creating a duty of care to act in the best interest of the minor. The bill then goes on to loosely define what that means and what categories of harm online platforms need to shield minors against, requiring the creation of certain tools parents can use to monitor their kids, etc. It also gives platforms plenty of homework, like creating an annual report identifying the risks they think minors will encounter on their platform and what they are doing to mitigate those harms.

So why did I say this bill has intended harms? Well, drafting a bill is hard: you have to describe what you mean when you say a company “shall act in the best interests of a minor” to “take reasonable measures” to “prevent and mitigate mental health disorders” or “addiction”. The more granular you get, the more confusing it gets, and the more broadly it’s stated, the harder it is to apply to specific facts.

Let’s say I’m playing a game with VOIP and someone calls me a slur. Was that because the game company failed to take reasonable measures? If I want to play a game during all my free time, is it because the game is really good or because it was intentionally made to provoke “compulsive usage”? What even are “reasonable measures”? Especially when many of the things the bill describes impact people differently.

KOSA’s intentional fuzzy language

KOSA’s authors are basically outsourcing to courts how to apply the bill’s fuzzy language to actual facts. Practically, this means that if the bill is passed all platforms will attempt to comply with what they think the text means. Then at least one of the platforms will almost certainly get sued for falling short. Those companies will then have to go through a lot of discovery and judges will just muddle through it. 

This will be a lengthy, painful, expensive, and time-consuming process. But I think it’s intentional. Many in Congress think that platforms are not doing enough to protect kids, even though they should have the resources to do so. They either don’t see, or don’t care about, the large amount of resources already going into trust and safety divisions to protect all users, including minors. They see a problem that needs to be immediately solved, and believe a strong regulatory response will give platforms enough of a kick in the pants to figure it out. This is the famous “nerd harder” complaint that often gets leveled at Silicon Valley.

If you look at KOSA through this lens, everything kind of makes sense. It doesn’t matter that it sets up a bunch of expensive new compliance efforts that may or may not be productive. It doesn’t matter that it may kill off some companies or force consolidation. It doesn’t even matter that some platforms will try to bar minors from their platform completely (of course we all know that kids will figure out how to get on the platforms anyways). It’s a big extrinsic shock that they hope will shake things up enough so that platforms will finally nerd hard enough.

After all, the enforcement of KOSA is limited to the FTC and state AGs. We can trust them to only bring cases that will advance the welfare of children right?

KOSA’s extremely bad unintended harms

In Normal Times™, this is how the debate on whether to pass KOSA would go: this bill is a mess and will be too painful (and expensive) to sort out — vs. — we really don’t care, the platforms can afford it, and we think it will do something to at least make the world slightly better.

But these aren’t normal times, and advocates have been warning that not only will this bill be painful to sort out, it provides an avenue of attack from ideologues using the legal system to go after marginalized communities. This is a real threat that no lawmaker (especially Democrats) should be complicit in, especially considering that the overturning of Roe has become a starter pistol for using the legal system towards culture wars and extreme ideological ends.

The main avenue of attack built into the original KOSA was aimed at the LGBTQ community, and the feedback given at the time was that it would out kids to parents who might not be tolerant, which could result in minors being thrown out of their homes or sent to conversion therapy. This is what advocates warned the drafters about, and what the new language sought to fix.

So was this fixed? Sort of. The drafters added a provision saying that the bill shouldn’t be interpreted to require disclosing to parents things like browsing behavior, search history, messages, or the content of communications. The tools that platforms are required to provide to parents now seem directed solely at high-level things like time spent, purchases, and so on. But there is a dangling requirement that platforms offer “control options that allow parents to address the harms described in” the big section listing the harms the bill wants to stop. What control option stops bullying? I’d like to know (maybe it will let me stop being T-bagged in multiplayer games).

Sorting that out may or may not sweep some sensitive data back in and expose kids. Sometimes kids keep secrets to protect themselves from their parents. That makes sense to me: growing up, I had a friend who was sent to one of the reform schools Paris Hilton warned us about. Still, I’m overall less concerned about forced outing than I was before the amendments.

I’m now more concerned that this bill invites a broad attack against platforms that allow a kid to see any pro-LGBTQ content. The culture wars’ Eye of Sauron has turned toward this community, especially trans people, with harassment and vile behavior carried out under the banner of protecting children.

Unfortunately, the language these people use to vilify the LGBTQ community is everywhere in the bill. Being trans has been called a mental health disorder, and this bill says platforms are required to protect minors from that. Seeing a drag queen at all has been described as sexual exploitation, grooming, and sexual abuse. Again, barred in KOSA. Gender-affirming care has been referred to as self-harm, which, again, platforms are required to protect against under KOSA.

The bill’s fuzzy language, which the drafters may have seen as an asset, is now a huge liability. And it’s not limited to anti-LGBTQ attacks. For example, a minor seeking information about how to obtain a safe abortion could also be characterized as seeking to self-harm.

The bill’s authors might think they are safe from their bill being used in these culture wars because enforcement is limited to the FTC and state attorneys general. While I worry less about the FTC (now), it’s easy to imagine certain state AGs getting before the right judge and successfully barring minors from accessing basic information they need to understand what they are going through and how to get help if they need it. Just look to Florida, where Governor DeSantis has filed a complaint against a restaurant and bar that allowed kids at a drag brunch and has said that parents who allow their children to see a drag performance could be targeted by child protective services.

This bill is throwing a hand grenade into the middle of a particularly dark moment for our legal system. I don’t think that’s wise, or politically smart, when the odds are quite high that someone will take this bill up on its offer.

Matthew Lane is a Senior Director at InSight Public Affairs.

Posted on Techdirt - 26 April 2022 @ 03:30pm

Yes, Of Course Drug Patents Drive Up Drug Prices; Why Is This Even Up For Debate?

The idea that there is a link between the exclusivity period on patents and higher drug prices is about as noncontroversial as a view can be. It is the easy question on an ECON 101 exam about monopolies and supply and demand. Yet, somehow, this has come under attack thanks to big PhRMA and its minions. Unfortunately, they have found a sympathetic advocate in the Senate who believes the unbelievable.

Sen. Thom Tillis has taken a host of actions trying to unlink the obvious connection between patents and high drug prices, and he is trying to force both the FDA and the USPTO to agree with him that the link is a “false narrative.” This assertion, of course, is ludicrous. Patents are the backbone of the pharmaceutical industry and the reason drug companies make significantly more than other large public companies. Patents give these companies a guaranteed monopoly period, in which monopoly profits are intended to reward them for the risk and investment involved in bringing new drugs to market. These monopoly periods eventually expire, as required by the Constitution, and the resulting influx of competition lowers drug prices by about 80% on average.

This social contract is well understood, and almost everyone thinks this is good for society. So I will give Sen. Tillis the benefit of the doubt and interpret his statement as suggesting that there is no link between gaining an extra, and unintended, monopoly period and high drug prices. But even here the body of evidence to the contrary is extensive. I collected a lot of this evidence in a recent tweet thread. However, it is important to understand a little more about the background of what has quickly become an industry practice.

The story of patent thicketing starts with AbbVie. AbbVie created the patent thicket in much the same way Apple created the smartphone – there may have been others before, but none have been as successful or as emulated since. AbbVie’s drug Humira was the best-selling drug in the world for almost 10 years, but the company faced a problem: internal estimates showed it would lose exclusivity as early as 2017. AbbVie needed a strategy to stall generic entry for as long as possible, and it hired the extremely controversial consulting firm McKinsey to help come up with a plan. While several strategies were presented, the patent strategy quickly became the most successful. As one biotech patent attorney put it: if you have a $16 billion-a-year drug, “every month is a good month that you’re on market alone. So you’re going to spend whatever it takes to be as aggressive as possible and get as many patents as possible.”

The strategy is simple even if it sounds like it should be impossible: find as many ways to patent an existing product as possible. This can include a staggered rollout of patent applications covering formulations, dosing regimens, routes of administration (for example, using the drug in an injector pen), dosing regimens for new indications (i.e., new diseases the drug can treat), and manufacturing processes. As one AbbVie internal document put it: “in the eye of biosimilar makers, how would they manufacture Humira?” AbbVie just needs to patent those manufacturing processes, even if it isn’t using them, and biosimilar makers won’t be able to make the drug.

All of these documents show that drug companies are trying to get new patents, with later expiration dates, on existing drugs for purely financial reasons. This alone proves Sen. Tillis wrong. But is there a widespread failure of the patent system that demands a response? Again, there is ample evidence that there is.

One study showed that 78% of new patents were associated with existing drugs, not new ones. The same study found that most of the companies that succeeded in doing this once would try again, with 50% becoming serial offenders. Another study found that most of the patents used to block generic and biosimilar entry offered minimal-to-no additional benefit to patients using the drug.

These delays have costs. One study found that Medicare spent an average of $109 million a year extra due to delayed generic entry. The main cause of the delayed competition? Patent litigation. Another study found that one year of improved patent examination of secondary and tertiary drug patents, to catch bad patents before they issue, saves $8.7 billion down the line. And when Humira faced biosimilar competition in Denmark in 2018, residents saw prices drop by 82.8%. In the US, we’ve faced regular price increases on Humira because it remains patent-protected.

These abuses of the patent system may carry additional costs to innovation and safety. A recent study of R&D competition around COVID-19 drugs found that whenever a firm finds it profitable to invest in developing a minor modification, R&D for radical follow-on innovation goes down. This suggests that the incentives created by the availability of patents on existing drugs may actually lower R&D investment in new drugs, as resources chase lower-risk, more immediate profits. Some researchers have even found startling signs of negative innovation, i.e., innovation that promotes riskier and less beneficial treatments. This happens when the better treatments are unpatentable. One study describes a company pursuing a treatment that overdoses patients because more appropriate doses were considered obvious in light of the prior art; overdosing, however, was patentable because it was considered non-obvious.

This evidence shows a widespread problem in need of a policy response. Indeed, those calling for reform now include the New York Times, the Department of Health and Human Services, former Trump cabinet member Alex Azar, and many researchers and public interest advocates. Whoever is advising Sen. Tillis on this issue needs to present him with the full evidence on drug prices and patents, especially since the Senator appears to be engaging in good faith with efforts to improve patent quality and stop various abuses of the patent system. But without good information, I fear bad policy may result.

Matthew Lane is a Senior Director at InSight Public Affairs where he specializes in competition and IP issues.

Posted on Techdirt - 21 July 2021 @ 01:42pm

The Key To Lowering Drug Prices Is Improving Patent Quality

This post is one of a series of posts we’re running this week in support of Patent Quality Week, exploring how better patent quality is key to stopping efforts that hinder innovation.

Patents are increasingly a hot topic in drug price policy conversations. So much so that one might wonder if this newfound attention is deserved. For example, a recent Senate Judiciary Subcommittee hearing examining anticompetitive conduct in prescription drug markets ended up focusing heavily on Pharma’s blatant abuse of U.S. patent laws. Indeed, it seemed at times that patent thicketing had eclipsed the many other anticompetitive “shenanigans” that Pharma uses to delay competition.

So why is there such a growing spotlight on patents?

First, it’s important to realize just how big the drug price problem is. Prescription drug spending remains a critical issue in the United States as millions of American patients and the U.S. healthcare system struggle to keep pace with the growing price tag for medical innovations, with limited financial reprieve from low-cost alternatives. In 2020, total US drug spending was estimated at $358.7 billion, and the Centers for Medicare & Medicaid Services (CMS) projects national healthcare spending to reach $6.2 trillion by 2028, with the bulk of the cost resting on the shoulders of the federal government and American households (mainly through taxes and insurance premiums).

One of the key drivers of these rising costs is the habit drug makers have of blocking competition on older drugs that have proven themselves to be blockbusters. And the best modern strategy for doing that is creating a patent thicket. As Committee Chairman Senator Dick Durbin (D-IL) pointed out, “[T]he top-12 best-selling drugs in America each have an average of 71 patents and 78 percent of all new patents are for drugs that are already on the market.”

The reason behind this is twofold. First, older tactics have had successful antitrust cases filed against them, while patent thicketing is somewhat protected by the Noerr-Pennington Doctrine, which holds that (with some limitations) people can petition their government even for anticompetitive reasons. That means it is up to the government to resist anticompetitive gaming of its own regulations. Second, the patent office is failing at just that. Dr. Rachel Moodie, vice president for Biosimilars Patents and Legal at Fresenius Kabi, a leading health care company, testified, “[W]e see the U.S. Patent system as being an outlier now compared to other systems around the world… the way that the patent system is working right now is that it’s easy to circumvent certain rules that allow you to repetitively claim a similar invention over and over again.”

What is the result of this patent thicketing?

Drug manufacturer AbbVie has filed over 240 patent applications for a single drug, Humira, and received over 130 granted patents. This patent thicket has allowed Humira to control the marketplace in the U.S., leading to Humira claiming the number 1 spot as the world’s bestseller since 2012, while other countries have had access to more affordable biosimilars. AbbVie itself has had to cut prices by 80% in some markets due to competition.

AbbVie isn’t alone. A study by I-MAK found the practice of patent thicketing pervasive among the top 12 best-selling drugs by revenue.

Just how big of a deal is patent thicketing?

The 2020 US revenues of just three drugs (Humira, Enbrel, and Revlimid) represent 8.2% of total drug spending in that year. All three of these drugs should be facing competition now or be close to the end of their monopoly terms: they were approved in 2002, 1999, and 2005, respectively. Patent terms only run 20 years, and drugs have historically averaged a little over 14 years of protection on the market because of the length of the approval process (a figure that includes the patent term restoration Congress passed to give some of that time back). Humira has a deal with biosimilar manufacturers that allows them to come to market in 2023, but Enbrel’s and Revlimid’s final patents don’t expire until 2029 and 2036. Add Imbruvica, a drug for which we could have seen competition this decade but won’t, and just those four drugs represent almost 10% of all US drug spending.
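To make those dates concrete, here is a rough back-of-the-envelope sketch in Python, assuming the historical average of roughly 14 years of market protection mentioned above. That average is an industry-wide figure, not a prediction for any particular drug, so the “extra years” it prints are purely illustrative.

    # Back-of-the-envelope comparison of expected vs. actual loss of exclusivity,
    # using only the figures cited in this post. Illustrative, not authoritative.

    HISTORICAL_AVG_YEARS_ON_MARKET = 14  # historical average protection after approval

    # drug name: (FDA approval year, year competition actually arrives per this post)
    drugs = {
        "Humira":   (2002, 2023),  # biosimilar entry allowed by settlement deals
        "Enbrel":   (1999, 2029),  # final patents expire
        "Revlimid": (2005, 2036),  # final patents expire
    }

    for name, (approved, actual) in drugs.items():
        expected = approved + HISTORICAL_AVG_YEARS_ON_MARKET
        print(f"{name}: expected around {expected}, actual {actual}, "
              f"roughly {actual - expected} extra years of monopoly")

Run as written, that puts Humira at roughly seven extra years of monopoly and Enbrel and Revlimid at sixteen and seventeen, which is the kind of gap a patent thicket is built to buy.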

Competition, on the other hand, works when it’s allowed to. A list by Fierce Pharma of the top 20 drugs by worldwide sales in 2020 indicates just how well competition works to lower the price of some of Big Pharma’s most sought-after drugs. As competition from biosimilars and generics hits the marketplace, sales of the industry’s top-performing drugs correspondingly drop. For example, as competition emerged against Johnson & Johnson’s ulcerative colitis drug, Stelara, the company had to cut its prices to remain competitive. The same report by Fierce Pharma anticipates the number two drug, Keytruda, soon taking over the number one spot as Humira’s patent protection expires in 2023, opening it up to competition from biosimilars.

What does this have to do with patent quality?

Drug patent thickets are largely made up of low-quality patents whose applications were filed only because of the benefit they provide in keeping competition away from top-selling drugs. This means that any effort to improve patent quality is also an effort to reduce drug prices. For example, the USPTO’s inter partes review (IPR) process has been instrumental in cancelling low-quality patents and allowing new drug competition. It is one of the best tools created by the America Invents Act for cutting through these dense patent thickets. IPRs were substantially weakened under the last administration, but a Congress that cares about drug pricing could restore and strengthen this tool to great effect.

Matthew Lane is the executive director of the Coalition Against Patent Abuse.

Posted on Techdirt - 5 November 2020 @ 12:07pm

People With Silly Patents Would Really Like It If It Was Harder To Cancel Them

A large group of patent holders sent a letter to Congress expressing concern that, with US Patent and Trademark Office (USPTO) Director Iancu likely to leave soon, recent policies making it harder to challenge bad patents might be reversed. The letter concerns a relatively new process, called inter partes review (IPR), that allows the USPTO to take a second look at the patents it issues based on a public request.

This is important because 43% of all issued patents challenged in court are ultimately found to be invalid, albeit at great expense due to the high costs of patent litigation. An IPR, by contrast, offers a far faster and less expensive way to challenge patents than using the courts: the average IPR costs around $350,000, compared to litigation costs just shy of $1 million when defending against infringement claims brought by an NPE (non-practicing entity). It is no surprise that many who profit off patents do not like a process that makes it easier to find out whether those patents are valid.

The letter states that “Director Iancu has clearly changed the dialogue surrounding patents, defined the patent system by the brilliance of inventors, the excitement of invention, and the incredible benefits they bring to our economy and society as a whole.” While a lot of this is true, celebrating the brilliance of inventors and the benefits of patents ignores the very real direct and indirect costs of the current patent system. Patents can issue for inventions that don’t actually work or exist. This was true of Theranos, a company built around patents for technology that didn’t work or exist. Patents can also be used to try to win big paydays on seemingly unrelated products. This happened again with Theranos, whose patents were later bought and used against a company making COVID-19 tests.

Then there are also the many, many silly patents that get issued and usually don’t matter because very few people want the thing the patent describes. It would be weird if these inventors got to dictate patent policy. But here we are, as a large number of the inventors who signed this letter hold very silly patents (feel free to find your own favorites):

  • US 7,814,680 – Overshoe Unit For Indoor Use (it’s a shoe that you put on your shoe)

  • US 5,178,576 – Apparatus And Method For Manipulating A Spring Toy (expired due to non-payment of maintenance fee)

  • US 9,009,870 – Garment Pocket For Rapid Extraction And Deployment Of A Concealed Weapon (does what it says it does)

  • APP 16/199,080 – Peeball (“a potty-training slide apparatus for boys that temporarily clips onto the toilet seat and provides an ornamental target (target has a hole through it) that boys aim and pee through and on the back side of the target is a permanently affixed ornamental slide that the urine travels down and into the toilet water.”)

  • US 9,278,737 – Remote Control Fishing Robot (when you just don’t feel like fishing yourself)

  • US 6,923,299 – Wallet For Retaining a Plurality of Credit Cards (for holding all the credit cards you used to pay for the other weird stuff on this list)

To quote Thomas Jefferson, “these monopolies produce more embarrasment than advantage to society.”

While those with silly patents have a low chance of doing any harm even if their patents are invalid, the changes these inventors advocate for can be life or death for others. These patent policies would make it harder to challenge weak drug patents that could be holding up generic competition. Cancelling drug patents and enabling competition can save patients 79%, on average, for small-molecule drugs. And while biosimilar competition is still nascent, it is projected to save patients 15%-45% or more over the next five years. For some, this is the difference between being able to afford a treatment and not being able to afford it.

It would be a travesty if those who filed these silly patents swayed policymakers to make it harder to cancel all bad patents. Many of the inventors on this letter haven’t even had their listed patents challenged in an IPR (of the 240 patents listed, only 18 have had any IPRs instituted), making their perspective even less relevant. This makes sense, as many of these patents describe products that are probably not economically viable due to low demand. Indeed, our casual search found several that were allowed to lapse without payment of maintenance fees, a sure sign that the inventions did not produce value.

The policies being championed by the letter are already having a large effect. Procedural denials, meaning denials based on something other than the actual merits of the petition, are spiraling upward. The inventor letter makes it seem that these denials are good because they happen when a court challenge of a patent is moving faster than the IPR challenge. The letter claims that since IPR was intended to be an alternative, not an addition, it makes sense to do away with these cases. But in practice these procedural denials are being applied nonsensically and for many other reasons.

For example, sometimes drug patent challenges are so complicated that petitioners have to file multiple petitions at the same time just to get around word-count limits. The USPTO’s Trial Practice Guide Update says this can be fine “when the patent owner has asserted a large number of claims in litigation.” However, the USPTO is using the “in litigation” language as an excuse to deny all but one of the petitions when there isn’t parallel litigation. So much for IPR being an alternative! These denials hit challenges to patents on the important diabetes medication Lantus, which costs $357, and on a Narcan injector that can save the lives of those overdosing on opioids, which costs $126 for two doses. Narcan is only expensive because of the injector patent; the active ingredient, naloxone, is available as a low-cost generic.

The USPTO has also thrown out a petition because of the trial date in a completely different company’s case. That one concerned Invega Sustenna, a drug used to treat schizophrenia that costs patients $1,853.

Another denial, concerning vaccine patents, came because the USPTO refused to allow a petitioner to step into the shoes of another company that settled, even though the petitioner could not have known that the other company would settle and withdraw its challenge.

Patents are legal instruments with real consequences. When patents represent true innovation, those consequences are usually positive: patents often incentivize innovation, especially when inventions are difficult to discover but easy to copy. When patents do not represent true innovation, when they should never have been granted, they are a drag: they can be used to hold up competition or harass other innovators. Common sense dictates there should be a quick and inexpensive system for sorting the bad patents from the good. However, any such system is a threat to those who make money off patents that could be cancelled, and those voices should be taken with the huge grain of salt they deserve.
