Open access has been discussed many times here on Techdirt. There are several strands to its story. It’s about allowing the public to access research they have paid for through tax-funded grants, without needing to take out often expensive subscriptions to academic titles. It’s about saving educational institutions money that they are currently spending on over-priced academic journals, and which could be better spent elsewhere. It’s about helping to spread knowledge without the friction that traditional publishing introduces, ideally moving to licenses that allow academic research papers to be distributed freely and without restrictions.
But there’s another aspect that receives less attention, revealed here by a new paper that looks at how open access articles are used in a particular and important context – that of Wikipedia. There is a natural synergy between the two, which both aim to make access to knowledge easier. The paper seeks to quantify that:
we analyze a large dataset of citations from Wikipedia and model the role of open access in Wikipedia’s citation patterns. We find that open-access articles are extensively and increasingly more cited in Wikipedia. What is more, they show a 15% higher likelihood of being cited in Wikipedia when compared to closed-access articles, after controlling for confounding factors. This open-access citation effect is particularly strong for articles with low citation counts, including recently published ones. Our results show that open access plays a key role in the dissemination of scientific knowledge, including by providing Wikipedia editors timely access to novel results. These findings have important implications for researchers, policymakers, and practitioners in the field of information science and technology.
What this means in practice is that for the general public, open access articles are even more beneficial than those published in traditional titles, since they frequently turn up as Wikipedia sources that can be consulted directly. They are also advantageous for the researchers who write them, since their work is more likely to be cited on the widely-read and influential Wikipedia than if the papers were not open access. As the research notes, this effect is even more pronounced for “articles with low citation counts” – basically, academic work that may be important but is rather obscure. This new paper provides yet another compelling reason why researchers should be publishing their work as open access as a matter of course: out of pure self-interest.
You may recall that, back during the last net neutrality open comment period, the FCC’s comment system was overrun by millions of faked comments, including many from dead people. Not surprisingly, it was eventually determined that legacy broadband companies funded the fake comment submissions, which they felt they needed to do because actual activists had been effective in getting the public to speak out in favor of net neutrality.
But of course, now we live in the “age of generative AI,” and it’s worth wondering just how that’s going to impact all of this. Amusingly, there are already academic journals suggesting that the government should sort and maybe even respond to regulatory open comments using AI as a tool.
But what about commenters themselves using AI to generate the comments in question?
We just recently wrote about how the US Patent Office is seeking comments on a dangerously problematic plan to make it much harder to kill bad patents by reforming the IPR (inter partes review) process to allow the patent director to just flat out reject challenges for certain classes of inventors, including many patent trolls. In that post, we linked to an EFF page urging people to send in their own comments against this proposal. But, really, the EFF is just linking people directly to the page on the Federal Register where you can comment. While they suggest some language, on the whole they expect users to write their own comment.
It appears the patent trolls who want this change have decided to use AI.
A few folks forwarded me copies of an email they received from “US Inventor” which is, effectively, a lobbying trade group for patent trolls, telling their members to submit a comment, and pointing them to an app on Streamlit. I’ll note that the email is ridiculous:
As you may know, the 2011 America Invents Act (AIA) created the Patent Trial and Appeal Board (PTAB). The PTAB is a nonjudicial administrative tribunal within the USPTO. The sole purpose of this court is to invalidate the same patents the USPTO previously granted, thus, creating a dictatorial power for the USPTO Director over America’s most important property right.
The statistics tell the truth about the destruction the PTAB has caused for inventors, small business owners, and startups. A staggering 69% of trials resulted in all claims being unpatentable, with an additional 15% leading to some claims being unpatentable. In total, 84% of patents reviewed by the PTAB result in the cancellation of claims.
I mean, no: as the Supreme Court itself made clear, just as the USPTO can grant a patent, so too can it review that patent, and then revoke it, if it realizes it made a mistake. That’s not “dictatorial power.” And let’s not even get started on the idea that patents are “America’s most important property right.” I mean, people with brains read this. Don’t insult them.
As for the second paragraph, a normal person would read that and think, wow, it sure looks like the IPR process that enables the PTAB to review patents is pretty damn important since it appears to be catching a very large number of mistakenly granted patents, which would otherwise gum up the work of actual innovation by blocking innovators from bringing products to market.
Because, remember, if the patent is a good, valid, patent, then the PTAB will not cancel claims. The only reason the PTAB cancels claims is if they are bad claims that should not have been granted in the first place, and would create an innovation-destroying monopoly power to block products that should be on the market. So even in this email, the troll lobby is effectively admitting that their real problem with the IPR/PTAB process is that it’s getting rid of their bad patent claims, that never should have been granted.
Anyway, the email sends people to a form that it says will help you craft a comment. They don’t say it’s using AI, but it is.
You put in a link to a bio or a LinkedIn profile, some “information about you or your company” and then it generates a pro-patent troll argument for you. It gives you three attempts to do this.
The person who forwarded me the email also tried generating the letters, using the LinkedIn addresses of random people, and the “comments” it generated… were clearly just using AI to read someone’s LinkedIn bio to add some “pro-patent color” at the beginning, then some random AI-generated nonsense in the middle, before appending the identical text at the end of the comment.
Sometimes, the results are (in true generative AI fashion) total gibberish. One of the samples I saw definitely attacked the PTAB, but not over the IPR process. Instead, it claimed that the PTAB was rewriting claims to appear even broader than the inventor intended (which, um, is not happening).
Of course, the end of each of the “generated” comments is identical, and appears to be pre-filled in the comment generator app. No matter what information you put into the generator, at the end it pushes specific “policy alterations” that the submitter (who did not write the comment) claims should be considered.
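Based on the behavior described above — a personalized opening pulled from a bio, an AI-generated middle, and an identical pre-written ending — the generator is likely little more than a prompt template with a hard-coded closing block appended. Here's a minimal Python sketch of that pattern (the function names, prompt wording, and canned closing text are all hypothetical; the real app's internals aren't public):

```python
# Hypothetical sketch of the pattern described above: a prompt built
# from the user's scraped bio, plus an identical, pre-written closing
# pushing specific "policy alterations" appended to every comment.

FIXED_CLOSING = (
    "Therefore, I urge the Director to adopt the proposed changes to "
    "discretionary denial practice."  # canned text, same for everyone
)

def build_prompt(bio_text: str) -> str:
    """Wrap the scraped bio in a prompt asking for pro-patent 'color'."""
    return (
        "Write a public comment supporting the USPTO's proposed IPR rules. "
        f"Open by praising this commenter's background: {bio_text}"
    )

def assemble_comment(generated_middle: str) -> str:
    """Whatever the model produces, every comment ends the same way."""
    return f"{generated_middle.strip()}\n\n{FIXED_CLOSING}"
```

Two runs with completely different bios and model output would still share a byte-identical final paragraph, which is exactly the tell described in the forwarded samples.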
Now, there’s an argument that making it easier to generate stronger comments during public comment periods might be a good thing in general. Having read some terrible comments submitted by the public, I could see some general value in letting people scribble in their general thoughts (like they normally submit) and having a tool turn it into something more substantive and useful. But… it also likely means that open comment systems for the federal government are even more likely to be overrun with questionable comments, probably many of which were not even generated by humans.
And that seems like it could be problematic for the overall process of regulatory bodies seeking public comments in the first place.
Well, here’s some welcome news! It appears the EU Commission may have learned something from the less-than-wholehearted support it received following the introduction of its CSA (Child Sexual Abuse) bill.
The proposal hoped to curb the spread of CSAM (child sexual abuse material) by mandating (among other things) client-side scanning of user content. All well and good if the communications aren’t encrypted. But many of them are, thanks to companies offering end-to-end encryption by default to better secure users’ content and communications.
Sure, the bill had its defenders. One in particular (EU Commissioner for Home Affairs Ylva Johansson) has offered multiple incoherent defenses of the proposal that would, in effect, criminalize encryption (at worst) or make encryption completely useless as a security option (at best).
Most EU member nations were reluctant to embrace these extremes. There were, of course, a few exceptions. Spain, for example, thought the far-reaching, extremely broad proposal didn’t go far enough when it came to expanding the government’s powers and its surveillance options. On the other side, the EU Commission saw flat-out rejections from a couple of countries, both of which pointed out the CSA law would violate existing EU privacy laws.
A recent leak of EU members’ positions on the bill likely factored into this recent decision by the EU Commission to scrub the anti-encryption wording from the CSA proposal. Joseph Hall of the Internet Society posted the alterations to Twitter, noting that this was a “huge win for encryption, confidentiality, and integrity in the EU.”
The changes can be seen starting on page 5 of the updated CSA proposal [PDF]. Here’s where the EU Commission changes tack and decides it’s time to leave encryption alone:
This Regulation shall not lead to any general obligation to monitor the information which providers of hosting services transmit or store, nor to actively seek facts or circumstances indicating illegal activity.
This Regulation shall not prohibit, make impossible, weaken, circumvent or otherwise undermine cybersecurity measures, in particular encryption, including end-to-end encryption, implemented by the relevant information society services or by the users. This Regulation shall not create any obligation to decrypt data.
Breaking/backdooring/criminalizing encryption is off the table for the time being. This proposal still seems like it’s a long way from adoption, but with just a couple of paragraphs, it has suddenly become a whole lot more palatable.
The PCY (presidency of the council, a rotating office shared by all EU members) has also appended a footnote to the paragraph forbidding the weakening of encryption which, if adopted, would take anti-encryption proposals off the table for far longer.
PCY comment: the following recital could be included: “Cybersecurity measures, in particular encryption technologies, including end-to-end encryption, are critical tools to safeguard the security of information within the Union as well as trust, accountability and transparency in the online environment. Therefore, this Regulation should not adversely affect the use of such measures, notably encryption technologies. Any weakening or circumventing of encryption could potentially be abused by malicious third parties. In particular, any mitigation or detection measures should not prohibit, make impossible, weaken, circumvent or otherwise undermine cybersecurity measures irrespective of whether the data is processed at the device of the user before the encryption is applied or while the data is processed in transit or stored by the service provider.”
This recital adds facts that have been conveniently overlooked by those who support undermining encryption to combat CSAM. The recital would also expand this protection against government interference to cover more than just the end-to-end variety.
This is the direction this legislation needs to go. Fighting CSAM is a noble and important goal. But as noble and important as it is, it still doesn’t justify subjecting everyone in the EU to decreased security and worthless faux encryption options. Encryption protects far more than criminals. And I’m heartened to see the pushback against this draconian proposal is finally paying off.
There are some questions about whether or not Section 230 protects AI companies from being liable for the output of their generative AI tools. Matt Perault published a thought-provoking piece arguing that 230 probably does not protect generative AI companies. Jess Miers, writing here at Techdirt, argued the opposite point of view (which I found convincing). Somewhat surprisingly, Senator Ron Wyden and former Rep. Chris Cox, the authors of 230, have agreed with Perault’s argument.
The Wyden/Cox (Perault) argument is summed up in this quote from Cox:
“To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” he told me. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”
At a first pass, that may sound compelling. But, as Miers noted in her piece, the details get a lot trickier once you start looking at them. As she points out, it’s already well established that 230 protects algorithmic curation and promotion (this was sorta, partly, at issue in the Gonzalez case, though by the time the Supreme Court heard the case, it was mostly dropped, in part because the lawyers backing Gonzalez realized that their initial argument probably would make search engines illegal).
Further, Miers notes that courts have already found 230 protects algorithmically generated snippets that summarize content found elsewhere, even though those snippets are “created” by Google, based on (1) the search input “prompt” from the user, and (2) the giant database of content that Google has scanned.
And that’s where the issue really gets tricky, and where those insisting that generative AI companies are clearly outside the scope of 230 seem not to have thought all of this through: where is the line you can draw between these two things? At what point do we go from one tool, Google, that scrapes a bunch of content and creates a summary in response to input, to another tool, AI, that scrapes a bunch of content and creates “whatever” in response to input?
Well, the two Senators who hate the internet more than anyone else, the bipartisan “destroy the internet, and who cares what damage it does” buddies Senator Richard Blumenthal and insurrectionist-supporting Senator Josh Hawley, have teamed up to introduce a bill that explicitly says AI companies get no 230 protection. Leaving aside the question of why any Democrat would be willing to team up with Hawley on literally anything at this moment, this bill is… well… weird.
First, just the fact that they had to write this bill suggests (perhaps surprisingly?) that Hawley and Blumenthal agree with Miers more than they agree with Wyden, Cox, or Perault. If 230 didn’t apply to AI companies, why would they need to write this bill?
But, if you look at the text of the bill, you quickly realize that Hawley and Blumenthal (this part is not surprising) have no clue how to draft a bill that wouldn’t suck in a ton of other services, and strip them of 230 protections (perhaps that’s their real goal, as both have tried to destroy Section 230 going back many years).
The definition of “Generative Artificial Intelligence” is, well, a problem:
GENERATIVE ARTIFICIAL INTELLIGENCE.—The term ‘generative artificial intelligence’ means an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.’
First off, AI is quickly getting built into basically everything these days, so this definition is going to capture much of the internet within a few years. But go back to the search example discussed above, where the courts had said that 230 protected Google’s algorithmically generated summaries.
With this bill in place, that’s likely no longer true.
Or… as social media tools build in AI (which is absolutely coming) to help you craft better content, do all of those services then lose 230 protection? Just for helping users create better content?
And, of course, all of this confuses the point of Section 230, which, as we keep explaining, is just a procedural fast pass to get frivolous cases tossed out.
Just to make this point clear, let’s look at what happens should this bill become law. Say someone does a Google search on something, and finds that the automatically generated summary is written in a way that they feel is defamatory, even though it’s just a computerized attempt to summarize what others have written, in response to a prompt. The person sues Google, which is no longer protected by 230.
With Section 230, Google would be able to get the case kicked out with minimal hassle, as they’d file a relatively straightforward motion to dismiss pointing to 230 and get the case dismissed. Without that, they can still argue that the case is bad because, as an algorithm, Google could not have had the requisite knowledge to say anything defamatory. But, this is a more complicated (and more expensive) legal argument to make, and one that might not get tossed out on a motion to dismiss, but which would have to go through discovery, and to the more involved summary judgment stage, if not go all the way to trial.
In the end, it’s likely that Google still wins the case, because it had no knowledge at all as to whether the content was false, but now the process is expensive and wasteful. And, maybe it doesn’t matter for Google, which has buildings full of lawyers.
But it does matter for basically every AI startup out there, and any other company making use of AI to make its products better and more useful. If those products spew out some nonsense, even if no one believes it, does every such company have to fight a court battle over it?
Think back to the case we just recently spoke about regarding OpenAI being sued for defamation. Yes, ChatGPT appeared to make up some nonsense, but there remains no indication that anyone believed the nonsense. Only the one reporter saw it, and seemed to recognize it was fake. If he had then published the content, perhaps he would be liable for spreading something he knew was fake. But if it’s just ChatGPT writing it in response to that guy’s prompts, where is the harm?
In other words, even in the world of generative AI, there are still humans in the loop, and thus there can still be liability placed on the party responsible for (1) creating, via their prompts, and (2) spreading (if they publish it more widely) the violative content.
It still makes sense, then, for 230 to protect the AI tools.
Without that, what would AI developers do? How do you train an AI tool to never get anything wrong in producing content? And, even if you had some way to do that, wouldn’t that ruin many uses of AI? Lots of people use AI to deliberately generate fiction. I keep hearing about writers using it as a brainstorming tool. But if 230 doesn’t protect AI, then it would be way too risky for any AI tool to even offer to create “fiction.”
Yes, generative AI feels new and scary. But again, this all feels like an overreaction. The legal system today, including Section 230, seems pretty well equipped to handle specific scenarios that people seem most concerned about.
The DOJ must not have much confidence in its case against Backpage executives Michael Lacey and James Larkin. This prosecution is now more than a half-decade old and the government still hasn’t found a way to lock up the many Backpage employees and founders it arrested.
The Backpage site was seized in 2018. This followed years of selective prosecution all over the nation — none of which had resulted in criminal or civil charges sticking against the supposed haven for sex traffickers. Three years after that, the trial finally started, with the DOJ arguing (often without using these exact words) that Backpage not only knew sex trafficking was happening via the site, but aided and abetted this criminal act.
Ignored by the DOJ was the fact that law enforcement viewed Backpage as a valuable tool for tracking down sex traffickers. It also ignored the help Backpage had directly provided to investigators hoping to locate traffickers and their victims.
Before the trial, the judge concluded she would allow evidence showing that people were trafficked using the site, but would not allow prosecutors to linger on the details of the abuse suffered by victims.
“It seemed the government abused that leeway,” Brnovich said. The judge said one government witness testified about being raped more than once, which raises a “whole new emotional response from people.”
The DOJ can’t seem to keep itself from talking about stuff a judge told it to stop talking about. So, it’s pretty rich the DOJ thinks the judge should prevent Backpage and its legal team from talking about all sorts of things that seem very relevant to their defense. Elizabeth Nolan Brown has the details (and a ton of court documents) at Reason.
In a series of motions filed yesterday, the government seeks to prevent the Backpage defendants’ legal team from making basically any reasonable attempt to defend against the charges against them.
Most egregiously, prosecutors want to bar them from mentioning the First Amendment. But the First Amendment is at the center of this case, which revolves around user-generated ads posted to a digital classified-advertising platform. The very crux of the matter is online content and speech.
The DOJ is unwilling to participate in a fair fight. It has already shown it will ignore the court’s orders when it presents its side. And now it wants to prevent Backpage from discussing anything that might weaken the government’s case.
The First Amendment tops the list of subjects the DOJ wants to forbid from making a court appearance. But that’s only one of the DOJ’s motions. There’s a whole lot more the DOJ feels Backpage should just shut up about.
A flurry of filed motions suggests Backpage should do nothing more than sit quietly and allow the DOJ to talk its founders into prison. One motion says the defendants shouldn’t be allowed to make any statements about “the legality or illegality of any advertisement.” Another says Backpage should be forbidden from referencing Section 230 of the CDA whatsoever. The argument there is that this immunity does not apply to federal criminal prosecutions. But that conveniently ignores the value it has for Backpage, which could use it to illustrate that it’s being prosecuted over content contributed by third parties, despite the fact that it could not be sued over that very same content.
Yet another motion asks that Lacey, Larkin, and other Backpage employees be blocked from mentioning their actions were guided by their legal team, which had assured them their business model did not violate the law. This is the government asking the court to tie Backpage’s hands, allowing it to make unchallenged allegations about criminal intent.
And, ironically, the government wants the Backpage legal team blocked from mentioning the DOJ’s spectacular one-week flameout during its first prosecutorial attempt. The government expects the defendants to abide by a laundry list of court-imposed restrictions when it previously demonstrated it couldn’t be bothered to comply with a single request by the presiding judge.
The DOJ has been criticized for its trial-by-ambush tactics before. This isn’t an ambush, though. This is the government tacitly agreeing to a fair fight and then asking the impartial observer to strip its opponents of all of their weapons before the fight begins. This is complete bullshit. Hopefully, the judge will toss these just as quickly as the DOJ filed them. If the DOJ wants to use its considerable power to punish people for things other people did, the least it can do is allow those it’s trying to punish to fully defend themselves.
To be honest, I’m somewhat amazed that more copyright lawsuits haven’t been filed against Twitter yet. There have been multiple reports of how the company’s DMCA takedown response systems have been broken or ignored since Musk took over. Without even looking for it, I’ve seen full-length, high-definition movies show up in my Twitter feed (including movies still in theaters).
Still, it’s a bit surprising that the first such lawsuit is not from a Hollywood studio, but rather from a big giant list of music publishers. And I’m pretty sure that Twitter has a strong case, if Elon bothers to hire competent copyright attorneys.
The backstory here is that music publishers (who are different from the record labels, even if some are connected to labels) have been demanding that Twitter license content for years. And, for years, Twitter correctly pointed out that it abides by the DMCA, and takes down copyright-infringing works when it receives a proper takedown notice. This is exactly what the law allows it to do, and it’s not as if Twitter is where people go to listen to music (and what music does get posted is generally hosted elsewhere and posted in a promotional manner). So, really, the idea that Twitter had to get a license from the publishers was always a stretch.
Still, almost immediately after Elon announced his bid for Twitter, the music publishers started agitating for him to license compositions. But, this is Elon Musk we’re talking about. The man won’t even pay his rent, or his cloud computing bills. Did anyone actually think he would pay for publisher licenses he doesn’t even need? So, it was little surprise when there were reports earlier this year that the talks had “stalled.”
And now there’s a lawsuit. But it doesn’t seem like a particularly strong one:
This is a civil action seeking damages and injunctive relief for Twitter’s willful copyright infringement. Twitter fuels its business with countless infringing copies of musical compositions, violating Publishers’ and others’ exclusive rights under copyright law. While numerous Twitter competitors recognize the need for proper licenses and agreements for the use of musical compositions on their platforms, Twitter does not, and instead breeds massive copyright infringement that harms music creators.
I mean, first of all… what? I was an avid Twitter user from 2008 through 2022 and I honestly can’t recall ever encountering music in any significant way, and if I did, it was via links to licensed sources such as Spotify, Apple Music, YouTube or whatever.
The only reason to do such a license is if you’re actually hosting music (and even then the DMCA should protect you, but most sites choose to get a license mainly to get the industry to stop constantly screaming at them and so that they don’t have to constantly play DMCA takedown whac-a-mole).
And, some of this is just nonsense:
Twitter knows perfectly well that neither it nor users of the Twitter platform have secured licenses for the rampant use of music being made on its platform as complained of herein. Nonetheless, in connection with its highly interactive platform, Twitter consistently and knowingly hosts and streams infringing copies of musical compositions, including ones uploaded by or streamed to Tennessee residents and including specific infringing material that Twitter knows is infringing. Twitter also routinely continues to provide specific known repeat infringers with use of the Twitter platform, which they use for more infringement.
The standard here has to be specific, actual knowledge of infringing works, not general knowledge that some people on the platform sometimes post infringing works. And while the paragraph above alleges “specific infringing material that Twitter knows is infringing,” it’s not actually that simple. That’s the same sort of argument that Viacom made against YouTube, and failed with. In that case, Viacom also insisted that YouTube had to know these works were infringing, and the court said that’s not how it works. And it’s even more limited in this case, because the publishers say that Twitter “knows” that its “users” have not secured licenses, but they never suggest how they know this at all. It’s entirely possible that some of the users have, in fact, secured licenses. Or, as noted, that they’re just posting videos hosted elsewhere that are licensed. The publishers know this, so this is just misleading nonsense.
Twitter profits handsomely from its infringement of Publishers’ repertoires of musical compositions. The audio and audio-visual recordings embodying those compositions attract and retain users (both account holders and visitors) and drive engagement, thereby furthering Twitter’s lucrative advertising business and other revenue streams.
I doubt this very much. First, again, who goes to Twitter for the music? Second, (also, again) the vast majority of music is linked to on other sites, not hosted by Twitter. Yes, Twitter hosts some video, and yes, Elon expanded how much can be posted, but it’s still a stretch to argue that Twitter is “profiting” from music on its platform.
This is just typical National Music Publishers Association (NMPA) nonsense, in which they falsely insist that no one does anything for any reason except to seek out their music, and that they should be paid for every listen.
Still, there are some things in here that suggest that Musk, in ways that only an incompetent Musk would do, has made his own situation worse. The key bits:
Twitter has repeatedly failed to take the most basic step of expeditiously removing, or disabling access to, the infringing material identified by the infringement notices. Twitter has also continued to assist known repeat infringers with their infringement. Those repeat offenders do not face a realistic threat of Twitter terminating their accounts and thus the cycle of infringement continues across the Twitter platform.
If that’s actually what’s happening, then that would be problematic. The complaint does point to an example of “a known repeat infringer” which at least raises some questions:
The screenshot below illustrates Twitter’s monetization of infringing content. This infringing tweet is from a known repeat infringer who has been the subject of at least nine infringement notices to Twitter, identifying at least fourteen infringing tweets, which contained unauthorized copies of Publishers’ musical compositions. Directly below the infringing tweet is a paid “Promoted” tweet selected by Twitter. To the right of the infringing tweet is a paid “Promoted” account recommended by Twitter. Twitter’s account recommendations also include another known repeat infringer, Twitter Account A, identified in paragraph 166 below.
I’m at least a little confused by this. From what I see there, it’s not at all clear that the original tweet contains hosted audio. It’s possible, but normally when there’s a video player, the tweet shows a video player’s indicators. And, honestly, the fact that there are other promoted tweets or recommendations is mostly meaningless for the copyright issues at play.
As for the repeat infringer question, the DMCA requires that companies have a “reasonably implemented” repeat infringer policy, but does not specify exactly how it works, so just claiming that there are repeat infringers on the site, without more info, does not prove that Twitter would be liable for infringement (it could be, I’m just noting that the complaint is pretty weak on this point). The legal battles around this are always about whether or not a particular policy is reasonably implemented, and without more info it’s difficult to know if Twitter’s would be.
Later in the lawsuit there are lots of complaints about how long it takes Twitter to review DMCA takedowns, which might be indicative of a real problem… but might not be:
The precise extent of Twitter’s lengthy delays will be the subject of discovery and analysis, including through a review of Twitter’s records. In the meantime, by way of an example, the musical composition “What a Wonderful World,” written by Bob Thiele and George David Weiss and performed by Louis Armstrong, is a timeless classic, chosen by Rolling Stone in September 2021 as one of the top 200 songs of all time. Unauthorized audio and audio-visual recordings that embody “What a Wonderful World” are rampant on the Twitter platform, and Twitter has failed repeatedly to take them down in an expeditious manner. Across all the NMPA Notices sent to Twitter that identified the musical composition for “What a Wonderful World” by name, along with precise URLs for the tweets containing the infringing uses of that composition, Twitter failed to take down at least 240 infringing tweets incorporating “What a Wonderful World” within 14 days after the NMPA Notice was sent. Even more troubling, over 120 of those tweets were still available at least a month after the associated NMPA notice was sent to Twitter, and more than two dozen tweets were still available on Twitter over two months after NMPA sent a notice identifying them as infringing.
Seems like an odd choice to use, as an example, a song that is literally 56 years old, which at the time it was published had a maximum copyright term of 56 years? Yes, the song is still under copyright thanks to endless copyright term extensions, but… still. You’d think they’d pick another song.
Also, the lawsuit misrepresents Twitter’s marketing claims about Twitter and music, which tend to be about communities of fans, not posting actual music (again, that’s not really a Twitter thing).
Twitter has been outspoken about how important music is to Twitter and users of its platform. In its marketing, blogs, or tweets, Twitter stated:
a. “[M]usic is the largest community” on Twitter’s platform, where “people are more likely to follow a music-related account than any other type of account on Twitter.”
b. The Twitter platform is “the ultimate connection to the music world for fans and brands.”
c. “Every day, more than 30 million tweets are published about music around the world . . . [which is] more than 20,000 every minute.”
Twitter even has its own “@TwitterMusic” account on its platform dedicated to top music trends, which has a massive following of 11.5 million users.
I mean, literally none of that has anything to do with infringing content. It’s mostly about music fans and connecting with artists. Not listening to music on the platform. It’s just designed to sound bad, despite being wholly unrelated to the actual copyright question.
Now, there are some things that Elon has done that may cause him trouble in court. Recently departed trust & safety boss Ella Irwin (stupidly) announced that the company wouldn’t suspend users unless “it is clear the user knew the content was illegal.”
While that may seem commendable in some ways, it might conflict with the DMCA’s requirements regarding repeat infringer policies. At least, the NMPA sure claims it does:
Twitter has told users of its platform that “[w]e don’t suspend users for posting reported content unless it is clear that the user knew the content was illegal.” But Twitter’s practice is unreasonable and contrary to law. Infringement occurs as a matter of law. Direct infringement is a strict liability offense, without any requirement that the infringer know the content they post is illegal.
Except… that’s not entirely accurate on the NMPA’s part either. While the courts have definitely moved in that direction, some still do recognize the concept of innocent infringement (and, frankly, copyright law would be a lot more reasonable if the courts went back to understanding this).
There are other Elon decisions that the complaint calls out, but some are silly and have nothing to do with the copyright questions:
Instead of grounding decisions on sound policy development and reasonable implementation, Twitter has outsourced trust and safety decisions to Twitter polls, i.e., votes among users of the Twitter platform, through a feature on the platform used for polling.
But… there is another thing the lawsuit calls out which MANY copyright lawyers freaked out about last month, when a Twitter user appeared to complain that they were being unfairly hit with copyright claims and Elon told the user to try “turning on subscriptions.”
I saw multiple copyright lawyers freak out about this and try to warn Musk that this tweet would show up in copyright lawsuits. At the time, I looked into the issue and… while it looks bad, it’s not as bad as it seems. The “Figen” account does not appear to actually be infringing on copyrights. It actually is linking to the original uploads by the original users (those might be infringing, but most did appear to be from the original creator of the work). This is a confusing bit of how Twitter works: you can “repost” someone else’s video, but you’re really just linking to their upload.
Still, this incident shows up in the lawsuit (somewhat obliquely):
By way of another example, a user tweeted that Twitter should not suspend accounts for receiving multiple copyright notices but rather should only disable the copyrighted videos. That user asserted that the user does not earn money from the videos they share, or understand that they are copyrighted, and that copyright owners should ask Twitter users to remove the videos rather than submit notices to Twitter. Twitter replied publicly to this user, but without asking the user not to infringe, without referring the user to Twitter’s Copyright policy, and without telling the user that copyright infringement is unlawful regardless of whether the user makes money from it or realizes that a particular video is infringing. Instead, Twitter suggested that the user “consider turning on subscriptions”—a feature of Twitter Blue that garners revenue for Twitter, enables the user to receive payments from other users of the Twitter platform, and, because the infringing tweets are behind a paywall, makes it more difficult for copyright owners to find.
So, this one goes both ways. If you understand that Figen wasn’t actually infringing, then Elon’s statement isn’t so bad. But it’s not even clear that Elon realized this user wasn’t actually infringing. And if he did believe the account was infringing then… yeah… that’s bad. But, also, it’s not at all a surprise this showed up in a lawsuit.
And then there’s this:
I mean, this is another case where Elon is correct, but it plays badly if you’re in a lawsuit for ignoring DMCA takedowns, and of course the NMPA calls it out.
Twitter’s most senior executive has previously described the Digital Millennium Copyright Act (“DMCA”)—a statute that, among other things, provides for notice and takedown of infringing copyrighted material—as a “plague on humanity.”… This statement and others like it exert pressure on Twitter employees, including those in its trust and safety team, on issues relating to copyright and infringement.
So, anyway, this is not a particularly strong lawsuit, but it’s not a joke either. It’s got many aspects where Elon and his inability to shut the fuck up clearly made things worse. But it does seem like the kind of copyright lawsuit that Twitter could win if it had competent copyright litigators to handle it.
Which means the question is: can Elon actually hire a competent copyright litigator these days?
The auto industry has spent several years trying to dismantle efforts in Massachusetts to make auto repair more affordable and convenient. And they just got help from US auto safety regulators.
Techdirt has obtained a copy of a letter sent by the National Highway Traffic Safety Administration (NHTSA) to major auto manufacturers, effectively giving them the greenlight to ignore Massachusetts’ recently passed “right to repair” law, which required that all new telematics-equipped vehicles be accessible via a standardized, transparent platform that allows owners and third-party repair shops to access vehicle data via a mobile device.
The auto industry has spent several years falsely claiming that the law creates serious new privacy and security risks to consumers, even going so far as to run sleazy ads claiming the updated law somehow aided sexual predators. Major car makers also sued to kill the law, in a case that’s still ongoing.
Massachusetts Attorney General Andrea Campbell had indicated that the state would begin enforcing the law while the case wound its way through the courts, but in its letter, NHTSA parroted numerous false claims that the new law would create “unreasonable” security threats to U.S. car buyers, insisting federal authority pre-empts state lawmakers’ attempts to dismantle the auto industry’s repair monopoly:
“While NHTSA has stressed that it is important for consumers to continue to have the ability to choose where to have their vehicles serviced and repaired, consumers must be afforded choice in a manner that does not pose an unreasonable risk to motor vehicle safety.”
The problem is that every industry that’s attempting to monopolize repair to boost their own revenues claims that more accessible, affordable repair provisions create unique security and privacy threats.
Apple claims that more affordable, diverse repair options will turn states into “meccas for hackers.” Companies from Sony to John Deere similarly claim that the ability to affordably, conveniently repair everything from game consoles to tractors will result in a vast parade of terrible security vulnerabilities.
But the claim that right to repair reform somehow makes consumers less safe simply isn’t true; a 2021 FTC report on right to repair issues noted that manufacturers routinely over-emphasized or manufactured such concerns for lobbying impact.
“The record contains no empirical evidence to suggest that independent repair shops are more or less likely than authorized repair shops to compromise or misuse customer data,” the FTC found.
Unsurprisingly, right to repair activists weren’t impressed by NHTSA’s last-minute attempt to protect the automotive industry under the pretense of consumer safety. Especially given the agency’s continued inaction when it comes to the increasingly fatal impact of misrepresented and undercooked self-driving technology by companies like Tesla.
“After doing basically nothing to stop manufacturers from beta testing with 2000 pound machines driving through school zones everyday, now NHTSA says it’s too risky if I can look at the data myself?” lamented PIRG right to repair campaign director Nathan Proctor.
Proctor argues that automakers have built and continue to protect a lucrative repair monopoly under the pretense that doing absolutely anything else poses a unique safety risk, an argument he says is nonsensical given the industry’s own repeated habit of implementing sloppily built over-the-air auto updates and subscription features that can just as easily cause consumer harm.
“Is it now the position of NHTSA that we need a benevolent monopoly on access to data transmitted from cars, and we should just trust them?” Proctor asked. “Why is our government trying to regulate a monopoly into force?”