Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at www.twitter.com/mmasnick

Posted on Techdirt - 13 January 2023 @ 02:30pm

Latest Twitter Problem: Its API Has Been Down And No Info Has Been Provided

Last night, I saw a bunch of folks complaining that the various apps through which they accessed Twitter were no longer working. People using Tweetbot, Twitterrific, Tweeten, and others all noted that they were blocked from actually using those services to read Twitter. It quickly became clear that Twitter’s API was completely down. There was plenty of speculation that (in a repeat of an earlier era when Twitter greatly limited its API access out of a fear of losing control of the service to third party developers) Elon Musk was doing this on purpose to try to stop third party app developers from offering ad-free access to Twitter. However, that still seems like pretty broad speculation.
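With no word from the company, developers were left to guess the cause from error responses alone. As a rough illustration (following general HTTP conventions, not any documented Twitter behavior), a third-party client might triage a failed API call by status code:

```python
# Hypothetical sketch of how a third-party client might distinguish
# "the API is down" from "our access was deliberately cut off."
# The status-code semantics here are generic HTTP conventions,
# not documented Twitter behavior.

def classify_api_failure(status_code: int) -> str:
    """Map an HTTP status code from a failed API call to a rough cause."""
    if status_code in (401, 403):
        # Auth rejected: credentials invalid, or app access revoked
        return "access revoked or suspended"
    if status_code == 429:
        # Too many requests: rate limiting, not a shutdown
        return "rate limited"
    if 500 <= status_code < 600:
        # Server-side failure: something broke on the API's end
        return "service outage"
    return "unknown failure"
```

The ambiguity is the point: without communication from the platform, a 403 (revocation) and a 503 (outage) tell very different stories, and developers had no way to know which one they were living through.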

Jack Dorsey has admitted in the past that the decision a decade ago to cut off API access was one of the biggest mistakes the company made, and in recent years the company had tried to rectify that with a better developer program and a more open API. There were some concerns last month when Elon Musk somewhat abruptly shut down the Twitter Toolbox program, which was useful to many third party app developers.

So while I wouldn’t go so far as to say that this is a deliberate move by Musk to cut off those services, the lack of communication is perhaps even worse. App developers say they’ve had no communications from the company and Musk (so far, as I write this) has said nothing publicly.

None of that is good if you’re trying to cultivate a strong community with the developers who make your service better and more usable. And, given how Twitter has burned developers in the past, it seems particularly worrisome. If it’s intentional by Musk, that’s obviously problematic. But if it’s not intentional, and it’s just that something broke… and no one bothered to communicate with the various organizations that use the API, well, that might even be worse?

Of course, I will note that the Mastodon API remains available for all to use (and just in the past few weeks some really, really cool new services have been developed for it that go above and beyond some of the ones I’ve mentioned in the past). Perhaps this kind of scenario will cause more of them to explore providing new tools, apps, and services for that platform as well.
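For contrast, Mastodon’s public timeline endpoint (`/api/v1/timelines/public`) is part of its documented REST API and requires no authentication on most instances. A minimal sketch of what reading it involves, with an illustrative instance name and a made-up payload shaped like the real response:

```python
import json

# Sketch of reading a Mastodon public timeline. The endpoint path is
# from Mastodon's documented API; the instance name and sample payload
# below are purely illustrative.

def public_timeline_url(instance: str, limit: int = 20) -> str:
    """Build the URL for an instance's unauthenticated public timeline."""
    return f"https://{instance}/api/v1/timelines/public?limit={limit}"

def summarize_statuses(payload: str) -> list[str]:
    """Extract '<account>: <url>' summaries from a timeline JSON payload."""
    return [f"{s['account']['acct']}: {s['url']}" for s in json.loads(payload)]

# A made-up response shaped like what the real endpoint returns:
sample = json.dumps([
    {"url": "https://mastodon.social/@alice/1", "account": {"acct": "alice"}},
])
```

No API keys, no developer approval process: anyone can build a client against an open instance, which is exactly the kind of openness third-party Twitter developers just lost access to overnight.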

Posted on Techdirt - 13 January 2023 @ 12:27pm

New Study: No, Of Course Russian Twitter Trolls Didn’t Impact The 2016 Election

Right after the 2016 election that saw Donald Trump elected President, there was a collective wail among many who were unable to comprehend how this could have happened and went searching for someone to blame. Two targets quickly emerged: social media and Russia. Often the two were combined into “Russian trolls on social media.” As we’ve noted, those Russian trolls certainly existed, and certainly were trying to influence the election, but it seemed dubious to us that they had any real effect. As we noted the day after the election, it was silly to claim that social media magically made people vote for Trump.

In the time since then, we’ve seen more and more evidence showing that the impact of social media was really not at all what many people seem to believe. We’ve talked about the studies that have, repeatedly, shown that cable news had way more of an impact than anything that came out of social media, not just for the election, but also for COVID disinfo.

Now there’s a very interesting new study, published in Nature Communications by a long list of researchers (Gregory Eady, Tom Paskhalis, Jan Zilinsky, Richard Bonneau, Jonathan Nagler, and Joshua Tucker), looking at whether or not Russian trolls on social media had any real impact on the 2016 election. The summary: no, they did not.

There is widespread concern that foreign actors are using social media to interfere in elections worldwide. Yet data have been unavailable to investigate links between exposure to foreign influence campaigns and political behavior. Using longitudinal survey data from US respondents linked to their Twitter feeds, we quantify the relationship between exposure to the Russian foreign influence campaign and attitudes and voting behavior in the 2016 US election. We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior. The results have implications for understanding the limits of election interference campaigns on social media.

Basically, yes, the trolls showed up and tried to sow discontent. But the people who interacted with that content were always going to vote for Trump anyway, and again, existing media was way, way, way more influential than the Russian trolls on social media.

The full report is all sorts of fascinating, and again shows how little impact the Russian trolls actually had. Especially compared to existing news media and US politicians.

Charts comparing exposure to Russian influence campaigns versus news media and politicians. News media dominated, followed by politicians; Russian foreign influence is just a blip.

The research does show that those who identified as “strongly Republican” were way more likely to encounter/interact with Russian propaganda, but that’s little surprise since that was a key (but not only) target of Russian propaganda. But, again, those individuals were never going to vote for Hillary Clinton in the first place. The study used various models to determine the impact on voting and found it basically negligible.

As estimates in the first panel indicate, the relationship between the number of posts from Russian foreign influence accounts that users are exposed to and voting for Donald Trump is near zero (and not statistically significant). This is the case whether the outcome is measured as vote choice in the election itself; the ranking of Clinton and Trump on equivalent survey questions across survey waves; and with the broader measure capturing whether voting behavior more generally favored Trump or Clinton through voting abstentions, changes in vote choice, or voting for a third party. The signs on the coefficients in each case are also negative, both for the count and binary measure, a result that would be inconsistent with a relationship of exposure being favorable to Trump. It is also worth noting that none of the other explanatory variables (with the exception of sex in some models) used as controls appear to be statistically significant predictors of the change in voting preferences.

As the researchers conclude:

Taking our analyses together, it would appear unlikely that the Russian foreign influence campaign on Twitter could have had much more than a relatively minor influence on individual-level attitudes and voting behavior for four related reasons. First, we find that exposure to posts from Russian foreign influence accounts was concentrated among a small group of users, with only 1% of users accounting for 70% of all exposures. Second, exposure to Russian foreign influence tweets was overshadowed by the amount of exposure to traditional news media and US political candidates. Third, respondents with the highest levels of exposure to posts from Russian foreign influence accounts were those arguably least likely to need influencing: those who identified themselves as highly partisan Republicans, who were already likely favorable to Donald Trump. Fourth, we did not detect any meaningful relationships between exposure to posts from Russian foreign influence accounts and changes in respondents’ attitudes on the issues, political polarization, or voting behavior. Each of these findings is not independently dispositive. Jointly, however, we find concordant evidence between exposure to Russian disinformation—which is both lower and more concentrated than one might expect to be impactful—and the absence of a relationship to changes in attitudes and voting behavior.

The researchers do note that there are some limitations to their research (focused just on tweets, and just on identified Russia influence campaigns), but it does seem noteworthy.

This is a really useful addition to the research out there, though it’s not going to stop the, ahem, disinformation that social media magically impacted the election from continuing to spread. Even if that’s disinformation about disinformation.

Posted on Techdirt - 13 January 2023 @ 09:25am

The Anti-Twitter Files: January 6th Committee Report Shows How Twitter Leaned Over Backwards To Protect Trump & Conservatives

For all the talk of the “Twitter Files,” as we’ve detailed, they’ve mostly been, at best, misleading, and frequently actively wrong. One of the big reveals, we were told, was that the Files were going to expose the political machinations of how Twitter banned former President Trump. And, indeed, Bari Weiss’s “Part Five” of the Twitter Files, back in mid-December, purported to reveal the big secret reckoning. But if you haven’t heard much about it since then, it’s because… they were a complete flop when it came to anything of interest. Basically, it was exactly what some of us said the day it happened: a difficult decision with a number of competing factors going into it. It was a call that could have gone either way, but given the gravity of what happened on January 6th, and the genuine concern that Trump would continue to whip his fans into an insurrectionist frenzy, it was one for which you can see a reasonable argument.

And while Musk (falsely) insisted that the big reveal was that Trump didn’t actually violate Twitter’s policies, that’s also a misreading of what happened. What we’ve learned is that Trump and other Republican leaders were actually given special treatment over the years, because they tended to violate policies way more often than Democrats. But, knowing that Republicans would flop to the ground and fake injury any time they were faced with even having to take the slightest bit of responsibility for violating policies, all the big social media platforms went above and beyond to better protect the high profile accounts of Republican rule breakers.

And while many people tried to paint the decision to finally ban Trump as some sort of “proof” that the company leadership was a bunch of left-leaning censors, the reality seemed to be quite different. Even Weiss’s big reveal was simply that there was strong and heated internal debate about what to do, with many employees (mostly not directly engaged in content moderation issues) calling for the company to ban him, while executives and trust & safety folks questioned whether or not that would be appropriate.

Right at the end of last year, though, as the House Select Committee investigating January 6th was wrapping up, some of the details of what it discovered about Twitter’s debate were leaked to Rolling Stone, presenting an even more detailed picture of how the company strongly resisted calls to ban Trump.

In the draft summary, written by the committee’s “purple” or social media team, staffers were more pointed about what they saw as the failures of big social media companies.  

“The sheer scale of Republican post-election rage paralyzed decisionmakers at Twitter and Facebook, who feared political reprisals if they took strong action,” the summary concluded.

The report shows that, again contrary to the public narrative pushed by Musk and friends, Twitter’s leadership wasn’t as deeply engaged in the various political happenings:

And even days after the insurrection, former Twitter employees told the committee that executives were still slow to recognize the risk Trump could pose in inciting future violence. After Trump tweeted that he would not attend Joe Biden’s inauguration, Safety Team employees testified that they saw “the exact same rhetoric and the exact same language that had led up to January 6th popping underneath” his tweets, leading to fears of another act of mass violence.

Some of the people who worked on that social media report separately wrote an article for Tech Policy Press, talking about some of what they saw, which didn’t make it into any public report. They note that their research debunked the widely held notion that the social media companies acted with their bottom line in mind in refusing to limit disinformation, and again found that fear of angering Republicans was a key motivating factor:

At the outset of the investigation, we believed we might find evidence that large platforms like Facebook, Twitter, and YouTube resisted taking proactive steps to limit the spread of violent and misleading content during the election out of concern for their profit margins. These large platforms ultimately derive revenue from keeping users engaged with their respective services so that they can show those users more advertisements. Analysts have argued that this business model rewards and incentivizes divisive, negative, misleading, and sometimes hateful or violent content. It would make sense, then, that platforms had reason to pull punches out of concern for their bottom line.

While it is possible this is true more generally, our investigation found little direct evidence for this motivation in the context of the 2020 election. Advocates for bold action within these companies – such as Facebook’s “break glass” measures or Twitter’s policies for handling implicit incitement to violence – were more likely to meet resistance for political reasons than explicitly financial ones. 

As the report’s researchers found, Twitter was extremely resistant to putting in place policies that might make Republicans mad:

For example, after President Trump told the Proud Boys to “stand back and stand by” during the first presidential debate in 2020, implicit and explicit calls for violence spread across Twitter. Former members of Twitter’s Trust and Safety team told the Select Committee that a draft policy to address such coded language was blocked by then-Vice President for Trust & Safety Del Harvey because she believed some of the more implicit phrases, like “locked and loaded,” could refer to self-defense. The phrase was much discussed in internal policy debates, but it was not chosen out of thin air – it was frequently invoked following the shooting by Kyle Rittenhouse in Kenosha the previous summer. But the fact it appeared in only a small fraction of the hundreds of tweets used to inform the policy led staff to the conclusion that Harvey’s decision was meant to avoid a controversial crackdown on violent speech among right-wing users. Ironically, elements of this policy were later used to guide the removal of a crescendo of violent tweets during the January 6th attack when the Trust & Safety team was forced to act without leadership from their manager, whose directive to them was, according to one witness, to “stop the insurrection.”

The authors noted, explicitly, that people reading the Twitter Files to say that Twitter was controlled by a bunch of coastal liberals trying to silence conservatives have it quite backwards:

One clear conclusion from our investigation is that proponents of the recently released “Twitter Files,” who claim that platform suspensions of the former President are evidence of anti-conservative bias, have it completely backward. Platforms did not hold Trump to a higher standard by removing his account after January 6th. Rather, for years they wrote rules to avoid holding him and his supporters accountable; it took an attempted coup d’état for them to change course. Evidence and testimony provided by members of Twitter’s Trust & Safety team make clear that those arguing Trump was held to an unfair double standard are willfully neglecting or overlooking the significance of January 6th in the context of his ban from major platforms. In the words of one Twitter employee who came forward to the Committee, if Trump had been “any other user on Twitter, he would have been permanently suspended a very long time ago.” 

None of this should be a surprise to anyone who has been reading Techdirt throughout all of this. For years, we’ve pointed out that the whining from “conservatives” that social media was biased against them was nothing more than an attempt to “work the refs” and basically lean on the decision makers to make sure the opposite was true. It was designed to make sure that the trust & safety teams at these companies were so frightened about the potential for politicians and the media to make a big deal out of any decision that it effectively gave them free rein to ignore the rules and push the boundaries, and the companies (beyond just Twitter) were too scared of the potential reaction to react.

This is especially ironic, given all the nonsense we’re hearing now about how the FBI was supposedly “censoring” people via Twitter. The truth is that it was actually Republican politicians, media, and influencers who scared Twitter away from taking actions against rule violators who were deemed to be prominent conservatives.

Posted on Techdirt - 12 January 2023 @ 12:04pm

If You Want A Summary Of All The Ways In Which Elon Is A Hypocrite In How He’s Running Twitter, Watch This Video

If you’ve been reading Techdirt over the past few months, literally nothing in this latest Cody Johnston video will be surprising or new, but it does do a really nice job of laying it all out in a pretty clear way in just 52 minutes of humorous exposition:

It sounds like Part II will be looking at the Twitter Files, which we’ve also debunked multiple times here, so I look forward to that as well.

The key point that Cody makes in the video, which we keep trying to highlight here, is that the issue is not that Musk isn’t free to run Twitter however he wants. He is. He can. The issue is that in speedrunning the content moderation learning curve, not only is he going back on basically everything he said (hilariously, to loud applause from his biggest cultish fans), but he’s only doing so when the “bad stuff” seems to impact him personally.

While the company used to have a trust & safety staff that focused on making the site “safe” for as many people as possible, almost all of the decisions we’ve seen to date under Musk are simply about making the site a safe space, personally, for Elon Musk. That is, people who are advocating violence or doxxing people Musk doesn’t know? Those seem free to continue, and are encouraged to drum up as much engagement as possible. But if Musk himself feels personally inconvenienced then, magically, he must do something.

The hypocrisy in these decisions is one thing. The fact that Musk seems to view the moderation decisions solely through the lens of what makes him feel better, personally, is what’s really telling. For years, we’ve highlighted that most critics of trust & safety efforts basically think the “right way” to do trust & safety is what they think is best for themselves. Musk is in the rare position where he can actually let that play out.

The reality for most other sites, though, is that they’re forced to face actual trade-offs about how to make the site more broadly trustworthy and safe. Musk doesn’t seem to realize that’s part of what’s necessary to make a site long-term sustainable. So, Twitter becomes his personal playground, but not one that the rest of us should want to play in.

Posted on Techdirt - 12 January 2023 @ 09:24am

Biden WSJ Tech Op-ed: More Of The Same Confused Stuff He Said Last Time

President Biden has a new Congress, specifically with an already dysfunctional House of Representatives likely to explode at a moment’s notice. But he’s still pushing his own slightly confused tech agenda, which is a mix of accurately diagnosing some problems, misdiagnosing others, and being vastly confused about potential solutions for all of them. It’s unfortunate that it appears there is still no real growth among Biden or (apparently) his top tech advisors on actually understanding the various issues at work. It’s as if he’s been frozen in time, and looking solely at the issues as they were a decade ago, rather than what they are today.

The American tech industry is the most innovative in the world. I’m proud of what it has accomplished, and of the many talented, committed people who work in this industry every day. But like many Americans, I’m concerned about how some in the industry collect, share and exploit our most personal data, deepen extremism and polarization in our country, tilt our economy’s playing field, violate the civil rights of women and minorities, and even put our children at risk.

The issue here is that this framing is (again) a mix of partially correct, partially wrong, and partially misleading. There are reasonable concerns about the data collection aspect, but, as we’re seeing, there are, increasingly, ways to avoid giving those platforms your data. With social media companies like Meta and Twitter losing steam, we have opportunities to move away from the big giants collecting all your data.

Regarding “deepening extremism and polarization,” the data… just doesn’t support that. I know the mainstream media narrative keeps repeating it, but the actual studies don’t back it up. And we shouldn’t be making policy based on mainstream media narratives over actual facts and evidence. Of course, perhaps the reason the mainstream media keeps pushing this narrative is that the mainstream media has been shown by studies to be way more guilty of deepening extremism and polarization.

And I’m not at all clear what Biden means by “tilting our economy’s playing field.” The tech industry, for the most part, has enabled way more people to have access to way more information and opportunities than in the past. Perhaps he’s arguing that some big companies have too much power, and on that I’d mostly agree (though that’s not what he says here), but as we’ve seen basically all of the tech giants have been in free fall this past year, and it seems like the market is doing its job in humbling them and putting forward competitors.

The issues regarding “civil rights of women and minorities” and putting “our children at risk” again… I’m not sure directly what this is referring to, but the evidence about children and social media has not shown any definitive increase in risk or harms, contrary to the media narrative.

Biden’s solutions, then, are still confused, based on media narratives more than facts, and an outdated view of the internet.

First, we need serious federal protections for Americans’ privacy. That means clear limits on how companies can collect, use and share highly personal data—your internet history, your personal communications, your location, and your health, genetic and biometric data. It’s not enough for companies to disclose what data they’re collecting. Much of that data shouldn’t be collected in the first place. These protections should be even stronger for young people, who are especially vulnerable online. We should limit targeted advertising and ban it altogether for children.

I mean, yes, the US needs a federal privacy law, but the devil’s in the details, and Biden (and his team) don’t seem to care much about the details and the problems with the details. Just as an example, he talks about “limiting targeted advertising and banning it altogether for children.” And, on a first pass that sounds good. But how do you do that in practice?

The only way to “ban” targeted advertising for children is to know the age of everyone on your website. And that means you actually have to get way more intrusive in collecting data on your users in order to know who is, and who is not, a child. And… that goes against the idea of protecting privacy. If you can’t explain how you handle those trade-offs, it’s difficult to take your big policy proposals seriously, because it makes it sound like you and your team haven’t actually understood the issues.

Second, we need Big Tech companies to take responsibility for the content they spread and the algorithms they use. That’s why I’ve long said we must fundamentally reform Section 230 of the Communications Decency Act, which protects tech companies from legal responsibility for content posted on their sites. We also need far more transparency about the algorithms Big Tech is using to stop them from discriminating, keeping opportunities away from equally qualified women and minorities, or pushing content to children that threatens their mental health and safety.

Yeah, yeah. We’ve heard it all before. Again and again. And it was wrong then and it remains wrong today. Reforming Section 230 to make websites “legally responsible” for the content posted on their sites does not fix any of the problems he describes. Because, first off, most content remains fully protected under the 1st Amendment, including most abuse, harassment, and hate speech. Removing Section 230 won’t change that, and without any underlying tort, making platforms “responsible” won’t matter because there won’t be any content that violates the law.

Second, even if there is content that violates the law, under the 1st Amendment, you can’t just magically say that platforms are “responsible” or “liable” for it. The 1st Amendment has long required that a third party distributor of content have knowledge of that content before it can be held liable. And not just knowledge that the content exists, but knowledge that the content violates the law. And… that’s just not going to happen in most cases. You don’t get that knowledge until after a court has decided on the underlying question.

Even worse — and this is the important part that Section 230 haters keep ignoring — because of that knowledge standard, taking away Section 230’s protections actually encourages websites to take less responsibility, because the more responsibility they take, the more monitoring they do, the more likely they are to be found to have knowledge, and thus liable.

So Biden’s big plan here would actually encourage websites to turn a blind eye to bad stuff on their platform.

It is frustrating beyond belief that no one in the administration seems to understand this simple way in which the law works. They’ve had plenty of time to talk to actual experts, but they seem to have ignored them all, in favor of a small crew of activists who have never understood the law at all, and spread blatant falsehoods and misinformation about Section 230.

Biden’s solution would make the problems he describes worse, not better. It would tie the hands of websites that are actively trying to stop abuse and harassment on their platforms, and it would make it nearly impossible for smaller upstarts (you know, the ones successfully chipping away at the big companies Biden was just complaining about) to exist.

As for the “transparency about algorithms,” we’ve explained multiple times why that’s problematic as well. First, it’s a censorship bill in disguise, which raises 1st Amendment issues. Second, it’s demanding transparency into editorial decision making, which everyone recognizes would be a problem if the Biden administration demanded to know what stories Fox News or the NY Times decided were the most important. That’s not something the government has a right to discover.

Third, as with all of these proposals, it would limit the ability of new upstarts, the ones giving control back to end users, to exist. Right now we’re seeing amazing growth of the fediverse, but without Section 230 and with certain transparency demands, I don’t see how anyone in the US could feel comfortable running a Mastodon instance, for example.

Third, we need to bring more competition back to the tech sector. My administration has made strong progress in promoting competition throughout the economy, consistent with my July 2021 executive order. But there is more we can do. When tech platforms get big enough, many find ways to promote their own products while excluding or disadvantaging competitors—or charge competitors a fortune to sell on their platform. My vision for our economy is one in which everyone—small and midsized businesses, mom-and-pop shops, entrepreneurs—can compete on a level playing field with the biggest companies. To realize that vision, and to make sure American tech keeps leading the world in cutting-edge innovation, we need fairer rules of the road. The next generation of great American companies shouldn’t be smothered by the dominant incumbents before they have a chance to get off the ground.

More competition is a worthy goal. But, here’s the thing: as I detailed in my end of the year post a couple weeks ago, we’re already seeing that. And many of the policies Biden is pushing will make that much harder, not easier. It’s almost as if the only people talking to Biden on this are focused on their own little silos without realizing how the “privacy laws” and the “get rid of 230” push will actually do more to lock in big companies and kill off upstart competitors.

This isn’t how policy should be made.

Of course, the fact is, these are almost the identical policy statements Biden made upon taking office two years ago. And at that time, his party had effective control over both houses of Congress… and couldn’t get any of those tech policy initiatives through (he did get many other things through, but none of these policies). And, in that time, the tech world has also changed a lot, though it sure feels like no one in DC has noticed. Meta is collapsing, shedding users and revenue, and throwing away billions on a metaverse concept almost no one seems interested in. Google seems stymied by an inability to figure out where things go from here, and we’re seeing innovations jumping out from much smaller operations like OpenAI. And, of course, the growth of new alternatives, including those not controlled by the big companies, gives hope that we’re on the right path.

Maybe long wasteful antitrust battles distract those companies for a while, but those lawsuits are already ongoing (and so far haven’t been going all that well).

All of this just seems to be missing what’s actually happening in the tech space today and how to encourage it. It doesn’t look at the ways in which copyright and patent laws are holding competition back. Or how an outdated CFAA is allowing the big companies to stomp out smaller upstarts. These are real issues that could be dealt with. Instead, we get this same nonsense about 230 and antitrust.

And the fact is that with this current dysfunctional Congress, it seems even less likely that Biden’s confused wishlist is going to get anywhere.

Posted on Techdirt - 11 January 2023 @ 03:28pm

One More Year Until Steamboat Willie’s Mickey Mouse Enters The Public Domain: Will Mickey Really Be Free?

As you’re probably aware, now that it’s January, we’re running our annual public domain game jam, for games based on works from 1927. This is the 5th year we’ve done this, ever since the public domain (finally) returned to the US after decades with no works ever reaching the public domain, due to never-ending copyright term extension. Many people have noted that the terms seemed to extend just as Disney’s Mickey Mouse was about to enter the public domain. And while some scholars dispute the claim that Disney was the main lobbying force behind extensions, it’s uncanny how often the extensions seemed timed to Mickey’s unshackling.

A few years ago, though, it became clear that even Disney had given up on the idea of copyright term extension in the US (elsewhere, however…). After all, even one of the most extreme pro-copyright Registers of Copyrights had suggested that perhaps it was time to scale back copyright terms (though only in the slightest of ways). The battle over the Sonny Bono Copyright Term Extension Act, followed by the battle over SOPA, has (at least) taught the legacy copyright industries that they can’t just slip through never-ending extensions any more.

That didn’t stop a weird flood of articles last summer bemoaning the horror that would come from Disney losing the copyright on the Steamboat Willie version of Mickey Mouse, as it’s set to do on January 1st, 2024. Right before the New Year, the NY Times had a slightly more balanced article looking at what to expect on the freeing of Steamboat Willie Mickey in one year’s time.

For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain. “Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond?

As the article notes, this definitely isn’t a free-for-all for Mickey. The Steamboat Willie version is quite different from the Mickey most people know today. It is true that Disney won’t be able to stop people from showing or sharing the original animation, but the company itself put it up on YouTube well over a decade ago anyway, so it’s free for all to see.

But there are other parts of the article that clearly suggest that Disney is prepping itself to use trademark law to scare off would-be adapters. This has always been something of a concern, and the article suggests that Disney itself has been quietly getting things ready for this kind of legal attack. As we’ve explained dozens of times, trademark and copyright law are different. Trademark law is really about not confusing or tricking the consumer into believing a product was made by someone else. So, really, the issue is in not making content that anyone might think would have come from Disney, which might wipe out a fair bit of content, but still leave plenty of open space.

But, also, trademark is about commerce, and the trademark holder has to be making use of the trademark in commerce in order for it to remain valid. But, as the article notes, over the past fifteen years or so, Disney has been gradually ramping up its commerce related to the Steamboat Willie version of Mickey.

In 2007, Walt Disney Animation Studios redesigned its logo to incorporate the “Steamboat Willie” mouse. It has appeared before every movie the unit has released since, including “Frozen” and “Encanto,” deepening the old character’s association with the company. (The logo is also protected by a trademark.) In addition, Disney sells “Steamboat Willie” merchandise, including socks, backpacks, mugs, stickers, shirts and collectibles.

My sense is that Disney will be cautiously litigious around Mickey. That is, I’m guessing that the aggressive IP enforcement team will be told not to go after just random uses of the Steamboat Willie version of Mickey, but anything borderline will bring down the lawyers screaming trademark infringement.

Of course, there’s another side to this not covered in the NY Times piece, which is that it’s unlikely Disney’s copyright in the Steamboat Willie version of Mickey is even valid in the first place. Beyond the fact that Steamboat Willie was a parody of Buster Keaton’s Steamboat Bill Jr. (which came out just a few months earlier, and will also be going into the public domain next January), a bunch of researchers have found pretty strong evidence that Disney screwed up the copyright filings for the film anyway, meaning it likely technically went into the public domain decades ago. It’s just that no one wanted to fight Disney’s litigation team on it.

Posted on Techdirt - 11 January 2023 @ 09:24am

Elon Musk’s Commitment To Only Pretending To Be Committed To Free Speech Still Stands

Elon Musk insisted that a key reason he took over Twitter was in support of “free speech.” As we noted, it was pretty clear that he never really understood what free speech actually means. Musk likes to say that his focus as the owner of Twitter has been to allow all legal speech, but as we’ve shown, Musk himself has been shown to have a transparently thin skin, and an unwillingness to take any kind of criticism. So, it was hardly a surprise that, even as he brought back serial fabulists and literal Nazis to the platform, he ramped up efforts to remove his critics — especially those in the media.

While there were reports that he had let those media accounts back on the site, that’s not actually the case. Most of the accounts were told that they needed to delete the “violating” tweet. CNN’s Donie O’Sullivan, who does not believe his account actually violates any rules, pulled a page from Musk’s playbook and took a poll on Mastodon to see if he should actually delete the tweet. The results were… pretty conclusive:

Donie O'Sullivan Mastodon poll on whether he should go back to Twitter, which "would require" him to delete a tweet that Donie insists does not violate Twitter policy. 

Final vote: "Yes" 4%   "No" 96%

Then we have Steve Herman, from Voice of America, who was banned the same night as the purge of other journalists. He “appealed” his suspension, but notes on Mastodon that Twitter has rejected his appeal, saying his tweet that merely highlighted that the ElonJet account still existed on Facebook was against the company’s policy on posting “people’s private information without their express authorization or permission.” Herman’s tweet did exactly none of that. It told people where to go to find a different account that… also did not violate that policy. But, no matter, he’s still banned.

Steve Herman post to Mastodon saying: "#Twitter just informed me the appeal of my permanent suspension is denied and still claims that referencing #ElonJet social media accounts violated a certain person’s privacy."


Now, again, to be explicitly clear: Elon has every right to do this as the site owner. He can make whatever policies he wants, no matter how nonsensical they are. But at the very least, one would hope that people would begin to recognize that Musk making these arbitrary decisions based (apparently) solely on what makes him feel unsafe, is kind of a weak approximation of the previous regime seeking to moderate based on what would make the largest percentage of users feel safe.

And, you could argue that Twitter’s efforts to be as widely welcoming to users as possible did a lot more to support free speech than Elon’s efforts to kick journalists off because he had a shit fit and claimed they put his life in danger (a claim the facts simply do not support).

Posted on Techdirt - 10 January 2023 @ 10:50am

As Elon Fires More Trust & Safety Staff, Twitter’s Moderation Efforts Fall Apart

Despite having already fired a huge percentage of Twitter’s trust & safety team handling issues around content moderation, including the teams handling child sexual abuse material and election denialism, last week Elon apparently fired another chunk of the team. Just in time for organizers of the insurrection in Brazil to make use of social media to help them organize.

Researchers in Brazil said Twitter in particular was a place to watch because it is heavily used by a circle of right-wing influencers — Bolsonaro allies who continue to promote election fraud narratives. Several influencers have had their accounts banned in Brazil and now reside in the United States. Bolsonaro himself was on vacation in Florida on Sunday.

Still, as the article notes, the planning seemed to happen on many platforms, so it’s not as if Twitter was the only one. But perhaps more serious is the issue of child sexual abuse material. There has been this weird narrative making the rounds that Twitter, under the previous regime, did not take the issue seriously. And that since Elon took over, it has done much more to stop CSAM. Both parts of this narrative appear to be false.

Experts who used to work with Twitter specifically on this issue say that the teams working on it have been mostly fired, as Elon insists that automation will somehow work in their place (note: automation is important in finding repeat content that has been added to various databases, but… not good at all at catching new content). It does not sound like things are going well.
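For context on that repeat-content point: this kind of automation generally works by fingerprinting files and checking the fingerprints against databases of material that humans have already identified. Real systems use perceptual hashes (in the style of PhotoDNA) that survive re-encoding; the exact hash, data, and names below are purely illustrative, but the limitation is the same:

```python
import hashlib

# A (hypothetical) database of fingerprints for previously identified files.
known_hashes = {
    hashlib.sha256(b"previously-reported-file").hexdigest(),
}

def is_known_repeat(file_bytes: bytes) -> bool:
    """Flag content only if its fingerprint is already in the database."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

# A repeat of already-identified content is caught...
print(is_known_repeat(b"previously-reported-file"))   # True
# ...but brand-new content matches nothing in the database.
print(is_known_repeat(b"never-seen-before-file"))     # False
```

Because the lookup only ever answers “have we seen this exact thing before?”, newly created material passes through untouched until a person reviews it and adds it to the database — which is exactly the gap the specialist teams were filling.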

The ex-employee outlined to CNA how automated machine-learning models often struggle to catch up with the evolving modus operandi of perpetrators of child sexual abuse material.

Trading such content on Twitter involves treading a fine line between being obvious enough to prospective “buyers” yet subtle enough to avoid detection.

In practice, this means speaking in codewords that are ever-changing, to try and evade enforcement. 

And so abusers are able to stay ahead of Twitter’s efforts:

With fewer content moderators and domain specialists in Twitter to keep track of such changes, there’s a danger that abusers will take the opportunity to coordinate yet another new set of codewords that automated systems and a smaller team cannot quickly pick up, said the ex-employee.
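The codeword problem described above is easy to see with a toy version of a static keyword filter (the terms here are invented placeholders, not real codewords):

```python
# A blocklist frozen at the moment it was last updated (invented terms).
blocklist = {"oldcode1", "oldcode2"}

def is_flagged(text: str) -> bool:
    """Naive moderation check: does the text contain a known codeword?"""
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

print(is_flagged("selling oldcode1 collection"))   # True: known codeword
print(is_flagged("selling newcode9 collection"))   # False: newly coined term
```

Without specialists tracking the drift and updating the list, every newly coined term is invisible to the automated system until someone notices it — which is the ex-employee’s point.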

We’ve also heard some other disturbing claims from inside Twitter, including that Twitter has cut back, significantly, on the support that its trust & safety staff get, such as important and necessary counseling support for frontline workers who deal with these issues. This is, of course, always the awful tradeoff with these kinds of roles and jobs. You need some people in the process, but it’s a terrible job which can create real post-traumatic stress for those employees.

The same article notes that the automated takedowns are actually causing other problems, like suppressing victims speaking out about what happened to them:

“A victim drawing attention to their plight, having no easy way to do so and in a compromised situation or state of mind, might easily use problematic hashtags and keywords,” they said.

Failure to distinguish such uses of language could conversely end up silencing and re-victimising those suffering from child sexual abuse, they said.

Indeed, after that article came out, an NBC investigation showed that (again, contrary to the narrative), it does not appear that new Twitter is particularly effective in dealing with the issue of CSAM.

The accounts seen by NBC News promoting the sale of CSAM follow a known pattern. NBC News found tweets posted as far back as October promoting the trade of CSAM that are still live — seemingly not detected by Twitter — and hashtags that have become rallying points for users to provide information on how to connect on other internet platforms to trade, buy and sell the exploitative material. 

In the tweets seen by NBC News, users claiming to sell CSAM were able to avoid moderation with thinly veiled terms, hashtags and codes that can easily be deciphered. 

Some of the tweets are brazen and their intention was clearly identifiable (NBC News is not publishing details about those tweets and hashtags so as not to further amplify their reach).  While the common abbreviation “CP,” a ubiquitous shortening of “child porn” used widely online, is unsearchable on Twitter, one user who had posted 20 tweets promoting their materials used another searchable hashtag and wrote “Selling all CP collection,” in a tweet published on Dec. 28. The tweet remained up for a week until the account appeared to be suspended following NBC News’ outreach to Twitter. A search Friday found similar tweets still remaining on the platform. Others used keywords associated with children, replacing certain letters with punctuation marks like asterisks, instructing users to direct message their accounts. Some accounts even included prices in the account bios and tweets.

CSAM is a massive issue across any social media platform. There is no “solution” to it that will stop it from happening, but it’s an ever evolving challenge that many companies work on, using ever changing approaches to deal with the fact that the perpetrators are constantly adapting as well. Twitter used to be one of the leading companies in responding to this challenge, but now it appears that the opposite is true.

Posted on Techdirt - 9 January 2023 @ 10:45am

Kevin McCarthy’s First Order Of Business: Waste A Ton Of Time Misleading The Public Over The Bogus Twitter Files

It took a week of nonsense, in which we got to see just how dysfunctional this session of the House of Representatives will be, but late last week, Kevin McCarthy sold just enough of what was remaining of his soul to get the Speaker of the House gavel. And, apparently, part of the many favors he doled out to convince the nonsense peddlers who were demanding “concessions” was to create a panel to investigate the incredibly misleading nothingburgers of the Twitter Files.

The new panel, the Select Subcommittee on the Weaponization of the Federal Government, is partly a response to revelations from Elon Musk in the internal documents he branded the “Twitter Files.”

We’ve already discussed how much nothingness is in the Twitter Files so far released, and unless they’re somehow saving “the good stuff” for drop #69 to appease Musk’s sophomoric sensibilities, it seems unlikely there’s any actual meat there. Even in the rare case where the files have turned up something marginally interesting, I have no faith that this new panel will be willing (or able) to present it accurately or fairly. Instead, get ready for months of grandstanding hearings, misleading leaks and releases, and a bunch of other nonsense.

But, as I’ve been saying all along, these are the same Republicans who would be completely losing their shit (rightly so, by the way!) if Democrats set up a similar panel demanding that Fox News reveal its close contacts with the Trump White House, or the details of its editorial decision making process.

The 1st Amendment protects editorial decision making, whether it’s Fox News pushing bogus stories to help Republicans in the election or Twitter choosing to limit the spread of election misinformation (and, I should note, these are not equivalent, at all).

But, alas, in these stupid stupid times we live in, the party of petty snowflake grievances and no actual policy positions will grievance away.

Posted on Techdirt - 6 January 2023 @ 07:39pm

Copyright Has Kept De La Soul’s Classic 1st Album Off Streaming… Until Now

For years, we’ve written about the copyright nonsense around sampling in hip hop music, and how it was treated with very, very different rules than things like cover songs and paying homage to previous artists in other forms of music. As we’ve mentioned for over a decade, filmmaker Kembrew McLeod did a full (fascinating) exploration of this in the documentary “Copyright Criminals” which is worth watching if you can find it. The trailer is here:

The group De La Soul features prominently in the movie, as their first album, 3 Feet High and Rising, has long been the quintessential example of an album that had so many samples that it would be effectively impossible to get the official licenses necessary to release it today.

Because of this, that classic hip hop album has not been available on various streaming platforms in an era where (unfortunately or not) not being on streaming more or less means the album doesn’t exist. It’s obviously been frustrating for the band. In 2014, they made everything they had ever created free for people to download from their website… but just for 24 hours. In fact, back in 2015, De La Soul did a Kickstarter project to create new music for themselves to sample as a commentary on their inability to sample others (I could have sworn I wrote about the project back then, but search is failing me in finding it). Both of these moves came with statements from the group talking about how much they want their music out there and how much they want to support their fans, but copyright law and the record labels kept getting in the way.

All of this is silly, frankly. Much of the time, samples are unrecognizable as coming from the original. They should, easily, be covered by de minimis use or fair use. Yet, perhaps because of the nature of the music — and who frequently creates it — courts were much quicker to insist that every single sample needs to be licensed, no matter how short, no matter how transformed, no matter how unrecognizable, and even no matter how much a sample might actually help promote the original.

Still, for over a decade, people have talked about finally trying to make 3 Feet High and Rising available legally (of course, if you just ignore copyright laws, it’s always been possible to find it). Three years ago there were reports that it was finally going to come to streaming. Except that ran into problems as the plan from Tommy Boy records was apparently done without agreement with the group, and where most of the money would go to the label.

In 2021, Tommy Boy was sold to Reservoir for $100 million, and then it appears that Reservoir cut a deal with De La Soul to allow the group to acquire the rights to their own masters.

And, finally, that brings us around to the present, where De La Soul, owning its own masters, is going to put them on various streaming services (and reissue some of the albums as well).

It’s good that De La Soul controls their own masters and is able to get this music out there for more people to listen to legally, but it’s somewhat ridiculous that it’s taken all this time and had to go through so much nonsense. Of course, it’s still not entirely clear to me that the rights issues regarding the samples are all cleared. I recall a similar effort by the Beastie Boys to rerelease Paul’s Boutique that resulted in a lawsuit over some of the samples as well.

But hopefully, we can get past all that… and just let people enjoy the music.

More posts from Mike Masnick >>