No, Elon Isn’t Blocking Kamala From Getting Followers, And Congress Shouldn’t Investigate

from the calm-down-people dept

Gather ’round, children, and let me tell you a tale of rate limiting, misinterpreted screenshots, and how half the internet lost its mind over a pretty standard Twitter error. That error was then interpreted through an extremely partisan political prism, with previous arguments flipping political sides based on who was involved.

The desire to attack editorial discretion knows no political bounds. Partisan attacks on free speech seem to flip the second the players switch.

I think it’s become pretty clear over the past couple of years that I’m no fan of how Elon Musk runs ExTwitter. He makes terrible decision after terrible decision. Indeed, he seems to have a knack for doing the wrong thing pretty consistently.

But this week there’s been a hubbub of anger and nonsense that I think is totally unfair to Musk and ExTwitter. Musk did come out in support of Donald Trump a couple weeks back and has gone quite far in making sure that everyone on the platform is bombarded with pro-Trump messages. I already called out the hypocrisy of GOP lawmakers who attacked the former management of Twitter for “bias” as they did way, way less than that.

But, as you might have heard, on Sunday Joe Biden dropped out of the Presidential race and effectively handed his spot over to Kamala Harris. The “@BidenHQ” account on ExTwitter was renamed and rebranded “@HarrisHQ.” Not surprisingly, a bunch of users on the site clicked to follow the account.

At some point on Monday, some people received a “rate limiting” error message telling them they were “unable to follow more people at this time.”

Image

Lots of people quickly jumped to the conclusion that Musk was deliberately blocking people from following Harris. And, yes, I totally understand the instinct to believe that, but there’s little to suggest that’s actually what happened.

First off, rate limiting is a very frequently used tool in trust & safety efforts to try to stop certain types of bad behavior (often spamming). And it’s likely that ExTwitter has some sort of (probably shoddily done) rate limiting tool that kicks in if any particular account suddenly gets a flood of new followers.

Having an account — especially an older account that changes names — suddenly get a large flood of new followers is a pattern consistent with spam accounts (often a spammer will somehow take over an old account, change the name, and then flood it with bot followers). It’s likely that, to combat that, ExTwitter has systems that kick in after a certain point and rate limit the followers.

The message which blames the follower might just be shoddy programming on ExTwitter’s part. Or it might be because part of the “signal” found in this pattern is that when a ton of accounts follow an old account like this, it often means all those follower accounts are now being flagged as potential bots (again, spam accounts flood newly obtained accounts with bot followers).

In other words, these rate limiting messages are entirely consistent with normal trust & safety automated systems.

Of course, most users immediately assumed the worst. Many posted their screenshots and insisted it was Musk putting his thumb on the scales. The New Republic (which is usually better than this) rushed in with an article where at least the headline suggests Musk is doing this intentionally: “Trump-Lover Elon Musk Is Already Causing Kamala Harris Problems.”

Then, some site called The Daily Boulder (?!?) made it worse by misinterpreting a tweet by Musk as supposedly admitting to doing something. The Daily Boulder report is very misleading in multiple ways. First, it falsely states that users trying to follow Harris got a “something went wrong” error, when they actually got the rate limiting error shown above. The “something went wrong” error was from something else.

After the @BidenHQ account was changed to @HarrisHQ, if you tried to go directly to @BidenHQ, rather than redirect, Twitter just showed an error message saying “Something went wrong.” Elon screenshotted that and said “Sure did.”

Image

This is a joke. Musk is joking that “something went wrong” with Joe Biden and/or the Biden campaign. Not that something went wrong with anyone trying to follow the Harris campaign.

The Daily Boulder piece confused the two different error messages. It seemed to think (incorrectly) that the screenshot Musk posted was of the Harris campaign account when it was the Biden one (I get that this is a bit confusing because the Biden account became the Harris account, but they don’t “redirect” if you go straight to the old name).

Either way, tons of Harris supporters flipped out and insisted that Musk was up to no good and was interfering. And, as much as I think Musk would have no issue doing something like this, nothing here suggests anything was done deliberately (indeed, I’ve tried to follow/unfollow/refollow the HarrisHQ account multiple times since Monday with no problem).

Still, Democrat Jerry Nadler has already called for an investigation, making him no better than Jim Jordan. Tragically, the NBC article reporting on this fails to link to Nadler’s actual letter, leaving me to do their work for them. Here it is.

The letter is addressed to Jim Jordan, asking him to investigate this issue. That’s because Jordan is the chair of the House Judiciary Committee. Nadler is the top Democrat on the committee but is effectively powerless without Jordan’s approval. The most charitable version of this is that Nadler is trolling Jordan, given all of Jordan’s hearings insisting that bias in the other direction was obviously illegal, and his unwillingness to investigate now that the shoe is on the other foot.

Indeed, some of the letter directly calls out Jordan’s older statements when the accusations went in the other direction:

If true, such action would amount to egregious censorship based on political and viewpoint discrimination—issues that this Committee clearly has taken very seriously.

As you have aptly recognized in the past: “Big Tech’s role in shaping national and international public discourse today is well-known.” Against this import, you have criticized tech platforms for alleged political discrimination. As you wrote in letters to several “Big Tech” companies: “In some cases, Big Tech’s ‘heavy-handed censorship’ has been ‘use[d] to silence prominent voices’ and to ‘stifle views that disagree with the prevailing progressive consensus.’” In your view, platform censorship is particularly harmful to the American public because, “[b]y suppressing free speech and intentionally distorting public debate in the modern town square, ideas and policies were no longer fairly tested and debated on their merits.” Ironically, X’s CEO Elon Musk himself has expressed similar sentiment: “Given that Twitter serves as the de facto public town square, failing to adhere to free speech principles fundamentally undermines democracy.”

Given your long track record of fighting against political discrimination on the platform “town squares” of American discourse, I trust that you will join me in requesting additional information from X regarding this apparent censorship of a candidate for President of the United States. The Committee should immediately launch an investigation and request at a minimum the following information from X.

But still, even if you’re trolling, Congress shouldn’t be investigating any company for their editorial choices. The answer to this weaponization of the government should not be even more weaponization of the government.

Which brings us to the final point in all of this. Even if it were true that Musk were doing this deliberately (and, again, there is no evidence to support that), it would totally be within his and ExTwitter’s First Amendment rights to do so.

I understand this upsets some people, but if it upsets you, think back to how you felt when Twitter banned Donald Trump. If you’re mad about this, I’m guessing there’s a pretty high likelihood you supported that move, right? That was also protected by the First Amendment. Platforms have First Amendment rights over who they associate with and who they platform. Twitter could choose to remove President Trump. ExTwitter could choose to remove or block the Harris campaign.

That’s how freedom works.

And to answer one other point that I saw a few people raise, no, this also would not be an “in kind contribution” potentially violating election law. We already went through this a few years back when the GOP whined that Google was giving Democrats in-kind contributions by filtering more GOP fundraiser emails to spam (based on their own misreading of a study). Both the FEC and the courts pointed out that this was not an in-kind contribution and was not illegal. The court pointed out that such filtering is clearly protected under Section 230.

The same is true here.

It’s fine to point out that this is a dumb way to handle issues. Or that ExTwitter should have made sure that people could follow the newly dubbed HarrisHQ account. But I haven’t seen anything that looks out of the ordinary, and I think people’s willingness to leap to the worst possible explanation for anything Musk-related has gone too far here.

But even worse is Nadler’s call for an investigation. Even if it was just to mock Jordan’s other investigations, there’s no reason to justify such nonsense with more nonsense.

Companies: twitter, x


Comments on “No, Elon Isn’t Blocking Kamala From Getting Followers, And Congress Shouldn’t Investigate”

52 Comments
Anonymous Coward says:

Re:

There are numerous indications — at the Internet operational level — that whoever’s left running Twitter has no idea what they’re doing. To be clear, I’m not talking about things inside Twitter/things visible to or affecting users; I’m talking about things that are observable by people looking at traffic and/or BGP and/or DNS and/or SMTP and/or etc. I wouldn’t be at all surprised if they take themselves out via an ill-advised/untested change.

Anonymous Coward says:

Re:

There was a saying, when I was just starting Computer Science, long years ago.

If builders built buildings the way programmers built programs, the first woodpecker that came along would destroy civilization.

While programmers have developed more security tools, and added robustness, I need merely point to CrowdStrike as Exhibit A.

This comment has been flagged by the community.

Anonymous Coward says:

I think it’s become pretty clear over the past couple of years that I’m no fan of how Elon Musk runs ExTwitter.

The irony, of course, being that the x-Twitter user experience is better than ever, even after Elon reduced the bloated overhead and got rid of this site’s favorite gey commun!st political censor Yoel Roth!

bhull242 (profile) says:

Re:

  1. Yoel Roth wasn’t a censor.
  2. Roth was definitely not a communist, though he was gay (not that that is even remotely relevant to anything).
  3. No, the ExTwitter user experience is not “better than ever”. It may be so for you, personally, but that doesn’t mean it’s universally or generally objectively better (and, given your previous posts, I’m not sure that your point of view on this would be representative of the average user’s).
  4. Recklessly cutting corners is not “reduc[ing] the bloated overhead”. It’s just being greedy. The overhead was never bloated to begin with, anyways.
bhull242 (profile) says:

Re:

For the record, this isn’t even unique to ExTwitter under Elon.

I’ve been following VTubers on Twitter for several years now (it’s pretty much 95% of what I use the platform for nowadays), and every time a new one debuted for a big agency, it was extremely common for this to happen to their new accounts even back then. A bunch of people would follow them at once, and so Twitter would automatically put a hold of sorts on new followers, sometimes even removing existing follows as well, thinking this was an attempt to game the algorithm or something. After some time passed, the hold would be removed, and things would get back to normal. Same thing sometimes happened when they’d change their user name (though not their Twitter handle) to promote some event they’re involved in or something that wound up attracting a bunch of new followers.

Sure, it’s likely been exacerbated by the problems Elon’s handling has introduced, but it’s far from new to the platform. It just doesn’t normally affect accounts related to presidential races because you don’t normally have the candidate pull out of the race after they’ve picked a VP, so they don’t normally need to change the Twitter handle.

This comment has been deemed insightful by the community.
Anonymous Coward says:

There have been some spambot hijackings of humans’ accounts on a forum I’m an admin on, so I find “anti-spambot measure side-effect” to be a plausible explanation. It just comes with the territory of online security; inevitably, some innocent people get caught in the anti-spambot net. It would be wonderful if there were an effective security measure that only blocked spambots and left humans alone, but to my knowledge, no such measure exists yet.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Elon is incompetent at running his Nazi bar. We know this.

Old Twitter’s legacy systems still functioning is… a miracle. We are more surprised at this.

With that said, doesn’t Harris’ campaign have places other than the Nazi bar they have a presence in?

I mean, yes, Elon Musk, spoiled privileged apartheid rich fuck is carrying water for authoritarians, fascists and warmongers. It will look bad for him if he deliberately blocks Harris’ campaign, but like the insurrectionist Russian asset Trump, he’s not going to lose any rep over this.

But honestly, people should realize that this should be the time to start looking for the Harris campaign outside of Elon’s Nazi bar.

Arianity says:

But even worse is Nadler’s call for an investigation. Even if it was just to mock Jordan’s other investigations, there’s no reason to justify such nonsense with more nonsense.

Pointing out hypocritical nonsense by trolling seems… fine? It’s a great way to show whether someone is willing to walk the walk on the record.

I understand this upsets some people, but if it upsets you, think back to how you felt when Twitter banned Donald Trump. If you’re mad about this, I’m guessing there’s a pretty high likelihood you supported that move, right? That was also protected by the First Amendment. Platforms have First Amendment rights over who they associate with and who they platform. Twitter could choose to remove President Trump. ExTwitter could choose to remove or block the Harris campaign.

That’s how freedom works.

Freedom of speech means they’re legally allowed to do it. It doesn’t mean people have to like it, and being upset is not limited to illegal actions. Part of freedom is a person has the freedom to be an asshole, and people can use their own freedom to judge them as an asshole.

(And that’s not getting into whether people agree whether particular actions should be First Amendment/230 protected or not to begin with. Just because it currently is, does not obligate people to agree that it should be).

Partisan attacks on free speech seem to flip the second the players switch.

There’s obviously a lot of partisan motivation, but the details matter. I’m pretty partisan on the left, and I don’t think I’d be comfortable in a hypothetical world where Twitter limited Trump’s follows unless he broke the rules (like when he was banned). I’d also have been very uncomfortable if they had banned him before he had broken the rules. That’s not hypocritical.

Stephen T. Stone (profile) says:

Re:

that’s not getting into whether people agree whether particular actions should be First Amendment/230 protected or not to begin with. Just because it currently is, does not obligate people to agree that it should be

Hey, so, I have One Simple Question for you.

Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that a given service would otherwise refuse to host?

Arianity says:

Re: Re:

Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that a given service would otherwise refuse to host?

Yeah, sometimes. I’m not a free speech absolutist. And yes, I’m aware how unpopular that is here.

I do consider the risks to be very high and it’s not something I take lightly, but it’s not something I’d consider absolute. And generally speaking, I do think the bar of strict scrutiny (in theory, if not in practice) balances that well.

For instance, I would have zero qualms about enforcing the Civil Rights Act Title 2 on an interactive web service. There’s a reason I don’t like 303 Creative.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Did you actually ask Arianity the “One Simple Question” in good faith?

Yes. If someone believes an interactive web service should be forced by law to host speech that service would otherwise refuse to host, I want to know that⁠—and I want to know exactly what kind of speech that person believes the law should force that service to host.

Stephen T. Stone (profile) says:

Re: Re: Re:5

What are you going to even do with the answers and examples they give you if they respond back in good faith?

I ask for examples for two reasons:

  1. To have them on the record in re: the kind of speech they believe people should be forced to host against their will.
  2. To point out exactly why asking for the law to force people into hosting that speech is wrong by contrasting it with examples that are direct parallels to the given examples.
Anonymous Coward says:

Re: Re: Re:6

… because making laws forcing someone to carry something is excruciatingly difficult.

Barrier one: such a law cannot be based on the content of the message … because again, First Amendment prohibits compelled speech as much as forbidding speech.

Which is why eg Ex/Twitter is not a “common carrier” or a “government agent”. Neither of which could do such discrimination, even if doing so would result in a better quality (from most viewpoints) communication channel.

Arianity says:

Re: Re: Re:2

Please tell me exactly what kind of speech you believe the government should force Twitter, Facebook, Truth Social, Parler, 4chan, Neocities, the average Mastodon instance, and even the Techdirt comments sections to host. Be specific and provide examples.

I literally just gave you one in my previous post? Any speech where the user would be protected under Title 2 as a protected class.

Ironically, you forgot Tiktok. Another example I’d be ok with is, if hypothetically (and I want to emphasize that this is a hypothetical where we’d know this is happening for a fact; there’s no current evidence it is happening) we knew that ByteDance was censoring certain topics due to pressure from the CCP, I’d want them to disclose that. I’d be fine with them censoring it, but with some sort of mandatory labeling/auditing so it’s not completely unauditable. Something like Project Texas would be fine, although I’m not sure if you’d call that forced speech per se.

A third I guess would be ad-disclosure (for say sponsorships). There’s probably some mandatory disclosures for certain medical ads. Those are already existing exceptions to free speech.

Emergency services in case of something like a natural disaster? If Musk wanted to ban FEMA or something, I’d be ok forcing that on there (with monetary compensation for server costs etc).

Those are the only four examples I can think of offhand where it’d be an obvious slam dunk for me (note in the above example, when I say “Tiktok” or “Musk”, those would also apply to the other platforms. I’d be fine with forcing an average Mastodon instance to have FEMA if it made sense. Similar for CCP: if Russia forced Facebook to tweak their algorithm, I’d want that disclosed. I’d say the same for the U.S. government, but generally speaking they shouldn’t be mucking with anyone’s algorithm for the most part). You could probably convince me on other things, but most other cases I’m pretty skeptical of. Strict scrutiny is a high bar, for a reason.

You said “forced to host”, so I’m not sure it’d count, but I’m also fine with say, forcing them to remove election fraud or business fraud. Also existing law (and probably uncontroversial). But that’s removing rather than hosting.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Any speech where the user would be protected under Title 2 as a protected class.

That’s still vague. I want a specific example of the kind of speech you’d want the government to force the owner of an interactive web service to host under this logic.

Another example I’d be ok with is, if hypothetically … we knew that ByteDance was censoring certain topics due to pressure from the CCP, I’d want them to disclose that.

How would that even work without passing some sort of law that would mandate that disclosure? Hell, how would that avoid forcing other services to tell the government who reports certain kinds of speech, how often certain kinds of speech get reported, and how often the service takes action on those specific types of reports⁠—which would clearly be an attempt to coerce action or inaction on certain kinds of moderation and therefore a First Amendment violation?

A third I guess would be ad-disclosure (for say sponsorships). There’s probably some mandatory disclosures for certain medical ads. Those are already existing exceptions to free speech.

That’s fair. Twitch and YouTube already mandate disclosure for sponsored streams/paid promotions. (Incidentally: I gotta shout out SponsorBlock for making that shit skippable on YouTube. Not a paid promotion on my part, for the record.)

Emergency services in case of something like a natural disaster?

And you would force every interactive web service to carry this, even ones where carrying that speech would make little-to-no sense and/or require complex technical changes to existing infrastructure? I mean, do you really believe Neocities should be putting Tornado Emergency banners across user dashboards and the actual hosted sites themselves?

I’d be fine with forcing an average Mastodon instance to have FEMA if it made sense.

Yes or no: Do you truly believe a Mastodon instance that doesn’t want to host the speech of a government agency on its servers should be forced by law to host that speech anyway? And if the answer is “no”: Why, then, should that Mastodon instance be the exception to the rule?

You said “forced to host”, so I’m not sure it’d count, but I’m also fine with say, forcing them to remove election fraud or business fraud.

For the record: Most services do tend to remove speech like that because most services don’t want to have even a hint of legal liability for, y’know, actual criminal acts.

Arianity says:

Re: Re: Re:4

That’s still vague. I want a specific example of the kind of speech you’d want the government to force the owner of an interactive web service to host under this logic.

Specific how? Title 2 would be if someone were removed because they were a protected class. Under Title 2, those would be race, color, religion, or national origin.

One example of that would be if someone were banned from a service because of their race, which isn’t necessarily tied to the content of their speech. Another example would be if say, a black person was banned because they mentioned they were black in a post. For example, if Stormfront banned a Jewish person for posting a quote from the Torah.

There’s a huge amount of case law around Title 2 (including stuff like 303 Creative) that explains what it covers. If it’s discrimination around race, color, religion, or national origin, it should be covered. Someone can get banned from an expressive service for speech on the service, or for their non-speech characteristics.

Title 2 also covers exceptions quite well. For instance, a business can discriminate against pro-life people, even if their pro-life views are religiously inspired, as long as they also apply the same rules to pro-life secular people. Which solves the issue of, say, an LGBTQ+ person being forced to host a Christian transphobe.

How would that even work without passing some sort of law that would mandate that disclosure?

It wouldn’t; it would require passing a law (one that passes strict scrutiny, which with the current SCOTUS probably doesn’t exist). You asked what I would be comfortable with, not what would be feasible under existing law. Those are two different things, especially given SCOTUS. There’s a lot of things I would be comfortable with that would not be possible under existing law.

Out of the examples I gave, I gave a mix of existing things (which for the most part, are already implemented and usually not controversial because we take them for granted), as well as ones that don’t exist.

The ones that don’t currently exist would require a new law. In the case of Title 2, it would require an amendment or a new SCOTUS.

Basically any expansion that I listed would require new laws, because the ones that work under existing law (like fraud/ad disclosures) are already enforced.

Hell, how would that avoid forcing other services to tell the government who reports certain kinds of speech, how often certain kinds of speech get reported, and how often the service takes action on those specific types of reports⁠—

It wouldn’t.

which would clearly be an attempt to coerce action or inaction on certain kinds of moderation and therefore a First Amendment violation?

That would depend on whether it falls within strict scrutiny or not. In my opinion, it would. There’s a compelling government interest, it’s narrowly tailored, and I don’t see a less restrictive way to do it.

That’s fair. Twitch and YouTube already mandate disclosure for sponsored streams/paid promotions.

Which is great (although I think they have to, by law? FTC rules), but you could imagine sketchier sites not wanting to do so (and iirc, even Youtube had this problem like a decade or so ago? But I don’t recall exactly. There was a period where Youtubers weren’t labelling ads)

If I recall, Twitter actually somewhat recently had this exact scandal: link and two. Although I guess that could’ve been users not applying labels properly or incompetence, because they do supposedly have ad rules in the TOS.

And you would force every interactive web service to carry this, even ones where carrying that speech would make little-to-no sense and/or require complex technical changes to existing infrastructure?

Just because it can be imposed on every service does not mean it has to be. It just means that whether the service wants to host it or not is not a valid criterion for refusing it. You could (and should) still set other criteria like user count etc.

You can tailor a law to restrict it to services that make sense. You probably don’t need a FEMA warning in World of Warcraft. That said, it would not necessarily be complex, technically speaking, to allow FEMA to use the service as a normal user. As far as I’m aware, FEMA on Twitter uses the same features as any other service. It doesn’t necessarily have to be a bespoke service (indeed, it currently is not, on places like Twitter) that requires extra dev work. That said, I do want to point out I also said it should be compensated, as well. I have no issue with paying Twitter for whatever bandwidth or other costs FEMA incurs.

But yes, I would be fine forcing it on places like Twitter, even if it didn’t want to support it. And again, I would also be fine tying it to financial compensation if need be.

I should also point out, we already do have analogous laws for other forms of expression like TV/radio. Those are mandatory by law. See FEMA’s Emergency Alert System page (https://www.fema.gov/emergency-managers/practitioners/integrated-public-alert-warning-system/public/emergency-alert-system) for instance.

Yes or no: Do you truly believe a Mastodon instance that doesn’t want to host the speech of a government agency on its servers should be forced by law to host that speech anyway? And if the answer is “no”: Why, then, should that Mastodon instance be the exception to the rule?

If it’s something like FEMA that I think would satisfy strict scrutiny, yes. There’s nothing particularly special about Twitter/Musk vs some Mastodon instance. They’re both private associations.

For the record: Most services do tend to remove speech like that because most services don’t want to have even a hint of legal liability for, y’know, actual criminal acts.

I’m aware. And the reason that exists is again because of strict scrutiny (or going back historically, being inherited from common law). The First Amendment does not explicitly mention anything about criminality in regards to what speech is protected. It gives no exceptions as written.

This is true of every existing exception to the First Amendment. Stuff like Brandenburg didn’t come out of the text of the First Amendment, either.

Stephen T. Stone (profile) says:

Re: Re: Re:5

I’m supposed to be on a sabbatical from most of the Internet right now, but I saw that you posted this, and I’mma break that sabbatical long enough to reply because you deserve that much. If you want to reply, feel free, but I won’t be talking back after this comment⁠—not for a while, anyway.

One example of that would be if someone were banned from a service because of their race, which isn’t necessarily tied to the content of their speech. Another example would be if say, a black person was banned because they mentioned they were black in a post. For example, if Stormfront banned a Jewish person for posting a quote from the Torah.

Just so we’re clear, yes or no: Do you want the government to compel Stormfront into hosting pro-Jewish speech? Do you want the government to compel a Jewish-led pro-Israel forum into hosting Hamas propaganda? Do you want the government to compel a forum owned and operated by Black people into hosting White supremacist propaganda? Please keep in mind that all of those sites have a right to both free speech and association; if you’re thinking of saying “well of course they should”, you’ll need a hell of an argument to justify that position without looking like an asshole.

You asked what I would be comfortable with, not what would be feasible under existing law.

I didn’t ask you about “comfortability”. I asked you whether you think the law should compel the hosting of speech. Some two hundred years of First Amendment jurisprudence says your position of “yes” goes against the First Amendment, which is why I’m asking why you think we should ignore that in favor of compelled speech/association, no matter how limited/“nuanced” that compulsion may be.

There’s a compelling government interest

Other than “the reporting of illegal/unlawful content” or “violating the First Amendment to censor/chill certain kinds of legal speech from certain kinds of people”, what is the compelling government interest in knowing about content moderation efforts on any given website?

Just because it can be imposed on every service, does not mean it has to be.

Just because it doesn’t have to be imposed on every service doesn’t mean it won’t be.

It just means that whether the service wants to host it or not is not a valid criterion for refusing it.

Someone runs an anarchist-friendly Mastodon instance. They don’t want to host any form of government speech, as is their First Amendment right. Tell me the exact reason that said Masto instance should be compelled by law, under threat of legal penalties that could include jail time, to carry any form (or all forms) of government speech. “Because the government says so” is not a valid answer.

it would not necessarily be complex, technically speaking, to allow FEMA to use the service as a normal user

A right-leaning Mastodon instance whose owner despises FEMA doesn’t want to let FEMA have an account on that instance. Tell me the exact reason that said Masto instance should be compelled by law, under threat of legal penalties that could include jail time, to let FEMA join that instance. “Because the government says so” is not a valid answer. In this case, neither is “because FEMA is a good thing”.

I should also point out, we already do have analogous laws for other forms of expression like TV/radio.

TV and radio use public airwaves. The Internet doesn’t. Your local NBC station and Twitter aren’t even close to being the same thing; believing otherwise is a mistake I hope you’re not actively trying to make.

They’re both private associations.

And as private associations, they have every right to refuse association with government speech and actors, up to and including the refusal to host government speech. (In case you missed it: Twitter exercised that right with Donald Trump for the last two weeks of his presidency without legal penalty.) You have yet to spell out a compelling, logical, and legally valid reason for revoking those rights.

The First Amendment does not explicitly mention anything about criminality in regards to what speech is protected.

Over two hundred years of First Amendment jurisprudence has done that. Until and unless the Supreme Court decides to shit all over 1A precedent in a way that allows for compelled speech, or the government somehow passes an amendment to the Constitution that allows for compelled speech, you’re not going to get around standing legal precedent.

Stuff like Brandenburg didn’t come out of the text of the First Amendment, either.

Neither did the ruling that legalized burning the American flag. Your point isn’t as compelling as you think it is.

Anonymous Coward says:

Re: Re: Re:3

forcing them to remove election fraud or business fraud.

Step one, there, is ensuring that it actually IS election or business fraud. You can’t just rely on hearsay, either, as that is not proof. TBH, fraud is not fraud until a court says it is fraud. Which is entirely too late for the timetable you want.

Step two, of course, is the First Amendment. How do you get around the First Amendment as regards “content of the communication”? This is the sort of thing §230 was designed for: placing liability for speech upon the shoulders of the one creating the message. Thus, the person who posts fraud can be punished. The entity who simply passes that message along, not so much unless you can prove intent on their part as well.

And this for just one of your categories.

You can’t just say “make the bad things go away”. Making good law – the kind that doesn’t have truly evil side effects – is hard.

While I approve of the sentiment (of making bad things go away), it would require a lot more consideration, a lot more gaming out of scenarios, than can be done in a forum like this if you want good law to come out of it.

Arianity says:

Re: Re: Re:4

Step one, there, is ensuring that it actually IS election or business fraud. You can’t just rely on hearsay, either, as that is not proof. TBH, fraud is not fraud until a court says it is fraud. Which is entirely too late for the timetable you want.

Well, there definitely needs to be a judicial process via an injunction or something. I’m not sure what the current timetable is, but when I was thinking of the scenario, I was thinking about a platform that just categorically refused to remove it, even over a timeline long enough for a full court process. Not necessarily a fast one. Even if it doesn’t catch everything, a slow one is better than nothing.

A fast one would definitely be more complicated (and in many cases, probably not possible).

Step two, of course, is the First Amendment. How do you get around the First Amendment as regards “content of the communication”

Strict scrutiny is the usual bar for getting past an amendment/right (including the First: it’s the reason we can prosecute things like fraud, defamation, etc. in the first place. They do override First Amendment protections). The bar for that is generally: the government has to have a compelling interest, the law has to be narrowly tailored, and it has to be the least restrictive means possible. It is very tight.

This is the sort of thing §230 was designed for: placing liability for speech upon the shoulders of the one creating the message.

To be clear, I think liability is a (slightly?) different question than forced speech. I think there are cases where you’d want to say, take down a post of business fraud, but not necessarily hold the platform liable for it. They’re definitely related, but I think I’d give a different answer.

While I approve of the sentiment (of making bad things go away), it would require a lot more consideration, a lot more gaming out of scenarios, than can be done in a forum like this if you want good law to come out of it.

Oh, for sure. I’m not claiming we can make a law in a few minutes on a forum. I took their question to be more of a hypothetical/ethical sort of stance (well, really it seemed like more of a gotcha hoping I’d say something ridiculous/indefensible, but close enough). There are so many pitfalls in the specific implementation.

I’ve read enough of Stephen’s posts to know that we philosophically disagree heavily on (some parts of) First Amendment rights. He’s much closer to an absolutist than I am. That’s why I specifically picked the 303 Creative example from the get-go, because I know we disagree on it. We do agree on quite a bit, though (although I usually don’t end up posting when we do, so it may not seem like it). We’re pretty close on the baseline, but on corner cases like Title II there’s some divergence. I don’t want to speak for him, but I think it’s mostly on whether something like strict scrutiny should exist at all or not. I think I’m more willing to apply it; he’s much more narrow, if at all.

Anonymous Coward says:

Re: Re: Re:4

Step one, there, is ensuring that it actually is election or business fraud. You can’t just rely on hearsay, either, as that is not proof. TBH, fraud is not fraud until a court says it is fraud. Which is entirely too late for the timetable you want.

Because comparison is impossible under a short timetable. Riiiight.

Anonymous Coward says:

Even if it were true that Musk were doing this deliberately (and, again, there is no evidence to support that) […]

With Musk, even if he sincerely admitted it (as much as he can, given his not-so-subtle sense of humor), we couldn’t be sure it was true.
But yeah, he’s doing what he wants. And since he doesn’t care about Twitter’s credibility now, there isn’t much left for him to lose.

But then, how could any investigation be useful? To find out that Musk/Twitter is biased? Read its last 20 tweets and call it an investigation.

And to what end? To ban Twitter because it’s another national security threat? It may only be the tip of the iceberg.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Even if it was just to mock Jordan’s other investigations, there’s no reason to justify such nonsense with more nonsense.

Emphasis mine.

Mike, I disagree with your words here (but not with what I think you are trying to say). Responding to nonsense with nonsense is a perfectly legitimate way to communicate the problem with someone’s statement. However, this (nonsensical) call for an investigation is also an action that would be directly detrimental and unjust to one party, ExTwitter (and indirectly to a large portion of US citizens). In my humble opinion, THAT is totally unacceptable behavior.

David says:

Never attribute to malice

when it hasn’t had the time to overtake incompetence yet.

Of course, most users immediately assumed the worst. Many posted their screenshots and insisted it was Musk putting his thumb on the scales.

Yeah, those are premature complaints. You have to give Musk the time to put his thumb on the scales before assuming the worst. Right now it is merely a scale that has been completely broken, partly to give access to Musk’s thumb.

And I have no doubt that he will interfere and be disingenuous about it as much as he can.

But all in due time.

That One Guy (profile) says:

Live by the sword, die by the sword...

On the one hand, I can see a system that’s meant to prevent bot swarms from rushing a potentially hijacked account to boost its numbers dropping the ball here and triggering on an account that wasn’t hijacked and was seeing legitimate interest from actual people.

On the other hand I’m often inclined to apply a person’s own standards against them to see how they like it, and in that case the question I ask myself is ‘Would Elon or other MAGAts have been willing to give the benefit of the doubt to Twitter before his takeover if this same thing happened to a republican account?’

TKnarr (profile) says:

One thing, Mike: the message said that the reason for the failure was that the person trying to follow Harris wasn’t able to follow more people. That pretty clearly doesn’t point to a problem with the Harris account but with the account trying to follow her. That doesn’t fit with your description of an excessive number of other people trying to follow the Harris account. From the message my first thought would be “I haven’t followed anybody recently, why is it telling me I’m not allowed to follow more people when I try to follow her? And if I’m not allowed to follow any more people, why can I follow people other than Harris just fine?”.

I agree that it’s probably what you describe, but the message is so inappropriate to the situation that I’d normally say nobody could be that incompetent (though with Elon, that’s par for the course).

Stephen T. Stone (profile) says:

Re:

One thing, Mike: the message said that the reason for the failure was that the person trying to follow Harris wasn’t able to follow more people. That pretty clearly doesn’t point to a problem with the Harris account but with the account trying to follow her.

From the article:

Having an account — especially an older account that changes names — suddenly get a large flood of new followers is a pattern consistent with spam accounts (often a spammer will somehow take over an old account, change the name, and then flood it with bot followers). It’s likely that, to combat that, ExTwitter has systems that kick in after a certain point and rate limit the followers.

The message which blames the follower might just be shoddy programming on ExTwitter’s part. Or it might be because part of the “signal” found in this pattern is that when a ton of accounts follow an old account like this, it often means all those follower accounts are now being flagged as potential bots (again, spam accounts flood newly obtained accounts with bot followers).

In other words, these rate limiting messages are entirely consistent with normal trust & safety automated systems.
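For what it’s worth, the pattern described there is easy to sketch. Here’s a toy sliding-window limiter in Python (every name, threshold, and window size is invented for illustration; nobody outside ExTwitter knows how their actual system works):

```python
import time
from collections import defaultdict, deque

# Purely hypothetical numbers: limit how fast any one account can
# gain followers, since a sudden flood is a common spam signal.
WINDOW_SECONDS = 300      # look at the last five minutes
MAX_NEW_FOLLOWS = 1000    # more than this in the window looks like a bot flood

# target account -> timestamps of recent follow events
follow_events = defaultdict(deque)


def try_follow(follower: str, target: str, now=None) -> bool:
    """Record a follow attempt; return False (rate limited) if the
    target is gaining followers faster than the threshold allows."""
    now = time.time() if now is None else now
    events = follow_events[target]

    # Drop events that have fallen out of the sliding window.
    while events and events[0] <= now - WINDOW_SECONDS:
        events.popleft()

    if len(events) >= MAX_NEW_FOLLOWS:
        # Note: the *target* tripped the limit, but the failure is
        # surfaced to the follower -- consistent with the confusing
        # "unable to follow more people" message users saw.
        return False

    events.append(now)
    return True
```

Notice that the limit is keyed on the account being followed, yet the rejection lands on whoever clicks “follow” — which is exactly why a per-target limiter can produce an error message that sounds like it’s blaming the follower.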

Anonymous Coward says:

Re:

Because that never happened.

Trump was finally banned from Twitter because, if you were paying attention, he finally crossed the line and abetted a FUCKING INSURRECTION.

Twitter, like every fucking media outlet and social network tolerating his treasonous presence, treated him and every fucking Republican traitor to America with kid gloves.

What happened with HarrisHQ has been nothing but Twitter’s systems kicking in. For now.

Elon has not made the situation better by “joking” about his sheer malicious incompetence, yes. And there’s no doubt he’s more than happy to do Putin’s bidding by banning the Harris campaign from doing anything in his Nazi bar.

But that has not happened yet. What happened here is Twitter’s systems working as intended, for now.

And while I’ll defer to Mike’s expertise for this scenario, I’m more familiar with Twitter suspending new accounts with a surge in followers.

I’m surprised HarrisHQ wasn’t suspended until someone from the Harris campaign verified the change. Which is nice, I guess.

Hillary Kamp says:

I don’t know, I haven’t heard about what the article is describing, but I did have an issue with X yesterday. I had to delete posts five times when trying to tag @kamalahq, because somewhere between my tag selection and the time the post went up, it would change the tag to a similar page instead of the Harris page. For instance, I would choose the Harris page in the selection drop-down or type out @kamalahq, and when posted, the link on my post would lead to a different page ( @kamal, for instance). I had to delete and repost several times before it finally showed up correctly. It was odd; that’s really all I can say. It’s also only a premium option to edit your post after publishing, which I personally find irritating and out of line, but that in itself doesn’t qualify as dubious.

Anonymous Coward says:

Aw come on! Are we to believe Elon wouldn’t just hit a keystroke or two to lock up Kamala’s account? Elon wants nothing more than having fat trump as his personal dictator-for-life. Fat trump means Elon doesn’t have to waste any of his precious funds on those pesky Taxes that the serfs should pay instead. And he’s not alone: just ask any Billionaire (well, most of them, anyway).
