Both Things Can Be True: Meta Can Be Evil AND It’s Unlikely That The Company Deliberately Blocked A Mildly Negative Article About It

from the it's-the-perception-that-matters dept

Truth matters. Even if it’s inconvenient for your narrative. I’m going to do a question and answer style post, because I want to address a bunch of questions that came up on this story over the weekend, but let’s start here.

So what happened?

Last Thursday, the Kansas Reflector, a small local news non-profit in (you guessed it) Kansas, published an interesting piece on the intersection of Facebook and local news. The crux of the story was that Facebook makes it next to impossible to talk about anything related to climate change without having it blocked or its visibility limited.

The piece is perhaps a bit conspiratorial, but not that crazy. What’s almost certainly happening is that Meta has decided to simply limit the visibility of such content because so many people constantly slam Meta for supporting one or the other side of culture war issues, including climate change. It’s a dumb move, but it’s the kind of thing that Meta will often do: ham-fisted, reactive, stupid-in-the-light-of-day managing of some issue that is getting the company yelled at. The trigger is not the criticism of Meta, but the hot-button nature of the topic.

But then, Meta made things worse (a Meta specialty). Later in the day, it blocked all links to the Kansas Reflector from all Meta properties.

This morning, sometime between 8:20 and 8:50 a.m. Thursday, Facebook removed all posts linking to Kansas Reflector’s website.

This move not only affected Kansas Reflector’s Facebook page, where we link to nearly every story we publish, but the pages of everyone who has ever shared a story from us.

That’s the short version of the virtual earthquake that has shaken our readers. We’ve been flooded with instant messages and emails from readers asking what happened, how they can help and why the platform now falsely claims we’re a cybersecurity risk.

As the story started to Streisand its way around the internet and had people asking what Meta was afraid of, the company eventually turned links back on to most of the Kansas Reflector site. But not all. Links to that original story were still banned. And, of course, conspiracy theories remained.

Meta’s comms boss, Andy Stone, came out to say that it was all just a mistake and had nothing to do with the Reflector’s critical article about Meta:

[Image: screenshot of Andy Stone’s post]

And, again, it felt like there was a decent chance that this was actually true. Mark Zuckerberg is not sitting in his office worrying about a marginally negative article from a small Kansas non-profit. Neither are people lower down the ranks of Meta. That’s just not how it works. There isn’t some guy on the content moderation line thinking “I know, Mark must hate this story, so I’ll block it!”

It likely had more to do with the underlying topic (the political hot potato of “climate change”) than the criticism of Meta, combined with a broken classifier that accidentally triggered a “this is a dangerous site” flag for whatever reason.

Then things got even dumber on Friday. Reporter Marisa Kabas reposted the original Reflector article on her site, The Handbasket. She could do this as the Reflector nicely publishes its work under a Creative Commons CC BY-NC-ND 4.0 license.

And then Marisa discovered that links to that article were also blocked across Meta (I personally tried to share the link to her article on Threads and had it blocked as “not allowed”).

Soon after that, blogger Charles Johnson noticed that his site was also blocked by Meta as malware, almost certainly because a commenter linked to the original Kansas Reflector story. Eventually, his site was unblocked on Sunday.

Instagram and Threads boss Adam Mosseri showed up in somewhat random replies to people (often not those directly impacted) and claimed that it was a series of mistakes:

[Image: screenshot of Adam Mosseri’s replies]

What likely actually happened?

Like all big sites, Meta uses some automated tools to try to catch and block malicious sites before they spread wide and far. If they didn’t, you’d rightly be complaining that Meta doesn’t do the most basic things to protect its users from malicious sites.

Sometimes (more frequently than you would believe, given the scale) those systems make errors. Those errors can be false negatives (letting through dangerous sites that they shouldn’t) and false positives (blocking sites that shouldn’t be blocked). Both types of errors happen way more than you’d like, and if you tweak the dials to lessen one of them, you almost certainly end up with a ton of the other. It’s the nature of the beast. Being more accurate in one direction means being less accurate in the other.
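To make that tradeoff concrete, here is a minimal, purely hypothetical sketch (toy scores and toy sites, nothing from Meta’s actual systems) of how moving a single blocking threshold trades one kind of error for the other:

```python
# Toy illustration of the false positive / false negative tradeoff described
# above. Each site gets a "risk" score from a hypothetical classifier; a
# single threshold decides what gets blocked.

# (risk_score, actually_malicious) -- invented numbers, for illustration only
scored_sites = [
    (0.95, True), (0.90, True), (0.82, False), (0.75, True),
    (0.70, False), (0.55, False), (0.40, True), (0.20, False),
]

def error_counts(threshold):
    # False positive: a harmless site gets blocked.
    fp = sum(1 for score, bad in scored_sites if score >= threshold and not bad)
    # False negative: a malicious site gets through.
    fn = sum(1 for score, bad in scored_sites if score < threshold and bad)
    return fp, fn

for threshold in (0.9, 0.7, 0.5, 0.3):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")
```

Push the threshold up and malicious sites slip through; pull it down and innocent sites get blocked. At Meta’s scale, even a small error rate on either side translates into thousands of wrong calls a day.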

So, what almost certainly happened here was that there was some set of words or links or something on the Kansas Reflector story that tripped an alarm on a Meta classifier saying “this site is likely dangerous.”

This alarm is likely triggered thousands or tens of thousands of times every single day. Most of those cases are never reviewed, because most of them never become newsworthy. In many cases, site owners probably never even learn that their websites are blocked by Meta, because no one ever notices.

Everything afterward stems from that one mistake. The automated trigger threshold was passed, and the Reflector got blocked because Meta’s systems gave it a probabilistic score suggesting the site was dangerous. Most likely, no human at Meta even read the article before all this, and if anyone had, they would most likely not have cared about the mild criticism (far milder than tons of stories out there).
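As an illustration of what that kind of fully automated flow looks like, here is a hypothetical sketch (invented names, scores, and thresholds; not Meta’s actual pipeline) in which a score crossing the threshold blocks an entire domain, with no human review step anywhere in the loop:

```python
# Hypothetical sketch of an automated "dangerous site" block. All names,
# scores, and thresholds are invented for illustration.
from urllib.parse import urlparse

BLOCK_THRESHOLD = 0.8            # hypothetical cutoff
blocked_domains = set()

def classify_risk(url):
    """Stand-in for a real malicious-site classifier; returns a toy risk score."""
    # A real system would look at page content, hosting, redirects, user
    # reports, and so on. Here it's just a placeholder lookup table.
    toy_scores = {"news-site.example": 0.83, "other-site.example": 0.12}
    return toy_scores.get(urlparse(url).netloc, 0.5)

def handle_shared_link(url):
    domain = urlparse(url).netloc
    if domain in blocked_domains:
        return "rejected: domain already blocked"
    if classify_risk(url) >= BLOCK_THRESHOLD:
        # One page tripping the threshold blocks the entire domain --
        # which is why every link to the site stops working at once.
        blocked_domains.add(domain)
        return "rejected: flagged as dangerous"
    return "accepted"

print(handle_shared_link("https://news-site.example/some-article"))     # trips the block
print(handle_shared_link("https://news-site.example/unrelated-story"))  # collateral damage
```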

If you are explaining why Meta did something that is garden-variety incompetent, rather than maliciously evil, doesn’t that make you a Meta bootlicker?

Meta is a horrible company that has a horrible track record. It has done some truly terrible shit. If you want to read just how terrible the company is, read Erin Kissane’s writeup of what happened in Myanmar. Or read about how the company told users to download a VPN, Onavo, that was actually breaking encryption and spying on how users used other apps, sending that data back to Meta. Or read about how Meta basically served up the open internet on a platter for Congress to slaughter, because they knew it would harm competition.

The list goes on and on. Meta is not a good company. I try to use their products as little as possible, and I’d like to live in a world where no one feels the need to use any of their products.

But truth matters.

And we shouldn’t accept a narrative as true just because it confirms our priors. That seems to have happened over the past few days regarding a broken content moderation decision that caused a negative news story about Meta to briefly be blocked from being shared across Meta properties.

It looks bad. It sounds bad. And Meta is a terrible company. So it’s very easy to jump to the conclusion that of course it was done on purpose. The problem is that there is a much more benign explanation that is also much more likely.

And this matters, because when you’re so trigger-happy to insist that the mustache-twirling version of everything must be true, it actually makes it that much harder to truly hold Meta to account for its many sins. It makes it that much easier for Meta (and others) to dismiss your arguments as coming from a conspiracy theorist, rather than someone who has an actual point.

But what about those other sites? Isn’t the fact that it spread to other sites posting the same story proof of nefariousness?

Again, what happened there also likely stemmed from that first mistake. Once the system is triggered, it’s also probably looking for similar sites or sites trying to get around the block. So, when Kabas reposted the Reflector text, another automated system almost certainly just saw it as “here’s a site copying the other bad site, so it’s an attempt to get around the block.” Same with Johnson’s site, where it likely saw the link to the “bad” site as an attempt to route around the block.
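Here is a purely hypothetical sketch of that follow-on behavior (again, invented logic and names, not Meta’s actual code): an automated “block evasion” check that flags any page reposting the blocked article’s text, or linking back to the blocked domain, would sweep in both a licensed republication and a blog whose commenter posted the link.

```python
# Toy sketch of automated "evasion" detection cascading from the first block.
from difflib import SequenceMatcher

blocked_domains = {"news-site.example"}          # the originally blocked site (hypothetical)
blocked_article = "Facebook limits the visibility of local news posts about climate change..."

def looks_like_evasion(page_text, outbound_links):
    # 1) The page reposts the blocked article's text (e.g. a CC-licensed republication).
    if SequenceMatcher(None, page_text, blocked_article).ratio() > 0.8:
        return True
    # 2) The page links to an already-blocked domain (e.g. a blog comment with the link).
    return any(domain in link for link in outbound_links for domain in blocked_domains)

# A verbatim republication of the article trips the text-similarity check...
print(looks_like_evasion(blocked_article, []))                                                # True
# ...and an unrelated page gets caught just for linking to the blocked site.
print(looks_like_evasion("an unrelated blog post", ["https://news-site.example/the-story"]))  # True
```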

Even after Meta was made aware of the initial error, the follow-on activities would quite likely continue automatically as the systems just did their thing.

But Meta is evil!

Yeah, we covered this already. Even if that’s true… especially if that’s true, we should strive to be accurate in our criticism of the company. Every overreaction undermines the arguments for the very real things that the company has done wrong, and that it continues to do wrong.

It allows the company to point to an error someone made in describing what Meta did wrong here, and use that error to dismiss the person’s more accurate criticism of other things.

But Meta has a history of lying and there’s no reason to give it the benefit of the doubt!

Indeed. But I’m not giving the benefit of the doubt to Meta here. Having spent years and years covering not just social media content moderation fuckups, but literally the larger history of content moderation fuckups, there are some pretty obvious things that suggest this was garden variety technical incompetence, found in every system, rather than malicious intent to block an article.

First, as noted, these kinds of mistakes happen all the time. Sooner or later, one is going to hit an article critical of Meta. It reminds me of the time that Google threatened Techdirt because it said comments on an article violated its terms of service. It just so happened that that article was critical of Google. I didn’t go on a rampage saying Google was trying to censor me because of my article that was critical of Google. Because I knew Google made that type of error a few times a month, sending us bogus threat notices over comments.

It happens.

And Meta has always allowed links to tons of far worse stories, including the Erin Kissane piece above.

On top of that, the people at Meta know full well that if they were actually trying to block a critical news story, it would totally backfire and the story would Streisand all over the place (as this one did).

Also, Meta has a tell: if they were really doing something nefarious on this, they’d have a slick, full-court press response ready to go. It wouldn’t be a few random Adam Mosseri social media posts going “oh shit, we fucked up, we’re fixing it now…”

But it’s just too big of a coincidence, since this is over a negative story!

Again, there are way, way, way worse stories about Meta out there that haven’t been blocked. This story wasn’t even that bad. And no one at Meta is concerned about a marginally negative opinion piece in the Kansas Reflector.

When mistakes are made as often as they are made at this kind of scale (again, likely thousands of mistakes a day), eventually one of them is going to be over an article critical of Meta. It is most likely a coincidence.

But if this is actually a coincidence and it happens all the time, how come we don’t hear about it every day?

As someone who writes about this stuff, I do tend to hear similar stories nearly every day. But most of them never get covered because it’s just not that interesting. “Automated harmful site classifier wrong yet again” isn’t news. But even then, I do still write about tons of content moderation fuckups that fit into this kind of pattern.

Why didn’t Meta come out and explain all this if it was really no big deal?

I mean, they kinda did. Two different execs posted that it was a mistake and that they were looking into it, and some of those posts came over a weekend. It took a few days, but it appears that most of the blocked links that I was aware of earlier have been allowed again.

But shouldn’t they have a full, clear, and transparent explanation for what happened?

Again, if they had all of that ready to go by now, honestly, I’d think they were up to no good. Because they only have those packages ready to go when they know they’re doing something bad and need to be ready to counter it. In this case, their response is very much of the nature of “blech, the classifier made a mistake again… someone give it a kick.”

And don’t expect a fully transparent explanation, because these systems are actually doing a lot to protect people from truly nefarious shit. Giving a fully transparent explanation of how that system works, where it goes wrong, and how it was changed might also be super useful to someone with nefarious intent, looking to avoid the classifier.

If Meta is this incompetent, isn’t that a big enough problem? Shouldn’t we still burn them at the stake?

Um… sure? I mean, there are reasons why I support different approaches that would limit the power of big centralized players. And, if you don’t like Meta, come use Bluesky (where people got to yell about this at me all weekend), where things are set up in a way that one company doesn’t have so much power.

But, really, no matter where you go online, you’re going to discover that mistakes get made. They get made all the time.

Honestly, if you understood the actual scale, you’d probably be impressed at how few mistakes are made. But every once in a while a mistake is going to get made that makes news. And it’s unlikely to be because of anything nefarious. It’s really just likely to be a coincidence that this one error happened to be one where a nefarious storyline could be built around it.

If Meta can’t handle this, then why should we let it handle anything?

Again, you’ll find that every platform, big and small, makes these mistakes. And it’s quite likely that Meta makes fewer of these mistakes, relative to the number of decisions it makes, than most other platforms. But it’s still going to make mistakes. So is everyone else. Techdirt makes mistakes as well, as anyone who has ever had their comments caught in the spam filter should recognize.

But why was Meta so slow to fix these things or explain them?

It wasn’t. Meta is quite likely dealing with a few dozen different ongoing crises at any one moment, some more serious than others. Along those lines, it’s quite likely that, internally, this is viewed as a non-story, given that it’s one of these mistakes that happens thousands of times a day. Most of these mistakes are never noticed, but the few that are get fixed in due time. This just was not seen as a priority, because it’s the type of mistake that happens all the time.

But why didn’t Adam Mosseri respond directly to those impacted? Doesn’t that mean he was avoiding them?

The initial replies from Mosseri seemed kinda random. He responded to people like Ken “Popehat” White on Threads and Alex “Digiphile” Howard on Bluesky, rather than anyone who was directly involved. But, again, this tends to support the underlying theory that, internally, this wasn’t setting off any crisis alarm bells. Mosseri saw those posts because he just happened to see those posts and so he responded to them, noting the obvious mistake, and promising to have someone look into it more at a later date (i.e., not on a weekend).

Later on, I saw that he did respond to Johnson, so as more people raised issues, it’s not surprising that he started paying closer attention.

None of what you say matters, because they were still blocking a news organization, and whether it was due to maliciousness or incompetence doesn’t matter.

Well, yes and no. You’re right that the impact is still pretty major, especially to the news sites in question. But if we want to actually fix things, it does matter to understand the underlying reasons why they happen.

I guarantee that if you misdiagnose the problem, your solution will not work and has a high likelihood of actually making things way, way worse.

As we discussed on the most recent episode of Ctrl-Alt-Speech, in the end, the perception is what matters, no matter the reality of the story. People are going to remember simply that Meta blocked the sharing of links right after a critical article was published.

And that did create real harm.

But you’re still a terrible person/what do you know/why are you bootlicking, etc?

You don’t have to listen to me if you don’t want to. You can also read this thread from another trust & safety expert, Denise from Dreamwidth, whose own analysis is very similar to mine. Or security expert @pwnallthethings, who offers his own, similar explanation. Everyone with some experience in this space sees this as an understandable (which is not to say acceptable!) scenario.

I spend all this time trying to get people to understand the reality of trust & safety for one reason: so that they understand what’s really going on and can judge these situations accordingly. Because the mistakes do cause real harm, but there is no real way to avoid at least some mistakes over time. It’s just a question of how you deal with them when they do happen.

Would loosening those filters be an acceptable tradeoff if it means Meta lets through more links to scam, phishing, and malware sites? Because those are the tradeoffs we’re talking about.

While it probably won’t be taken that way, this should be a reminder that content moderation often involves mistakes. But also, while it’s always easy to attach some truly nefarious reason to things (e.g., “anti-conservative bias”), it’s often more just “the company is bad at this, because every company is bad at this, because the scale is more massive than anyone can comprehend.”

Sometimes the system sucks. And sometimes the system simply over-reacted to one particular column and Streisanded the whole damn thing into the stratosphere. And that’s useful in making people aware. But if people are going to be aware, they should be aware of how these kinds of systems work, rather than assuming mustache twirling villainy where there’s not likely to be any.

Companies: kansas reflector, meta


Comments on “Both Things Can Be True: Meta Can Be Evil AND It’s Unlikely That The Company Deliberately Blocked A Mildly Negative Article About It”

45 Comments
Stephen T. Stone (profile) says:

this should be a reminder that content moderation often involves mistakes

And when that moderation involves an article that paints the parent service in a negative light, those mistakes should be spotted, fixed, and both admitted to and apologized for in a public manner. That’s why people were so quick to assume Meta was acting nefariously. Had Meta’s people effectively said “our systems fucked up hard, we’re sorry, and we’re working to fix this” in a press release or even just a solemn black-and-white JPEG on Twitter, they might have been able to mitigate most of the negative response to this mistake. As it is, acting in a way that made it seem like Meta was doing something nefarious and only backing down after it had Fucked Around And Found Out was a bigger mistake than the one that blocked those links. “The cover-up is always worse than the crime,” or so the saying goes⁠—and that tends to apply to “cover-ups” that aren’t so much cover-ups as they are one bad decision compounding on a different one.

Dan B says:

Re:

Had Meta’s people effectively said “our systems fucked up hard, we’re sorry, and we’re working to fix this”

Had they accidentally blocked, say, CNN or Fox or even a major-city newspaper, that is something I would expect an official corporate response or press release for. That’s the kind of thing you bother the VP of Spin Control about over the weekend. “Our bad-content filter accidentally false-positive’d a site hardly anyone’s heard of, they complained, and we’re trying to fix it” is not. That happens countless times a day.

It only turned into something that needed a corporate-level response after the “zomg Meta is censoring news sites that criticize them” meme went viral, by which point it was too late. People who are willing to believe Facebook would block a minor news site over mild criticism (while leaving worse criticism in place) aren’t going to suddenly feel otherwise because Facebook pinky-swears it was all an accident.

Stephen T. Stone (profile) says:

Re: Re:

It only turned into something that needed a corporate-level response after the “zomg Meta is censoring news sites that criticize them” meme went viral, by which point it was too late.

Sure, it wasn’t a good look for Meta. But negative effects could’ve still been mitigated if Meta had been more publicly forthcoming (and conciliatory) about what had happened. That Meta appeared to silently roll back its mistakes after the shit had hit the fan only added fuel to the fire. Like I said, the “cover-up” is the bigger issue, even if Meta didn’t intend to cover anything up. People don’t trust Meta for a lot of reasons, but one of them is its opaque handling of situations like this.

Anonymous Coward says:

Yeah, we covered this already. Even if that’s true… especially if that’s true, we should strive to be accurate in our criticism of the company. Every overreaction undermines the arguments for the very real things that the company has done wrong, and that it continues to do wrong.

As I like to say, we should burn people (figuratively) for what they actually did. As Mike said: failure to do this undermines our ability to hold anyone accountable, as well as undermining any justification for doing so.

And furthermore: who wants to live in a world where you can be punished for anything someone else decides they want to punish you for (irrespective of what you actually did)? That just destroys any order in society.

Rocky says:

Re: Re: Re:

Let me remind you what Elon Musk has said:

“By ‘free speech’, I mean that which matches the law. I am against censorship that goes far beyond the law,” he declared before he bought Twitter. “If people want free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.”

So here we have a country that has lawfully decided that some content on exTwitter should be removed which makes me ask: What the fuck is Elon complaining about?

I also have to poke at your stupidity a bit. Techdirt isn’t a hot-news site; if you expect the latest news and brainless opinions about it, there are a myriad of other sites on the internet that cater to people just like you.

Anonymous Coward says:

I’m an admin on a small web forum. Automated defenses screwing up strikes me as perfectly plausible–frankly, I find it to be the likeliest explanation. Every day, at least a hundred spambots try to register on our site and spew their garbage. We’ve got automated defenses that prevent the overwhelming majority of registrations. On average, a few bots still get through each day, though. In addition, there have been a few incidents of humans getting blocked thanks to their IP address being in a “bad neighborhood.” The forum can get by with a single admin going on “patrol” each day to mop up any bots that break through. A giant like Facebook would probably be 99% spam if not for automated moderation. Even if Meta was willing to shell out the money necessary to have an all-human moderation team that did everything the automated defenses did, those humans would still slip up on occasion, too.

While there’s definitely a conversation to be had about just how much we should rely on AI as an alternative to human labor, I don’t think it’s relevant here; moderation mistakes are inevitable on a site as big as Facebook.

Stephen T. Stone (profile) says:

Re: Re: Re:

They were conciliatory, yes⁠—but initially, they conceded their faults in the replies of random people who mentioned the issue on social media rather than in public statements directed to the parties directly affected by this mistake. That’s why this feels to some people like Meta intentionally blocked the link and tried to backtrack after the shit hit the fan: If this truly was an honest mistake, why did it seem like they tried to cover it up?

I understand wanting/needing to run conciliatory statements past legal teams and such so legal liability is mitigated. But a simple “hey, we fucked up, sorry about that” DM to anyone affected by the link block would’ve gone further than the approach they took. Building trust takes years, breaking it takes only a moment, and repairing it doesn’t mean it’ll be the same as before. Meta compounded its mistake by making a bigger one; in the process, it likely broke a lot of trust that will never be repaired.

Anonymous Coward says:

In many cases, there’s a decent chance no one ever even learns that their websites are barred by Meta, because no one ever actually notices.

That is: if the customer service path “you blocked me in error” actually led to human intervention, it would cost a lot. So automatic “No, we’re right, we know we’re right, go away now” responses are the norm.

Which leaves out-of-band notice: virality, Streisanding, “reporter inquiries”, etc. That is, a complaint by a) someone whose opinion “we” care about, b) who can’t simply be silenced by putting a hat over them, and c) who could affect the public perception of the company.

Anathema Device (profile) says:

Mike, why isn’t it possible that someone at Meta decided to ‘disappear’ or limit the original Reflector article, and then the entire system just rolled on from that?

Is it not possible that it’s malice AND then the system acting as designed? I doubt it’s Zuckerberg making that kind of decision, but is it impossible that a bad actor – even outside Meta – triggered this, through malicious reporting of spam or dangerous content?

Anathema Device (profile) says:

Denise (rahaeli) on Bluesky posted
https://bsky.app/profile/rahaeli.bsky.social/post/3kpn6nyk5ok2i

“Update to yesterday’s thread on the “Meta blocking this story” incident: I have now actually gone to look at the details of the stack the Kansas Reflector, the site that originally posted it, is running. It’s WordPress 6.3 with a bunch of outdated plugins and widgets.

This has moved the needle on my certainty that the “automated detection blocked the link as malicious” explanation is the correct explanation from 99.9% confidence to 99.999% confidence, and if the Reflector is smart, they will hire someone with forensic WordPress security skills for an audit”

There are a lot of outdated WP blogs out there though. Does Meta flag all of those as well?

Anathema Device (profile) says:

Re: Re:

Denise pointed out in that thread that WP 6.3 was exceptionally buggy and dangerous, but I still think the real trigger was a human false report of some kind. That doesn’t mean the malice was coming from Meta, but the Reflector is in Kansas where the shenanigans around newspapers have been well documented right on this blog and elsewhere.

Stephen T. Stone (profile) says:

Re: Re: Re:

Again: Even if Meta blocked the Reflector because its site was using an old version of WordPress⁠—and even if the block happened because of a malicious false-flag report⁠—that still doesn’t explain how sites that (I can safely assume) weren’t running that version of WordPress got dinged for reposting the text of the Reflector article verbatim.

Anathema Device (profile) says:

Kansas Reflector assessment of what happened

Facebook’s AI failure wiped out Kansas Reflector links. Even Facebook may not know what went wrong.

https://kansasreflector.com/2024/04/11/facebooks-ai-failure-wiped-out-kansas-reflector-links-even-facebook-may-not-know-what-went-wrong/

Facebook’s unrefined artificial intelligence misclassified a Kansas Reflector article about climate change as a security risk, and in a cascade of failures blocked the domains of news sites that published the article, according to technology experts interviewed for this story and Facebook’s public statements.

The assessment is consistent with an internal review by States Newsroom, the parent organization of Kansas Reflector, which faults Facebook for the shortcomings of its AI and the lack of accountability for its mistake.

It isn’t clear why Facebook’s AI determined the structure or content of the article to be a threat, and experts said Facebook may not actually know what attributes caused the misfire.
