Ctrl-Alt-Speech: You Can’t Antitrust Anyone These Days
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Meta wins FTC antitrust trial over Instagram, WhatsApp deals (CNBC)
- Commission eyes further simplification of tech rules after DSA review (Euractiv)
- Inside Europe’s ‘Jekyll and Hyde’ tech strategy (Digital Policy)
- NetChoice sues Virginia to block its one-hour social media limit for kids (The Verge)
- Tech Giants Sue California Over Social Media Access Law (2) (Bloomberg Law)
- TikTok to give users power to reduce amount of AI content on their feeds (The Guardian)
- The Most Frustrating Word for Trust & Safety Professionals (LinkedIn)
Filed Under: content moderation, dsa, eu, europe, netchoice, regulation
Companies: meta, tiktok

Comments on “Ctrl-Alt-Speech: You Can’t Antitrust Anyone These Days”
That person’s take on “AI CSAM” also misses the point.
There are multiple issues.
1) A company often doesn’t communicate well how a product actually works. In fact, I’m not sure the company itself has much insight into that. In the end, someone ends up Blaming The User based on how the product might work, with quite a few guesses built on the worst scenario someone can imagine.
2) A deepfake is not bad because it’s offensive, though that is often how it is portrayed. It’s bad because it misuses someone’s likeness and impinges on their privacy. Too much of the chatter treats the problem purely as a matter of content, as if the issue were simply that a piece of content is offensive.
A focus on “content” leads to bad policies that don’t help and actually make it harder to deal with the issue.
3) It misses how this plays into broader copyright and data protection concerns.
4) AI is a technology, so someone can use it more or less ethically. The label “AI CSAM” assumes any use of AI is unethical. This leads to bad policy.
Re:
I’m confused as to what you’re referring to, as I don’t think we spoke about AI CSAM at all.