China’s longstanding war on the internet, especially as it relates to children’s use of it, continues. Readers here will be well aware of the plethora of actions taken by China over the years to limit what its residents can see and do with the internet. From the Great Firewall of China to the country’s more targeted approach of limiting how much and when children can play online video games, all of this dovetails nicely with Beijing’s larger goals of suppressing undesirable content and eroding any sign of democracy within its sphere of control. The toll this regulatory destruction has taken on the gaming industry in China is nearly too great to be believed.
And now China is setting its sights on another popular corner of the internet marketplace: content streamers. The country recently announced changes to how streaming services and streamers must operate, specifically in terms of limiting how and how often minors can interact with online streamers.
1) Viewers under the age of 18 will no longer be able to “tip,” a practice where those watching a broadcast are able to send small amounts of money, usually in exchange for a spoken or text acknowledgement of their contribution.
2) Anyone watching livestreamed content via a kid’s account will have all streams locked out after 10pm, and those responsible for creating content will “need to strengthen the management of peak hours for such shows.”
What’s the point of all this? Well, a couple of things. First, China’s stated goal in this further tightening of internet restrictions is supposedly to combat “chaos” occurring on the internet. What chaos, you ask? Well, almost certainly this has to do with tamping down the rise of popular personalities on Chinese streaming services that have, or could, build up huge followings and then suddenly say something “subversive” about China’s government. Authoritarians, after all, don’t typically like a popularity contest. Keeping children, the bulk consumers of streaming services like this, from supporting streamers limits anyone’s opportunity to build a living that way.
As for the limitation on watching streamers at night, well, this follows right along with the limitation of online gaming in the evenings as well. Perhaps China thinks it can squeeze more educational productivity out of kids by making them go night-night at 10pm. Perhaps this is just a bit more control over culture, serving as a reminder of Beijing’s total authority over its people. On this, we can only speculate.
But raging against modernity isn’t a long-term solution. Not even for a government as brutal as China’s.
Last week the European Union’s top court, the Court of Justice of the European Union (CJEU), handed down its judgment on whether upload filters should be allowed as part of the EU Copyright Directive. The answer turned out to be a rather unclear “yes, but…”. Martin Husovec, an assistant professor of law at the London School of Economics, has published an opinion piece exploring the ruling, which he sums up as follows:
The Court ruled this week that filtering as such is compatible with freedom of expression. However, it must meet certain conditions. Filtering must be able to “adequately distinguish” when users’ content infringes a copyright and when it does not. If a machine can’t do that with sufficient precision, it shouldn’t be trusted to do it at all.
The problem is deciding whether implementations of the upload filters do indeed “adequately distinguish” between legal and infringing material. As Husovec notes, both the CJEU and the EU Member States have tried to make this tricky problem someone else’s. That’s hardly surprising, since it is far from obvious how to resolve the issue of allowing filtering but only if it respects legal use of copyright material. However, Husovec offers a way forward with some concrete proposals:
Filters should be subjected to testing and auditing. Statistics on the use of filters and a description of how they work should be made public.
Consumer associations should have the right to sue platforms for using poorly designed filters. Some authorities should have oversight of how the systems work and issue fines in the event of shortcomings.
Husovec notes a neat way to bring in those requirements without wading back into the swamp that is the Copyright Directive. He suggests using the EU’s new AI Act, currently under discussion, as a vehicle to impose safeguards on upload filters, which will inevitably be based on algorithms, and could thus be subject to the artificial intelligence legislation if policymakers added them.
It’s a good approach. Given that the CJEU has approved the stupid idea of upload filters, the least we should do is to apply a little (artificial) intelligence to how they will operate.
From the Internet of very broken things to telecom networks, the state of U.S. privacy and user security is arguably pathetic. It’s 2022 and we still don’t have even a basic privacy law for the Internet era, in large part because over-collection of data is too profitable to a wide swath of industries, which, in turn, lobby Congress to do either nothing, or the wrong thing.
Apps aren’t much of an exception. Mozilla’s latest *Privacy Not Included guide analyzed the privacy and security standards of 32 mental health and prayer apps, and gave 29 of them a “privacy not included” warning label indicating they failed to adhere to even basic user privacy standards:
“The vast majority of mental health and prayer apps are exceptionally creepy. They track, share, and capitalize on users’ most intimate personal thoughts and feelings, like moods, mental state, and biometric data. Turns out, researching mental health apps is not good for your mental health, as it reveals how negligent and craven these companies can be with our most intimate personal information.”
The problems included an over-collection and sale of data (including the collection of some mental health chat transcripts), poor password creation standards, and nebulous and undercooked privacy policies. Better Help, Youper, Better Stop Suicide, Woebot, Talkspace, and Pray.com were deemed the worst offenders. Only three of the 32 app makers responded to a Mozilla request for comment.
The U.S. isn’t known for quality mental health care, but online mental health apps and services are booming, with a particular focus on the sale of ketamine and psychedelics for therapeutic use. But many of these services have all the kinds of problems you might expect (shoddy therapy, incorrect doses) before you even get to the potential privacy problems that will ultimately and inevitably appear.
It’s not that difficult to pass a baseline privacy law for the Internet era that at least erects some basic guard rails and base-level accountability for bad actors and executives. But we have no such law because a huge array of industries have lobbied Congress into apathy and dysfunction, with the cost being repeatedly borne by ordinary Americans.
It will keep happening until there’s a privacy and security scandal so idiotically ferocious that the problem will be impossible to ignore (probably involving either significant deaths, or the extremely sensitive and personal data of powerful people). Even then, there’s no guarantee a grotesquely corrupt U.S. Congress will be willing or able to respond competently to the challenge.
We’ve talked a fair bit about Australia’s ridiculous “News Bargaining Code,” which is literally nothing more than a tax on Facebook and Google for sending traffic to media organizations. Again, the law requires Facebook and Google (and just Facebook and Google) to pay media organizations for sending them web traffic. This is, of course, backwards to any sensible setup. Why should anyone have to pay for sending traffic to a website? Here, the answer appears to be, because Rupert Murdoch wants to get paid and is jealous of the success of Facebook and Google. The best summary of the whole thing comes from the Australian satirical video maker, The Juice Media:
The law does seem widely popular in Australia, mainly because the very same media that is getting paid because of this law spent years demonizing Facebook and Google, so many people think the law is good, even though it’s really just transferring money from some giant companies to other giant companies, and making sure that smaller media organizations get screwed in the process.
Anyway, as the law went into effect, Facebook broke out the nuclear option and blocked news sharing in Australia. This move made a lot of people angry, but I still don’t understand why. You don’t create a tax on things you want more of, you create a tax on things you want less of. If you tell a social media company that you are going to make them pay for sending any traffic to news stories, why is it surprising or bad that the company then says “ok, no more links to news stories?” If you don’t like that result, then maybe don’t pass such ridiculous laws?
Eventually, after a slight modification to the law, Facebook caved in anyway, re-enabled links to news stories, and started negotiating to pay money to a small number of big media organizations in Australia.
Anyway, the Wall Street Journal recently had an article revealing some internal Facebook documents from a whistleblower with the fairly provocative title: “Facebook Deliberately Caused Havoc in Australia to Influence New Law, Whistleblowers Say” and I went to read it, expecting some terrible smoking gun, about just how badly Facebook management screwed up — something the company has an uncanny knack for doing. And… I couldn’t really find anything.
Basically, the documents seem to show that when Facebook put in place that Australian news block, it ended up over-blocking sites that shouldn’t have been blocked, including government and charity sites. And, yes, that sounds bad, but again, the way the law was written was that if Facebook was allowing links to any news, it faced potentially massive fees it would be forced to pay. So, when that’s how the law is structured, the only reasonable response is to over-block. It’s what any lawyer would recommend. It’s what I would recommend too.
The law is written in such a way that if you make a mistake and let news through you’re going to get into trouble, and as we’ve seen from other intermediary liability laws from around the globe, the perfectly natural response to any of this is to over-block. If you don’t, you face massive liability.
So I kept reading the WSJ piece, and waiting for the big reveal of what terrible thing Facebook had done… and it appears to basically be… exactly what you’d expect. The company rushed to put this in place and over-blocked because they didn’t want to let anything get through that would cause them problems under the law.
Two hours later, the product manager for the team wrote in the internal logs: “Hey everyone—the [proposed Australian law] we are responding to is extremely broad, so guidance from the policy and legal team has been to be overinclusive and refine as we get more information.”
She then outlined the team’s plan to undo the improper blocking, including starting with “the most obvious cases” like government and healthcare pages, and the need to go to outside legal counsel for “more nuanced” cases.
And… yeah? I mean, of course you would want to be overly inclusive. If you weren’t then the whole reason for blocking news links goes away. Again, it seems like the real issue here is the law, and how broadly and stupidly it was written.
The WSJ piece does raise the point that Facebook already had a list of news orgs that it didn’t use, and at first that seemed like it might be significant… until the very next paragraph, where it becomes obvious why that list wasn’t used: it clearly was not comprehensive, and the law applied to a much broader set of news sites:
Instead of using Facebook’s long-established database of existing news publishers, called News Page Index, the newly assembled team developed a crude algorithmic news classifier that ensured more than just news would be caught in the net, according to documents and the people familiar with the matter. “If 60% of [sic] more of a domain’s content shared on Facebook is classified as news, then the entire domain will be considered a news domain,” stated one internal document. The algorithm didn’t distinguish between pages of news producers and pages that shared news.
The Facebook documents in the complaints don’t explain why it didn’t use its News Page Index. A person familiar with the matter said that since news publishers had to opt in to the index, it wouldn’t have necessarily included every publisher.
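The quoted 60% rule amounts to a very blunt domain-level classifier, which is exactly why it swept in non-news sites. A minimal sketch of that logic (the function and input names here are hypothetical illustrations, not Facebook’s actual code) might look like:

```python
# Hypothetical sketch of a domain-level "news" classifier like the one
# described in the internal document: if at least 60% of a domain's shared
# content is classified as news, the ENTIRE domain is treated as news.
NEWS_SHARE_THRESHOLD = 0.60  # the threshold reported in the document


def is_news_domain(shared_urls, classify_as_news):
    """Return True if >= 60% of a domain's shared URLs classify as news.

    `shared_urls` is a list of URLs from one domain shared on the platform;
    `classify_as_news` is whatever per-item classifier is available.
    Note the bluntness: one verdict covers every page on the domain, so a
    government or charity site that mostly shares news-like links would be
    blocked wholesale.
    """
    if not shared_urls:
        return False
    news_count = sum(1 for url in shared_urls if classify_as_news(url))
    return news_count / len(shared_urls) >= NEWS_SHARE_THRESHOLD
```

Because the decision is per-domain rather than per-page, a domain that is two-thirds news-like gets blocked entirely, help pages and all, which matches the over-blocking the documents describe.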
Okay, so… no real scandal there. The article does make a big deal of the fact that Facebook blocked government websites, and even implies repeatedly that Facebook deliberately did so to put pressure on the government to change its policy… but then 55 (55!) paragraphs into the article, the reporters admit that the internal documents they saw show that the government website blocking was a legitimate mistake.
On the first day of the action, Facebook executives discussed that the platform had blocked about 17,000 pages as news that shouldn’t have been, of which 2,400 were “high priority” pages such as government agencies and nonprofits that they were working to unblock first, according to emails viewed by the Journal.
I mean, maybe I’m just a small country blogger, but if you’re going to spend all this time hinting at a smoking gun and how the company “deliberately caused havoc… to influence [the] new law” then maybe you shouldn’t wait until paragraph 55 to say, well, actually, the thing we spent many previous paragraphs talking about being really awful was a legitimate accident that the company spotted almost immediately and moved to fix?
Look, there are many, many reasons why Facebook is a terrible company doing terrible things, and there’s little reason to trust the company to do the right thing when the wrong thing always seems to be the company’s top choice. But this whole article seems bizarrely empty of any actual support of its central premise.
Yes, Facebook broadly blocked news links for a little while until the law was slightly adjusted, but under the law, letting any links to news through would have triggered a provision that effectively would have enabled the government to simply order Facebook to pay huge sums to media companies. It seems like a perfectly logical and reasonable step to respond to such a move by blocking news links, and even more so to block broadly to avoid accidentally tripping the wire to trigger the law. And, yes, the overly broad blocking did include some government websites, but as the article itself admits if you make it all the way down to paragraph 55, the company immediately recognized those issues and set to work on fixing them, though it wanted to proceed cautiously to avoid accidentally triggering the law.
It’s easy to hate on Facebook — again, the company does a lot of terrible things — but this seems like yet another case where the journalists really badly wanted to tell a story of something evil, but the actual documents didn’t support the story… so they just wrote it anyway.
Laura Loomer still thinks she can sue her way back onto Facebook and Twitter. In support, she brings arguments that already failed in the DC Appeals Court, along with a $124k bill in legal fees for failing to show that having your account reported is some sort of legally actionable conspiracy involving big tech companies.
For this latest failed effort, she has retained the “services” of John Pierce, co-founder of a law firm that saw plenty of lawyers jump ship once it became clear Pierce was willing to turn his litigators into laughingstocks by representing Rudy Giuliani and participating in Tulsi Gabbard’s performative lawsuits.
Laura Loomer has lobbed her latest sueball into the federal court system and her timing could not have been worse. Her lawsuit against Twitter, Facebook, and their founders was filed in the Northern District of California (where most lawsuits against Twitter and Facebook tend to end up) just four days before this same court dismissed Donald Trump’s lawsuit [PDF] alleging his banning by Twitter violated his First Amendment rights.
Trump will get a chance to amend his complaint, but despite all the arguments made in an attempt to bypass both the First Amendment rights of Twitter and its Section 230 immunity, the court’s opinion suggests a rewritten complaint will meet the same demise.
Plaintiffs’ main claim is that defendants have “censor[ed]” plaintiffs’ Twitter accounts in violation of their right to free speech under the First Amendment to the United States Constitution… Plaintiffs are not starting from a position of strength. Twitter is a private company, and “the First Amendment applies only to governmental abridgements of speech, and not to alleged abridgements by private companies.”
Loomer’s lawsuit [PDF] isn’t any better. In fact, it’s probably worse. But it is 133 pages long! And (of course), it claims the banning of her social media accounts is the RICO.
The lawsuit wastes most of its pages saying things that are evidence of nothing. It quotes several news reports about social media moderating efforts, pointing out what’s already been made clear: it’s imperfect and it often causes collateral damage. What the 133 pages fail to show is how sucking at an impossible job amounts to a conspiracy against Loomer in particular, which is what she needs to support her RICO claims.
The lawsuit begins with the stupidest of opening salvos: direct quotes from Florida’s social media law, which was determined to be unconstitutional and blocked by a federal judge last year. It also quotes Justice Clarence Thomas’ idiotic concurrence in which he made some really dumb statements about the First Amendment and Section 230 immunity. To be sure, these are not winning arguments. A blocked law and a concurrence are not exactly the precedent needed to overturn decades of case law to the contrary.
It doesn’t get any better from there. There’s nothing in this lawsuit that supports a conspiracy claim. And what’s in it ranges from direct quotes of news articles to unsourced claims thrown in there just because.
For instance, Loomer’s lawsuit quotes an authoritarian’s George Soros conspiracy theory as though that’s evidence of anything.
On or about May 16, 2020, Hungarian Prime Minister Viktor Orbán and the Hungarian Government called Defendant Facebook’s “oversight board” not some neutral expert body, but a “Soros Oversight Board” intended to placate the billionaire activist because three of its four co-chairs include Catalina Botero Marino, “a board member of the pro-abortion Center for Reproductive Rights, funded by Open Society Foundations” — Soros’s flagship NGO — and Helle Thorning-Schmidt, former Prime Minister of Denmark, who is “unequivocally and vocally anti-Trump” and serves alongside Soros and his son Alexander as trustee of another NGO, and a Columbia University professor Jamal Greene who served as an aide to Senator Kamala Harris (D-CA) during Justice Kavanaugh’s 2018 confirmation Hearings.
Or this claim, which comes with no supporting footnote or citation. Nor does it provide any guesses as to how this information might violate Facebook policy.
Defendant Facebook allows instructions on how to perform back-alley abortions on its platform.
Loomer’s arguments don’t start to coalesce until we’re almost 90 pages into the suit. Even then, there’s nothing to them. According to Loomer, she “relied” on Mark Zuckerberg’s October 2019 statement that he didn’t “think it’s right for tech companies to censor politicians in a democracy.” This statement was delivered five months after Facebook had permanently banned Loomer. Loomer somehow felt this meant she would have no problems with Facebook as long as she presented herself as a “politician in a democracy.”
In reliance upon Defendant Facebook’s promised access to its networks, Plaintiffs Candidate Loomer and Loomer Campaign raised money and committed significant time and effort in preparation for acting on Defendant Facebook’s fraudulent representation of such promised access to its network.
On or about November 11, 2019, Loomer Campaign attempted to set up its official campaign page for Candidate Loomer as a candidate rather than a private citizen.
On November 12, 2019, Defendant Facebook banned the “Laura Loomer for Congress” page, the official campaign page for Candidate Loomer, from its platform, and subsequently deleted all messages and correspondence with the campaign.
On page 94, the RICO predicates begin. At least Loomer and her lawyer have saved the court the trouble of having to ask for these, but there’s still nothing here. The “interference with commerce by threats or violence” is nothing more than noting that Facebook, Google, and Twitter hold a considerable amount of market share and all deploy terms of service that allow them to remove accounts for nearly any imaginable reason. No threats or violence are listed.
The “Interstate and Foreign Transportation in Aid of Racketeering Enterprises” section lists a bunch of content moderation stuff that happened to other people. “Fraud by Wire, Radio, or Television” consists mostly of Loomer reciting the law verbatim before suggesting Facebook and Procter & Gamble “schemed” to deny her use of Facebook or its ad platform. Most of the “fraud” alluded to traces back to Zuckerberg saying Facebook would allow politicians and political candidates to say whatever they wanted before deciding that the platform would actually moderate these entities.
There’s also something in here about providing material support for terrorism (because terrorists use the internet), which has never been a winning argument in court. And there’s some truly hilarious stuff about “Advocating Overthrow of Government” which includes nothing about the use of social media by Trump supporters to coordinate the raid on the US Capitol building, but does contain a whole lot of handwringing about groups like Abolish ICE and other anti-law enforcement groups.
All of this somehow culminates in Loomer demanding [re-reads Prayer for Relief several times] more than $10 billion in damages. To be fair, the ridiculousness of the damage demand is commensurate with the ridiculousness of the lawsuit. It’s litigation word soup that will rally the base but do nothing for Loomer but cost her more money. Whatever’s not covered by the First Amendment will be immunized by Section 230. There’s no RICO here because, well, it’s never RICO. This is stupid, performative bullshit being pushed by a stupid, performative “journalist” and litigated by a stupid, performative lawyer. A dismissal is all but inevitable.
It’s becoming quite clear that Elon Musk’s approach to dealing with complex issues is not to actually understand the complex realities behind them, but to simply say what he thinks an audience wants to hear, and perhaps relatedly, to simply accept the last thing that someone presented to him as the official state of things. The latest in the long line of bizarrely contradictory and nonsensical breadcrumbs that Musk is leaving regarding his planned approach to handling content moderation on Twitter includes a full warm embrace of the EU’s highly censorial Digital Services Act, as tweeted by Thierry Breton, the European Commissioner for the Internal Market.
The video is pretty short, but here’s a rough transcript:
Breton: So we’re in Austin, together with Elon Musk. Thank you, Elon, for welcoming me.
Musk: Thank you. You’re most welcome.
Breton: Of course, we discussed many issues, and I was happy to be able to explain to you the DSA, a new regulation in Europe, and I think that now, you understand very well. It fits pretty well with what you think we should do on the platform?
Musk: I think it’s exactly aligned with my thinking. I think I very much agree… It’s been a great discussion. I agree with everything you said, really. I think we’re very much of the same mind and anything that my companies can do that would be beneficial to Europe, we want to do that.
Musk responded to Breton’s tweet by saying that “we are very much on the same page.”
Of course, the actual DSA setup seems extremely different from what Musk has said he wants regarding a platform that allows most speech. As we’ve discussed, the DSA, as currently drafted, would make something of a mess for speech online, and would put much more onerous regulations in place regarding how websites can moderate, and how much content they need to pull down.
Earlier in the day, Musk had once again (after falsely claiming that Twitter has a leftwing bias) tweeted that his preference was to “hew close to the laws of countries in which Twitter operates.” Further saying “If the citizens want something banned, then pass a law to do so, otherwise it should be allowed.”
This is all nonsense on multiple levels. First of all, many, many countries are not actually democracies. So, laws are not always the will of the citizens. Secondly, in the US, we have things like the 1st Amendment that are actually designed so that Congress cannot pass a law that bans speech. But, most importantly, the laws of a country make a terrible guide for content moderation, because laws and moderation policies really serve two very different purposes.
Over the past few years, Twitter has actually been one of the leading companies speaking out about the very serious potential problems with the EU’s approach to speech in the DSA. It’s been a key player in explaining how the rules that the EU is looking to put in place could be damaging for free speech and also how the rules should be changed to avoid attacking free speech. And in walks Musk, with apparently little to no understanding of the details or the nuances, and just endorses the entire approach.
And, let’s not even bother getting into the fact that much of the meeting was actually to discuss other issues regarding Tesla and the EU, and how Musk notes that his companies (plural) want to do what’s best for Europe. People have raised serious questions about how Tesla’s business needs in countries like China and India may run into conflict with how Twitter is moderated, and now Musk is effectively announcing that if it’s good for Tesla in Europe, he’ll happily agree to much greater speech suppression on the site.
If you actually support free speech, it’s pretty damn maddening, because the last thing we need right now is a company like Twitter endorsing the current DSA approach, which would take a sledgehammer to certain speech rights. But, according to Musk, it’s all good, because it’s what the law says.
This week the Biden administration spent some time celebrating its accomplishments on broadband. The nation’s about to invest $42 billion in expanding broadband access (even though we still haven’t mapped broadband accurately). The administration also implemented the Affordable Connectivity Program (ACP), which doles out a $30 discount on broadband for qualifying low income households.
In an announcement, team Biden celebrated the fact that it got the nation’s biggest telecom giants to reduce broadband prices by $30 for low income Americans:
$30 off broadband access will go a long way in households where affording basic food needs is a challenge. Though it should be noted that several of these companies heralded by the Biden administration abused a previous version of this program to try and upsell struggling Americans to more expensive broadband plans (and faced NO penalty for it).
It should also be noted that the Affordable Connectivity Program (ACP) effectively takes a limited pool of taxpayer money from the infrastructure bill, gives it to the regional monopolies that caused the problem (high prices) in the first place, then lauds those companies for temporarily lowering prices for low income Americans. If you step back a bit you’ll notice this is kind of a weird band-aid.
When the press picks up this kind of framing, you’d hardly know that regional monopolies caused most of this problem in the first place by relentlessly crushing competition:
Biden gets credit for fixing a problem (spotty, expensive, monopolized access) this program isn’t actually fixing, and the telecom industry gets credit for heroically helping low income people (who wouldn’t be suffering if they hadn’t monopolized access and crushed competitors in the first place) by passing on taxpayer subsidies to them.
Here’s a fact: U.S. broadband is extremely expensive because of government-sanctioned telecom monopolization and limited competition. The GOP has rubber stamped telecom monopolization at every turn for 40 years. The DNC, which professes to be much better on this subject, is rarely willing to acknowledge these monopolies exist, much less that they’re documentably harmful (seriously, try to find a Democratic FCC official in the last 20 years that has clearly criticized monopolization).
Which is to say you wouldn’t need band-aid low income discount programs if the U.S. government was willing to tackle the actual problem: broadband monopolization and the state and federal corruption that protects it. We not only don’t tackle it, we don’t acknowledge it exists; often framing spotty, expensive Internet access through nebulous, causation-free references to an ambiguous “digital divide.”
One of the leading advocates for monopoly reform is Gigi Sohn, Biden’s nominee for the empty FCC Democratic Commissioner slot. Sohn was nominated by the Biden team after an inexplicable nine-month delay. Sohn has since spent the last six months mired in grotesque attacks by the telecom lobby and the obstructionist GOP, without a single word of support from the Biden administration or FCC staffers.
With the GOP’s opposition to Sohn in the bag, the telecom sector is trying to scuttle support for Sohn among Senate Democrats by, among other things, waging covert proxy attacks falsely claiming she’s bad for Hispanics, hates cops, and doesn’t care about rural America. Again, I’ve yet to see a single instance of support for Sohn from the Democratic FCC, Biden administration, or DNC. Not a peep.
The telecom industry’s goal is obvious: scuttle the Sohn nomination to keep the nation’s top telecom regulator mired in 2-2 partisan gridlock so it can’t implement popular telecom monopoly, media consolidation, or consumer protection reform. They’ll eventually support a replacement, centrist Democratic nominee with a general disinterest in genuine monopoly and consolidation reform.
Again, I don’t want to dump on low income broadband discount programs or the $42 billion broadband infrastructure investment because I believe they’re genuinely good things and laudable accomplishments.
But it’s indisputable that the real reason U.S. broadband sucks is due to unaccountable monopolies we’ve let run amok, building regional fiefdoms in which they’re free to deliver spotty, overpriced, unreliable broadband. If you’re unwilling to tackle (or again even address) that problem, and refuse to even defend the reformer you belatedly nominated to a major post — you’re not actually taking the problem seriously.