We’ve already talked a bit about Elon Musk’s obviously censorial bullshit lawsuit against Media Matters. It’s clear from the lawsuit itself that his intent is to intimidate critics and suppress speech about hateful content on ExTwitter. So far, it’s not working, as the lawsuit seems to have inspired more people to find more ads next to more hateful content. It’s also exposed just how many of the ‘free speech’ supporters who cheer on Musk’s every move are a bunch of hypocrites, as they’re now supporting a lawsuit to silence speech.
Incredibly, it seems to be getting even more ridiculous.
Over the weekend, Musk famously reinstated conspiracy theorist Alex Jones, despite early on promising never to do so. On Sunday Jones and Musk did a Spaces together (with a bunch of other nonsense peddlers), in which Musk again (1) insisted that his support for free speech was why he reinstated Jones, and simultaneously (2) that he’d not just continue to sue Media Matters over its free speech, but that he’d sue them in “every country that they operate,” and (3) that he’d sue “anyone funding” Media Matters. His reasoning? That “Media Matters is an evil propaganda machine” that “can go to hell.”
Yes. At the very same time that he was re-platforming and joining an online panel with Alex Jones, one of the most infamous propaganda machines ever, he was claiming that Media Matters needs to be sued out of existence for being a propaganda machine.
The claim that he’d sue MMFA in “every country” seemed odd, given that the “A” in MMFA is “America.” Media Matters for America is pretty focused on the US. However, soon after that came out, I found a (very light on the details) report that ExTwitter has already sued Media Matters in Ireland as well.
Unfortunately, as of right now, I can only find that single news report about it, and no links to any details to look over, but:
X, FORMERLY TWITTER, has taken legal action in the Irish courts against a US media monitoring site.
Court papers filed this week show that Twitter International Limited Company, the name of its Ireland-based entity for operations, has taken legal action against Media Matters for America.
That’s basically it as details go. A search on the Irish court website does note that a filing has been made, but there’s no complaint. Just a “plenary summons.”
But, um, what the actual fuck?
What kind of “free speech absolutist” decides to go on a libel tourism trip to Ireland, filing a clearly bogus vexatious censorial lawsuit over an issue between two US-based organizations that had fuck all to do with Ireland?
It’s unclear what kind of impact this would have. While jurisdiction for defamation claims works differently in the EU (assuming he’s even filing a defamation claim, which he didn’t actually do in the US), if MMFA has no operations or assets in the EU, it’s not clear such a lawsuit can actually accomplish anything. Worst case, ExTwitter wins… and is then blocked from enforcing the judgment in the US thanks to the SPEECH Act (another law that actually protects free speech, which Elon is seeking to undermine).
As for the claim that he’s going to sue the funders of MMFA, well, that’s equally censorial. It’s an attempt to intimidate donors and silence their speech as well. While there are some exceptions (say, if a donor is somehow actively involved in a particular tort), the idea of suing donors to a non-profit because you don’t like the (admitted as true) speech of that non-profit is… so extraordinarily ridiculous and censorial that it seems wide open to sanctions.
For what it’s worth, it also seems to be backfiring. On social media, I’ve seen a bunch of people who had never donated to MMFA before tossing $10 or $20 their way and then posting the receipts in Elon’s mentions, asking if he’s going to sue them.
Elon Musk is not a free speech absolutist. He’s not even a free speech supporter.
He’s a vexatious, anti-speech litigant, eagerly abusing and exploiting the courts in an attempt to silence and suppress voices that criticized him and his companies.
Sometimes it’s new stuff. Sometimes it’s stuff that’s been around for years, but that no one bothered to question until Wyden did. Sometimes it’s stuff like this — stuff that seems more like opportunism than a smart new form of intelligence gathering.
If you want data, you go to where the data is. National security agencies collect and store plenty of data, but other governments aren’t allowed to just go rooting through other governments’ virtual file cabinets.
No, the biggest collectors of data are tech companies. Anything that can be collected almost always is collected. Google stands astride multiple data streams, including (apparently) information generated by push notifications sent to Android phones. The same can be said about Apple, even though it has taken a few more proactive steps to limit data-gathering and doesn’t have anywhere near the (data) market share Google has, what with Google’s massive suite of ubiquitous services, all capable of gathering vast amounts of info.
So, what’s the (latest) problem? Well, it looks like foreign governments have figured out Google and Apple have another trove of data they can tap, as Raphael Satter reports for Reuters:
Unidentified governments are surveilling smartphone users via their apps’ push notifications, a U.S. senator warned on Wednesday.
In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet’s (GOOGL.O) Google and Apple (AAPL.O). Although details were sparse, the letter lays out yet another path by which governments can track smartphones.
Add that to the list that includes metadata from nearly every internet-based communication, location data gathered by Google/Apple directly or by third-party apps, keywords used by search engine users, etc. Now, there’s this: governments gathering push notification data from Apple and Google just because they can.
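For those wondering why Apple and Google have this data to hand over in the first place: every push notification to an Android phone has to be handed off to Google’s Firebase Cloud Messaging servers (and every iOS push to Apple’s APNs), which store and forward it to the device. Here’s a rough sketch of that server-to-server hop, using the firebase_admin SDK (the credentials file and device token below are placeholders, not real values):

```python
# Hypothetical sketch (placeholder credentials and token) of how a push
# notification reaches an Android phone: an app's backend hands the message
# to Google's Firebase Cloud Messaging service, which forwards it to the
# device. That mandatory hop is why Google ends up holding records that a
# government can later demand.
import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"))  # placeholder path

message = messaging.Message(
    token="DEVICE_REGISTRATION_TOKEN",  # placeholder; identifies one device
    notification=messaging.Notification(
        title="New message",
        body="Someone sent you a message",
    ),
)

# The send is server-to-server: your backend -> Google -> the phone.
# Google necessarily sees the sending app, the device token, the timestamp,
# and (for simple notifications like this) the payload itself.
message_id = messaging.send(message)
print("Delivered via FCM, message id:", message_id)
```

Which is exactly the kind of record a government can then demand from the platform.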
Wyden’s letter [PDF] suggests it’s only foreign governments doing this, at least for the moment. (Or at least as far as he knows…)
In the spring of 2022, my office received a tip that government agencies in foreign countries were demanding smartphone “push” notification records from Google and Apple. My staff have been investigating this tip for the past year, which included contacting Apple and Google. In response to that query, the companies told my staff that information about this practice is restricted from public release by the government.
Check out that last sentence. Which government could forbid US companies from releasing information about these data requests? That’s the key sentence. That’s why Wyden is asking the DOJ one question, while informing the public there’s a more direct question he could be asking instead.
This is made even more explicit in the next paragraph of Wyden’s letter:
Apple and Google should be permitted to be transparent about the legal demands they receive, particularly from foreign governments, just as the companies regularly notify users about other types of government demands for data. These companies should be permitted to generally reveal whether they have been compelled to facilitate this surveillance practice, to publish aggregate statistics about the number of demands they receive, and unless temporarily gagged by a court, to notify specific customers about demands for their data. I would ask that the DOJ repeal or modify any policies that impede this transparency.
This strongly suggests it’s domestic demands for push notification data that have kept this under wraps. Wyden’s request that the DOJ rescind or modify any policies demanding blanket secrecy makes it clear he knows more than he’s willing to state in a public letter to the DOJ.
There is absolutely no doubt in my mind DOJ components are demanding this data and demanding these companies not talk about it. There’s simply no way only foreign governments have access to this data. And they certainly don’t have the legal reach to demand eternal silence. But the DOJ does. And if DOJ components are doing it, there’s a good chance other federal agencies are doing the same thing.
Wyden’s letter has provoked at least one useful response, as Satter reports for Reuters:
In a statement, Apple said that Wyden’s letter gave them the opening they needed to share more details with the public about how governments monitored push notifications.
“In this case, the federal government prohibited us from sharing any information,” the company said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”
Google said that it shared Wyden’s “commitment to keeping users informed about these requests.”
If it’s now public knowledge (thanks to this letter), these companies can start telling the public about these data demands. And that may have been the point of Wyden’s letter: freeing up Google and Apple to detail requests for push notification data without having to mount a ton of legal challenges before some court finally decides they actually have standing to challenge these requests.
And if that was Wyden’s intent, it was handled beautifully. It starts with the misdirection of expressing concern about snooping by foreign governments before twisting it the other way to suggest (without ever directly saying so) that the DOJ is doing the same thing and swearing Apple and Google to silence. But now that it’s out, these companies are no longer required to pretend it isn’t happening.
Apple has spent the past few years pushing the marketing message that it, alone among the big tech companies, is dedicated to your privacy. This has always been something of an exaggeration, but certainly less of Apple’s business is based around making use of your data, and the company has built in some useful encryption elements to its services (both for data at rest, and data in transit). But, its actions over the past few days call all of that into question, and suggest that Apple’s commitment to privacy is much more a commitment to walled gardens and Apple’s bottom line, rather than the privacy of Apple’s users.
First, some background:
Back in September, we noted that the EU had designated which services were going to be “gatekeepers” under the Digital Markets Act (DMA), which would impose various obligations on them, including some level of interoperability. Apple had been fighting the EU over whether or not iMessage would qualify, and just a few days ago there were reports that the EU would not designate iMessage as a gatekeeper. But that’s not final yet. This also came a few weeks after Apple revealed that, after years of pushing back on the idea, it might finally support RCS for messaging (though an older version that doesn’t support end-to-end encryption).
Separately, for years, there has been some debate over Apple’s setup in which messaging from Android phones shows up in “green bubbles” vs. iMessage’s “blue bubbles.” The whole green vs. blue argument is kind of silly, but some people reasonably pointed out that by not allowing Android users to actually use iMessage itself, it was making communications less secure. That’s because messages within the iMessage ecosystem can be end-to-end encrypted. But messages between iMessage and an Android phone are not. If Apple actually opened up iMessage to other devices, messaging for iPhone users and the people they spoke to would be much more protected.
But, instead of doing that, Apple has generally made snarky “just buy an iPhone” comments when asked about its unwillingness to interoperate securely.
That’s why Apple’s actions over the last week have been so stupidly frustrating.
For the past few years, some entrepreneurs (including some of the folks who built the first great smartwatch, the Pebble) have been building Beeper, a universal messaging app that is amazing. I’ve been using it since May, have sworn by it, and have gotten many others to use it as well. It creates a very nice, very usable single interface for a long list of messaging apps, reminiscent of earlier services like Trillian or Pidgin… but better. It’s built on top of Matrix, the open-source decentralized messaging platform.
Over the last few months I’ve been talking up Beeper to lots of folks as the kind of app the world needs more of. It fits with my larger vision of a world in which protocols dominate over siloed platforms. It’s also an example of the kind of adversarial interoperability that used to be standard, and which Cory Doctorow rightfully argues is a necessary component of stopping the enshittification curve of walled garden services.
Of course, as we’ve noted, the big walled gardens are generally not huge fans of things that break down their walls, and have fought back over the years, including with terrible CFAA lawsuits against similar aggregators (the key one being Facebook’s lawsuit against Power.com). And ever since I started using Beeper, I wondered if anyone (and especially Apple) might take the same approach and sue.
There have been some reasonable concerns about how Beeper handled end-to-end encrypted messaging services like Signal, WhatsApp, and iMessage. It originally did this by setting up a bunch of bridge servers that it controls, which have access to your messages. In some ways, Beeper is an “approved” man-in-the-middle attack on your messages, with some safeguards, but built in such a way that those messages are no longer truly end-to-end encrypted. Beeper has taken steps to do this as securely as possible, and many users will think those tradeoffs are acceptable for the benefit. But, still, those messages have not been truly end-to-end encrypted. (For what it’s worth, Beeper open sourced this part of its code, so if you were truly concerned you could host the bridge yourself and basically man-in-the-middle yourself to make Beeper work, but I’m guessing very few people did that.)
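To make that a bit more concrete, here’s a rough conceptual sketch (in Python, using the cryptography library’s Fernet as a stand-in cipher; this is emphatically not Beeper’s actual code, which is built on open-source Matrix bridges) of why a hosted bridge can’t deliver true end-to-end encryption: the bridge holds the keys for both legs of the conversation, so plaintext necessarily exists on its server.

```python
# Conceptual sketch only (not Beeper's actual code): a hosted bridge holds
# keys for both sides of the conversation, so it can read what it relays.
from cryptography.fernet import Fernet

# Both of these keys live on the bridge server, not on your devices.
imessage_leg = Fernet(Fernet.generate_key())  # bridge <-> iMessage side
beeper_leg = Fernet(Fernet.generate_key())    # bridge <-> your Beeper client

def bridge_relay(ciphertext_from_imessage: bytes) -> bytes:
    # The bridge decrypts the incoming message...
    plaintext = imessage_leg.decrypt(ciphertext_from_imessage)
    # ...so the plaintext exists, however briefly, on a server you don't
    # control (unless you self-host the open-source bridge yourself)...
    return beeper_leg.encrypt(plaintext)  # ...then re-encrypts for delivery.

incoming = imessage_leg.encrypt(b"hi from an iPhone")
delivered = bridge_relay(incoming)
print(beeper_leg.decrypt(delivered).decode())
```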
That said, from early on Beeper has made it clear that it would like to move away from this setup to true end-to-end encryption, but that requires interoperable end-to-end encrypted APIs, which (arguably) the DMA may mandate.
Or… maybe it just takes a smart hacking teen.
Over the summer, a 16-year-old named James Gill reached out to Beeper’s Eric Migicovsky and said he’d reimplemented iMessage in a project he’d released called Pypush. Basically, he reverse engineered iMessage and created a system by which you could message securely in a truly end-to-end encrypted manner with iMessage users.
If you want to understand the gory details, and why this setup is actually secure (and not just secure-like), Snazzy Labs has a great video:
Over the last few months, Beeper had upgraded the bridge setup it used for iMessage within its offering to make use of Pypush. Beeper also released a separate new app for Android, called Beeper Mini, which is just for making iMessage available for Android users in an end-to-end encrypted manner. It also allows users (unlike the original Beeper, now known as Beeper Cloud) to communicate with iMessage users just via their phone number, and not via an AppleID (Beeper Cloud requires the Apple ID). Beeper Mini costs $2/month (after a short free trial), and apparently there was demand for it.
I spoke to Migicovsky on Sunday and he told me they had over 100k downloads in the first two days it was available, and that it’s the most successful launch of a paid Android app ever. It was a clear cut example of why interoperability without permission (adversarial interoperability) is so important, and folks like Cory Doctorow rightfully cheered this on.
But all that attention also seems to have finally woken up Apple. On Friday, users of both Beeper Cloud and Beeper Mini found that they could no longer message people via iMessage. In the Snazzy Labs video above, he explains why it’s not that easy for Apple to block the way Beeper Mini works. But Apple still has more resources at its disposal than just about anyone else, and it devoted some of them to doing exactly what Snazzy Labs (and Beeper) thought was unlikely: blocking Beeper Mini from working.
So… with that all as background, the key thing to understand here is that Beeper Mini was making everyone’s messaging more secure. It certainly better protected Android users in making sure their messages to iPhone users were encrypted. And it similarly better protected Apple users, in making sure their messages to Android users were also encrypted. Which means that Apple’s response to this whole mess underscores the lie that Apple cares about users’ privacy.
Apple’s PR strategy is often to just stay silent, but it actually did respond to David Pierce at the Verge and put out a PR statement that is simply utter nonsense, claiming it did this to “protect” Apple users.
At Apple, we build our products and services with industry-leading privacy and security technologies designed to give users control of their data and keep personal information safe. We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage. These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users.
Almost everything here is wrong. Literally, Beeper Mini’s interoperable setup better protected the privacy of Apple’s customers than Apple itself did. Beeper Mini’s setup absolutely did not “pose significant risks to user security and privacy.” It effectively piggybacked onto Apple’s end-to-end encryption system to make sure that it was extended to messages between iOS users and Android users, better protecting both of them.
When I spoke to Eric on Sunday he pledged that if Apple truly believed that Beeper Mini somehow put Apple users at risk, he was happy to agree to have the software fully audited by an independent third party security auditor that the two organizations agreed upon to see if it created any security vulnerabilities.
For many years, people like myself and Cory Doctorow have been talking up the importance of interoperability, open protocols, and an end to locked-down silos. Big companies, including Apple, have often made claims about “security” and “privacy” to argue against such openness. But this seems like a pretty clear case in which that’s obviously bullshit. The security claims here are weak, given that, from the way Beeper Mini is constructed, it seems significantly more secure than Apple’s own approach, which provides far less security for iOS-to-Android interactions.
And for Apple to do this just as policymakers are looking for more and more ways to ensure openness and interoperability seems like a very stupid self-own. We’ll see if the EU decides to exempt iMessage from the DMA’s “gatekeeper” classification and its interop requirements, but policymakers elsewhere are certainly noticing.
While I often think that Elizabeth Warren’s tech policy plans are bonkers, she’s correctly calling out this effort by Apple.
She’s correct. Chatting between different platforms should be easy and secure, and Apple choosing to weaken the protections of its users while claiming it’s doing the opposite is absolute nonsense, and should be called out as such.
It wasn’t that long ago that cable TV execs were trying to claim that “cord cutting” was either outright fiction, or a fad that would end once Millennials started procreating. The willful denial among cable execs was downright palpable for the better part of the last decade. Now they all just pretend like they never made those claims or predictions.
Fast forward to 2023, and streaming subscriptions are poised to pass traditional cable TV subscriptions for the first time ever. According to new analysis by research firm Insider Intelligence, by the end of this year, the number of cable TV subscribers will drop 10.2% to 121.1 million, while non-pay TV customers (streaming, antenna) will soar 12.5%, to 144.1 million.
By 2027, the firm estimates that 182.4 million Americans will be streaming TV customers while “just” 91.3 million Americans will subscribe to cable TV:
“Regardless of how one defines pay TV, there is an unmistakable attrition in the number of people who are willing to pay upwards of $100 a month for a live TV bundle,” Paul Verna, vice president of content at II, said. “The cord cutters have won.”
Organizations like Nielsen indicate that the Rubicon was already technically crossed back in June, when the firm reported that 38% of all television viewing was being done via streaming, compared with 31% for traditional cable. In 2023, broadcast TV viewing accounted for 20.8% of total TV watching, a new low point.
It’s all fairly impressive for a trend that executives spent the better part of a decade trying to pretend wasn’t happening. Even sports leagues, the last major reason to retain a traditional cable connection, have slowly been beefing up their direct-to-consumer live streaming options (see: NFL+).
Now that’s not to say there isn’t trouble in paradise. We’ve seen ample indication that streaming executives have learned absolutely nothing from history, and, as they try to deliver improved quarterly returns to Wall Street, are engaging in many of the same behaviors (nickel-and-diming users, relentless price hikes, an unyielding thirst for consolidation) that gave us Comcast in the first place.
The illiterate “moderation is censorship because it suppresses speech!” lie originates solely from both entitlement and irrationality.
Let’s say a person’s unrestricted ability to speak is defined as a baseline “speech value” of 1.0. By being offered the privilege of borrowing another’s speech platform, a speaker can, let’s say, expand their speech value to 5x. Have multiple platforms open? Let’s say your speech value is 25x. The entitled and irrational believe that any withdrawal of these conditional privileges whatsoever (e.g. a platform saying “You broke our rules so you’re no longer allowed on our private property.”), bringing one’s current speech value under this maximum potential, even to, say, 24x, is “suppression” of speech. In reality, free speech remains fully intact and unsuppressed until it drops below that baseline value of 1.0 (i.e. the government saying “You are not allowed to say this anywhere.”). Matthew, Koby, Benjamin, Hyman, BDAC, etc. lying that moderation is censorship is a malicious, disingenuous twisting of language that misleadingly conflates the loss of privileges with the loss of a hallucinatory “right to post,” the sole intent behind which is to support the loss of the actual Constitutional and free speech rights held by platforms. It is impossible to truthfully claim to support free speech rights while simultaneously opposing moderation.
For editor’s choice on the insightful side, we start out with one more comment from Thad, this time on our post warning against using copyright to fight AI:
Anyone who thinks expanding copyright will help individual creators rather than corporate publishers hasn’t been paying attention the last…every single time we’ve ever tried that.
“I hate this website so much that I return every day and force myself to read articles I don’t like and comment about how much I don’t like them because that is definitely a sign of a healthy individual.”
That sentence, realizing it was stuck in a hopeless and stupid case, suffered a seizure and glossolalia. Have pity upon it. It is now undergoing therapy and we are hopeful that it will resume conveying information in the future.
Uh, well, okay then. I really thought we were done with the whole Microsoft buying Activision Blizzard saga. Hell, I even wrote what I thought was a final post on the matter, called the post a curtain call, and discussed how the deal had passed all the regulatory barriers and had been consummated. That happened after the FTC lost in court on its request for a TRO to block the deal and then subsequently paused its suit entirely, clearing the way for the deal to move forward. At the time, the FTC made some noises about appealing the lower court’s decision, but then didn’t.
Until now. Nearly five months later, the FTC has appealed the court’s decision, arguing that the lower court essentially took whatever Microsoft said at face value.
The US government told a federal appeals court Wednesday that Microsoft’s recent purchase of Activision should not have been cleared by a lower-court judge, because the judge had been too deferential to Microsoft’s promises about the future of “Call of Duty,” a popular first-person shooter game.
District Judge Jacqueline Scott Corley went too far, the Federal Trade Commission argued, when she ruled in July that 11th-hour contracts Microsoft signed with Nintendo, Nvidia and other gaming companies concerning “Call of Duty” would resolve anticompetitive concerns related to the blockbuster deal.
Even if you think that the FTC’s argument is valid, which I very much do and wrote about at the time, I will be completely surprised if this gets any traction. Too much has progressed in too many places, especially in the European markets, to imagine the courts somehow coming back 2 months after this deal was completed and unringing the bell.
The only shred of hope I could see this having is the part of the FTC’s argument in which it claims that Microsoft’s decision to go around inking a bunch of 10-year deals to bring certain titles, namely the Call of Duty series, to non-Microsoft platforms altered the landscape the FTC was analyzing so significantly that it didn’t have the time to dig into the details and build an argument against that new landscape.
“I fail to understand how giving somebody a monopoly of something would be pro-competitive,” said Imad Dean Abyad, an FTC attorney, in the argument Wednesday before the appeals court. “It may be a benefit to some class of consumers, but that is very different than saying it is pro-competitive.”
Abyad said that Microsoft’s flurry of licensing agreements in response to regulator scrutiny altered the economic picture in ways the FTC did not have an opportunity to fully review but that courts are now forcing it to accept.
“What the district court relied on, mostly, are contracts that were entered into after the [FTC] complaint was filed,” Abyad said. “The facts were changing all along. Even after the district court decided the case, Microsoft went ahead and entered into yet another contract [to restructure the cloud licensing rights].”
We said at the time that Microsoft was clearly taking the complaints from various regulatory bodies as some sort of paint by numbers prescription as to what deals to make to get around them. And I very much can see the FTC’s point on this. It brought a complaint under one set of facts only to have Microsoft alter those facts, leading to the courts slamming the deal through before the FTC had a chance to amend its arguments.
But ultimately it won’t matter. This last gasp attempt will almost certainly fail. American regulatory bodies have dull teeth to begin with and I’ve seen nothing that would lead me to believe that the courts are going to allow the agency to unwind a closed deal after everything it took to get here.
Here we go again, everyone. Another far-right state lawmaker has introduced a bill requiring age verification in order to access porn sites from within state limits. This time it is Tennessee state Rep. Patsy Hazlewood, who has introduced yet another extreme age verification proposal that essentially makes it a crime to own a legally operating, First Amendment-protected porn website – regardless of whether the site otherwise complies with regional regulations.
Referred to as the Protect Tennessee Minors Act, her bill takes a few notes from other far-right lawmakers in Ohio and Indiana. Both state legislatures have bills that levy misdemeanors and felonies on companies that own adult entertainment websites that fail or choose not to follow age verification requirements. The proposal in Ohio makes it a crime for users to circumvent an age gate through legally available means, like a VPN. Hazlewood’s bill, House Bill 1614, is a pre-filing for 2024’s legislative session, and it creates a new Class C felony for failure to comply with the law.
While the official bill language has yet to be published, House Bill 1614 is what we in the adult entertainment industry press call a “copycat” of the mandatory age verification law first adopted in Louisiana. Throughout this year, proposals targeting adult entertainment websites with age-gating rules have grown increasingly extreme. Rep. Hazlewood’s bill clearly fits this mold.
However, a significant volume of sexual abuse imagery isn’t tied to the online adult industry, and mandatory age verification for end users isn’t the answer to fighting these heinous acts.
New Mexico Attorney General Raúl Torrez argued in a new lawsuit that platforms like Pornhub and OnlyFans do more to counter CSAM and non-consensual intimate imagery (revenge porn) than platforms like Facebook and Instagram. National Center for Missing & Exploited Children’s (NCMEC) CyberTipline data overwhelmingly confirms this fact. Age gates on porn sites – or even social media networks – will not curtail CSAM online. Admittedly, the parent companies that own the mentioned platforms are involved in programs that locate, remove, and report cases of CSAM and non-consensual intimate imagery (e.g., NCMEC’s TakeItDown program). The age verification hypothesis certainly doesn’t solve this problem, and it shouldn’t come at the expense of the First Amendment rights of adults who are not breaking laws or imposing harm on others.
Michael McGrady covers the legal and tech side of the online porn business, among other topics. He is the politics and legal contributing editor for AVN.com.
A little over a month ago we told the Copyright Office in a comment that there was no role for copyright law to play when it comes to training AI systems. In fact, on the whole there’s little for copyright law to do to address the externalities of AI at all. No matter how one might feel about some of AI’s more dubious applications, copyright law is no remedy. Instead, as we reminded in this follow-up reply comment, trying to use copyright to obstruct development of the technology instead creates its own harms, especially when applied to the training aspect.
One of those harms, as we reiterated here, is that it impinges on the First Amendment right to read that human intelligence needs to have protected, and that right must inherently include the right to use technological tools to do that “reading,” or consumption in general, of copyrighted works. After all, we need record players to play records – it would do no one any good if their right to listen to one stopped short of being able to use the tool needed to do it. We also pointed out that this First Amendment right does not diminish even if people consume a lot of media (we don’t, for instance, punish voracious readers for reading more than others) or consume it at speed (copyright law does not give anyone the right to forbid listening to an LP at 45 rpm, or watching a movie on fast forward). So if we were to let copyright law stand in the way of using software to quickly read a lot of material, it would represent a deviation from how copyright law has operated up to now, and one that would undermine the rights to consume works that we’ve so far been able to enjoy.
Which is why we also pointed out that using copyright to deter AI training distorts copyright law itself, which would be felt in other contexts where copyright law legitimately applies. And we highlighted a disturbing trend emerging in copyright law from other quarters as well: the idea that whether a use of a work is legitimate somehow depends on whether the copyright holder approves of it. Copyright law was not intended, or written, to give copyright owners an implicit veto over any or all uses of works – the power of a copyright is limited to what its exclusive rights allow control over and what fair use doesn’t otherwise permit.
A variant of this emerging trend also getting undue oxygen is the idea that profiting from a copyrighted work that was used for free is somehow inherently objectionable, and therefore ripe for the copyright holder to veto. But, again, it would represent a significant change if copyright law worked that way. Copyright holders are not guaranteed every penny that could potentially result from the use of a copyrighted work, and it has been independently problematic when courts have found otherwise.
Furthermore, to the extent that this later profiting may represent an actual problem in the AI space, which is far from certain, a better solution is to keep copyright law away from AI outputs as well. Some of the objection to AI makers later profiting seems to be based on the concern that certain enterprises might use works for free to develop their systems and then lock up the outputs with their own copyrights. But it isn’t necessary for copyright to apply to everything that is ever created, and certainly not to everything created by an artificial intelligence, so we should also look hard at whether it is even appropriate for copyright to apply to AI outputs. Not everything needs to be owned; having works immediately enter the public domain after their creation is an option, and a good one that vindicates copyright’s goals of promoting the exchange of knowledge.
Which brings us back to an earlier point to echo again now: using copyright law as a means of constraining AI is also an ineffective way of addressing any of its potential harms. If, for instance, AI is used in hiring decisions and leads to discriminatory results, such is not a harm recognized by copyright law, and copyright law is not designed to address it. In fact, trying to use copyright law to fix it would actually be counterproductive: bias is exacerbated when the training data is too limited, and limiting it further will only make the problem we’re trying to address worse.
For many, many years we’ve been calling on companies to enable end-to-end encryption by default on any messaging/communications tools. It’s important to recognize that doing so correctly is difficult, but not impossible (similarly, it’s important to recognize that doing so poorly is dangerous, as it will lead people to believe their communications are secure when they are most certainly not).
So, over the years we’ve been hopeful as Meta made moves towards implementing end-to-end encryption in Facebook Messenger. However, over and over during the past decade or so, those working on the issue have told us that while Meta really wants to set it up, the practical realities of doing it correctly are way more complex than most people think. And that’s ignoring the fact that law enforcement, intelligence agencies, and even random shareholders have tried to get Meta to move away from its encryption plans. But now Meta has announced that default end-to-end encryption is finally rolling out on Messenger:
Today I’m delighted to announce that we are rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook, as well as a suite of new features that let you further control your messaging experience. We take our responsibility to protect your messages seriously and we’re thrilled that after years of investment and testing, we’re able to launch a safer, more secure and private service.
Since 2016, Messenger has had the option for people to turn on end-to-end encryption, but we’re now changing private chats and calls across Messenger to be end-to-end encrypted by default. This has taken years to deliver because we’ve taken our time to get this right. Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up. We’ve introduced new privacy, safety and control features along the way like delivery controls that let people choose who can message them, as well as app lock, alongside existing safety features like report, block and message requests. We worked closely with outside experts, academics, advocates and governments to identify risks and build mitigations to ensure that privacy and safety go hand-in-hand.
The extra layer of security provided by end-to-end encryption means that the content of your messages and calls with friends and family are protected from the moment they leave your device to the moment they reach the receiver’s device. This means that nobody, including Meta, can see what’s sent or said, unless you choose to report a message to us.
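To make concrete what “nobody, including Meta, can see what’s sent or said” means, here’s a bare-bones sketch of the end-to-end idea using Python’s cryptography library. This is emphatically not Messenger’s actual protocol (which builds on the Signal protocol, with key ratcheting, multi-device support, and much more); it just shows the core property that keys are generated and held on the endpoints, so the service in the middle only ever relays ciphertext.

```python
# Minimal sketch of the end-to-end property: both parties derive the same
# key on their own devices, and the relay in the middle never holds it.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_shared_key(own_private, peer_public) -> bytes:
    # Each side computes the same secret locally; it never leaves the device.
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo-e2e").derive(shared)

# Keys are generated on each person's own device.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only the *public* keys ever cross the server in the middle.
alice_key = derive_shared_key(alice_private, bob_private.public_key())
bob_key = derive_shared_key(bob_private, alice_private.public_key())
assert alice_key == bob_key

# Alice encrypts on her phone; the relay (Meta, in this analogy) sees only
# the nonce and ciphertext, never the key or the plaintext.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"hello bob", None)

# Bob decrypts on his phone with the key he derived himself.
print(ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None).decode())
```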
It’s extremely rare that I’d offer kudos to Meta, but this is a case where it absolutely deserves it. Even if some of us kept pushing the company to move faster, they did get there, and it looks like they got there by doing it carefully and appropriately (rather than the half-assed attempts of certain other companies).
I am sure that we’ll hear reports of law enforcement and politicians whining about this, but this is an unquestionably important move towards protecting privacy and private communications.