While we’ve talked a great deal now about Microsoft’s proposed acquisition of Activision Blizzard, most of the focus has been on whether three major regulatory bodies will approve the purchase. But those regulatory bodies are not the only ones challenging it. A small group of gamers filed their own private suit to block the acquisition, arguing that they would be negatively impacted if it were approved. Earlier this year the judge dismissed that suit, stating that the plaintiffs had not provided enough specific evidence of harm in their complaint to allow it to move forward. However, the court also gave the plaintiffs the ability to re-file and told them what it would be looking for in an amended complaint.
The gamers did submit a new complaint and, while the judge refused their request for a preliminary injunction pausing the purchase, she is allowing the suit to move forward.
Regarding those amended claims, District Court Judge Jacqueline Scott Corley said in a Friday ruling that, while it was too early to fully rule on the merits of the case, the plaintiffs “plausibly attest to their loyalty to the Call of Duty franchise and thus that each will purchase a different console or subscription service, or pay an inflated price, if needed to continue to play Call of Duty, especially if needed to play with their friends.” That’s a turnaround from the initial March dismissal, where Corley wrote that the plaintiffs didn’t “plausibly allege” that the merger “creates a reasonable probability of anticompetitive effects in any relevant market.”
Even if those “plausible” claims are eventually proved at trial, though, Corley said the plaintiffs hadn’t shown any evidence of the “immediate, irreparable harm” that would be needed to justify a preliminary injunction at this point. On the contrary, Corley writes that, immediately following any merger, there’s no evidence that Microsoft “can do anything to make these [existing PlayStation] Call of Duty versions currently owned by Plaintiffs somehow stop working, let alone that it would do so.”
And, so, these folks are going to get their day in court. Antitrust claims are notoriously hard to make stick in the United States generally, never mind suits brought by private individuals like this. But they will get their shot at litigation and, given all of the surrounding drama from the regulators, you can’t count the suit out.
Still, all of this doesn’t really matter all that much until the FTC’s lawsuit to block the deal plays out anyway.
Any effects of the merger will also have to wait until the resolution of the current Federal Trade Commission administrative action seeking to stop the deal, not to mention UK regulatory efforts to do the same. If and when the deal finally survives those hurdles, Corley writes that “the Court will be able to hold a trial on the merits and finally decide the issue before Plaintiffs suffer any irreparable harm.”
With all of the legal action surrounding this purchase, Microsoft has to at least feel like the dog that finally caught the car, if not the dog that knocked over a hornet’s nest and immediately found its feet stuck in cement, unable to avoid getting stung from every direction.
For my final post of last year, I wrote about the many reasons to be optimistic about a better future, one of which was that we were seeing the crumbling of some large, bureaucratic (enshittified) companies, and new competitive upstarts pushing the boundaries. One of those areas was in the artificial intelligence space. As I noted in that piece, a few years ago, if you spoke to anyone about AI, the widespread assumption was that there were only four companies who could possibly even have a chance to lead the AI revolution, as (we were told) it required so much data, and so much computing power, that only Google, Meta, Amazon or Microsoft could possibly compete.
But, by the end of last year, we were already seeing that this wasn’t true. There were a bunch of new entrants, many of which appeared to be doing a better job than the “big tech” players when it came to AI, and many of which were offering their models as open source.
Luke Sernau, a senior Google engineer, made that clear when he referenced one of Buffett’s most famous theories—the economic moat—in an internal document released Thursday by the consulting firm SemiAnalysis, titled “We have no moat. And neither does OpenAI.” In the document, which was published within Google in early April, Sernau claimed that the company is losing its artificial intelligence edge, not to the flashy, Microsoft-backed OpenAI—whose ChatGPT has become a huge hit since its release last November—but to open-source platforms like Meta’s LLaMa, a large language model that was leaked to the public in February.
“We’ve done a lot of looking over our shoulders at OpenAI… But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch,” he wrote. “I’m talking, of course, about open source. Plainly put, they are lapping us.”
Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.
And that takes us to last week’s testimony before Congress by OpenAI’s Sam Altman. Sam is very smart and very thoughtful, though it’s not clear to me he fully recognizes the policy implications of what he’s talking about, and that came across in his testimony.
But, of course, the most notable takeaway from the hearing was that the “industry” representatives appeared to call for Congress to regulate them. Senators pretended this was surprising, even though it’s actually pretty common:
Senator Dick Durbin, of Illinois, called the hearing “historic,” because he could not recall having executives come before lawmakers and “plead” with them to regulate their products—but this was not, in fact, the first time that a tech C.E.O. had sat in a congressional hearing room and called for more regulation. Most notably, in 2018, in the wake of the Cambridge Analytica scandal—when Facebook gave the Trump-aligned political-consultancy firm access to the personal information of nearly ninety million users, without their knowledge—the C.E.O. of Facebook, Mark Zuckerberg, told some of the same senators that he was open to more government oversight, a position he reiterated the next year, writing in the Washington Post, “I believe we need a more active role for governments and regulators.”
And, of course, various cryptocurrency companies have called for regulations as well. Indeed, it’s actually kind of typical: when companies get big enough and fear newer upstart competition, they’re frequently quite receptive to regulations. They may make some superficial moves to look like they’re worried about those regulations, but that’s generally for show, and to make lawmakers feel more powerful than they really are. Established companies often want those regulations in order to lock themselves in as the dominant players, and to saddle smaller companies with impossible-to-meet compliance costs.
When looked at this way, and in combination with the Google memo about the lack of “moats,” it’s not hard to read last week’s testimony as Altman’s call for Congress to create a moat that protects his company from open source upstarts. Of course, he would never admit that publicly, and instead he can frame it as preventing “bad” actors from making nefarious use of the technology. But, it is self-serving all the same. And that seems pretty obvious to many observers (though it’s not clear if Congress recognizes this):
Figuring out how to assess harm or determine liability may be just as tricky as figuring out how to regulate a technology that is moving so fast that it is inadvertently breaking everything in its path. Altman, in his testimony, floated the idea of Congress creating a new government agency tasked with licensing what he called “powerful” A.I. models (though it is not clear how that word would be defined in practice). Although this is not, on its face, a bad idea, it has the potential to be a self-serving one. As Clem Delangue, the C.E.O. of the A.I. startup Hugging Face, tweeted, “Requiring a license to train models would . . . further concentrate power in the hands of a few.” In the case of OpenAI, which has been able to develop its large language models without government oversight or other regulatory encumbrances, it would put the company well ahead of its competitors, and solidify its first-past-the-post position, while constraining newer entrants to the field.
Were this to happen, it would not only give companies such as OpenAI and Microsoft (which uses GPT-4 in a number of its products, including its Bing search engine) an economic advantage but could further erode the free flow of information and ideas. Gary Marcus, the professor and A.I. entrepreneur, told the senators that “there is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people’s beliefs” and “do that with data that we don’t even know about.” He was referring to the fact that OpenAI and other companies have kept secret what data their large language models have been trained on, making it impossible to determine their inherent biases or to truly assess their safety.
That’s not to say that there should be no consideration for what might go wrong, or that there should be no rules at all. But it does mean that we should look a little skeptically on the latest round of tech CEOs begging Congress to regulate them, and assuming that their intentions and motives are to benefit humanity.
It’s more likely they really just want Congress to build them a moat.
The EU Parliament is looking to regulate AI. That, in itself, isn’t necessarily a bad idea. But the EU’s proposal — the AI Act — is pretty much bad all over, given that it’s vague, broad, and would allow pretty much any citizen of any EU nation to wield the government’s power to shut down services they personally don’t care for.
But let’s start with the positive aspects of the proposal. The EU does want to take steps to protect citizens from the sort of AI law enforcement tends to wield indiscriminately. The proposal would actually result in privacy protections in public spaces. This isn’t because the EU is creating new rights. It’s just placing enough limits on surveillance of public areas that privacy expectations will sort of naturally arise.
James Vincent’s report for The Verge highlights the better aspects of the AI Act, which is going to make a bunch of European cops upset if it passes intact:
The main changes to the act approved today are a series of bans on what the European Parliament describes as “intrusive and discriminatory uses of AI systems.” As per the Parliament, the prohibitions — expanded from an original list of four — affect the following use cases:
“Real-time” remote biometric identification systems in publicly accessible spaces;
“Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
Predictive policing systems (based on profiling, location or past criminal behaviour);
Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
That’s the good stuff: a near-complete ban on facial recognition tech in public areas. Even better, the sidelining of predictive policing programs which, as the EU Parliament already knows, are little more than garbage “predictions” generated by bias-tainted garbage data supplied by law enforcement.
In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models. The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe. While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.
Any model made available in the EU, without first passing extensive, and expensive, licensing, would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Open-source developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.
When you make the fines big enough and the mandates restrictive enough, only the most well-funded companies will feel comfortable doing business in areas covered by the AI Act.
In addition to making things miserable for AI developers in Europe, the Act is extraterritorial, potentially subjecting any developer located anywhere in the world to the same restrictions and fines as those actually located in the EU.
And good luck figuring out how to comply with the law. The acts (or non-acts) capable of triggering fines and bans are just as vague as the rest of the proposal. AI providers must engage in extensive risk-testing to ensure they comply with the law. But the list of “risks” they must foresee and prevent is little more than a stack of government buzzwords that can easily be converted into actionable claims against tech companies.
The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy? Could this act itself be a risk to democracy?
In addition, the restrictions on API use by third parties would put US companies in direct conflict with US laws if they attempt to comply with the EU’s proposed restrictions.
The top problem is the API restrictions. Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill. A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price. Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.
What the EU wants runs contrary to what the FTC is demanding. For an American company to actually impose such restrictions in the US would bring up a host of antitrust problems.
While some US companies will welcome the opportunity to derail their smaller competitors and lock in large contracts with their wealthiest customers, one of the biggest tech companies in the world is signaling it wants no part of the EU Parliament’s AI proposal. The proposal may not be law yet, but as Morgan Meaker and Matt Burgess report for Wired, Google is already engaging in some very selective distribution of its AI products.
[G]oogle has made its generative AI services available in a small number of territories of European countries, including the Norwegian dependency of Bouvet Island, an uninhabited island in the South Atlantic Ocean that’s home to 50,000 penguins. Bard is also available in the Åland Islands, an autonomous region of Finland, as well as the Norwegian territories of Jan Mayen and Svalbard.
This looks like Google sending a subtle hint to EU lawmakers, letting them know that if they want more than European penguins to have access to Google’s AI products, they’re going to have to do a rewrite before passing the AI Act.
The EU Parliament is right to be concerned about the misuse of AI tech. But this isn’t the solution, at least not in this form. The proposal needs to be far less broad, way less vague, and more aware of the collateral damage this grab bag of good intentions might cause.
One hurdle defeated, two more to go. For months now, we have been discussing Microsoft’s proposed $69 billion acquisition of Activision. What would be the largest video game studio acquisition in history has faced several hurdles along the way, primarily from the EU, the UK, and the United States. While the UK’s CMA has already formally nixed the purchase (an appeal by Microsoft is pending) and the FTC decision is looming, leaks had already suggested months ago that the EU was set to approve the deal.
And now those leaks have been proven prescient. The European Commission has formally approved the purchase. To get there, the EC relied on three points: it believes Microsoft’s promises to keep its titles available to other cloud-gaming providers, it downplays the popularity of Call of Duty in the EU, and, my favorite, it claims that Microsoft wouldn’t make popular titles Xbox exclusives because Microsoft has so badly lost the console wars to Sony. Yes, seriously.
As to those promises:
Because of those concerns, the EC’s decision is conditional on certain assurances Microsoft has made to preserve competition. Those include a free license for any cloud streaming service to allow its users access to “any Activision Blizzard PC and console games” for at least 10 years. Anyone who has purchased any current or upcoming Activision-Blizzard games (or accessed them through a subscription) will “have the right to stream those games with any cloud game streaming service of their choice and play them on any device using any operating system” throughout Europe.
With this commitment in place, the EC says it’s satisfied that the merger will “represent a significant improvement for cloud game streaming compared to the current situation.” The Commission notes that “cloud game streaming service providers gave positive feedback and showed interest in the licenses” and points to existing Microsoft agreements with cloud providers such as Boosteroid.
Which… fine, whatever. Cloud gaming isn’t completely without adoption, but it is also a fraction of the total gaming market. If, when it comes to cloud gaming specifically, the EC wants to buy into the totally coincidental 10-year deals Microsoft made after announcing the purchase, so be it. I, and the industry generally, have been more focused on the non-cloud console market. What happens if Microsoft decides to make Call of Duty an Xbox/PC exclusive? Well, no big deal, according to the EC, because the series isn’t as popular in the EU as it is in America.
Even if Microsoft did make the Call of Duty franchise an Xbox exclusive, the decision would “not significantly harm competition in the consoles market” because the series “is less popular in [Europe] than in other regions of the world, and is less popular in [Europe] within its genre compared to other markets,” European regulators wrote.
And that’s true. It’s hard to break this down to the EU specifically, or by individual game, but the sources I’m looking at suggest that recent Call of Duty titles have been number one sellers in the EU, even if those sales are outpaced by the Americas in total numbers. I don’t think the numbers warrant the hand-waving routine the EC is engaging in here, but that also wasn’t its only comment on the matter. The EC also didn’t think Microsoft would ever consider pulling CoD off of the PlayStation, because Microsoft is being outpaced so badly by its rival in sales.
In the end, European regulators said they were not concerned about the merger’s effects on the market for non-cloud console gaming. Despite Sony’s concerns, Microsoft “would have no incentive to refuse to distribute Activision’s games to Sony” after a merger, the EC said, partly because “there are four Sony PlayStation consoles for every Microsoft Xbox console bought by gamers” across Europe.
And there you have it: the EC has approved the acquisition.
As I stated earlier, this isn’t the end of the story. Microsoft still has its appeal of the CMA decision to contend with, never mind the far more important potential battle with the FTC in America. Were the latter to refuse to allow this to move forward, that would probably be the end of the deal. In the meantime, I suppose Microsoft should focus on ensuring it can keep its promises to the EC while also committing to not accidentally gaining console market share on Sony.
Analysts had been quietly noting for a while that Starlink satellite broadband service would consistently lack the capacity to be disruptive at any real scale. As is usually the case with Musk products, that analysis was generally buried under product hype. A few years later, Starlink users are facing obvious slowdowns and a steady parade of price hikes that show no signs of slowing down.
Last November, Starlink announced it would be implementing one terabyte per month usage caps in a bid to tackle growing network congestion.
The problem: usage caps generally aren’t a great fix for network congestion. While companies like Comcast use them to nickel-and-dime captive customers under the pretense of managing congestion, actual congestion is commonly tackled by far more sophisticated network management tech that prioritizes or deprioritizes traffic depending on local network load.
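To illustrate the distinction, here’s a toy sketch of load-aware deprioritization versus a flat monthly cap. All names, thresholds, and numbers here are invented for illustration; real ISP traffic management is far more sophisticated:

```python
# Toy illustration: a flat cap slows a heavy user everywhere, all month.
# Load-aware management only deprioritizes heavy users while their local
# cell is actually congested. All numbers are invented.

CONGESTION_THRESHOLD = 0.85  # fraction of local cell capacity in use
HEAVY_USER_GB = 300          # recent-usage level considered "heavy"

def scheduling_priority(cell_load: float, recent_usage_gb: float) -> str:
    """Decide how to schedule a user's traffic right now."""
    if cell_load < CONGESTION_THRESHOLD:
        return "normal"        # uncongested: nobody gets slowed
    if recent_usage_gb > HEAVY_USER_GB:
        return "deprioritized" # congested: heaviest users wait first
    return "normal"

# A heavy user is untouched off-peak, deprioritized only under load.
print(scheduling_priority(0.50, 900))  # normal
print(scheduling_priority(0.95, 900))  # deprioritized
print(scheduling_priority(0.95, 10))   # normal
```

The point of the sketch: congestion is a local, momentary condition, so the fix keys off current load rather than a calendar-month usage total.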
Starlink appears to have belatedly figured this out, and has been sending users a notice saying the company has already backed away from monthly usage caps entirely, for now:
Speeds have dropped as Starlink attracts more users. As recently as late September, Starlink said that residential users should expect download speeds of 50Mbps to 200Mbps, upload speeds of 10Mbps to 20Mbps, and latency of 20 to 40 ms. Business service at the time was said to offer 100Mbps to 350Mbps downloads and 10Mbps to 40Mbps uploads. The expected speeds were lowered by early November, Internet Archive captures show.
As one Starlink user wrote on Reddit, “It’s not exactly a win. They’re only promising 25-100Mbps for residential now. I’ve noticed some pretty significant speed issues lately, so I think this has been implemented before it was announced.”
There’s a reason this particular business segment (low Earth orbit satellites) has been peppered with failures: it’s hugely expensive, and capacity constraints (and the laws of physics) are a major nuisance that makes scaling the network extremely difficult. It’s why the feds have increasingly prioritized subsidizing future-proof fiber builds instead of Musk’s pet project.
Musk wants to maximize revenue and keep the service in headlines despite capacity constraints, so he keeps on expanding the potential subscriber base, whether that’s a tier aimed at boaters (at $5,000 a month), the specialized tier aimed at RVs ($135 a month plus a $2,500 hardware kit), or the new plan to sell service access to various airlines to help fuel in-flight broadband services.
To try to manage this growing load, the company has consistently raised prices even as speeds decline. It now offers two basic options: a “Standard” tier (25Mbps to 100Mbps, a $600 up-front hardware charge, and $90 to $120 a month depending on how congested your neighborhood is) and a “Priority” tier (40Mbps to 220Mbps, requiring a $2,500 up-front hardware charge and $250 a month).
This is before you get to the year-plus waiting list that greets many users upon signing up, something else you can pay extra to avoid. All of this is increasingly expensive, given that broadband affordability remains one of the biggest hurdles to widespread adoption in a country dominated by monopolies.
Starlink remains a great option for users in regions with absolutely no service or those stuck on a DSL line from 2002. But steadily increasing prices, slower speeds, and comically terrible customer service (often a trademark of Musk companies) mean the service will never actually be as disruptive at scale as the early press hype suggested (hype also being a trademark of most Musk companies).
A few weeks ago I wrote about an interview that Substack CEO Chris Best did about his company’s new offering, Substack Notes, and his unwillingness to answer questions about specific content moderation hypotheticals. As I said at the time, the worst part was Best’s unwillingness to own up to what were, apparently, the site’s content moderation plans: being quite open to hosting the speech of almost anyone, no matter how terrible. That’s a decision you can make (in the US, at least), but if you’re going to make it, you have to be willing to own it and be clear about it, which Best was unwilling to do.
I compared it to the “Nazi bar” problem that has been widely discussed on social media in the past: if you own a bar and don’t kick the Nazis out up front, you get a reputation as a “Nazi bar” that is difficult to shake.
It was interesting to see the response to this piece. Some people got mad, claiming it was unfair to call Best a Nazi, even though I was not doing that. As in the story of the Nazi bar, no one is claiming that the bar owner is a Nazi, just that the public reputation of his bar would be that it’s a Nazi bar. That was the larger point. Your reputation is what you allow, and if you’re taking a stance that you don’t want to get involved at all, and you want to allow such things, that’s the reputation that’s going to stick.
I wasn’t calling Best a Nazi or a Nazi sympathizer. I was saying that if he can’t answer a straightforward question like the one that Nilay Patel asked him, Nazis are going to interpret that as he’s welcoming them in, and they will act accordingly. So too will people who don’t want to be seen hanging out at the Nazi bar. The vaunted “marketplace of ideas” includes the ability for a large group of people to say “we don’t want to be associated with that at all…” and to find somewhere else to go.
And this brings us to Bluesky. I’ve written a bunch about Bluesky going back to Jack Dorsey’s initial announcement which cited my paper among others as part of the inspiration for betting on protocols.
As Bluesky has gained a lot of attention over the past week or so, there have been a lot of questions raised about its content moderation plans. A lot of people, in particular, seem confused by its plans for composable moderation, which we spoke about a few weeks ago. I’ve even had a few people suggest to me that Bluesky’s plans represented a similar kind of “Nazi bar” problem as Best’s interview did, in particular because their initial reference implementation shows “hate speech” as a toggle.
I’ve also seen some people claim (falsely) that Bluesky would refuse to remove Nazis based on this. I think there is some confusion here, and it’s important to go deeper on how this might work. I have no direct insight into Bluesky’s plans. And they will likely make big mistakes, because everyone in this space makes mistakes. It’s impossible not to. And, who knows, perhaps they will run into their own Nazi bar problem, but I think there are some differences that are worth exploring here. And those differences suggest that Bluesky is better positioned not to be the Nazi bar.
The first is that, as I noted in the original piece about Best, there’s a big difference between a centralized service and its moderation choices, and a decentralized protocol. Bluesky is a bit confusing to some because it’s trying to do both things. Its larger goal is to build, promote, and support the open AT Protocol as an open social media protocol for a decentralized social media system with portable identity. Bluesky itself is a reference app for the protocol, showing how things can be done. As such, it has to handle content moderation tasks to avoid Bluesky itself running into the Nazi bar problem. And, at least so far, it seems to be doing that.
The team at Bluesky seems to recognize this. Unlike Best, they’re not refusing to answer the question; they’re talking openly about the challenges here, and so far they have been willing to remove truly disruptive participants, as CEO Jay Graber has noted publicly.
But they also clearly recognize that content moderation at scale is impossible to do well, and believe that they need a different approach. And, again, the team at Bluesky recognizes at least some of the challenges facing them.
But, this is where things get potentially more interesting. Under a traditional centralized social media setup, there is one single decision maker who has to make the calls. And then you’re in a sort of benevolent dictator setup (or at least you hope so, as the malicious dictator threat becomes real).
And this is where we go on a little tangent about content moderation: again, it’s not just difficult. It’s not just “hard” to do. It’s impossible to do well. The people who are moderated, with rare exceptions, will disagree with your moderation decisions. And, while many people think that there are a whole bunch of obvious cases and just a few that are a little fuzzy, the reality (this is part of the scale part) is that there are a ton of borderline cases that all come down to very subjective calls over what does or does not violate a policy.
To some extent, going straight to the “Nazi” example is unfair, because there’s a huge spectrum between the user who is a hateful bigot, deliberately trying to cause trouble, and the good helpful user who is trying to do well. There’s a very wide range in the middle and where people draw their own lines will differ massively. Some of them may include inadvertent or ignorant assholery. Some of it may just include trolling. Or sometimes there are jokes that some people find funny, and others find threatening. Sometimes people are just scared and lash out out of fear or confusion. Some people feel cornered, and get defensive when they should be looking inward.
Humans are fucking messy.
And this is where the protocol approach with composable moderation becomes a lot more interesting. The most extreme calls, the ones with legal requirements, such as child sexual abuse material and copyright infringement, can be removed at the protocol level. But as you start moving up into the murkier areas, where many of the calls are subjective (not so much “is this person a Nazi” but more along the lines of “is this person deliberately trolling, or just uninformed…”), the composable moderation system begins to (1) let end users make their own rules and (2) enable any number of third parties to build tools to work with those rules.
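To make the layering concrete, here’s a minimal, hypothetical sketch of how composable moderation could work. Everything here (the label names, the preference vocabulary, the `moderate` function) is invented for illustration and is not Bluesky’s actual API; it just shows the split between non-negotiable protocol-level removals and user-configurable handling of everything else:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    # Labels attached by any number of third-party labelers,
    # e.g. {"spam", "hate", "nsfw"} (hypothetical vocabulary).
    labels: set = field(default_factory=set)

@dataclass
class UserPrefs:
    # Per-label action chosen by the user: "show", "warn", or "hide".
    actions: dict = field(default_factory=dict)

# Protocol-level removals: legally required, apply to everyone, no opt-out.
PROTOCOL_BANNED = {"csam", "copyright-strike"}

def moderate(post: Post, prefs: UserPrefs) -> str:
    if post.labels & PROTOCOL_BANNED:
        return "removed"
    # Everything else is resolved by the user's own settings;
    # the strictest matching preference wins.
    severity = {"show": 0, "warn": 1, "hide": 2}
    decision = "show"
    for label in post.labels:
        action = prefs.actions.get(label, "show")
        if severity[action] > severity[decision]:
            decision = action
    return decision

prefs = UserPrefs(actions={"hate": "hide", "nsfw": "warn"})
print(moderate(Post("hello", {"nsfw"}), prefs))          # warn
print(moderate(Post("hello", {"hate", "nsfw"}), prefs))  # hide
```

The design point is that two users subscribing to different labelers, or setting different actions for the same labels, see different feeds from the same underlying protocol data.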
Some people may (for perfectly good reasons, bad reasons, or no reasons at all) just not have any tolerance for any kind of ignorance. Others may be more open to it, perhaps hoping to guide ignorance to knowledge. Just as an example, outside of the “hateful” space, we’ve talked before about things like “eating disorder” communities. One of the notable things there was that when those communities were on more mainstream services, people who had gotten over an eating disorder would often go back to those communities and provide help and support to those who needed it. When those communities were booted from the mainstream services, that actually became much more difficult, and the communities became angrier and more insulated, and there was less ability for people to help those in need.
That is, there will still need to be some decision making at the protocol level (this is something that people who insist on “totally censorship proof” systems seem to miss: if you do this, eventually the government is going to shut you down for hosting CSAM), but the more of the decision making that can be pushed to a different level and the more control put in the hands of the user, the better.
This allows for more competition for better moderation, first of all, but also allows for the variance in preferences, which is what you see in the simple version that Bluesky implemented. The biggest decisions can be made at the protocol level, but above that, let there be competitive approaches and more user control. It’s unclear exactly where Bluesky the service will come down in the end, but the early indications from what’s been said so far are that the service level “Bluesky” will be more aggressive in moderating, while the protocol level “AT Protocol” will be more open.
And… that’s probably how it should be. Even the worst people should be able to use a telephone or email. But enabling competition at the service level AND at the moderation level creates more of the vaunted “marketplace of ideas” where (unlike what some people think the marketplace of ideas is about), if you’re regularly a disruptive, disingenuous, or malicious asshole, you are much more likely to get less (or possibly no) attention from the popular moderation services and algorithms. Those are the consequences of your own actions. But you don’t get banned from the protocol.
To some extent, we’ve already seen this play out (in a slightly different form) with Mastodon. Truly awful sites like Gab, and ridiculously pathetic sites like Truth Social, both use the underlying ActivityPub and open source Mastodon code, but they have been defederated from the rest of the fediverse. They still get to use the underlying technology, but they don’t get to use it to be obnoxiously disruptive to the main userbase who wants nothing to do with them.
With AT Protocol, and the concept of composable moderation, this can get taken even further. Rather than just having to choose your server, and be at the whims of that server admin’s moderation choices (or the pressure from other instances which keeps many instances in check and aligned), the AT Protocol setup allows for a more granular and fluid system, where there can be a lot more user empowerment, without having to resort to banning certain users from using the technology entirely.
This will never satisfy some people, who will continue to insist that the only way to stop a “bad” person is to ban them from basically any opportunity to use communications infrastructure. However, I disagree for multiple reasons. First, as noted above, outside of the worst of the worst, deciding who is “good” and who is “bad” is way more complicated and fraught and subjective than people like to note, and where and how you draw those lines will differ for almost everyone. And people who are quick to draw those lines should realize that… some other day, someone who dislikes you might be drawing those lines too. And, as the eating disorder case study demonstrated, there’s a lot more complexity and nuance than many people believe.
That’s why a decentralized solution is so much better than a centralized one. With a decentralized system you don’t have to worry about getting cut out yourself. Everyone gets to set their own rules and their own conditions and their own preferences. And, if you’re correct that the truly awful people are truly awful, then it’s likely that most moderation tools and most servers will treat them as such, and you can rely on that, rather than having them cut off at the underlying protocol level.
It’s also interesting to see how the decentralized social media protocol nostr is handling this. While it appears that some of the initial thinking behind it was that nothing should ever be taken down, many are recognizing how impossible that is, and they’re now having really thoughtful discussions on “bottom up content moderation” specifically to avoid the “Nazi bar” problem.
Eventually in the process, thoughtful people recognize that a community needs some level of norms and rules. The question is how those norms are created, how they are implemented, and how (and by whom) they are enforced. A decentralized system gives end users much greater control, allowing for systems and communities that more closely match their own preferences, rather than requiring a centralized authority to handle everything and live up to everyone’s expectations.
As such, you may end up with results like Mastodon/ActivityPub, where “Nazi bar” areas still form, but they are wholly separated from other users. Or you may end up with a result where the worst users are still there, shouting into the wind with no one bothering to listen, because no one wants to hear them. Or, possibly, it will be something else entirely as people experiment with new approaches enabled by a composable moderation system.
I’ll add one other note on that, because when I’ve discussed this, people sometimes highlight that there are other kinds of risks beyond direct harassment, and that just blocking a user does not stop them from encouraging or directing harassment against another. This is absolutely true. But this kind of setup also allows for better tooling to monitor such behavior without having to be exposed to it directly. This could take the form of Block Party’s “lockout folder,” where a trusted third party reviews the harassing messages you’ve been receiving rather than you having to go through them yourself. Or, conceivably, other monitoring and warning services could pop up that track people who are doing awful things, try to keep them from succeeding, and alert the proper people if things require escalation.
In short, decentralizing things, and allowing many different approaches, and open systems and tooling doesn’t solve all problems, but it presents some creative ways to handle the Nazi Bar problem that seem likely to be a lot more effective than living in denial and staring blankly into the Zoom screen as a reporter asks you a fairly basic question about how you’ll handle racist assholes on your platform.
The plan basically involves charging users an extra $2-$3 a month if it’s found that someone is using your account outside of your home. The problem: Netflix has already been imposing blanket price hikes, and it already limits the number of simultaneous streams per account, forcing users to subscribe to more expensive tiers if they want to raise the limit.
While the crackdown isn’t expected to hit U.S. subscribers until the end of the second quarter (aka soon), the effort has generally been a hot mess in the smaller countries Netflix first used as guinea pigs to test both the underlying tech and company messaging.
“There are of course inherent risks with clamping down on password sharing, particularly when back in 2017 Netflix was seen to be actively encouraging it. Some users were expected to be lost in the process but losing over 1 million users in a little over a month has major implications for Netflix and whether it decides to continue with its crackdown globally.
Interestingly, there is no strong demographic skew to those who cancelled, signaling a more outright rejection of the password sharing clampdown. In a worrying sign for the next quarter, 10% of remaining Netflix subscribers say they plan to cancel their plan in Q2 2023, which is well above the average seen in previous quarters.”
It’s just blanketly stupid to impose annoying and costly new restrictions at a time when streaming competition has become more heated than ever. Netflix’s competitors can now simply gain a competitive advantage by being less confusing and annoying on pricing and account restrictions.
I don’t expect it to be fatal, but it sure as hell won’t help the company maintain leadership in an increasingly competitive and crowded field. Meanwhile, some analysts say Netflix’s predictions that it will be a boon for revenues simply aren’t based in reality:
Benchmark Co. analyst Matthew Harrigan, in a note last week, expressed skepticism that it would be a “growth game-changer,” opining that the strategy “cannibalizes full-ride member growth.” He pegged the incremental revenue lift at less than 4%, even with generous assumptions about how many piggybackers Netflix might be able to convert to Extra Member accounts.
But it’s also just another example of Netflix’s pivot from disruptive innovator to just another powerful corporation primarily interested in nickel-and-diming its existing customer base in a bid to please Wall Street’s insatiable need for improved quarterly returns at any cost. It doesn’t matter to many myopic investors if you’re sabotaging longer term product quality and company reputation.
We’ve noted for decades how telecom monopolies convinced corrupt state legislatures to pass counterproductive bans on creative community broadband networks. The bills are protectionist crap, ghostwritten by telecom giants like AT&T and Comcast, and designed to protect their regional broadband monopolies from grass roots competitive disruption on a town-by-town level.
The harmful nature of such bills was highlighted during COVID, prompting several states (Arkansas and Washington) to roll them back. But 17 states still have such laws on the books, hampering the creation or expansion of cooperatives, city-owned utilities, municipal networks, public-private partnerships, and other creative attempts to expand access to faster, more affordable broadband.
Enter U.S. Reps. Anna G. Eshoo (CA), Jared Golden (ME) and U.S. Senator Cory Booker (NJ) who have reintroduced their Community Broadband Act, which would strip away state restrictions, allowing communities to decide for themselves whether to build their own broadband networks:
“The Community Broadband Act would empower cities, towns and villages in every state to choose for themselves whether and how to invest in locally-owned broadband infrastructure. Without this flexibility for communities, federal broadband grant programs will not be able to reach their full potential.” –National League of Cities
In short, the bill would amend the Telecommunications Act of 1996 to not only eliminate such protectionist bans already on the books, but also prohibit states from blocking communities from building their own broadband networks.
For decades U.S. telecom monopolies lobbied for such bills under the pretense they were simply looking to protect U.S. taxpayers from wasteful spending. In reality, the U.S. doles out untold billions of dollars to entrenched telecom monopolies that then routinely fail to deliver the next-generation networks (or jobs) they’ve long promised in exchange for a rotating crop of regulatory favors.
Booker and friends also introduced this bill several years ago, but it routinely struggles to gain any traction in a corrupt Congress slathered in Verizon, AT&T, Comcast, and Charter campaign contributions. Said corruption is so extensive, most U.S. regulators and lawmakers lack the courage to even acknowledge telecom monopolies are real, much less that they cause very obvious consumer and market harm.
Instead, we generally enjoy throwing billions of dollars at companies like AT&T with a long history of taxpayer fraud, who then, time after time, fail to fully deploy the next-generation networks promised. As a result, local communities have been frustrated for decades by the lack of competition, high prices, spotty coverage, slow speeds, and terrible customer service that results.
But when communities decide to take action (often at direct voter behest), they often run face-first into hostile, captured state lawmakers or regulators, as well as a bevy of lawsuits from companies like Comcast very keen on protecting the very broken, but very profitable status quo.
Entrenched incumbent monopolies could have responded to these organic, grass roots efforts by providing better, faster, cheaper service to neglected areas, but quite often it’s simply much cheaper to effectively buy a state law or a state or federal lawmaker. As billions in new infrastructure bill broadband subsidies head to the states, the debate has taken on renewed importance.
Numerous studies have consistently shown that community broadband networks (which take a wide variety of forms, ranging from direct municipal builds to cooperatives) provide better, faster, cheaper broadband service. Our recent Copia study on America’s broadband problems outlines how embracing such alternatives is a great way to spur real competition in a very broken market.
We’ve been following the entire saga of Microsoft’s proposed acquisition of Activision Blizzard for some time now. The whole thing has been decidedly messy, for various reasons. For starters, there are three main regulatory bodies that most of us have been waiting to hear from: the UK’s CMA, the USA’s FTC, and the EU. And those bodies have been in different places and on different timelines to date. The EU gave its tacit approval to the deal, the FTC signaled it wanted more information before making any decisions, and the CMA voiced some very serious concerns about approving it. If you’re an American reading this, you may be conditioned to roll your eyes at all this talk of regulation. The FTC in this country has behaved largely as though it lacks fangs when it comes to antitrust activity.
The final report cites Microsoft’s “strong position” in the cloud-gaming sector, where the company has an estimated 60 to 70 percent market share that makes it “already much stronger than its rivals.” After purchasing Activision, the CMA says Microsoft “would find it commercially beneficial to make Activision’s titles exclusive to its own cloud gaming service.”
As to all of those largely cloud-gaming based deals Microsoft inked to keep AAA titles like Call of Duty on those platforms and the company’s argument that this showed its commitment to robust competition and non-exclusivity in the market, well:
Specifically, the CMA said Microsoft’s proposed remedy doesn’t sufficiently cover “multigame subscription services,” or providers working with “games on PC operating systems other than Windows.” Microsoft’s proposed standardized cloud-gaming licensing terms would also prevent those deals from being “determined by the dynamism and creativity of competition in the market” the CMA said.
“Accepting Microsoft’s remedy would inevitably require some degree of regulatory oversight by the CMA,” the regulator said in a press release. “By contrast, preventing the merger would effectively allow market forces to continue to operate and shape the development of cloud gaming without this regulatory intervention.”
What a breath of fresh air. Whether you agree with the CMA’s assessment or not, it’s quite nice to see a regulatory body show its teeth a bit, particularly when the focus is squarely on which outcome actually benefits the market and consumers more.
Now, all of this comes with the stipulation that Microsoft can, and will, appeal this decision. And, as you might expect, the promise to appeal comes along with Activision and Microsoft throwing all kinds of public temper tantrums over the final report.
“The CMA’s report contradicts the ambitions of the UK to become an attractive country to build technology businesses,” Activision Blizzard’s Joe Christinat said in a statement provided to Ars Technica. “We will work aggressively with Microsoft to reverse this on appeal. The report’s conclusions are a disservice to UK citizens, who face increasingly dire economic prospects. We will reassess our growth plans for the UK. Global innovators large and small will take note that—despite all its rhetoric—the UK is clearly closed for business.”
Trying to read that statement without rolling your eyes takes the kind of fortitude of which I am not made. And, frankly, this only affects the UK market. But, and here’s where this might be more important, the decision serves as a first-to-plunge rejection, with the FTC’s suit to block the deal having not even begun yet, and with the EU’s formal decision not yet in place.
So the real question isn’t solely what happens in the UK, but how this decision might affect the decisions of EU and American regulators, which represent huge risks to this deal.
As we noted two and a half years ago when Epic filed its antitrust lawsuit against Apple, it seemed like a pretty big uphill climb legally speaking. The whole thing seemed more like “contract negotiation via antitrust judicial battle” rather than a legitimate antitrust claim. And, so far, it looks like we were correct. The district court ruling a year and a half ago mostly sided with Apple, noting that “the Court cannot ultimately conclude that Apple is a monopolist under either federal or state antitrust laws.”
The only part that Epic won was an injunction against Apple’s “anti-steering” provisions, that forbade Epic (or others) from pushing users to complete transactions off app, so that Apple doesn’t get the 30% cut. That was seen as a bridge too far.
Epic appealed the antitrust claims to the 9th Circuit, which handed Apple a fairly complete victory. While the 9th Circuit disagreed with parts of the lower court’s ruling, the final analysis basically leads to the same result: no antitrust violation. Apple is allowed to set its own rules for its own app store. The lower court’s injunction against the “anti-steering” provisions stays in place, though procedurally that remains open to a later appeal.
Epic can (and likely will) ask for an en banc rehearing, and then could seek Supreme Court review too, though I’m not sure either move would succeed. It is a “big” case, but I’m really not sure it’s presenting any novel issues. Epic doesn’t like Apple’s rules, but Apple’s app store just doesn’t reach the level of a monopoly when you apply the relevant tests for what market we’re talking about.
Key to this, as the 9th Circuit ruling noted, is that there are “legally cognizable procompetitive rationales” for the way Apple handles things. It notes that Apple’s security rationale also is reasonable, and pro-consumer (which should raise some questions about the bills like Open App Markets that would undermine this rationale).
There’s not much more to say about this case at this point. It will be interesting to see if a rehearing or a SCOTUS cert petition does anything. However, it does put another nail in the coffin of the odd belief over the past few years that some people could simply ignore the last few decades of antitrust law, where you have to actually put in the work to accurately define the market, and not just assume that “big” is automatically bad and anti-competitive.