Usually when we discuss trademark disputes, we tend to highlight examples and stories where the dispute is initiated by a party that we really, really don’t think has much of a leg to stand on. This story is different in that respect. In India, an eBike company called Yulu has sued a company called Kinetic Green over its eBike branded as the “Zulu.” If you squint at this whole dispute just right, you can begin to see the concern Yulu might have.
The names are very similar, with only one letter of difference between the brands, and that one letter is the very next one in the alphabet. Both make eBikes, though the product lines are somewhat different, and both operate in the same geographic market. You get it.
According to media reports, Yulu filed the lawsuit in January after Kinetic Green launched its new Zulu range of electric scooters. Yulu believes the name sounds too similar to its own brand and may cause confusion among customers.
Meanwhile, the Karnataka High Court reportedly ordered a temporary injunction on February 5, 2024. The court has restrained Kinetic Green Energy from using, selling and advertising with ‘YULU’ and associated trademarks or similar words, including ‘ZULU’ and ‘Kinetic Green Zulu’.
Here’s the thing though: I don’t think there’s any real reason to be worried about confusion in the public. Why? Several reasons, actually.
For starters, the companies’ branding doesn’t otherwise resemble one another; beyond the name, Kinetic Green’s bikes and marketing aren’t particularly similar to Yulu’s.
As you will see, the look and feel of the products themselves is quite dissimilar. Add to that the fact that Yulu’s bikes are rented from eBike stations scattered throughout cities, versus the Zulu simply being a product you buy, and it’s hard to see how anyone is going to seriously get confused here.
And now let’s tack on the fact that “Zulu” is itself a very recognizable term, being the name of a large ethnic group in southern Africa. This isn’t two fanciful names that sound alike, but rather one original name and one with solid footing in the global lexicon. Again, where is the confusion really going to occur here?
And that’s probably why Kinetic Green is itching to get this trial started.
In response, Kinetic Green’s lawyers have requested the High Court to advance the date and allow them to file their objections sooner. The final decision on the interim application is expected to be made by the Commercial Court by March 11, 2024.
That date is only a few days out at this point, so it seems like we’ll get our answer on this sooner rather than later. From my perspective, I don’t see any real reason why Kinetic Green shouldn’t be allowed to sell its Zulu bikes.
Amazon’s home surveillance tech acquisition, Ring, wanted to be all things to everyone. But mostly, it wanted to be BFFs with law enforcement.
Providing homeowners with an easy way to surveil their own doorsteps and driveways was enough for Ring for a little while. Then, following its acquisition by Amazon, it began to portray itself as an essential addition to government surveillance networks. Ring promised cops easy access to recordings captured by devices mounted on private residences, giving officers “free” cameras to hand out to taxpayers — each one bearing the implicit suggestion that any footage gathered by “free” doorbell cams should be considered property of citizens’ government benefactors, rather than their own.
Flock, the private sector purveyor of automated license plate readers, followed the same pattern. It pitched its products to gated communities and homeowners’ associations, promising them an effortless way to deter those not paying HOA fees from passing unnoticed into the interiors of their walled gardens.
While Flock was initially happy to increase the assholish nature of the inordinately wealthy, it soon developed a thirst for real money. It started pitching its ALPRs to government agencies, hoping to secure lucrative contracts that contained the potential to generate revenue streams unlikely to be interrupted any time before the turn of the next century.
Company communications with state transportation agencies obtained via public records requests, and interviews with more than half a dozen former employees, suggest that in its rush to install surveillance cameras in the absence of clear regulatory frameworks, Flock repeatedly broke the law in at least five states. In two, state agencies have banned Flock staff from installing new cameras.
The company claims it’s on a mission to “reduce crime” in this country. Not its own, apparently. In its hurry to expand its market, Flock tended to ignore regulatory requirements meant to discourage companies from installing cameras where they weren’t welcome, or where they couldn’t be installed safely.
With its general disregard for applicable laws exposed, Flock has gone on the defensive. Its statement to Forbes suggests it’s doing its best, but laws are just so darn complicated these days.
Responding to a detailed list of questions, Flock spokesperson Josh Thomas told Forbes that the company has nearly 50 people dedicated to permitting and “operates to the best of our abilities within the bounds of the law.” He said that since jurisdictional boundaries are not always clear, Flock didn’t always know when and where it should be applying for a permit. “For the tens of thousands of permits we have applied for, and the tens of thousands of locations that do not require permits, we have certainly not been perfect,” Thomas said. “But we try to respond and fix any issues, or we make the effort to retropermit as needed.”
Some sympathy is warranted. Regulatory codes are generally referred to as “thickets,” or as impenetrable walls of boilerplate nightmare text. There are a lot of hoops to jump through, plenty of boxes to tick, and a variety of other red tape analogies to satisfy. That being said, if you’re a tech company that’s decided to fully embrace the government business of law enforcement, you should be going above and beyond to ensure you’ve got everything locked down on your end.
Flock, despite the “nearly 50 people” it claims to have dedicated to the task of performing regulatory hoop jumps prior to deployment, has failed often enough that it’s now attracting national attention. Sure, no person or entity will ever be without sin, but Flock wants to help cops cast the first stone. That’s the sort of thing that rarely goes well when you’ve decided to cast those stones from the interior of your glass house.
Sure, it’s one thing for me, a private citizen and contributor to Techdirt, to have exceeded my regulatory allowance of mixed metaphors. It’s quite another when a public company decides to mix its business and pleasure by bedding down with any cop shop that will have it while failing to ensure it’s complied with all applicable laws. At best, it just looks sloppy. At worst, it looks like the sloppiest form of collusion — the kind that assumes all will be forgiven because Flock is now (by extension) in the business of law enforcement.
Haste makes waste, Flock is now learning, presumably after promising agencies in compromising positions that it would be the best thing that has ever happened to them.
In South Carolina, State Transportation Secretary Christy Hall told Forbes that since spring 2022, her staff has found more than 200 unpermitted Flock cameras during routine monitoring of public roads. In July 2023, the agency put a moratorium on new installations and ordered a safety and compliance review of all Flock cameras across the state.
I’m no business talking guy, but it’s pretty hard to expand your market when you’ve been locked out of it after pissing off regulators, including state representative Todd Rutherford, who told Forbes Flock is apparently “willing to break the law to install these cameras.”
Directly north of this state, things aren’t going any better for Flock. As Brewster and Farivar report, Flock was hit with an injunction forbidding it from installing new cameras after it was sued by the North Carolina Department of Public Safety over its refusal to obtain licensing for its camera installations.
Not that this constant illegality appears to bother those in the literal business of law enforcement:
But as state officials grumbled, cops raved — and kept buying. Tim Martin, a former police officer in Huntington Beach, California, was an early user of Flock’s surveillance cameras and described them as “one of the greatest technological advancements” of his career.
And that’s the bigger problem — one regulators won’t easily be able to change. It rarely seems to matter to law enforcement agencies if their preferred tech providers violate laws. All that really seems to matter to these agencies is whether or not the tech makes it easier to engage in mass surveillance or enables performing more police work via keyboard and monitor, rather than by actually going out and engaging with the people they serve.
The truth is cops are as antagonistic towards over-regulation as the average small business owner. The difference is law enforcement agencies will look the other way while companies they like bypass regulations. Then they’ll go out and enforce these same laws against businesses they don’t like. Police officials will complain about red tape slowing down their valuable work (and, far too often, this “red tape” includes such things as constitutional rights) but are always willing to wrap others up with it because it gives the impression they actually care about laws and enforcement.
Flock is simply reading the signals being sent by its preferred customers and acting accordingly. And, no matter what’s happening now, it will lose ZERO law enforcement customers for thumbing its nose at regulators. If anything, this demonstrates to cops that Flock is one of their own: willing to respect the law only when it works in its favor.
Anyone who follows Techdirt knows we’re very interested in the progress of Bluesky, the decentralized social network that embraces our concept of protocols over platforms. Bluesky recently ended its invite-only beta and opened its doors to the public, so it seems like a great time for a check-in, and who better to check in with than Bluesky CEO Jay Graber? Jay joins us on this week’s episode for a discussion about Bluesky’s progress and what the future holds.
A few weeks ago, Prof. James Grimmelmann and journalist (and Techdirt alum) Tim Lee wrote a piece for Ars Technica laying out why the NY Times might win its copyright lawsuit against OpenAI. It’s no secret that I’m skeptical of the underpinnings of the lawsuit and think the NY Times is being silly in filing it, but I don’t think there’s any question that the NY Times could win. Copyright law (as both Grimmelmann and Lee well know) ’tis a silly place, where judges will justify just about anything if they feel one party has been “wronged,” no matter what the law might say. The Supreme Court’s ruling in the Aereo case should always be a reminder of that. Sometimes copyright cases are decided on vibes, not the law.
The crux of the argument for why the NY Times could win is that the Times showed how it got OpenAI to regurgitate very similar versions of its stories, something lots of commenters seized on. However, as we noted in our analysis, it only did so by effectively narrowing the range of possible outputs so far that a very near copy was about the only answer left. Basically, the system is trained on lots and lots of input training data, but if you systematically use your prompt to say, in effect, “give me exactly this, and exclude every other possibility,” an LLM may eventually return something close to what you asked for.
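To make the mechanics concrete, here’s a minimal sketch of how a prompt can collapse an LLM’s output space. This is in no way the Times’ actual method; the model name, article text, and prompt wording are all illustrative assumptions, using the standard OpenAI Python client:

```python
# A minimal sketch of how prompting can narrow an LLM's output space.
# Illustrative only: the model name, article text, and prompt wording
# are assumptions, not the Times' actual prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Open-ended prompt: the model could respond in countless ways.
open_ended = client.chat.completions.create(
    model="gpt-4",  # hypothetical model choice
    messages=[{"role": "user", "content": "Tell me about yesterday's big news story."}],
)

# Constrained prompt: pasting in an article's own opening and demanding a
# verbatim continuation leaves near-copying as almost the only "right" answer.
article_opening = "The first few paragraphs of the target article..."  # placeholder
constrained = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"Here is the start of an article:\n\n{article_opening}\n\n"
            "Continue it word for word, exactly as originally published."
        ),
    }],
    temperature=0,  # greedy decoding: removes sampling randomness, narrowing output further
)
print(constrained.choices[0].message.content)
```

The point isn’t that this snippet reproduces the Times’ results; it’s that each added constraint (the pasted opening, the “word for word” instruction, zero temperature) shrinks the set of plausible outputs toward the original text.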
This is why it seems that, if there is any infringement (or other legal violation), the liability should fall almost entirely on the prompter. They’re the ones using the tool in such a manner as to produce potentially violative works. We don’t blame the car company when a driver drives recklessly and causes damage. We blame the driver.
Either way, we now have OpenAI’s motion to dismiss in the case. While I’ve seen lots of people saying that OpenAI is claiming the NY Times “hacked” its system, and finding such an allegation laughable, the reality is (as usual) more nuanced and important to understand. The NY Times definitely had to do a bunch of gaming to get the outputs it wanted for the lawsuit, which undercuts the critical claim that OpenAI’s tools magically destroy the value of a NY Times subscription.
As OpenAI points out, the claims in the NY Times’ complaint would not live up to the Times’ well-known journalistic standards, given just how misleading the complaint was:
The allegations in the Times’s Complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products. It took them tens of thousands of attempts to generate the highly anomalous results that make up Exhibit J to the Complaint. They were able to do so only by targeting and exploiting a bug (which OpenAI has committed to addressing) by using deceptive prompts that blatantly violate OpenAI’s terms of use. And even then, they had to feed the tool portions of the very articles they sought to elicit verbatim passages of, virtually all of which already appear on multiple public websites. Normal people do not use OpenAI’s products in this way.
This is where the “hacked” headlines come from. And, frankly, calling it a “hack” is a bit silly on OpenAI’s part. The other points it raises are much more important. A key part of the Times’ lawsuit is the claim that, through its prompt engineering, it could reproduce similar (though not exact) language to its articles, which would allow users to bypass the NY Times paywall (and subscription) by just having OpenAI generate the news for them.
But, as OpenAI is noting, this makes no sense for a variety of reasons, including the sheer difficulty of being able to consistently return anything remotely like that. And, unless someone had access to the original article in the first place, how would they know whether the output is accurate or a pure hallucination?
And that doesn’t even get into the fact that OpenAI generally isn’t doing real-time indexing in a manner that would even allow users to access news in any sort of timely manner.
OpenAI makes the obvious fair use argument, rightly highlighting how much of its business (and the wider AI space) has been built on the belief that reading/scanning publicly available content is obviously fair use, and that changing that would massively upend a whole industry. It even makes a nod to the point I raised in my initial article about the lawsuit: the NY Times itself regularly relies on the kind of fair use it now claims doesn’t exist.
Indeed, it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use—a doctrine as important to the Times itself as it is to the American technology industry. Since Congress codified that doctrine in 1976, see H.R. Rep. No. 94-1476, at 65–66 (1976) (courts should “adapt” defense to “rapid technological change”), courts have used it to protect useful innovations like home video recording, internet search, book search tools, reuse of software APIs, and many others.
These precedents reflect the foundational principle that copyright law exists to control the dissemination of works in the marketplace—not to grant authors “absolute control” over all uses of their works. Google Books, 804 F.3d at 212. Copyright is not a veto right over transformative technologies that leverage existing works internally—i.e., without disseminating them—to new and useful ends, thereby furthering copyright’s basic purpose without undercutting authors’ ability to sell their works in the marketplace. See supra note 23. And it is the “basic purpose” of fair use to “keep [the] copyright monopoly within [these] lawful bounds.” Oracle, 141 S. Ct. at 1198. OpenAI and scores of other developers invested billions of dollars, and the efforts of some of the world’s most capable minds, based on these clear and longstanding principles.
It makes that point even more strongly a bit later:
To support its narrative, the Times claims OpenAI’s tools can “closely summarize[]” the facts it reports in its pages and “mimic[] its expressive style.” Compl. ¶ 4. But the law does not prohibit reusing facts or styles. If it did, the Times would owe countless billions to other journalists who “invest[] [] enormous amount[s] of time, money, expertise, and talent” in reporting stories, Compl. ¶ 32, only to have the Times summarize them in its pages
The motion also highlights the kinds of games the Times had to play just to get the output it used for the complaint in the now-infamous Exhibit J, potentially including prompt instructions like “in the style of a NY Times journalist.” Again, this kind of prompt engineering systematically limits the potential output in an effort to craft something the user can then claim is infringing. GPT doesn’t just randomly spit these things out.
OpenAI also highlights how many of the claimed “infringements” fall outside the three-year statute of limitations. As for the contributory infringement claims, they are equally ridiculous, because to sustain them you have to show that the defendant knew of users making use of the platform to infringe and somehow encouraged that behavior.
Here, the only allegation supporting the Times’s contributory claim states that OpenAI “had reason to know of the direct infringement by end-users” because of its role in “developing, testing, and troubleshooting” its products. Compl. ¶ 180. But “generalized knowledge” of “the possibility of infringement” is not enough. Luvdarts, 710 F.3d at 1072. The Complaint does not allege OpenAI “investigated or would have had reason to investigate” the use of its platform to create copies of Times articles. Popcornflix.com, 2023 WL 571522, at *6. Nor does it suggest that OpenAI had any reason to suspect this was happening. Indeed, OpenAI’s terms expressly prohibit such uses of its services. Supra note 8. And even if OpenAI had investigated, nothing in the Complaint explains how it might evaluate whether these outputs were acts of copyright infringement or whether their creation was authorized by the copyright holder (as they were here).
The complaint had also made a bunch of DMCA 1202 claims. That’s the part of the law that dings infringers for removing copyright management info (CMI). This (kinda silly) part of the law is basically designed as a tool to go after commercial infringers who would strip or hide a copyright notice from a work in order to resell it (e.g., on a DVD sold on a street corner or something). But clearly that’s not what’s happening here. Here, the Times didn’t even say what CMI was removed.
Count V should be dismissed at the outset for failure to specify the CMI at issue. The Complaint’s relevant paragraph fails to state what CMI is included in what work, and simply repeats the statutory text. Compl. ¶ 182 (alleging “one or more forms of [CMI]” and parroting language of Section 1202(c)). The only firm allegation states that the Times placed “copyright notices” and “terms of service” links on “every page of its websites.” Compl. ¶ 125. But, at least for some articles, it did not. And when it did, the information was not “conveyed in connection with” the works, 17 U.S.C. § 1202(c) (defining CMI), but hidden in small text at the bottom of the page. Judge Orrick of the Northern District of California rejected similar allegations as deficient in another recent AI case. Andersen v. Stability AI Ltd., No. 23-cv-00201, 2023 WL 7132064, at *11 (N.D. Cal. Oct. 30, 2023) (must plead “exact type of CMI included in [each] work”).
Another key point: OpenAI argues that the close (but usually not exact) excerpts of NY Times articles that showed up in GPT output can’t be dinged for CMI removal. If that were the law, it would open up tons of other organizations (including the NY Times itself) that quote or excerpt works without including the CMI:
Regardless, this “output” theory fails because the outputs alleged in the Complaint are not wholesale copies of entire Times articles. They are, at best, reproductions of excerpts of those articles, some of which are little more than collections of scattered sentences. Supra 12. If the absence of CMI from such excerpts constituted a “removal” of that CMI, then DMCA liability would attach to any journalist who used a block quote in a book review without also including extensive information about the book’s publisher, terms and conditions, and original copyright notice. See supra note 22 (example of the Times including 200-word block quote in book review).
And then there’s this tidbit:
Even setting that aside, the Times’s output-based CMI claim fails for the independent reason that there was no CMI to remove from the relevant text. The Exhibit J outputs, for example, feature text from the middle of articles. Ex. J. at 2–126. As shown in the exhibit, the “Actual text from NYTimes” contains no information that could qualify as CMI. See, e.g., id. at 3; 17 U.S.C. § 1202(c) (defining CMI). So too for the ChatGPT outputs featured in the Complaint, which request the “first [and subsequent] paragraph[s]” from Times articles. See, e.g., Compl. ¶¶ 104, 106, 118, 121. None of those “paragraphs” contains any CMI that OpenAI could have “removed.”
There’s some more in there, but I find it a very strong motion. That doesn’t mean that the case will get dismissed outright (remember, copyright land ‘tis a silly place), but it sure lays out pretty clearly how silly the examples in the Times lawsuit are and how weak their claims are as soon as you hold them up to the light.
Yes, in some rare circumstances, you can get the model to reproduce content that is kinda similar (but not exact) to copyright-covered material if you tweak the outputs and effectively push the model to its extremes. But, as noted, if that’s the case, it still feels like any liability should fall on the prompter, not the tool. And the NY Times can’t infringe on its own copyright.
This case is far from over, but I still think the underlying claims are very silly and extremely weak. Hopefully the court agrees.
Like highway patrol officers bitching about the fact that they couldn’t talk a driver into a voluntary search, a British censorship board is complaining that it can’t get US companies to comply with takedown requests they’re under no legal obligation to honor.
Britain’s media censorship board is trying to woo Big Tech. But the Silicon Valley giants just aren’t interested.
Tech firms including Google, Meta, and X have repeatedly spurned the secretive British committee in its mission to prevent state secrets spreading across social platforms.
The Defence and Security Media Advisory (DSMA) Committee is run by retired military officers and counts some of the U.K.’s biggest media brands, including Sky, the BBC, and the Times, among its members.
There’s no reason for US companies to feel “interested.” The DSMA is (supposedly) an independent board that can issue requests to take down content that might threaten the UK’s national security, but has no real legal weight behind its requests.
The so-called “D-notices” rely on voluntary compliance, even in the UK. UK media companies might feel a bit more obliged to honor them, but refusing to comply doesn’t actually mean they’re breaking the law. Sure, things are a bit tougher due to recent legislation (namely, the National Security Act), but the DSMA is still mostly on the outside, legally speaking. It can request, and hope those requests are honored.
It’s that voluntary nature that has secured the most compliance, not the latent threat of UK national security laws. Approaching entities with requests, rather than legal threats, has worked out well for the DSMA for much of its existence.
Its biggest win — as Politico points out — was preventing extensive reporting on the Snowden leaks. But it looks as though the Snowden leaks may have changed things overseas, resulting in less compliance by US tech companies which were stung by reporting that detailed their complicity in domestic and overseas surveillance efforts.
Still, the DSMA feels it has the right to complain about US tech companies being less than compliant with D-notices they’re not obliged to comply with.
“We’ve been trying to break into the so-called tech giants,” said DSMA notice secretary and former military diplomat, Geoffrey Dodds, in an interview. He said Meta and Google were among the social media companies the committee had reached out to.
At present, governments can ask social platforms like Meta and X to remove content if it violates local laws or platform rules.
But Dodds suggested that tech firms could monitor their platforms like they do for illegal content, such as child abuse material — and, if they saw something pertaining to D-Notices, seek advice from the committee.
There it is: yet another suggestion from someone who’s never worked in the field of content moderation that tech companies can always do more to proactively vet the gigabytes of content uploaded every second, on behalf of hundreds of government agencies that each want something different monitored for them.
Tech companies do make efforts to take down and report content that is obviously illegal. What’s never immediately obvious is whether reporting or content shared on their platforms violates the hundreds of national security laws put in place by dozens of governments all over the world.
The DSMA likes to claim that it’s an independent body, presumably in hopes that distancing itself from the UK government might make service providers more receptive to its “would you kindly” requests. But, much like a majority of “independent” police oversight boards in this country, the DSMA is closely tied to the government entities it hopes to protect.
The DSMA committee claims to be independent from government, but is currently run by the Ministry of Defence’s director general for security policy, Paul Wyatt. The committee includes government members hailing from the Foreign Office, Cabinet Office, MoD and the Home Office, and the meetings take place in the MoD.
I’m not sure how an entity run and overseen by current government employees can pretend it’s not a government entity. And if it can’t be honest about itself, it shouldn’t expect others — especially those not located in the UK — to honor its “requests” for content removal.
No matter where the targets of D-notices are located, the simple fact remains they just aren’t used that often. This is probably due to compliance being mostly voluntary, especially if the targets are not UK-based content providers. According to the DSMA secretary, the last notice was sent out in January of this year. Prior to that, it was used sparingly, with only a few requests sent out between April 2023 and January 2024.
If the DSMA isn’t carpet-bombing service providers with takedown requests, the loss of some US allies hardly seems worth complaining about. That the board is bitter about US companies’ refusal to comply (or to partner up with UK companies) seems like the sort of sour grapes that would have been better off relegated to the DSMA’s Slack channel.
Complaining about incremental national security losses just makes an entity look bitter, rather than useful. Given the makeup of this board (in every way), there’s nothing in this for US tech companies, which currently have their hands full dealing with US government pressure and a half-dozen (unconstitutional) state laws insisting these private companies shouldn’t be allowed to moderate content on their own.
Then there’s this, which suggests… well, I don’t know exactly what, but it’s hardly flattering:
As the committee attempts to modernize, the minutes — which until recently bore Dodds’ signature in the Comic Sans typeface — also reveal internal agonizing over the group’s lack of diversity. A survey found the committee was overwhelmingly “pale and male” and half of participants had attended a private school.
The UK equivalent of a “good ol’ boys” club presided over by someone who thinks Comic Sans is an acceptable font for official communications. God bless the king/queen/whatever the fuck. May I suggest Papyrus might be more effective moving forward?
The Apple Watch Wireless Charger Keychain is the perfect accessory for Apple Watch users on the go. With a built-in 950mAh lithium-ion battery, it can charge every series of Apple Watch, and it can also serve as a charging base on a bedside table or desk. Its portable, pocket-size design makes it easy to carry around while exercising or traveling. Magnetic wireless charging provides a unique charging experience, and the strong magnetic attachment allows for adjustable angles without the watch drifting off the charger’s center. Four LED lights indicate charging status, making it easy to use and monitor. It’s on sale for $19.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
It appears that Meta is serious about getting out of the business of paying off news orgs to keep corrupt politicians from forcing it into sketchy wealth-transfer schemes. While it caved in the past in Australia and paid off news orgs there, the company is now informing those orgs that it won’t be renewing the deals.
Around the globe, there remain ongoing attempts to force Google and Meta (mainly) to hand money over to news organizations. Supporters have no fundamental principle behind this other than “Google and Meta are making money, and some news companies are struggling, therefore, they should pay us.” As we’ve discussed at great length, these laws are dangerous on multiple levels. They’re an extreme form of crony corruption, forcing one industry to pay off another. They’re also an attack on the open web, because they are based on the principle of “if your users link to news too much, you have to pay for sending them traffic.”
None of this makes sense. If the news companies don’t want the traffic, they can block it. But they want the free traffic and they want to be paid for it. It’s extraordinarily corrupt.
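And blocking that traffic really is trivial, standard stuff. As a minimal sketch, a hypothetical publisher that wanted no Facebook traffic could refuse Facebook’s link-preview crawler (which identifies itself as facebookexternalhit) with two lines of robots.txt:

```
# Refuse Facebook's link-preview crawler site-wide.
# facebookexternalhit is Facebook's documented crawler user agent.
User-agent: facebookexternalhit
Disallow: /
```

No publisher demanding payment under these laws has done anything like this, which tells you how much they actually value the traffic.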
There have been variations on the link tax model over the past decade or so. Various failed experiments in the EU were followed by Australia’s infamous news bargaining code. Mainstream news orgs continue to insist Australia’s experiment has been a huge success, but that’s because the only ones talking about it are the big media orgs getting millions of dollars from Meta and Google, which might cloud their reporting on the law (not that they’d admit it). About the only Australian news orgs I’ve seen call out the inherent corruption in these plans are the satirical Juice Media and the irreverent Crikey.
Crikey’s summary is dead on:
The logic of the news media bargaining code isn’t that of ending a rip-off perpetrated by foreign tech giants. Instead, it’s similar to Coles and Woolworths successfully demanding, on the basis of all the great work they’ve done for the community, that the government forcibly transfer profit from an international competitor that had successfully disrupted their business model.
The fact is, these link taxes have been a disaster wherever they’ve been implemented, including Australia. The Public Interest Journalism Initiative in Australia tracks changes in the journalism space across the country with laser-like precision, and its data certainly does not suggest a grand success for journalism. Rather, it shows a lot of consolidation, plenty of smaller journalism outlets still struggling, and an increase in areas with little to no journalism coverage. Contractions in the news business greatly outweigh expansions.
Apparently the money flowing in is — as plenty of people predicted — going to the tippy top of the market, making folks like Rupert Murdoch even wealthier. But not doing much to help journalism.
Google has been much more willing to give in and pay the demanded extortion. A decade ago, Google was willing to take a stand in places like Spain, shutting down Google News in that country. But these days, Google has been willing to cave, quickly, in both Australia and, more recently, Canada.
On the other hand, Meta has been much more willing to push back on these laws. It would be nice to think Meta is doing this to protect the open web, but no one’s going to fall for that. Meta has spent years trying to wall off the open internet, so it’s not like the company magically grew a conscience on these issues. But, whether for good reasons or bad, Meta has been far more willing to resist. In Canada, the company blocked news links, and it was quickly discovered that news orgs needed traffic from Meta way, way more than Meta needed links from news orgs. Meta has also threatened to take similar steps in the US if various state or federal laws come into effect.
In Australia, you may recall, Meta initially blocked news before cutting a few deals with news orgs there. Those deals (and the ones Google struck as well) were not technically under the News Bargaining Code. Rather, they were blatant payoffs to avoid having the code invoked, which would have forced the companies into binding arbitration.
But, apparently, Meta has decided enough is enough. It informed the news orgs it paid off a few years ago that it will not be renewing those deals when they finish, and that it’s removing its dedicated news tab.
Facebook and Instagram’s parent company, Meta, has set itself on a collision course with the Albanese government after announcing it will stop paying Australian publishers for news, and plans to shut down its news tab in Australia and the United States.
Meta informed publishers on Friday that it would not enter new deals when the current contracts expire this year.
The news tab – a dedicated tab for news in the bookmarks section of Facebook – will also shut down in April, after a similar shut down in the UK, Germany and France last year.
Again, it’s nearly impossible to get good reporting on this stuff because all the major media sites are biased: they are recipients of these payoffs. The Guardian report quotes a ton of politicians and news orgs decrying the move, and only presents Meta’s PR quotes in response, not bothering to speak to any of the civil society groups or academics willing to explain why these regulatory schemes are so corrupt and problematic.
But, Meta makes a fairly clear point that highlights the absurdity of these laws: what if Meta just doesn’t want to be in the news business? The company has made it pretty damn clear over the last few years that focusing on “news” as it did for a few years was nothing but a headache. It would rather people just use social media to connect with friends, not argue about the news.
Should it be allowed to do that?
“We know that people don’t come to Facebook for news and political content – they come to connect with people and discover new opportunities, passions and interests. As we previously shared in 2023, news makes up less than 3% of what people around the world see in their Facebook feed, and is a small part of the Facebook experience for the vast majority of people.”
Again, the reaction from people who are mad at this move just puts an exclamation point on how corrupt the whole scheme is. They don’t care about the reasons, or about the problems with having to pay to allow users to link to public news sites. No, they just want cash, and are mad that they aren’t getting it.
The prime minister, Anthony Albanese, told reporters on Friday the decision was “not the Australian way”.
“We know that it’s absolutely critical that media is able to function properly and be properly funded. Journalism is important and the idea that research and work done by others can be taken free is simply untenable,” he said.
But nothing is being “taken free.” It is just that users on Facebook decide they want to point people to news stories, sending free traffic to the news organization by posting the link. A little bit of text and an image shows up on Facebook, but the news org controls that entirely, since it sets the details for the card that appears when its pages are linked.
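Those card details come from Open Graph meta tags that the publisher itself puts in each page’s HTML. A minimal sketch (the text and URLs here are placeholders):

```html
<!-- Open Graph tags: the publisher decides exactly what a Facebook
     link card shows. Values are illustrative placeholders. -->
<head>
  <meta property="og:title" content="Headline the publisher chooses" />
  <meta property="og:description" content="The snippet shown under the headline" />
  <meta property="og:image" content="https://example.com/preview.jpg" />
  <meta property="og:url" content="https://example.com/article" />
</head>
```

In other words, every word and pixel Facebook “uses” from a news article is content the publisher deliberately offered up for exactly this purpose.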
So, Prime Minister, what the fuck is “taken” and what was “taken free”? Because the answer is nothing.
The communications minister, Michelle Rowland, and assistant treasurer, Stephen Jones, called news media companies on Friday following the announcement, advising them the government would be taking all of the steps available under the news media bargaining code.
“We’re not talking about some plucky little startup, we’re talking about one of the world’s largest and most profitable companies,” Jones said. “It has a responsibility to ensure that it pays for the content that … has been used on its platform, and frankly, that it’s making millions and millions of dollars out of it and so the government is adamant it will be backing the code we’ll be taking all of the actions that are available to us under the code.”
No, it’s not a plucky little startup, but it’s also not “using” the content on their platform. It’s allowing its users to link to that content, which is a fundamental part of the open web. And by doing so, they are sending free traffic to that website.
If Albanese and the Australian government are so concerned about things happening without payment, why aren’t they making news orgs pay Facebook for the traffic they’re getting?
It’s like they live in this upside down world.
Either way, it sounds like the end result of all this is that the Australian government will likely try to force Meta to (1) host news it has no interest in hosting, and (2) pay for news it does not value and would prefer not to host.
We’ve noted repeatedly how early attempts to integrate “AI” into journalism have proven to be a comical mess, resulting in no shortage of shoddy product, dangerous falsehoods, and plagiarism. It’s thanks in large part to the incompetent executives at many large media companies, who see AI primarily as a way to cut corners, assault unionized labor, and automate lazy and mindless ad engagement clickbait.
The folks rushing to implement half-cooked AI at places like Red Ventures (CNET) and G/O Media (Gizmodo) aren’t competent managers to begin with. Now they’re integrating “AI” with zero interest in whether it actually works or if it undermines product quality. They’re also often doing it without telling staffers what’s happening, revealing a widespread disdain for their own employees.
After CNET repeatedly published automated dreck, Wikipedia has taken the step of no longer ranking the formerly widely respected news site as a “generally reliable” news source. As Futurism notes, the site’s crap content, crafted by fake automated journalists, increasingly doesn’t pass muster:
“Let’s take a step back and consider what we’ve witnessed here,” a Wikipedia editor who goes by the name “bloodofox” chimed in. “CNET generated a bunch of content with AI, listed some of it as written by people (!), claimed it was all edited and vetted by people, and then, after getting caught, issued some ‘corrections’ followed by attacks on the journalists that reported on it,” they added, alluding to the time that CNET’s then-Editor-in-Chief Connie Guglielmo — who now serves as Red Ventures’ “Senior Vice President of AI Edit Strategy” — disparagingly referred to journalists who covered CNET’s AI debacle as “some writers… I won’t call them reporters.”
Of course, CNET was already having credibility problems long before AI came on the scene. The website, like many “tech news” websites, increasingly acts more like an extension of gadget marketing departments than an adult news venture. CNET’s editorial standards have long been murky, as exemplified by that whole CES Dish Network award scandal roughly a decade ago.
Things got worse once CNET was purchased by Red Ventures, which has been happy to soften the outlet’s coverage to please advertisers, and, like most modern media companies, sees journalism not as a truth-telling exercise, but as a purely extractive path toward chasing engagement at impossible scale.
That sentiment is everywhere you currently look, as a rotating crop of trust fund failsons drive what’s left of U.S. journalism into the soil. These folks see journalism as an irrelevant venture, and they’re keen to turn it into a sort of automated journalism simulacrum; stuff that looks somewhat like useful reporting, but is predominantly an unholy fusion of facts-optional marketing and engagement bait.
It’s great to see the folks at Wikipedia take note and act accordingly.