Glyn Moody's Techdirt Profile

Glyn Moody

About Glyn Moody

Posted on Techdirt - 7 December 2023 @ 03:28pm

Good And Bad News On Attempts To Implicate DNS Services In Copyright Infringement At The Domains They Resolve

Two years ago Techdirt wrote about an attempt by Sony Music in Germany to implicate Quad9, a free, recursive anycast DNS platform (Cloudflare has technical details on what “recursive” means in this context), in copyright infringement at the domains it resolves. That was bad news for at least two reasons. First, because Quad9 is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit organization whose operational budget comes from sponsorships and donations. It aims to protect tens of millions of users around the world from malware and phishing, receiving nothing in return. More generally, success in this lawsuit would create a terrible precedent for blaming a service that is part of the Internet’s basic plumbing for what passes through its pipes.
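To make concrete what is being demanded of a service like Quad9: a court-ordered block at the resolver level amounts to answering queries for the listed names with a refusal (in DNS terms, something like NXDOMAIN) even though the domain really exists. A deliberately simplified sketch of that idea, with invented domain names and none of the complexity of real recursive resolution:

```python
# Toy model of resolver-side domain blocking. The domains and addresses
# below are invented examples, not real blocklist entries.
BLOCKLIST = {"blocked-example.net"}

def resolve(name, records):
    """Return the address for `name`, or None (acting like NXDOMAIN)
    if the name, or any parent domain of it, is on the blocklist."""
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # behave as if the domain does not exist
    return records.get(name.lower())

# A tiny stand-in for the DNS data the resolver can see.
ZONE = {
    "example.org": "93.184.216.34",
    "blocked-example.net": "203.0.113.9",
}

print(resolve("example.org", ZONE))              # resolves normally
print(resolve("blocked-example.net", ZONE))      # None: blocked despite existing
print(resolve("cdn.blocked-example.net", ZONE))  # None: subdomains blocked too
```

The point of the sketch is that the block is indiscriminate: the resolver cannot see or judge the content, it can only refuse to answer for a whole name, which is why critics describe this as meddling with the Internet's basic plumbing.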

Unfortunately, the Regional Court in Hamburg, where the case was heard, issued an interim injunction ordering Quad9 to cease resolving the names of sites that Sony Music alleged were infringing on its copyright. A more recent Techdirt post noted that Quad9 had appealed to the Hamburg Higher Regional Court against the lower court’s decision. Around this time, the Regional Court in Leipzig handed down another ruling against the company. Quad9 said that it would be appealing to the Dresden Higher Regional Court against that decision. The good news is that the court in Dresden has now ruled in favor of Quad9. A blog post by Quad9 summarizes what happened:

The appeal with the Higher Regional Court in Dresden follows a decision by the Regional Court in Leipzig, in which Sony prevailed, and Quad9 was convicted as a wrongdoer. Before that, Sony successfully obtained a preliminary injunction against Quad9 with the Regional Court in Hamburg. The objection against the preliminary injunction by Quad9 was unsuccessful, and the appeal with the Higher Regional Court in Hamburg was withdrawn by Quad9 since a decision in the main proceeding was expected to be made earlier than the conclusion of the appeal in the preliminary proceedings.

That’s great news, since it confirms that Quad9 benefits here from the liability privileges as a “mere conduit”. Also good news is the court’s ruling that the case “cannot be taken to a higher court and their decision is the final word in this particular case.” Except, as Quad9 explains, it’s not quite over yet:

Sony may appeal the appeal closure via a complaint against the denial of leave of appeal and then would have to appeal the case itself with the German Federal Court. So while there is still a possibility that this case could continue, Sony would have to win twice to turn the decision around again.

There’s also a situation in which a DNS resolver might still be required to block a domain:

it is possible that a DNS resolver operator can be required to block as a matter of last resort if the claiming party has taken appropriate means to go after the wrongdoer and the hosting company unsuccessfully. Such measures could be legal action by applying for a preliminary injunction against a hosting company within the EU. These uncertainties still linger, and we expect that this ongoing question of what circumstances require what actions, by what parties, will continue to be argued in court and in policy circles over the next few years.

Moreover, despite this clear win in Germany, Quad9 has been served with another demand (from media companies once more), this time to block domain names because of alleged copyright infringement in Italy:

Italian legal representatives have presented us with a list of domains and a demand for blocking those domains. Now we must again determine the path to take forward fighting this legal battle, in another nation in which we are neither headquartered nor have any offices or corporate presence.

As to how these legal actions in Germany and Italy can be brought in countries where Quad9 has no corporate presence, the answer is something called the Lugano Convention. And to end on a more positive note, another major DNS service provider, Cloudflare, has also won a legal battle in Germany:

A recent decision from the Higher Regional Court of Cologne in Germany marked important progress for Cloudflare and the Internet in pushing back against misguided attempts to address online copyright infringement through the DNS system. In early November, the Court in Universal v. Cloudflare issued its decision rejecting a request to require public DNS resolvers like Cloudflare’s 1.1.1.1 to block websites based on allegations of online copyright infringement. That’s a position we’ve long advocated, because blocking through public resolvers is ineffective and disproportionate, and it does not allow for much-needed transparency as to what is blocked and why.

Although these victories are welcome, they are hard won. Moreover, the battles between deep-pocketed media companies and not-for-profit organizations like Quad9 are inherently unbalanced. Quad9 itself admits:

Quad9 can only have a few legal fronts open at once – we are nearly entirely dedicated to operational challenges of running a free, non-profit recursive resolver platform that protects end users against malware and phishing. We are not a for-profit company with lawyers on retainer.

And that’s why the lawsuits keep coming – in the hope that one day the people defending the Internet, as Quad9 and Cloudflare have done with success, run out of money or management time to devote to these fights. It’s a risk that has not gone away, despite these recent wins.

Follow me @glynmoody on Mastodon.

Posted on Techdirt - 28 November 2023 @ 03:25pm

Main Chinese Social Media Platforms Now Require Top Influencers To Display Their Real Names Online

Back in 2015, Techdirt wrote about one of China’s many attempts to control the online world, in this case by requiring everyone to use real names when they register for online services. As that post noted, the fact that the Chinese authorities had announced similar initiatives several times since 2003 suggests that implementing the policy was proving hard. Twenty years after those first attempts to root out anonymity online, China is still trying to tighten its grip. A post on the Rest of World site reports:

On October 31, Weibo, as well as several other major Chinese social media platforms including WeChat, Douyin, Zhihu, Xiaohongshu, and Kuaishou, announced that they now required popular users’ legal names to be made visible to the public. Weibo stated in a public post that the new rule would first apply to all users with over 1 million followers, then to those with over 500,000.

As that indicates, there’s a new wrinkle in the fight against anonymity: real names are only required for top influencers on the main social media sites. That’s obviously much easier to police than trying to force hundreds of millions of users to comply. Here’s why the Chinese government is concentrating on the smaller group:

Min Jiang, a professor of communication studies at the University of North Carolina at Charlotte, told Rest of World the real-name rule would limit the influence of key opinion leaders, who still wield a lot of power on the Chinese internet. “Outspoken individuals have been conditioned to navigate the red line with ingenuity and creativity, steering public opinions even under heavy censorship,” she said.

The new targeted approach seems to be working. Several high-profile influencers who use pseudonyms online have announced that they will give up posting altogether. Others are actively “purging” their fans to get the total below the one million threshold for the new policy:

Tianjin Stock King, who posts finance content, removed over 6 million followers overnight, cutting his following from 7 million to just over 900,000. Ken, another Weibo “Big V,” told Rest of World he used the extension Cyber Zombie Cleaner to remove about 20,000 followers over the past month. The software, developed by software engineer Xiao Gu, enables users to remove inactive followers in large numbers, and has accumulated over 100,000 views on China’s code-sharing forum, CSDN.

Interestingly, the Rest of World post says that it is not government repression that those with big followings fear under the new rules. Previous policies regulating anonymity already require Weibo users to register with their real name, and to show their IP location next to their user name. But mandating real names online means that influencers will be subject to the scrutiny of other users, who will be able to compare a person’s online activity with their offline identity. Conveniently for the Chinese authorities, that will make it more difficult to express controversial opinions. One group likely to be particularly affected by this requirement is influencers working at state-affiliated organizations, who may be accused of disloyalty or lack of patriotism once their identity is known to the wider public.

Follow me @glynmoody on Mastodon.

Posted on Techdirt - 20 November 2023 @ 03:21pm

How The DMCA Is Being Weaponized Against E-Commerce Sites

The copyright system is flawed at many levels, as hundreds of posts on this blog make clear. One particular class of problems concerns takedowns. The best known of the ‘notice and takedown’ systems, that of the US Digital Millennium Copyright Act (DMCA), allows the copyright industry, when it discovers infringements on a site, to send takedown notices to the relevant Internet companies asking for removal of that material. The person who uploaded the relevant files can send a counter-notice. Such a response may trigger a lawsuit from the company claiming copyright. If it does not, the site owner may restore the material that was taken down.

That might look like a fair and balanced system, but appearances are deceptive here, for reasons Walled Culture the book (free digital versions available) explores in detail. Takedown notices are generally sent by lawyers or specialists who carry out this operation all the time, often thousands of times a day, using automated systems (Google has received billions of such automated requests). These experts know the details of the law and are only required to provide a statement that they have a ‘good faith belief’ that the use of the copyright material is unauthorized.

By contrast, recipients of takedown notices are often small businesses, or ordinary members of the public. They are unlikely to have any legal training yet must respond to a formal legal notification if they wish to send a counter-notice. The latter must include a statement ‘under penalty of perjury’ that the material was taken down by mistake. Many will quail at the thought that they risk being convicted of perjury, and this stands in stark contrast to the mere ‘good faith belief’ required from the sender of a takedown request. Consequently, most people will simply accept that their material is removed, even if it was legal, for instance under fair use.

Takedown notices can be abused for purposes that have nothing to do with copyright. For example, they are a handy way to censor perfectly legitimate online material. The practice has become so common that an entire industry sector – reputation management – has evolved to take advantage of this trick. Online reputation management companies often use takedown notices as a way of intimidating sites in order to persuade them to remove material that is inconvenient for their clients.

Takedowns can also be misused in a business context, as a story on TorrentFreak indicates. It concerns the Canadian e-commerce platform Shopify, some of whose users had been targeted with takedown notices:

Starting on October 5, an unknown person created the account “Sacha Go” which was subsequently used to file dozens of DMCA takedown requests. The notices targeted listings on a variety of shops selling perfume products, claiming that they infringe copyright.

After being alerted by one of the targeted merchants Shopify looked into the matter, concluding that all takedowns were false. Instead of containing legitimate claims the DMCA notices were being used to harass Shopify and its merchants.

Shopify explains in a complaint it has filed alleging DMCA violations that those false takedowns can have serious financial consequences for Shopify’s merchants:

Under certain circumstances, a takedown notice can even result in the complete termination of a merchant’s online store. Like all DMCA service providers, Shopify is required to implement a policy under which those who are “Repeat Infringers” lose access to the platform. Under Shopify’s policy, a takedown notice results in a “strike,” and an accumulation of strikes over time results in termination. A merchant that receives a takedown notice may submit a counter notice and lift the strike. But for unsuspecting merchants who may be unfamiliar with the DMCA, a sudden onslaught of takedown notices can result in the termination of their entire store under Shopify’s repeat infringer policy.
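The strike mechanics Shopify describes can be reduced to a toy model. To be clear, the threshold and method names below are invented for illustration, not Shopify's actual policy:

```python
# Toy model of a DMCA "repeat infringer" policy. The strike limit is
# hypothetical; real platforms set and document their own thresholds.
class MerchantAccount:
    STRIKE_LIMIT = 3  # invented value for illustration

    def __init__(self):
        self.strikes = 0
        self.terminated = False

    def receive_takedown(self):
        """Each takedown notice adds a strike; enough strikes end the store."""
        self.strikes += 1
        if self.strikes >= self.STRIKE_LIMIT:
            self.terminated = True

    def counter_notice_accepted(self):
        """A successful counter-notice lifts one strike."""
        if self.strikes > 0:
            self.strikes -= 1

# An informed merchant who counter-notices each false takedown survives:
informed = MerchantAccount()
informed.receive_takedown()
informed.counter_notice_accepted()
print(informed.terminated)  # False

# An unsuspecting merchant hit by a sudden onslaught does not:
unsuspecting = MerchantAccount()
for _ in range(3):
    unsuspecting.receive_takedown()
print(unsuspecting.terminated)  # True: store gone before they can react
```

The asymmetry the model exposes is exactly the one in Shopify's complaint: the cost of sending a false notice is near zero, while failing to respond to a handful of them can destroy a business.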

Shopify’s complaint warns that “unscrupulous individuals are increasingly seeking to exploit the DMCA takedown process for anti-competitive purposes or reasons of animus.” In other words, these takedown notices have nothing to do with copyright or protecting the rights of creators.

The experience of Shopify and its merchants demonstrates well how extreme copyright laws can be abused in far-reaching ways. Such abuses clearly never occurred to the politicians, who were too focused on giving the copyright industry yet more one-sided legal powers when they drew up the DMCA.

Follow me @glynmoody on Mastodon. Originally published to Walled Culture.

Posted on Techdirt - 14 November 2023 @ 03:46pm

Copyright Leads To Internet Fragmentation

The EU Copyright Directive is arguably the most important recent legislation in the area of intellectual monopolies. It is also a failure, judged purely on its own terms as an initiative to modernize and unify copyright across the European Union. Instead, it includes many backward-looking features that go against the grain of the digital world, which are explored in Walled Culture the book (free digital versions available). It has also fragmented digital copyright law, as EU Member States struggle to implement a badly-drafted and self-contradictory text. For example, France’s national law went even further than the Directive in tilting the playing field in favor of copyright companies. Germany, by contrast, attempted to produce a more balanced approach, recognizing the rights of ordinary Internet users. The result is a patchwork of different laws across the EU – exactly what the Directive was supposed to eliminate.

A post on the International Federation of Library Associations and Institutions (IFLA) Web site points out that this is a global problem, particularly with regard to copyright exceptions:

while international copyright law is prescriptive about what minimum rights should be guaranteed, it leaves far more flexibility when it comes to exceptions, and is silent around cross-border working. As a result, there are as many sets of copyright exceptions as there are countries in the world.

The impact of this is just the same sort of uncertainty and caution about cross-border working as characterises other drivers of internet fragmentation.

That is, while minimum rights for the copyright industry have been set in stone globally, rights for everyone else are far from guaranteed, and vary greatly in different jurisdictions. This has practical consequences for key institutions, as the IFLA post explains:

Variance in copyright exceptions not only holds back librarians, as well as archivists and museum workers from cooperating across borders, for example in the context of research collaborations or online and distance learning, but can also be a driver of inequality. If researchers are expected to travel to access a unique source or collection, only the wealthiest are likely to be able to do this.

The result is just another example of internet fragmentation, and a particularly serious one in that it most directly affects key wider drivers of sustainability – education, research and cultural participation.

The IFLA post goes on to offer an example of how that fragmentation has been overcome in the past. The Marrakesh VIP Treaty allows countries to bring in exceptions to facilitate the creation of versions of works that can be accessed by the visually impaired, something that copyright law had often prevented. The Marrakesh VIP Treaty, discussed on this blog two years ago, was undoubtedly an important achievement, and did indeed help to reduce fragmentation in this area. However, it is worth noting that it was adopted in June 2013. A detailed history of the Treaty on the Knowledge Ecology International (KEI) site reveals:

In 1981, the governing bodies of WIPO and UNESCO agreed to create a Working Group on Access by the Visually and Auditory Handicapped to Material Reproducing Works Produced by Copyright. This group meeting took place on October 25-27, 1982 in Paris, and produced a report that included model exceptions for national copyright laws. (UNESCO/WIPO/WGH/I/3). An accessible copy of this report is available here.

That is, it took nearly 30 years of on and off negotiations for a treaty to be agreed, a delay largely the result of fierce resistance by the copyright world, which places the preservation of its intellectual monopoly above all else – even social justice and compassion. In a Walled Culture interview, the director of KEI, and one of the leading campaigners for a treaty, James Love, recalled: “publishers did everything you can imagine to derail this [treaty]”. Attempts to resolve fragmentation of digital copyright in the EU Copyright Directive and elsewhere are likely to meet a similarly fierce resistance, and will probably take as long to resolve.

Follow me @glynmoody on Mastodon. Originally published to Walled Culture.

Posted on Techdirt - 13 November 2023 @ 08:34pm

ISPs Launch Legal Attack On Italy’s ‘Piracy Shield’ Blocking Law

The copyright industry’s war on the Internet and its users has gone through various stages (full details and links to numerous references in Walled Culture the book, free digital versions available).

The first was to sue Internet users directly for sharing files. By 2007, the Recording Industry Association of America (RIAA) had sued at least 30,000 individuals. Perhaps the most famous victim of this approach was Jammie Thomas, a single mother of two. She was found liable for $222,000 in damages for sharing twenty-four songs online. Even the judge was appalled by the extreme nature of the punishment: he called the damages “unprecedented and oppressive.” He “implored” US Congress to amend the Copyright Act to address the issue of disproportionate liability. He also ordered a new trial for Thomas. Unfortunately, on re-trial, she was found liable for even more – $1.92 million. The RIAA may have been successful in these court cases, but it eventually realized that suing grandmothers and 12-year-old girls, as it had done, made it look like a cruel and heartless bully – which it was.

So it shifted strategy, and started lobbying for a “graduated” approach, also known as “three strikes and you’re out”. The idea was that instead of taking users suspected of sharing copyright material to court, which had terrible optics, they would be sent progressively more threatening warnings by an appropriate government body, thus shielding the copyright industry from public anger. After three warnings, the person would (ideally) be thrown off the Internet, or at least fined.

France was the most enthusiastic proponent of the three-strikes approach, with its Hadopi law. Even though the government body sent out millions of warnings to French users, only one disconnection order was issued, and that was never carried out. In total, some €87,000 in fines were imposed, but the cost of running Hadopi was €82 million, paid by French taxpayers. In other words, a system that failed to scare people off from downloading unauthorized copies of copyright material cost nearly a thousand times more to run than it generated in fines.

Since attacking Internet users had proved to be such a failure, the copyright industry changed tack. Instead it sought to block access to unauthorized material using court orders against Internet Service Providers (ISPs). The idea was that if people couldn’t access the site offering the material, they couldn’t download it.

Italy has been at the forefront of this approach. In 2014, the country’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM) allowed sites to be blocked without the need for a court order. More recently, it has set up an automated blocking system called Piracy Shield. Rather surprisingly, according to a post on TorrentFreak, AGCOM will not check blocking requests before it validates them – it will simply assume they are justified and set the system in motion:

Once validated, AGCOM will instruct all kinds of online service providers to implement blocking. Consumer ISPs, DNS providers, cloud providers and hosting companies must take blocking action within 30 minutes, while companies such as Google must block or remove content from their search indexes.

It’s a very unfair, one-sided copyright law, which assumes that people are guilty until proven innocent. That tilting of the playing field may prove Piracy Shield’s undoing. As another post on the TorrentFreak site explains:

An ISP organization has launched a legal challenge against new Italian legislation that authorizes large-scale, preemptive piracy blocking. Fulvio Sarzana, a lawyer representing the Association of Independent Providers, informs TorrentFreak that the measures appear to violate EU provisions on the protection of service providers and the right to mount a defense.

It would be nicely ironic if the very extremism of the copyright industry, which always wants legal and technical systems biased in its favor and as few rights as possible for anyone else, turned out to be what got the latest incarnation of its assault on the digital world thrown out.

Follow me @glynmoody on Mastodon. Originally published to Walled Culture.

Posted on Techdirt - 3 November 2023 @ 12:10pm

EU Tries To Slip In New Powers To Intercept Encrypted Web Traffic Without Anyone Noticing

The EU is currently updating eIDAS (electronic IDentification, Authentication and trust Services), an EU regulation on electronic identification and trust services for electronic transactions in the European Single Market. That’s clearly a crucial piece of legislation in the digital age, and updating it is sensible given the fast pace of development in the sector. But it seems that something bad has happened in the process. Back in March 2022, a group of experts sent an open letter to MEPs [pdf] with the dramatic title “Global website security ecosystem at risk from EU Digital Identity framework’s new website authentication provisions”. It warned:

The Digital Identity framework includes provisions that are intended to increase the take-up of Qualified Website Authentication Certificates (QWACs), a specific EU form of website certificate that was created in the 2014 eIDAS regulation but which – owing to flaws with its technical implementation model – has not gained popularity in the web ecosystem. The Digital Identity framework mandates browsers accept QWACs issued by Trust Service Providers, regardless of the security characteristics of the certificates or the policies that govern their issuance. This legislative approach introduces significant weaknesses into the global multi-stakeholder ecosystem for securing web browsing, and will significantly increase the cybersecurity risks for users of the web.

The near-final text for eIDAS 2.0 has now been agreed by the EU’s negotiators, and it seems that it is even worse than the earlier draft. A new site from Mozilla called “Last Chance to fix eIDAS” explains how new legislative articles will require all Web browsers in Europe to trust the certificate authorities and cryptographic keys selected by the governments of EU Member States. Mozilla explains:

These changes radically expand the capability of EU governments to surveil their citizens by ensuring cryptographic keys under government control can be used to intercept encrypted web traffic across the EU. Any EU member state has the ability to designate cryptographic keys for distribution in web browsers and browsers are forbidden from revoking trust in these keys without government permission.

This enables the government of any EU member state to issue website certificates for interception and surveillance which can be used against every EU citizen, even those not resident in or connected to the issuing member state. There is no independent check or balance on the decisions made by member states with respect to the keys they authorize and the use they put them to. This is particularly troubling given that adherence to the rule of law has not been uniform across all member states, with documented instances of coercion by secret police for political purposes.
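The structural risk Mozilla describes can be reduced to a simple trust-store model: a browser accepts a certificate if it chains to any root it trusts, so adding a single government-controlled root makes every certificate that root issues, for any website, valid. A deliberately simplified sketch (real X.509 validation involves signature chains, name constraints and revocation; the CA names here are invented):

```python
# Simplified model of browser certificate trust. Only the core rule is
# kept: a certificate is accepted if its issuing root is trusted.
TRUSTED_ROOTS = {"GlobalRoot CA"}  # hypothetical pre-installed root

def browser_accepts(cert_issuer):
    """Accept any certificate issued by a trusted root, for any site."""
    return cert_issuer in TRUSTED_ROOTS

print(browser_accepts("GlobalRoot CA"))       # True
print(browser_accepts("MemberState Gov CA"))  # False: not (yet) trusted

# Article 45 as described would force browsers to add state-selected roots
# and forbid removing them without government permission:
TRUSTED_ROOTS.add("MemberState Gov CA")

# Now a certificate that root issues for, say, a foreign bank's domain is
# accepted too, which is what makes traffic interception possible.
print(browser_accepts("MemberState Gov CA"))  # True
```

The model also shows why there is no technical containment to one country: trust roots are global, so a root trusted for one member state's sites is trusted for every site the browser visits.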

To make matters worse, browser producers will be forbidden from carrying out routine and necessary checks:

The text goes on to ban browsers from applying security checks to these EU keys and certificates except those pre-approved by the EU’s IT standards body – ETSI. This rigid structure would be problematic with any entity, but government-controlled standard bodies are especially susceptible to misaligned incentives in cryptography. ETSI in particular has both a concerning track record of producing compromised cryptographic standards and a working group dedicated entirely to developing interception technology.

European Signature Dialog, which aims “to connect major European Trust Service Providers to share best practices, develop a common industry viewpoint on regulatory issues and empower European solutions for guaranteed data-security,” disagrees with Mozilla’s analysis. In a post on LinkedIn it writes:

Mozilla has recently launched a campaign that pushes serious misinformation about the current eIDAS legislation in order to block changes to Article 45 covering the EU’s Qualified Web Authentication Certificates (“QWACs”).

A document [pdf] from European Signature Dialog offers what it claims are refutations of Mozilla’s analysis. I will leave it to technical experts to decide who is right on the detailed points it discusses – for those interested in understanding the underlying technology, there’s an excellent introduction to eIDAS and QWACs from Eric Rescorla on the Educated Guesswork blog. But there’s a less technical issue too. Mozilla writes that:

forcing browsers to automatically trust government-backed certificate authorities is a key tactic used by authoritarian regimes, and these actors would be emboldened by the legitimising effect of the EU’s actions. In short, if this law were copied by another state, it could lead to serious threats to cybersecurity and fundamental rights.

To which European Signature Dialog responds:

The European Union is not controlling the “roots” used by the issuers of QWACs, and so the EU can’t use the certificates to “spy” on EU citizens. Mozilla should be ashamed of itself for suggesting this.

While it may be true that the European Union itself is not controlling the roots, what Mozilla says is that the individual governments of EU Member States will indeed be able to do precisely that, which means their intelligence services, for example, will be able to carry out surveillance of encrypted Web traffic.

European Signature Dialog concludes its reply to Mozilla’s analysis by asking “Why is Mozilla spreading this misinformation”, and answering its own question with: “Mozilla is generally perceived as a Google satellite, paving the way for Google to push through its own commercial interests”. Attacking the motives of Mozilla in this way, suggesting that it is just some “satellite” of Google, suggests a lack of confidence in the other arguments the European Signature Dialog has offered.

Moreover, the insinuation that this is just an attempt by Google to head off some pesky EU legislation is undercut by the fact that separately from Mozilla, 335 scientists and researchers from 32 countries and various NGOs have signed a joint statement criticizing the proposed eIDAS reform. If the latest text is adopted, they warn:

the government-controlled authority would then be able to intercept the web traffic of not only their own citizens, but all EU citizens, including banking information, legally privileged information, medical records and family photos. This would be true even when visiting non-EU websites, as such an authority could issue certificates for any website that all browsers would have to accept. Additionally, although much of eIDAS2.0 regulation carefully gives citizens the capability to opt out from usage of new services and functionality, this is not the case for Article 45. Every citizen would have to trust those certificates, and thus every citizen would see their online safety threatened.

It concludes:

This regulation does not eliminate any existing risk. Instead, by undermining the existing secure web authentication processes, it introduces new risks with no gain for European citizens, businesses, and institutions. Moreover, if this regulation becomes a reality, it is only to be expected that other countries will put pressure on browsers to obtain similar privileges as EU Member States — as some have unsuccessfully attempted in the past — globally endangering web security.

Confirming the bad faith of the EU negotiators, these new and dangerous elements of eIDAS were added in closed-door meetings without any public consultation of experts. It’s a blatant power-grab by the EU, which is already attempting to circumvent encryption elsewhere with its Chat Control proposals. It must be stopped before it undermines core elements of the Internet’s security infrastructure not just in the EU, but globally too as a result of its knock-on effects.

Follow me @glynmoody on Mastodon.

Posted on Techdirt - 26 October 2023 @ 12:53pm

EU Parliament Fails To Understand That The Right To Read Is The Right To Train

Walled Culture recently wrote about an unrealistic French legislative proposal that would require the listing of all the authors of material used for training generative AI systems. Unfortunately, the European Parliament has inserted a similarly impossible idea in its text for the upcoming Artificial Intelligence (AI) Act. The DisCo blog explains that MEPs added new copyright requirements to the Commission’s original proposal:

These requirements would oblige AI developers to disclose a summary of all copyrighted material used to train their AI systems. Burdensome and impractical are the right words to describe the proposed rules.

In some cases it would basically come down to providing a summary of half the internet.

Leaving aside the impossibly large volume of material that might need to be summarized, another issue is that it is by no means clear when something is under copyright, making compliance even more infeasible. In any case, as the DisCo post rightly points out, the EU Copyright Directive already provides a legal framework that addresses the issue of training AI systems:

The existing European copyright rules are very simple: developers can copy and analyse vast quantities of data from the internet, as long as the data is publicly available and rights holders do not object to this kind of use. So, rights holders already have the power to decide whether AI developers can use their content or not.
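The opt-out the DisCo post refers to (the EU Copyright Directive lets rights holders reserve text and data mining rights in machine-readable form) is, in practice, often expressed through robots.txt-style rules that crawlers are expected to honor. A sketch using Python's standard robots.txt parser, with an invented site and bot name:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt in which a rights holder opts out of AI crawling
# while leaving the site open to everyone else.
ROBOTS_TXT = """\
User-agent: HypotheticalAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# An AI training crawler that respects the reservation must skip the site...
print(rp.can_fetch("HypotheticalAIBot", "https://example.org/article"))  # False
# ...while other readers and crawlers remain free to access it.
print(rp.can_fetch("OtherBot", "https://example.org/article"))           # True
```

This is only one way a reservation can be signaled, but it illustrates the post's point: a mechanism for rights holders to say no already exists, independent of any new disclosure obligations.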

This is a classic case of the copyright industry always wanting more, no matter how much it gets. When the EU Copyright Directive was under discussion, many argued that an EU-wide copyright exception for text and data mining (TDM) and AI in the form of machine learning would be hugely beneficial for the economy and society. But as usual, the copyright world insisted on its right to double dip, and to be paid again if copyright materials were used for mining or machine learning, even if a license had already been obtained to access the material.

As I wrote in a column five years ago, that’s ridiculous, because the right to read is the right to mine. Updated for our AI world, that can be rephrased as “the right to read is the right to train”. By failing to recognize that, the European Parliament has sabotaged its own AI Act. Its amendment to the text will make it far harder for AI companies to thrive in the EU, which will inevitably encourage them to set up shop elsewhere.

If the final text of the AI Act still has this requirement to provide a summary of all copyright material that is used for training, I predict that the EU will become a backwater for AI. That would be a huge loss for the region, because generative AI is widely expected to be one of the most dynamic and important new tech sectors. If that happens, backward-looking copyright dogma will once again have throttled a promising digital future, just as it has done so often in the recent past.

Follow me @glynmoody on Mastodon. Originally posted to WalledCulture.

Posted on Techdirt - 25 October 2023 @ 07:22pm

NY Times Tried To Block The Internet Archive

The Intercept has an interesting article that reveals another reason why some newspaper publishers are not great fans of the Internet Archive:

The New York Times tried to block a web crawler that was affiliated with the famous Internet Archive, a project whose easy-to-use comparisons of article versions has sometimes led to embarrassment for the newspaper.

As the article explains, one of the important uses of the Internet Archive’s Wayback Machine is to compare Web pages as they are updated over time. It allows the differences between the original and later versions of a page to be identified. In particular, this feature can be used to spot changes in news stories that have been made without any accompanying editorial notes, so-called stealth edits. Here’s why that has been awkward for The New York Times:

The Times has, in the past, faced public criticisms over some of its stealth edits. In a notorious 2016 incident, the paper revised an article about then-Democratic presidential candidate Sen. Bernie Sanders, I-Vt., so drastically after publication — changing the tone from one of praise to skepticism — that it came in for a round of opprobrium from other outlets as well as the Times’s own public editor. The blogger who first noticed the revisions and set off the firestorm demonstrated the changes by using the Wayback Machine.

More recently, the Times stealth-edited an article that originally listed “death” as one of six ways “you can still cancel your federal student loan debt.” Following the edit, the “death” section title was changed to a more opaque heading of “debt won’t carry on.”

This is not something that serious newspapers should do. If they make changes, they should flag them up so that people can see what has changed. This is also an opportunity for them to justify changing the text. Stealth edits suggest that there was no good reason for changing things, other than trying to cover up a blunder or infelicity in the original version.

However much The New York Times – or any other newspaper or magazine – may dislike being shown up in this way, it is absolutely vital for the public to know when changes have been made. Without the Internet Archive or similar sites that preserve the original and updated copies of texts, the idea of a trustworthy text for an article no longer exists. This, in its turn, robs such articles of their historical value, since there is no way to guarantee that the text won’t change again, and without notice. The Internet Archive is not only providing a valuable service to the public by making any changes visible, it is actually helping newspapers by encouraging them to be honest and transparent about their changes. It would seem that The New York Times has a problem with that, which is a pity.

Follow me @glynmoody on Mastodon. Originally posted to WalledCulture, where it is noted that the site is funded in part by the Kahle/Austin Foundation, created by the Internet Archive’s Brewster Kahle.

Posted on Techdirt - 24 October 2023 @ 03:37pm

New French AI Copyright Law Would Effectively Tax AI Companies, Enrich Collection Societies

This blog has written a number of times about the reaction of creators to generative AI. Legal academic and copyright expert Andres Guadamuz has spotted what may be the first attempt to draw up a new law to regulate generative AI. It comes from French politicians, who have developed something of a habit of bringing in new laws attempting to control digital technology that they rarely understand but definitely dislike.

There are only four articles in the text of the proposal, which are intended to be added as amendments to existing French laws. Despite being short, the proposal contains some impressively bad ideas. The first of these is found in Article 2, which, as Guadamuz summarises, “assigns ownership of the [AI-generated] work (now protected by copyright) to the authors or assignees of the works that enabled the creation of the said artificial work.” Here’s the huge problem with that idea:

How can one determine the author of the works that facilitated the conception of the AI-generated piece? While it might seem straightforward if AI works are viewed as collages or summaries of existing copyrighted works, this is far from the reality. As of now, I’m unaware of any method to extract specific text from ChatGPT or an image from Midjourney and enumerate all the works that contributed to its creation. That’s not how these models operate.

Since generative AI systems draw on aggregated statistics, there is no way to find out exactly which creators’ works helped generate a given piece of AI output. Guadamuz therefore suggests that the French lawmakers might want creators to be paid according to their contribution to the training material that went into creating the generative AI system itself. Using his own writings as an example, he calculates what fraction of any given payout he would receive with this approach. For ChatGPT’s output, Guadamuz estimates he might receive 0.00001% of any payout that was made. To give an example, even if the licensing fee for some hugely popular work generated using AI were €1,000,000, Guadamuz would only receive 10 cents. Most real-life payouts to creators would be vanishingly small.

Article 3 of the French proposal builds on this ridiculous approach by requiring the names of all the creators who contributed to some AI-generated output to be included in that work. But as Guadamuz has already noted, there’s no way to find out exactly whose works have contributed to an output, leaving the only option to include the names of every single creator whose work is present in the training set – potentially millions of names.

Interestingly, Article 4 seems to recognize the payment problem raised above, and offers a way to deal with it. Guadamuz explains:

As it will be not possible to find the author of an AI work (which remember, has copyright and therefore isn’t in the public domain), the law will place a tax on the company that operates the service. So it’s sort of in the public domain, but it’s taxed, and the tax will be paid by OpenAI, Google, Midjourney, StabilityAI, etc. But also by any open source operator and other AI providers (Huggingface, etc). And the tax will be used to fund the collective societies in France… so unless people are willing to join these societies from abroad, they will get nothing, and these bodies will reap the rewards.

In other words, the net effect of the French proposal seems to be to tax the emerging AI giants (mostly US companies) and pay the money to French collecting societies. Guadamuz goes so far as to say: “in my view, this is the real intention of the legislation”. Anyone who thinks this is a good solution might want to read Chapter 7 of Walled Culture the book (free digital versions available), which quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. Trying to fit generative AI into the straitjacket of an outdated copyright system designed for books is clearly unwise; using it as a pretext for funneling yet more money away from creators and towards collecting societies is just ridiculous.

Follow me @glynmoody on Mastodon. Originally posted to WalledCulture.

Posted on Techdirt - 23 October 2023 @ 01:34pm

Peering Through The Fog Of War With Open Source Intelligence

“The fog of war” is a phrase that has been used for over a hundred years to describe the profound uncertainty that envelops armed conflicts while they are happening. Today, the uncertainty for non-combatants is exacerbated by the rapid-fire nature of social media, where people often like or re-post dubious war-related material without scrutinizing it first. The situation has become particularly bad on ExTwitter under Elon Musk’s stewardship, as a recent NewsGuard analysis published on Adweek revealed. The platform’s “verified” users pushed nearly three-quarters of the platform’s most viral false Israel-Hamas war-related claims, which were then spread widely by others:

The verified accounts promoted 10 false narratives, such as claims that Ukraine sold weapons to Hamas and a video of Israeli senior officials being captured by Hamas.

Collectively, posts promoting false claims garnered 1,349,979 likes, reposts, replies and bookmarks, and were viewed by more than 100 million people globally in a week, per NewsGuard.

A recent example of how difficult it is to tease out what happened in a fast-moving conflict with many civilian casualties is the explosion at the Al-Ahli Baptist Hospital in Gaza City. As Wired noted:

Within minutes, information about what had happened was distorted by partisan narratives, disinformation, and a rush to be first to post about the blast. Add in mainstream media outlets parroting official statements without verifying their veracity, and the result was a chaotic information environment in which no one was sure what had happened or how.

Open source intelligence – the analysis of information drawn from a variety of freely available sources, usually online – is emerging as one of the best ways to peer through the fog of war. For example, both the Guardian newspaper and the UK’s Channel 4 news made use of open source intelligence in their attempts to work out who was responsible for the explosion at the hospital in Gaza. One of the leading journalistic practitioners of data analysis, the FT’s John Burn-Murdoch, believes that the absence of OSINT is why many traditional media outlets are failing so badly in their reporting of the Israel-Hamas war and elsewhere. As he wrote in a thread on ExTwitter:

With the proliferation of photos/footage, satellite imagery and map data, forensic video/image analysis and geolocation (~OSINT) has clearly been a key news gathering technique for several years now. A key news gathering technique *completely absent from most newsrooms*

According to Burn-Murdoch, this has had a terrible effect not just on the quality of reporting, but on the public’s trust in journalism, already greatly diminished as a result of constant attacks on the media by populist politicians around the world:

most mainstream news orgs today are either simply not equipped to determine for themselves what’s happening in some of the world’s biggest stories, or lack the confidence to allow their in-house technical specialists to cast doubt on a star reporter’s trusted source

So you end up with situations where huge, respected news organisations are reporting as fact things that have already been shown by technically adept news gatherers outside newsrooms to be false or at the very least highly uncertain. It’s hugely damaging to trust in journalism.

It’s great that a leading exponent of data journalism like Burn-Murdoch is calling for mainstream media to make the use of open source intelligence a regular and integral part of their reporting. Doing so is especially important at a time when the fog of war is thick, as is the case in the Middle East today. But it’s a pity that it has taken this long for the power of OSINT to be recognized in this way. Techdirt first wrote about what is still probably the leading practitioner of open source intelligence analysis, Bellingcat, over eight years ago.

Follow me @glynmoody on Mastodon.

More posts from Glyn Moody >>