Glyn Moody's Techdirt Profile

Posted on Techdirt - 25 April 2024 @ 01:50pm

French Collection Society Wants A Tax On Generative AI, Payable To Collection Societies

Back in October last year, Walled Culture wrote about a proposed law in France that would see a tax imposed on AI companies, with the proceeds being paid to a collecting society. Now that the EU’s AI Act has been adopted, it is being invoked as another reason why just such a system should be set up. The French collecting society SPEDIDAM (which translates as “Society for the collection and distribution of performers’ rights”) has issued a press release on the idea, including the following (translation via DeepL):

SPEDIDAM advocates a right to remuneration for performers for AI-generated content without protectable human intervention, in the form of fair compensation that would benefit the entire community of artists, inspired by proven and virtuous collective management models, similar to that of remuneration for private copy.

This remuneration, collected from AI system suppliers, would also help support the cultural activities of collective management organizations, thus ensuring the future employment of artists and the constant renewal of the sources feeding these tools.

That sounds all well and good, but as we noted last year, collecting societies around the world have a terrible record when it comes to sharing that remuneration with the creators they supposedly represent. Walled Culture the book (free digital versions available) quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. They also have a tendency to adopt a maximalist interpretation of their powers. Here are a few choice examples of their actions over the years:

  • Soza (Slovenský Ochranný Zväz Autorský/Slovak Performing and Mechanical Rights Society), a Slovakian collecting society, has sought money from villages when their children sing. One case involved children singing to their mothers on Mothers’ Day.
  • SABAM (Société d’Auteurs Belge/Belgische Auteurs Maatschappij/Belgian Authors’ Society), a Belgian collecting society, sought expanded protection for readings of copyrighted works. One consequence of their action was that it would require librarians to pay a licence to read books to children in a children’s library.
  • SABAM sought a licensing fee from truck drivers who listened to the radio alone in their trucks.
  • The British collecting society PPL (Phonographic Performance Limited) sought a fee from a hardware store owner who listened to the radio in his store while cleaning it after he had closed.
  • The Performing Rights Society in the UK sought performance licensing fees from a woman who played classical music to her horses.

SPEDIDAM’s press release is interesting as perhaps the first hint of a wider pan-European campaign to bring in some form of levy on the use of training data for generative AI services. That would just take a new bad idea – taxing companies for simply analyzing training material – and add it to an old bad idea, that of hugely inefficient collecting societies. The resulting system would be a disaster for the European AI industry, since it would favor deep-pocketed US companies. Moreover, this approach would produce no meaningful benefit for creators, as the sorry history of collecting societies has shown time and again.

Follow me @glynmoody on Mastodon. Originally posted to Walled Culture.

Posted on Techdirt - 22 April 2024 @ 08:15pm

More Open Access Training For Academics Would Lead To More Open Access

Open access publishing, which allows people to read academic papers without a subscription, seems such a good idea. It means that anyone, anywhere in the world, can read the latest research without needing to pay. Academic institutions can spend less to keep their scholars up-to-date with work in their field. It also helps disseminate research, which means that academics receive more recognition for their achievements, boosting their career paths.

And yet despite these manifest benefits, open access continues to struggle. As Walled Culture has noted several times, one reason is that traditional academic publishers have managed to subvert the open access system, apparently embracing it, but in such a way as to negate the cost savings for institutions. Many publishers also tightly control the extent to which academic researchers can share their own papers that are released as open access, which rather misses the point of moving to this approach.

Another reason why open access has failed to take off in the way that many hoped is that academics often don’t seem to care much about supporting it or even using it. Again, given the clear benefits for themselves, their institutions and their audience, that seems extraordinary. Some new research sheds a little light on why this may be happening. It is based on an online survey covering the extent and nature of open access training offered to doctoral students, the sources of respondents’ open access knowledge, and their perspectives on open access. The results are striking:

a large majority of current (81%) and recent (84%) doctoral students are or were not required to undertake mandatory open access training. Responses from doctoral supervisors aligned with this, with 66% stating that there was no mandatory training for doctoral students at their institution. The Don’t know figure was slightly higher for supervisors (16%), suggesting some uncertainty about what is required of doctoral students.

The surprisingly high figures quoted above matter, because

a statistically significant difference was observed between respondents who have completed training and those who have not. These findings provide some solid evidence that open access training has an impact on researcher knowledge and practices

One worrying aspect is where else researchers are obtaining their knowledge of open access principles and practices:

Web resources and colleagues were found to be the most highly rated sources, but publisher information also scored highly, which may be cause for some concern. While it is evident that publisher information about open access may be of value to researchers, if for no other reason than to explain the specific open access options available to authors submitting to a particular journal, publishers are naturally incentivised to describe positively the forms of open access they offer to authors, and therefore can hardly be said to represent an objective source of information about open access in general terms.

What this means in practice is that academics may simply accept the publishers’ version of open access, without calling into question why it is so expensive or so restrictive in allowing papers to be shared freely. It could explain why the publishers’ distorted form of the original open access approach does not meet greater resistance. On the plus side, the survey revealed widespread support for more open access training:

First, only 27% of respondents answered that the level of open access training offered as part of their doctoral studies was sufficient. Second, there was widespread agreement with a number of statements presented to respondents that related to actions institutions could take to support researcher understanding of open access. There was widest agreement with the notion that institutions should provide Web resources about open access specifically for doctoral students, followed by optional training for these students. The statement that suggested institutions should require doctoral students to undertake open access training received agreement or strong agreement from almost half of respondents (45%).

Although the research reveals widely differing views on requirements for open access training, and who exactly should provide it, there does seem to be an opportunity to increase researchers’ familiarity with the concept and its benefits. Rather than lamenting the diluted form of open access that major publishers now offer, open access advocates might usefully spend more time spreading the word about its benefits to the people who can make it happen – new and established researchers – by helping to provide training in a variety of forms.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Originally published to Walled Culture.

Posted on Techdirt - 9 April 2024 @ 07:46pm

How Copyright May Destroy Our Access To The World’s Academic Knowledge

The shift from analogue to digital has had a massive impact on most aspects of life. One area where that shift has the potential for huge benefits is in the world of academic publishing. Academic papers are costly to publish and distribute on paper, but in a digital format they can be shared globally for almost no cost. That’s one of the driving forces behind the open access movement. But as Walled Culture has reported, resistance from the traditional publishing world has slowed the shift to open access, and undercut the benefits that could flow from it.

That in itself is bad news, but new research from Martin Paul Eve (available as open access) shows that the way the shift to digital has been managed by publishers brings with it a new problem. For all their flaws, analogue publications have the great virtue that they are durable: once a library has a copy, it is likely to be available for decades, if not centuries. Digital scholarly articles come with no such guarantee. The Internet is constantly in flux, with many publishers and sites closing down each year, often without notice. That’s a problem when sites holding archival copies of scholarly articles vanish, making it harder, perhaps impossible, to access important papers. Eve explored whether publishers were placing copies of the articles they published in key archives. Ideally, digital papers would be available in multiple archives to ensure resilience, but the reality is that very few publishers did this. Ars Technica has a good summary of Eve’s results:

When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all.

At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn’t appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)

This very patchy coverage is concerning, for reasons outlined by Ars Technica:

The risk here is that, ultimately, we may lose access to some academic research. As Eve phrases it, knowledge gets expanded because we’re able to build upon a foundation of facts that we can trace back through a chain of references. If we start losing those links, then the foundation gets shakier. Archiving comes with its own set of challenges: It costs money, it has to be organized, consistent means of accessing the archived material need to be established, and so on.

Given the importance of ensuring the long-term availability of academic research, the manifest failure of most publishers to guarantee that by putting articles in multiple archives is troubling. What makes things worse is that there is an easy way to improve the resilience of the academic research system. If all papers could be shared freely, there could be many new archives located around the world holding the contents of all academic journals. One or two such archives already exist, for example the well-established Sci-Hub, and the more recent Anna’s Archive, which currently claims to hold around 100,000,000 papers.

Despite the evident value to the academic world and society in general of such multiple, independent backups, traditional publishing houses are pursuing them in the courts, in an attempt to shut them down. It seems that preserving their intellectual monopoly is more important to publishers than preserving the world’s accumulated academic knowledge. It’s a further sign of copyright’s twisted values that those archives offering solutions to the failure of publishers to fulfil their obligations to learning are regarded not as public benefactors, but as public enemies.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on Techdirt - 5 April 2024 @ 07:39pm

Germany Still Locking Up Some Laws Behind Copyright

It is often said that “ignorance of the law is no defense.” But the corollary of this statement is that laws must be freely available so that people can find them, read them and obey them. Secret laws, or laws that are hard to access, undermine the ability and thus the willingness of citizens to follow them. And yet just such a situation is found in many countries around the world, including Germany, as a post on the Communia blog by Judith Doleschal from the FragDenStaat (“Ask the Government”) organization describes. It concerns what are known as “law gazettes.” These are crucial documents that define German regulations for a wide range of areas such as work safety, health insurance tariffs, directives on the use of police tasers or guidelines for pesticide applications. They are not primary legislation, but they are nonetheless legally binding, which means that they should be freely available to anyone who might have to obey them. They are not, for reasons explained by Doleschal:

The [German] Federal Ministry of the Interior is the editor of the gazette. However, it is published by a private publishing house owned by the billion-dollar Wolters Kluwer group. Wolters Kluwer charges €1.70 per 8 pages for individual copies of the documents. If you were to buy all official issues of the [Federal Gazette of Ministerial Orders] with a total of 63,983 pages individually from the publisher, it would cost a whopping €13,596.
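The quoted total is easy to verify (a quick back-of-envelope check, not part of the original article; the per-batch price and page count are taken from the quote above):

```python
# Wolters Kluwer charges €1.70 per 8 pages; the gazette runs to 63,983 pages.
pages = 63_983
price_per_batch = 1.70  # euros per 8-page batch
total = pages / 8 * price_per_batch
print(f"€{total:,.2f}")  # roughly €13,596, matching the figure quoted
```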

Given the healthy profits such pricing presumably generates from material that is provided by the German government, the publisher is naturally unwilling to allow anyone else to provide free access to these official documents. The reason why it can do that is interesting:

[The publisher] doesn’t hold the copyright to the official documents. Instead, it argues that the database of the law gazette is protected under related rights („Leistungsschutzrecht“ in German).

This Leistungsschutzrecht is also known as an “ancillary copyright”, and is a good demonstration of how fans of copyright try to spread its monopoly beyond the usual domains. Whether to create a new Leistungsschutzrecht was one of the important battles that took place during the passage of the EU’s Copyright Directive, discussed at length in Walled Culture the book (free digital versions available). In that instance, it resulted in a new ancillary copyright for newspaper publishers that is another example of yet more money being channeled to the copyright world simply because they were able to lobby for it effectively. As usual, there is no corresponding benefit for the public flowing from this extension of copyright. In the case of the Leistungsschutzrecht claimed by the publisher of the German law gazettes, it results in a ridiculous situation:

the state publish[es] binding regulations in documents that are in the public domain, but still not publicly available without a paywall. A private billion-dollar publisher earns money referring to an alleged investment protection for the database. An absurd construction, but still quite convenient for the [German] Federal Ministry of Interior as it has zero costs and hardly any effort for the publication.

An absurd situation indeed, and one that FragDenStaat wants to change:

We at FragDenStaat are willing to take the risk of being sued for the publication of the law gazette as we believe that official documents of general interest belong in the public domain – not in the hands of private publishers. Free access to documents is not only lawful, but also necessary. So by publishing the most important state databases, we make available to the public what is already theirs. We will continue to open up more public databases in the next months.

That’s a laudable move, and one that everyone who cares about a society based on the rule of law, and therefore on publicly-accessible laws, should support. The publisher currently benefiting from this unjustified monopoly will doubtless fight this attempt to open up the German law gazettes, but FragDenStaat is optimistic, because it has managed to change official behavior before:

Four years ago our campaign „Offene Gesetze“ („Open Laws“) helped freeing the Federal Law Gazette in the same manner. All laws of the Federal Republic of Germany are published in the Federal Law Gazette. Laws only come into force when they are published there. Back then, the publisher was the Bundesanzeiger Verlag, which was privatized in 2006 and belongs to the Dumont publishing group. Anyone who wanted to search, copy or print out federal law gazettes needed to pay.

After we published the documents as freely reusable information, the Federal Ministry of Justice decided to publish the Law Gazette on its own open platform.

It’s great to see brave organizations like FragDenStaat righting the wrongs that copyright has enabled by locking up key public documents behind paywalls. But it is outrageous that it needs to.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on Techdirt - 2 April 2024 @ 03:38pm

Forgotten Books And How To Save Them

On the Neglected Books site, there is a fine meditation on rescuing forgotten writers and their works from oblivion, and why this is important. As its author Brad Bigelow explains:

I have been searching for neglected books for over forty years and the one thing I can say with unshakeable confidence is that there are more great (and even just seriously good) books out there in the thickets off the beaten path of the canon than I or anyone else can ever hope to discover.

His post mentions three questions that “reissue” publishers must answer when looking at some of these neglected books as potential candidates for re-printing:

Is the book good (meaning of sufficient merit to justify being associated with the imprint)? Is the book in the public domain or are the rights attainable for a reasonable price? Will enough readers buy the book to recoup costs and, with some luck, earn a profit?

The first is an aesthetic judgement, but the other two are essentially about copyright. Walled Culture the book (free ebook versions available) discusses at length the issue of “orphan works” – works that are still in copyright, but which cannot be re-issued because it is not clear who owns the rights, and thus who could give permission for new editions. Bigelow makes a good point about why this is such a problem:

Even in the U.K., which has the advantage of a national database of wills, it can be practically impossible to track down who has inherited the copyrights from a dead author. The database, for one thing, is incomplete. There are millions of wills missing. There are plenty of writers who failed to recognize their copyrights as inheritable assets and didn’t bother to mention them in the will. And there are plenty of writers who simply didn’t bother to have a will drawn up in the first place. Every publisher involved in the reissue business can name a dozen or more writers they’d love to publish, if only they could find legatees empowered to sign the necessary contracts.

The last question for publishers – will enough readers buy the book to recoup costs and earn a profit? – is the other main stumbling block to re-issuing out-of-print books for a new audience. Bigelow explains that this often comes down to a key challenge: how does a publisher get a reader who knows nothing about the book, the writer, or the publisher’s reputation to look at, let alone buy it?

If copyright terms were a more reasonable length, no more than the original 14 years (plus an option of renewal for 14 years) of the 1710 Statute of Anne, then both these problems would disappear. Relatively soon after the original publication of a book, before it sinks into obscurity, anyone could turn it into an ebook, and circulate it freely online under a public domain license. Publishers could do the same, perhaps adding forewords and other critical apparatus, and they could also print new, analogue editions without worrying about copyright issues. The costs for both book forms would be lower without the need for expensive legal searches, which would encourage more publishers to bring out new editions, and increase the availability of these works, perhaps guided by the online popularity of the freely-circulating copies made by individuals.

It is the absurdly long intellectual monopoly created by copyright – typically the author’s life plus 70 years more – that has created the near-impenetrable thickets that Bigelow refers to. Slash the copyright term, and you slash the thickets. If that could be done, the main obstacles to finding, reading, enjoying and – above all – sharing those great but forgotten books would all disappear at a stroke.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on Techdirt - 29 March 2024 @ 01:34pm

Of True Fans And Superfans: The Rise Of An Alternative Business Model To Copyright

One of the commonest arguments from supporters of copyright is that creators need to be rewarded and that copyright is the only realistic way of doing that. The first statement may be true, but the second certainly isn’t. As Walled Culture the book (free digital versions available) notes, most art was created without copyright, when the dominant way of rewarding creators was patronage – from royalty, nobility, the church etc. Indeed, nearly all of the greatest works of art were produced under this system, not under copyright.

It’s true that it is no longer possible to depend on these outdated institutions to sustain a large-scale modern creative ecosystem, but the good news is we don’t have to. The rise of the Internet means that not only can anyone become a patron, sending money to their favorite creators, but that collectively that support can amount to serious sums of money. The first person to articulate this Internet-based approach was Kevin Kelly, in his 2008 essay “1000 True Fans”:

A true fan is defined as a fan that will buy anything you produce. These diehard fans will drive 200 miles to see you sing; they will buy the hardback and paperback and audible versions of your book; they will purchase your next figurine sight unseen; they will pay for the “best-of” DVD version of your free youtube channel; they will come to your chef’s table once a month. If you have roughly a thousand of true fans like this (also known as super fans), you can make a living — if you are content to make a living but not a fortune.

It’s taken a while, but the music industry in particular is finally waking up to the potential of this approach. For example, a 2023 post on Music Business Worldwide, with the title “15% of the general population in the US are ‘superfans.’ Here’s what that means for the music business”, reported that the incidence of superfans was probably even higher in some groups, for example among customers of Universal Music Group (UMG):

Speaking on UMG’s Q1 earnings call, Michael Nash, UMG’s EVP and Chief Digital Officer, indicated that an “artist-centric” model would look to increase revenue flow from “superfans” – or in other words, individuals who are willing to pay more for subscriptions in exchange for additional content.

“Our consumer research says that among [music streaming] subscribers, about 30% are superfans of one or more of our artists,” said Nash.

In January of this year, the head of UMG, Sir Lucian Grainge, gave another signal that superfans were a key component of the company’s future strategy: “The next focus of our strategy will be to grow the pie for all artists, by strengthening the artist-fan relationship through superfan experiences and products.” Spotify, too, is joining the superfan fan club, writing that “we’re looking forward to a future of superfan clubs”. UMG started implementing its superfan strategy just a few weeks later. Music Business Worldwide reported it was joining a move to create a new superfan destination:

A press release issued by Universal Music Group today stated that the NTWRK consortium’s acquisition of [the youth-orientated media platform] Complex will “create a new destination for ‘superfan’ culture that will define the future of commerce, digital media, and music”.

Here’s why leading music industry players are so interested in the superfan idea:

In Goldman’s latest Music In The Air report, it claimed that if 20% of paid streaming subscribers today could be categorized as ‘superfans’ and, furthermore, if these ‘superfans’ were willing to spend double what a non-superfan spends on digital music each year, it implies a $4.2 billion (currently untapped) annual revenue opportunity for the record industry.
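One way to unpack Goldman’s arithmetic (this derivation is my own reading, not spelled out in the report): if the 20% of subscribers who are superfans doubled their spending, the extra revenue would equal 20% of the current base, so the $4.2 billion figure implies a base of roughly $21 billion in annual subscription spending.

```python
untapped = 4.2e9       # Goldman's untapped annual revenue opportunity ($)
superfan_share = 0.20  # share of paid subscribers assumed to be superfans
# Doubling superfans' spend adds one extra unit of their current spend,
# i.e. an amount equal to superfan_share of the whole base:
implied_base = untapped / superfan_share
print(f"implied base: ${implied_base / 1e9:.0f}B")  # → implied base: $21B
```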

For the music industry, then, it’s about making even more money from their customers – no surprise there. But this validation of the true fans/superfans idea goes well beyond that. By acknowledging the power and value of the relationship between creators and their most enthusiastic supporters, the music companies are also providing a huge hint to artists that there’s a better way than the unbalanced and unfair deals they currently sign up to. When it comes to making a decent living from creativity, what matters is not using heavy-handed enforcement of copyright law to make people pay, but building on the unique and natural connection between creators and their true fans, who want to pay.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on Techdirt - 19 March 2024 @ 03:43pm

Italy’s Piracy Shield Blocks Innocent Web Sites And Makes It Hard For Them To Appeal

Italy’s newly-installed Piracy Shield system, put in place by the country’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM), is already failing in significant ways. One issue became evident in February, when the VPN provider AirVPN announced that it would no longer accept users resident in Italy because of the “burdensome” requirements of the new system. Shortly afterwards, TorrentFreak published a story about the system crashing under the weight of requests to block just a few hundred IP addresses. Since there are now around two billion copyright claims being made every year against YouTube material, it’s unlikely that Piracy Shield will be able to cope once takedown requests start ramping up, as they surely will.

That’s a future problem, but something that has already been encountered concerns one of the world’s largest and most important content delivery networks (CDN), Cloudflare. CDNs have a key function in the Internet’s ecology. They host and deliver digital material to users around the globe, using their large-scale infrastructure to provide this quickly and efficiently on behalf of Web site owners. Blocking CDN addresses is reckless: it risks affecting thousands or even millions of sites, and compromises some of the basic plumbing of the Internet. And yet according to a post on TorrentFreak, that is precisely what Piracy Shield has now done:

Around 16:13 on Saturday [24 February], an IP address within Cloudflare’s AS13335, which currently accounts for 42,243,794 domains according to IPInfo, was targeted for blocking [by Piracy Shield]. Ownership of IP address 188.114.97.7 can be linked to Cloudflare in a few seconds, and double-checked in a few seconds more.

The service that rightsholders wanted to block was not the IP address’s sole user. There’s a significant chance of that being the case whenever Cloudflare IPs enter the equation; blocking this IP always risked taking out the target plus all other sites using it.

The TorrentFreak article lists a few of the evidently innocent sites that were indeed blocked by Piracy Shield, and notes:

Around five hours after the blockade was put in place, reports suggest that the order compelling ISPs to block Cloudflare simply vanished from the Piracy Shield system. Details are thin, but there is strong opinion that the deletion may represent a violation of the rules, if not the law.

That lack of transparency about what appears to be a major overblocking is part of a larger problem, which affects those who are wrongfully cut off. As TorrentFreak writes, AGCOM’s “rigorous complaint procedure” for Piracy Shield “effectively doesn’t exist”:

information about blocks that should be published to facilitate correction of blunders, is not being published, also in violation of the regulations.

That matters, because appeals against Piracy Shield’s blocks can only be made within five working days of their publication. As a result, the lack of information about erroneous blocks makes it almost impossible for those affected to appeal in time:

That raises the prospect of a blocked innocent third party having to a) proactively discover that their connectivity has been limited b) isolate the problem to Italy c) discover the existence of AGCOM d) learn Italian and e) find the blocking order relating to them.

No wonder, then, that:

some ISPs, having seen the mess, have decided to unblock some IP addresses without permission from those who initiated the mess, thus contravening the rules themselves.

In other words, not only is the Piracy Shield system wrongly blocking innocent sites, and making it hard for them to appeal against such blocks, but its inability to follow the law correctly is causing ISPs to ignore its rulings, rendering the system pointless.

This combination of incompetence and ineffectiveness brings to mind an earlier failed attempt to stop people sharing unauthorized copies. It’s still early days, but there are already indications that Italy’s Piracy Shield could well turn out to be a copyright fiasco on the same level as France’s Hadopi system, discussed in detail in Walled Culture the book (digital versions available free).

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on Techdirt - 11 March 2024 @ 07:59pm

Vehicle Cloning — Another Reason Not To Use Automated License Plate Readers

Over the last decade, increasing numbers of automated license plate readers (ALPR) have been installed on roads, bringing with them a variety of privacy problems, as Techdirt has reported. It’s easy to see why ALPR is popular with the authorities: license plate readers seem a simple way to monitor driving behavior and to catch people breaking traffic laws, by speeding, for example.

Since the whole process can be automated, from reading the license plates to sending out fines, it looks like an efficient, low-cost alternative to placing large numbers of police officers around the road network. There’s just one problem: the whole system is based on the assumption that the license plate on the car is genuine, and can be used to identify the person responsible for the vehicle. As an article on “car cloning” in the Guardian reports, drivers in the UK are discovering that this assumption no longer holds.

The problem is that people are making copies of other drivers’ license plates, and using them on similar-looking vehicles — generally the same model and same color — to break the law with impunity. When the ALPR cameras catch the cloners speeding, or failing to pay fees for entering special zones like London’s Ultra Low Emission Zone (ULEZ), the fines are sent to the actual owner of the license plate, not the perpetrator. The result is misery for those unlucky enough to have their license plates cloned, since it is hard to convince the authorities that automated license plate readers have made a mistake when there is apparent photographic evidence they haven’t. The experience of one driver interviewed by the Guardian is typical:

The most recent incident happened in July 2021, when he received two penalty charge notices from different London councils — one for driving in a bus lane and the other for an illegal left turn. Both notices included photos purporting to show his five-door Audi A3 car.

Despite him providing extensive evidence that at the time of one of the offences his vehicle was in a car park, and demonstrating that the one in the photo appeared to be a three-door Audi A1, the council concerned rejected his appeal.

Only when he sent in photos of his vehicle type and the one in the CCTV image where he had “circled all the differences” was the matter dropped.

Even when no fines are involved, vehicle cloning can cause financial problems for innocent drivers, as another case mentioned by the Guardian shows:

Late last year, the Guardian was contacted by another driver who had fallen victim to car cloning. The 88-year-old’s insurance doubled at renewal to £1,259 [about $1600] and she was told this was because her Ford Fiesta had been involved in an accident on the M25 [London’s main ring road] .

Despite her pointing out that she had not driven on the M25 for more than a decade, and that she had been either at church or at home at the time of the accident — and the fact that she had reported that her car had been cloned to Hertfordshire police — her insurer, Zurich, refused to take the claim off her file. Only after the Guardian intervened did the firm restore her no-claims bonus and reduce her premium accordingly.

The more automated license plate readers are installed in order to stop people breaking traffic laws, the greater the incentive for criminals and the unscrupulous to use cloned plates to break those laws without any consequences. What may once have seemed the system’s great strength — the fact that it provides photographic evidence of law breaking — turns out to be a huge weakness that can be turned against it.

Follow me @glynmoody on Mastodon and on Bluesky.

Posted on Techdirt - 27 February 2024 @ 10:43am

Italy’s ‘Piracy Shield’ Creating Real Problems As VPNs Start Turning Away Italian Users

Back in October, Walled Culture wrote about the grandly named “Piracy Shield”. This is Italy’s new Internet blocking system, which assumes people are guilty until proven innocent, and gives the copyright industry disproportionate power to control what is available online, no court orders required. Piracy Shield went live in December, and has just issued its first blocking orders. But a troubling new aspect of Piracy Shield has emerged, reported here by TorrentFreak:

A document detailing technical requirements of Italy’s Piracy Shield anti-piracy system confirms that ISPs are not alone in being required to block pirate IPTV services. All VPN and open DNS services must also comply with blocking orders, including through accreditation to the Piracy Shield platform. Google has already agreed to dynamically deindex sites and remove infringing adverts.

This is no mere theoretical threat. The VPN (Virtual Private Network) service AirVPN has just announced that it will no longer accept users residing in Italy. As AirVPN explains:

The list of IP addresses and domain names to be blocked is drawn up by private bodies authorised by AGCOM (currently, for example, Sky and DAZN). These private bodies enter the blocking lists in a specific platform. The blocks must be enforced within 30 minutes of their first appearance by operators offering any service to residents of Italy.

There is no judicial review and no review by AGCOM. The block must be enforced inaudita altera parte [without hearing the other party] and without the possibility of real time refusal, even in the case of manifest error. Any objection by the aggrieved party can only be made at a later stage, after the block has been imposed.

As a result, AirVPN says it can no longer offer its service in Italy:

The above requirements are too burdensome for AirVPN, both economically and technically. They are also incompatible with AirVPN’s mission and would negatively impact service performance. They pave the way for widespread blockages in all areas of human activity and possible interference with fundamental rights (whether accidental or deliberate). Whereas in the past each individual blockade was carefully evaluated either by the judiciary or by the authorities, now any review is completely lost. The power of those private entities authorized to compile the block lists becomes enormous as the blocks are not verified by any third party and the authorized entities are not subject to any specific fine or statutory damage for errors or over-blocking.

That’s a good summary of all that is wrong with Piracy Shield. Companies can compile block lists without any constraint or even oversight. If the blocks are unjustified, there are no statutory damages, which will obviously encourage overblocking. And proving the blocks are unjustified is a slow and complex process, one that only takes place after the block has been put into effect.

What is particularly troubling here is that Italian residents are now losing access to a popular VPN as a result of this new law. In a world where privacy threats from companies and governments are constantly increasing, VPNs are a vital tool, and it is crucial to have a range of them to choose from. The fact that AirVPN has been forced to discontinue this service for people in Italy is a further demonstration of how here, as elsewhere, copyright is evidently regarded by the authorities as more important than fundamental human rights such as privacy and security.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Posted on Techdirt - 26 February 2024 @ 08:15pm

A Swiftian Solution To Some Of Copyright’s Problems

Copyright is generally understood to be for the benefit of two groups of people: creators and their audience. Given that modern copyright often acts against the interests of the general public – forbidding even the most innocuous sharing of copyright material online – copyright intermediaries such as publishers, recording companies and film studios typically place great emphasis on how copyright helps artists. As Walled Culture the book spells out in detail (digital versions available free), the facts show otherwise. It is extremely hard for creators in any field to make a decent living from their profession. Mostly, artists are obliged to supplement their income in other ways. In fact, copyright doesn’t even work well for the top artists, particularly in the music world. That’s shown by the experience of one of the biggest stars in the world of music, Taylor Swift, reported here by The Guardian:

Swift is nearing the end of her project to re-record her first six albums – the ones originally made for Big Machine Records – as a putsch to highlight her claim that the originals had been sold out from under her: creative and commercial revenge served up album by album. Her public fight for ownership carried over to her 2018 deal with Republic Records, part of Universal Music Group (UMG), where an immovable condition was her owning her future master recordings and licensing them to the label.

It seems incredible that an artist as successful as Swift should be forced to re-record some of her albums in order to regain full control over them – control she lost because of the way that copyright works, splitting copyright between the written song and its performance (the “master recording”). A Walled Culture post back in 2021 explained that record label contracts typically contain a clause in which the artist grants the label an exclusive and total license to the master.

Swift’s need to re-record her albums through a massive but ultimately redundant project is unfortunate. However, some good seems to be coming of Swift’s determination to control both aspects of her songs – the score and the performance – as other musicians, notably female artists, follow her example:

Olivia Rodrigo made ownership of her own masters a precondition of signing with Geffen Records (also part of UMG) in 2020, citing Swift as a direct inspiration. In 2022, Zara Larsson bought back her recorded music catalogue and set up her own label, Sommer House. And in November 2023, Dua Lipa acquired her publishing from TaP Music Publishing, a division of the management company she left in early 2022.

It’s a trend that has been gaining momentum in recent years, as more musicians realize that they have been exploited by recording companies through the use of copyright, and that they have the power to change that. The Guardian article points out an interesting reason why musicians have an option today that was not available to them in the past:

This recalibration of the rules of engagement between artists and labels is also a result of the democratisation of information about the byzantine world of music contract law. At the turn of the 2000s, music industry information was highly esoteric and typically confined to the pages of trade publications such as Billboard, Music Week and Music & Copyright, or the books of Donald S Passman. Today, industry issues are debated in mainstream media outlets and artists can use social media to air grievances or call out heinous deal terms.

Pervasive use of the Internet means that artists’ fans are more aware of how the recording industry works, and thus better able to adjust their purchasing habits to punish bad behavior and reward good behavior. One factor driving this is that musicians can communicate directly with their fans through social media and other platforms. They no longer need the marketing departments of big recording companies to do that, which means that the messages to fans are no longer sanitized or censored.

This is another great example of how today’s digital world makes the old business models of the copyright industry redundant and vulnerable. That’s great news, because it is a step on the path to realizing that creators – whatever their field – don’t need copyright to thrive, despite today’s dogma that they do. What they require is precisely what innovative artists like Taylor Swift have achieved – full control over all aspects of their own creations – coupled with the Internet’s direct channels to their fans that let them turn that into fair recompense for their hard work.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published on Walled Culture.

More posts from Glyn Moody >>