Glyn Moody’s Techdirt Profile



Posted on Techdirt - 20 September 2021 @ 1:34pm

Hongkongers Battle Supporters Of Beijing For The Soul Of The Chinese-Language Wikipedia

from the beyond-edit-wars dept

When Wikipedia was first launched 20 years ago, it was widely derided as an impossible project, bound to fail or, at best, to produce worthless rubbish. And yet today, along with open source software, it is undoubtedly the best demonstration that a distributed team of volunteers can produce work that is not just free but arguably better than anything created for profit using traditional, top-down management approaches. But beyond that, Wikipedia has become something else: a unique repository of validated information and thus, implicitly, a store of "truth" about the past and the present. That has turned many pages of Wikipedia into a battleground, as people with different views fight in sometimes fierce "edit wars" over what counts as "verified". The choice of information and even how things are phrased often have considerable social, economic or political importance. No surprise, then, that there is a struggle taking place over what Wikipedia should say is happening in the contested space of Hong Kong. Back in July, an article in the Hong Kong Free Press explained:

As Hongkongers reckon with the closure of one of the city's mainstream news outlets [Apple Daily], drastic political changes and a sweeping national security law, the city's keyboard warriors on Wikipedia are also coming under pressure.

Battles between competing editors of the crowd-sourced encyclopaedia's articles about Hong Kong political events have been a daily occurrence since the beginning of the 2019 anti-extradition bill protests.

As increasing numbers of mainland Chinese contribute to the Chinese-language version of Wikipedia, those in Hong Kong worry that their perspectives will be lost:

In the war to set narratives using news sources that may have political biases, whether pro-Hong Kong or pro-China, the question of which news outlet gets a seal of "reliability" becomes a key battleground.

Since then, the situation has become so serious that the Wikimedia Foundation, which owns and operates all the different language editions of Wikipedia, has been forced to step in. Maggie Dennis, the Wikimedia Foundation's Vice President of Community Resilience & Sustainability, wrote this week of "infiltration" of Wikimedia systems, "including positions with access to personally identifiable information and elected bodies of influence." Dennis claims that "we know that some users have been physically harmed as a result. With this confirmed, we have no choice but to act swiftly and appropriately in response." The actions were as follows:

We have banned seven users and desysopped [removed administrator privileges from] a further 12 as a result of long and deep investigations into activities around some members of the unrecognized group Wikimedians of Mainland China. We have also reached out to a number of other editors with explanations around canvassing guidelines and doxing policies and requests to modify their behaviors.

Setting the narrative for these politically sensitive events is so important to the Chinese government that it is unlikely that Wikimedia's moves will put an end to this "infiltration". On the contrary: we can probably expect the organization to come under even more pressure to tell things the way Beijing wants them portrayed, and to hell with Wikipedia's cherished neutral point of view.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 9 September 2021 @ 3:20am

Sci-Hub Celebrates 10 Years Of Existence, With A Record 88 Million Papers Available, And A Call For Funds To Help It Add AI And Go Open Source

from the paypal-not-accepted dept

To celebrate ten years offering a large proportion of the world's academic papers for free -- against all the odds, and in the face of repeated legal action -- Sci-Hub has launched a funding drive:

Sci-Hub is run fully on donations. Instead of charging for access to information, it creates a common pool of knowledge free for anyone to access.

The donations page says that "In the next few years Sci-Hub is going to dramatically improve", and lists a number of planned developments. These include a better search engine, a mobile app, and the use of neural networks to extract ideas from papers and make inferences and new hypotheses. Perhaps the most interesting idea is for the software behind Sci-Hub to become open source. The move would address in part a problem discussed by Techdirt back in May: the fact that Sci-Hub is a centralized service, with a single point of failure. Open sourcing the code -- and sharing the papers database -- would allow multiple mirrors to be set up around the world by different groups, increasing its resilience.

Donations can only be made in cryptocurrencies -- Sci-Hub accepts most of the main ones. A short interview with Sci-Hub's founder, Alexandra Elbakyan, on the donations page explains why she moved away from PayPal:

in the past I used also PayPal account to collect donations from abroad. It worked well for a while, but when I posted a message on Sci-Hub urging people to donate, if my memory is correct it was in 2013, donations started to come at a cosmic speed... in a couple of days two or three thousands of dollars were collected. But then the account was frozen by PayPal. It turned out that Elsevier has complained to PayPal about Sci-Hub so they froze the account.

Later I tried registering another PayPal account, and use it carefully, but after some time it also got frozen. I have several frozen PayPal accounts by now.

The main Sci-Hub site claims to hold some 87,977,763 papers -- an impressive number. It's a reminder of just how much research has been funded by the public, and how much could be available for researchers across the globe to access if unjustified claims of ownership were not made by academic publishers desperate to preserve their 35-40% profit margins.


Posted on Techdirt - 3 September 2021 @ 12:11pm

Sony Music Says DNS Service Is Implicated In Copyright Infringement At The Domains It Resolves

from the first-they-came-for-the-resolvers dept

One of the characteristics of maximalist copyright companies is their limitless sense of entitlement. No matter how much copyright is extended, be it in duration, or breadth of application, they want it extended even more. No matter how harsh the measures designed to tackle copyright infringement, they want them made yet harsher. And no matter how distantly connected to an alleged copyright infringement a company or organization or person may be, they want even those bystanders punished.

A worrying example of this concerns Quad9, a free, recursive, anycast DNS platform (Cloudflare has technical details on what "recursive" means in this context). It is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit organization, whose operational budget comes from sponsorships and donations. In other words, it's one of the good guys, trying to protect millions of users around the world from malware and phishing, and receiving nothing in return. But that's not how Sony Music GmbH sees it:

In June, Quad9 was served with a notice from the Hamburg Germany court (310 O 99/21) stating that Quad9 must stop resolving certain domain names that Sony Music GmbH believed were implicated in infringement on properties that Sony claims are covered by their copyrights. Quad9 has no relationship with any of the parties who were involved in distributing or linking to the content, and Quad9 acts as a standard DNS recursive resolver for users in Germany to resolve those names and others.
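For readers unfamiliar with the plumbing, "recursive" here means the resolver chases delegations through the DNS hierarchy on the client's behalf and hands back only the final answer; it never hosts or transmits any of the content sitting at the resulting address. A minimal toy sketch of that process (all zone data below is invented, and 192.0.2.0/24 is a documentation-only address range):

```python
# Toy model of recursive DNS resolution. A recursive resolver like Quad9
# starts at the root zone and follows referrals downward until it finds
# an address record, returning only that final answer to the client.
ZONES = {
    ".":            {"com.":             "referral:com."},
    "com.":         {"example.com.":     "referral:example.com."},
    "example.com.": {"www.example.com.": "192.0.2.10"},
}

def lookup(zone: str, name: str) -> str:
    """Return the record in `zone` that matches `name` (exact or suffix)."""
    for suffix, value in ZONES[zone].items():
        if name == suffix or name.endswith("." + suffix):
            return value
    raise KeyError(f"zone {zone!r} has no record for {name!r}")

def resolve(name: str) -> str:
    """Chase referrals from the root until an address record is found."""
    zone = "."
    while True:
        answer = lookup(zone, name)
        if not answer.startswith("referral:"):
            return answer                  # final address record
        zone = answer.split(":", 1)[1]     # follow the delegation

print(resolve("www.example.com."))         # -> 192.0.2.10
```

The sketch makes the point at issue in the case concrete: the resolver only translates a name into an address; it has no involvement with whatever is served from that address.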

Sony Music is not alleging that Quad9 is infringing on copyright directly, but that its DNS service allows people to access a Web site that has links to material on a second Web site that infringes on copyrights. On this basis, the Hamburg Court has used Germany's law on indirect liability to order Quad9 to cease resolving the names of those sites. But as the Gesellschaft für Freiheitsrechte explains, there's a crazy twist here. Under German law:

[Internet] service providers who provide access to unlawful information or transmit such information are expressly no longer liable for damages or responsible for removal, nor can an injunction be granted against them. However, the Hamburg Regional Court assumes that Quad9 cannot invoke this liability privilege because it does not itself route the copyright-infringing information from A to B, but merely provides indirect access to it. This understanding of the law leads to the contradictory result that Quad9 is deemed liable for copyright infringements precisely because it has even less to do with the copyright infringements than Internet access providers, who are equally not involved in copyright infringements but at least do transmit the data in question.

Quad9's FAQ on the case points out that if allowed to stand:

this would set a dangerous precedent for all services used in retrieving web pages. Providers of browsers, operating systems or antivirus software could be held liable as interferers on the same grounds if they do not prevent the accessibility of copyright-infringing websites.

The history of media companies suggests that, given such a capability, they would indeed go after all of these incidental operators, as part of an insane quest to put every aspect of the Internet at the service of copyright.


Posted on Free Speech - 30 July 2021 @ 10:44am

Top German Court Says Facebook Must Inform Users About Deleting Their Posts Or Suspending Their Account, Explain Why, And Allow Them To Respond

from the hating-the-hate-speech-hate dept

We've just written about Germany's constitutional court grappling with the issue of whether government users of zero-days for surveillance have a responsibility to report the flaws they use to the relevant developers. Another senior court in the country has been pondering an even thornier question that is occupying judges and lawmakers around the world: how should social media police so-called "hate speech" on their services in a way that respects fundamental rights on all sides?

Germany's Federal Court of Justice issued its judgment regarding two similar cases (pointed out by Matthias C. Kettemann on Twitter). Both involved posts that Facebook removed because it said they went against the social network's community standards governing hate speech. In addition, Facebook temporarily blocked the accounts of the users who wrote the posts. When the lower German courts refused to overturn Facebook's moves completely, the users appealed to the Federal Court of Justice, which not only ordered Facebook to reactivate the two accounts, but also told it to refrain from blocking the re-posting of the deleted comments. The court ruled that Facebook's rules governing the removal of posts and the blocking of user accounts were "invalid", because "they unreasonably disadvantage the users of the network contrary to the requirements of good faith." The court went on to explain its reasoning (translation by DeepL of original in German):

In this case, the conflicting fundamental rights of the parties -- on the side of the users the freedom of expression from [Article 5 (1) sentence 1 of Germany's Basic Law], on the side of the defendant [Facebook] above all the freedom to exercise a profession from [Article 12 (1) sentence 1 of Germany's Basic Law] -- must be considered and balanced according to the principle of practical concordance in such a way that they become as effective as possible for all parties. This balancing shows that the defendant is in principle entitled to require the users of its network to comply with certain communication standards that go beyond the requirements of criminal law (e.g. insult, defamation or incitement of the people). It may reserve the right to remove posts and block the user account concerned in the event of a breach of the communication standards. However, in order to strike a balance between the conflicting fundamental rights in a manner that is in line with the interests of the parties, and thus to maintain reasonableness within the meaning of [Section 307 (1) sentence 1 of the Civil Code of Germany], it is necessary that the defendant undertakes in its terms and conditions to inform the user concerned about the removal of a post at least subsequently and about an intended blocking of his or her user account in advance, to inform him or her of the reason for this and to grant him or her an opportunity to respond, followed by a new decision.

Germany's Federal Court of Justice is trying to balance two conflicting rights -- freedom of speech, and freedom to exercise a profession. Its solution is to require companies like Facebook to inform users about the removal of a post -- at least retrospectively -- to tell them in advance about the blocking of an account, explain why, and to allow users to respond so that the decision can be reconsidered. That's a new, general approach that can be applied to a wide range of online services. However, as Matthias C. Kettemann pointed out on Twitter, it leaves important questions unanswered, including the issue of spam accounts, and of account suspensions, rather than deletions. Given their importance, we can probably expect future judgments to tackle these points in due course.


Posted on Techdirt - 29 July 2021 @ 8:40pm

Germany's Constitutional Court Ponders Whether Government Users Of Zero-Day Surveillance Malware Have A Duty To Tell Software Developers About The Flaws

from the resolving-conflicting-aims dept

As Techdirt has reported previously, the use of malware to spy on suspects -- or even innocent citizens -- has long been regarded as legitimate by the German authorities. The recent leak of thousands of telephone numbers that may or may not be victims of the Pegasus spyware has suddenly brought this surveillance technique out of the shadows and into the limelight. People are finally starting to ask questions about the legitimacy of this approach when used by governments, given how easily the software can be -- and apparently has been -- abused. An interesting decision from Germany's constitutional court shows that even one of the biggest fans of legal malware is trying to work out how such programs based on zero-days can be deployed in a way that's compatible with fundamental rights. The court's press release explains:

The complainants [to the constitutional court] essentially assert that, by enacting the authorisation laid down in [the German Police Act], the Land Baden-Württemberg violated the guarantee of the confidentiality and integrity of information technology systems -- a guarantee arising from fundamental rights -- because under that provision, the authorities have no interest in notifying developers of any vulnerabilities that come to their attention since they can exploit these vulnerabilities to infiltrate IT systems for the purpose of source telecommunications surveillance, which is permitted under [the German Police Act]. Yet if the developers are not notified, these vulnerabilities and the associated dangers -- in particular the danger of third-party attacks on IT systems -- will continue to exist.

That is, the failure to notify developers about the vulnerabilities means the authorities are putting IT systems in Germany at risk, and should therefore be stopped. The complainants went on to argue that if the court nonetheless ruled that the use of such malware was not considered "inherently incompatible with the state’s duty of protection", at the very least administrative procedures should be established for evaluating the seriousness of the threat that leaving them unpatched would represent, and then deciding on a case-by-case basis whether the relevant developers should be notified.

The German constitutional court dismissed the complaint, but on largely technical grounds. The judgment said that the complainants did not have standing, because they had failed to substantiate a breach of the government's duty of protection. Moreover, the top court said the question should first be considered exhaustively by the lower courts, before finally moving to the constitutional court if necessary. However, the judges did recognize that there was a tension between a desire to use zero-days to carry out surveillance, and the German government's duty to protect the country and its computer systems:

In the present case, the duty of protection encompasses the obligation for the legislator to set out how the police are to handle such IT security vulnerabilities. Under constitutional law, it is not inherently impermissible from the outset for source surveillance to be performed by exploiting unknown security vulnerabilities, although stricter requirements for the justification of such surveillance apply due to the dangers posed to the security of IT systems. Furthermore, fundamental rights do not give rise to a claim that authorities must notify developers about any IT security vulnerabilities immediately and in all circumstances. However, the duty of protection does necessitate a legal framework that governs how -- in a manner compatible with fundamental rights -- an authority is to resolve the conflicting aims of protecting IT systems against third-party attacks that exploit unknown IT security vulnerabilities on the one hand, and on the other hand keeping such vulnerabilities open so that source surveillance can be carried out for the purpose of maintaining public security.

It's not clear whether that call for a legal framework to regulate how the authorities can deploy malware, and when they must alert developers to the flaw it exploits, will be heeded any time soon in Germany. But in the light of the Pegasus leak, it seems likely that other countries around the world will start to ponder this same issue. That's particularly the case since such malware is arguably the only way that the authorities can reliably circumvent encrypted communications without mandating the flawed and unworkable backdoors they have been calling for. If more countries decide to follow Germany in deploying these programs, the need for a legal framework to regulate their use will become even more pressing.


Posted on Techdirt - 19 July 2021 @ 10:50am

French Competition Authority Fines Google Nearly $600 Million For Failing To Negotiate A Nonsensical Deal With Publishers 'In Good Faith'

from the ma-foi dept

France has long been in the vanguard of passing bad copyright laws. For example, it rushed to bring in probably the worst implementation of the EU Copyright Directive's upload filters. It's also keen on forcing Google to pay French press publishers for sending traffic to them when it displays clickable snippets of their news stories for free. Last year, the French Competition Authority said Google had no choice in the matter, and ordered the company to negotiate with French news organizations and come up with a deal that pays them to display even short excerpts. A year on, it seems that the French Competition Authority is not happy with the way that Google has responded:

At the end of an in-depth investigation, the Autorité found that Google had not complied with several injunctions issued in April 2020. First of all, Google's negotiations with press publishers and agencies cannot be regarded as having been conducted in good faith, while Google imposed that the discussions necessarily take place within the framework of a new partnership, called Publisher Curated News, which included a new service called Showcase. In doing so, Google refused, as it has been asked on several occasions, to have a specific discussion on the remuneration due for current uses of content protected by related rights.

And to show how really, really cross it is, the Competition Authority has whacked Google with an immediate 500 million euro fine (nearly $600 million). Somehow the French government body believes the following about that ridiculous amount:

[It] takes into account the exceptional seriousness of the infringements observed and how Google's behaviour has led to further delay the proper implementation of the law on related rights, which aimed to better take into account the value of content from press publishers and agencies included on the platforms. The Autorité will be extremely vigilant on the proper application of its decision, as non-execution can now lead to periodic penalty payment.

That periodic penalty is an equally salty 900,000 euros -- around $1 million -- per day of "delay". These figures are truly extraordinary, not least because a rational observer can see that, if anything, it is the French press that ought to be paying Google for the massive amount of free advertising it receives, not the other way around. It's all further proof that France has been driven mad by its hatred of big US Internet companies, and its equally weird love of maximalist copyright monopolies.


Posted on Techdirt - 16 July 2021 @ 10:46am

Top EU Court's Adviser Regrettably Fails To Recommend Throwing Out Upload Filters, But Does Say They Should Block Only "Identical" Or "Equivalent" Copies

from the out-of-their-tiny-minds dept

One of the last hopes of getting the EU's terrible upload filters thrown out was an intriguing legal challenge brought by Poland at the region's highest court, the Court of Justice of the European Union (CJEU). As is usual in these cases, a preliminary opinion is offered by one of the CJEU's special advisers. It's not binding on the main court, but can offer interesting hints of what the final judgment might be. Unfortunately, in his analysis Advocate General Saugmandsgaard Øe recommends that the CJEU should dismiss the action brought by Poland (pdf), because in his view Article 17 of the EU Copyright Directive is compatible with freedom of expression and information.

That's a huge disappointment, since many hoped he would unequivocally rule that upload filters breach fundamental rights. However, the Advocate General's opinion is by no means a complete disaster for users of online sharing services. He recognizes the right of people to make "legitimate use of protected subject matter." Specifically, that means people must be able to rely on the EU's exceptions and limitations to copyright. Moreover:

In order for that right to be effective, providers of such [online sharing] services are not allowed to preventively block all content reproducing the protected subject matter identified by the rightholders, including lawful content. It would not be sufficient for users to have the possibility, under a complaints and redress mechanism, to have their legitimate content re-uploaded after such preventive blocking.

This is a huge point. It means that copyright companies cannot demand that upload filters block every use of their material, since that would prevent legal transformative uses such as memes, parodies, commentary etc. Saugmandsgaard Øe concludes with the following observation to the CJEU:

Consequently, sharing service providers must only detect and block content that is 'identical' or 'equivalent' to the protected subject matter identified by the rightholders, that is to say content the unlawfulness of which may be regarded as manifest in the light of the information provided by the rightholders. By contrast, in all ambiguous situations -- short extracts from works included in longer content, 'transformative' works, etc. -- in which, in particular, the application of exceptions and limitations to copyright is reasonably foreseeable, the content concerned should not be the subject of a preventive blocking measure. The risk of 'over-blocking' is thus minimised. Rightholders will have to request the removal or blocking of the content in question by means of substantiated notifications, or even refer the matter to a court for a ruling on the lawfulness of the content and, in the event that it is unlawful, order its removal and blocking.

Crucially, this says that unless it is absolutely clear-cut that there is copyright infringement -- because an identical, or equivalent copy is uploaded -- user uploads must not be blocked by default. Instead, a more detailed complaint must be made by copyright holders, possibly involving a request for courts to rule on the legality of a transformative use. That's very far from what those pushing for upload filters want, and represents a major limitation on such filters.
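In filtering terms, the Advocate General's standard amounts to something like the following sketch (entirely hypothetical code, not any platform's actual system): block only uploads whose fingerprint matches material identified by rightsholders, and route everything ambiguous to notice-and-review rather than blocking it in advance.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content fingerprint; an exact hash only catches 'identical' copies."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of works identified by rightsholders (hypothetical data).
CLAIMED = {fingerprint(b"full copy of a protected recording")}

def filter_decision(upload: bytes) -> str:
    """Apply the AG's standard: preventive blocking only for manifest cases."""
    if fingerprint(upload) in CLAIMED:
        return "block"             # manifestly infringing: identical copy
    # Short extracts, parodies, other transformative uses: no ex ante
    # blocking; rightsholders must instead file a substantiated notice.
    return "publish-and-review"

print(filter_decision(b"full copy of a protected recording"))  # block
print(filter_decision(b"a short extract used in a parody"))    # publish-and-review
```

A real system would need perceptual fingerprinting to catch "equivalent" (slightly altered) copies as well; the exact hash above only illustrates the "identical" half of the test. The key design point survives either way: anything short of a manifest match falls through to human review instead of automated blocking.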

It's an obvious compromise position, and as such could well be adopted by the CJEU when it hands down its definitive judgment at a later date. Saugmandsgaard Øe says that yes, upload filters are acceptable in the EU, but can only be used to block identical, or near-identical copies. In his full opinion, he also affirms strongly and repeatedly that other legal uses of copyright material must not be blocked by upload filters. And there's a nice sting in the tail of his analysis. In a Postscript, the Advocate General comments on the European Commission's recent "guidance" to national governments on how they should implement Article 17. As Techdirt noted last month, this guidance introduced a huge loophole that would let copyright companies "earmark" any upload that they claim "could cause significant economic harm", even if likely to be a legitimate use of protected subject matter. Earmarked uploads would lack key legal protections, and Saugmandsgaard Øe is having none of it:

If this is to be understood as meaning that those same providers should block content ex ante [in advance] simply on the basis of an assertion of a risk of significant economic harm by rightholders -- since the guidance does not contain any other criterion objectively limiting the 'earmarking' mechanism to specific cases -- even if that content is not manifestly infringing, I cannot agree with this, unless I alter all the considerations set out in this Opinion.

This is basically Advocate General-speak for "you must be out of your tiny minds".


Posted on Techdirt - 1 July 2021 @ 10:51am

Denmark's Media Companies Form 'Copyright Collective' To Force Google And Facebook To Pay More For Sending Them Traffic

from the still-a-bad-idea dept

One of the most outrageous ideas dreamt up by traditional media companies is that Internet companies like Google and Facebook should pay for the privilege of sending huge amounts of traffic to their sites. This "snippet tax", also known as the "link tax", was unfortunately enshrined in the EU Copyright Directive in 2019. More recently, Australia has brought in its own link tax, the News Media Bargaining Code, that is even worse than the EU approach.

The move from explicitly targeting snippets to forcing Internet companies to negotiate with the media is significant. It's a recognition that Google and Facebook could avoid paying the link tax if they stopped displaying snippets from media companies. The latter obviously don't want that, since they know it would cause a precipitous drop in the number of people visiting their titles. Instead they want Internet companies to pay up -- just "because". Media companies in Denmark have decided to do this as a group, reported here by the Financial Times (paywall):

Denmark's media industry is pioneering a new bargaining tactic with Google and Facebook over payments for news, with newspapers, broadcasters and internet start-ups joining forces to negotiate with the tech groups as a copyright collective.

Almost 30 Danish media companies will meet on Friday for their first general assembly as a collective bargaining organisation in a move they hope can provide inspiration for other countries in Europe and beyond.

Anders Krab-Johansen, chief executive of newspaper group Berlingske Media and head of the informal network behind the alliance, told the FT that the idea was to stop Google and Facebook negotiating a few deals that set the benchmark for the others. He hopes that the new "copyright collective" will have better luck squeezing more money out of Internet companies.

Changing the format of the negotiations doesn't hide the fact that this is still traditional media companies demanding to be paid for their own failure to innovate and move online quickly enough. It was a bad idea when it was framed as a link tax, and it's a bad idea now it's in the form of collective bargaining.


Posted on Techdirt - 25 June 2021 @ 1:50pm

Top EU Court Rules Online Platforms Are Not Liable For Copyright Infringements Of User Uploads, Unless They Actively Intervene

from the nice,-but-likely-to-be-superseded dept

One of the most contentious areas of Internet law is the extent to which sites are responsible for the actions of their users. One issue concerns user-uploaded materials: if these infringe on copyright, should the platform be held responsible too? The EU's highest court, the Court of Justice of the European Union (CJEU), has just ruled on two cases touching on this question. One concerned the posting of music recordings to YouTube, while the other involved medical textbooks published by Elsevier, which appeared on some filesharing sites. Both cases were before the Federal Court of Justice in Germany, which asked the CJEU to provide guidance on the liability of online platforms as regards copyright materials posted by users. The basic decision is straightforward (pdf), explained here by the court's press release:

the Court emphasises the indispensable role played by the platform operator and the deliberate nature of its intervention. That platform operator makes an 'act of communication' when it intervenes, in full knowledge of the consequences of its action, to give its customers access to a protected work, particularly where, in the absence of that intervention, those customers would not, in principle, be able to enjoy the broadcast work.

In that context, the Court finds that the operator of a video-sharing platform or a file-hosting and -sharing platform, on which users can illegally make protected content available to the public, does not make a 'communication to the public' of that content, within the meaning of [EU] Directive 2001/29 [on copyright], unless it contributes, beyond merely making that platform available, to giving access to such content to the public in breach of copyright.

Put simply, platforms need to be involved in making material available in some active way before they can be held liable.

In an excellent Twitter thread, Julia Reda points out some interesting aspects of the ruling. First, she notes that copyright companies have long tried to push the idea that platforms like YouTube, largely based on user-uploaded material, are automatically playing an "active" role, and are therefore not mere conduits. The latest CJEU ruling says that for a platform to be liable under the EU's eCommerce Directive "it must have knowledge of or awareness of specific illegal acts committed by its users relating to protected content that was uploaded to its platform."

However, a platform may be required to use "appropriate technical measures" to "counter credibly and effectively copyright infringements on that platform". Within the full judgment is the following comment by the judges:

YouTube has put in place various technological measures in order to prevent and put an end to copyright infringements on its platform, such as, inter alia, a notification button and a special alert procedure for reporting and arranging for illegal content to be removed, as well as a content verification program for checking content and content recognition software for facilitating the identification and designation of such content. Thus, it is apparent that that operator has adopted technological measures to counter credibly and effectively copyright infringements on its platform.

Reda points out that YouTube's "technological measures" are regarded by the court as "credible" and "effective", even though they do not use an upload filter of the kind that Article 17 of the EU Copyright Directive is likely to need. As she writes: "Providing a button that allows rightholders to easily notify infringements can be an appropriate technical measure."

That's good news, but it's important to remember that the current CJEU ruling refers to EU law as it was before the Copyright Directive came into force. As such, its views on upload filters are likely to be superseded by the important case brought by Poland, seeking to have them thrown out completely. It may be that the CJEU rules that Article 17's strict upload filters are legal in the EU, which would therefore negate the current judgment's more lenient view of what is needed. The first indication of which way the court may rule will come next month, when a CJEU adviser will offer a preliminary opinion on the matter. Although the new CJEU position on technological measures is welcome, it is the future ruling on Article 17 that will be decisive.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 7 June 2021 @ 8:05pm

European Commission Betrays Internet Users By Cravenly Introducing Huge Loophole For Copyright Companies In Upload Filter Guidance

from the over-to-you,-CJEU dept

As a recent Techdirt article noted, the European Commission was obliged to issue "guidance" on how to implement the infamous Article 17 upload filters required by the EU's Copyright Directive. It delayed doing so, evidently hoping that the adviser to the EU's top court, the Court of Justice of the European Union (CJEU), would release his opinion on Poland's attempt to get Article 17 struck down before the European Commission revealed its one-sided advice. That little gambit failed when the Advocate General announced that he would publish his opinion after the deadline for the release of the guidance. The European Commission has finally provided its advisory document on Article 17 and, as expected, it contains a real stinker of an idea. The best analysis of what the Commission has done, and why it is so disgraceful, comes from Julia Reda and Paul Keller on the Kluwer Copyright Blog. Although Article 17 effectively made upload filters mandatory, it also included some (weak) protections for users, to allow people to upload copyright material for legal uses such as memes, parody, criticism, etc., without being blocked. The copyright industry naturally hates any protections for users, and has persuaded the European Commission to eviscerate them:

According to the final guidance, rightholders can easily circumvent the principle that automatic blocking should be limited to manifestly infringing uses by "earmarking" content the "unauthorised online availability of which could cause significant economic harm to them" when requesting the blocking of those works. Uploads that include protected content thus "earmarked" do not benefit from the ex-ante protections for likely legitimate uses. The guidance does not establish any qualitative or quantitative requirements for rightholders to earmark their content. The mechanism is not limited to specific types of works, categories of rightholders, release windows, or any other objective criteria that could limit the application of this loophole.

The requirements that copyright companies must meet are so weak that it is probably inevitable that they will claim most uploads "could cause significant economic harm", and should therefore be earmarked. Here's what happens then: before it can be posted online, every earmarked upload requires a "rapid" human review of whether it is infringing or not. Leaving aside the fact that it is very hard for legal judgements to be both "rapid" and correct, there's also the problem that copyright companies will earmark millions of uploads (just look at DMCA notices), making it infeasible to carry out proper review. But the European Commission also says that if online platforms fail to carry out a human review of everything that is earmarked, and allow some unchecked items to be posted, they will lose their liability protection:

this means that service providers face the risk of losing the liability protections afforded to them by art. 17(4) unless they apply ex-ante human review to all uploads earmarked by rightholders as merely having the potential to "cause significant economic harm". This imposes a heavy burden on platform operators. Under these conditions rational service providers will have to revert to automatically blocking all uploads containing earmarked content at upload. The scenario described in the guidance is therefore identical to an implementation without safeguards: Platforms have no other choice but to block every upload that contains parts of a work that rightholders have told them is highly valuable.

Thus the already unsatisfactory user rights contained in Article 17 are rendered null and void because of the impossibility of following the European Commission's new guidance. That's evidently the result of recent lobbying from the copyright companies, since none of this was present in previous drafts of the guidance. Not content with making obligatory the upload filters that they swore would not be required, copyright maximalists now want to take away what few protections remain for users, thus ensuring that practically all legal uses of copyright material -- including memes -- are likely to be automatically blocked.

The Kluwer Copyright blog post points out that this approach was not at all necessary. As Techdirt reported a couple of weeks ago, Germany has managed to come up with an implementation of Article 17 that preserves most user rights, even if it is by no means perfect. The European Commission, by contrast, has cravenly given the copyright industry what it demanded, effectively stripping out those rights. But this cowardly move may backfire. Reda and Keller explain:

the Commission does not provide any justification or rationale why users' fundamental rights do not apply in situations where rightholders claim that there is the potential for them to suffer significant economic harm. It's hard to imagine that the CJEU will consider that the version of the guidance published today provides meaningful protection for users' rights when it has to determine the compliance of the directive with fundamental rights [in the case brought by Poland]. The Commission appears to be acutely aware of this as well and so it has wisely included the following disclaimer in the introductory section of the guidance (emphasis ours):

"The judgment of the Court of Justice of the European Union in the case C-401/19 will have implications for the implementation by the Member States of Article 17 and for the guidance. The guidance may need to be reviewed following that judgment".

In the end this may turn out to be the most meaningful sentence in the entire guidance.

It would be a fitting punishment for betraying the 450 million citizens the European Commission is supposed to serve, but rarely does, if this final overreach causes upload filters to be thrown out completely.


Posted on Techdirt - 4 June 2021 @ 3:30am

Google, Facebook And Chaos Computer Club Join To Fight New German Law Allowing Government Spies And Police To Use Trojans Against Innocent Citizens

from the strange-bedfellows dept

One of the curious aspects of Germany's surveillance activities is the routine use of so-called "state trojans" -- software that is placed surreptitiously on a suspect's system by the authorities to allow it to be monitored and controlled in real time over the Internet. The big advantage of this approach is that it lets intelligence agencies get around end-to-end encryption without needing backdoors in the code. Instead, the trojan sits at one end of the conversation, outside the encryption, which lets it eavesdrop without any problem. This approach goes back at least a decade, and now seems to be an accepted technique in the country, which is rather surprising given Germany's unhappy history of state surveillance and control during the previous century. The German government likes state trojans so much it wants to give the option to even more of its services, as Netzpolitik explains (original in German, translation by DeepL):

At the end of each grand coalition's legislative period, there was always a small fireworks display of further surveillance measures. Unfortunately, you can always bet on that, and this thesis is confirmed this time as well.

The bill to amend the law on the protection of the [German] constitution is about to be passed by the grand coalition [of the CDU/CSU and SPD parties]. This will give all German intelligence services hacking powers and allow them to use state trojans in the future. At the same time, the Federal Police Act will also be passed, which will not only allow the authorities to use state trojans, but will also give them the power to hack people who have not committed a crime or are suspected of having done so.

The new law would require Internet service providers to cooperate actively in installing trojans on their customers' devices. Such an obligation would radically change and undermine the relationship between Internet suppliers and their customers. It's such a bad idea that it has managed to bring together the most unlikely bedfellows -- including Google, Facebook and the archetypal hacker group Chaos Computer Club. In a joint letter to the German government (original in German, translation by DeepL), they call for:

Not to take any further legal measures that would weaken or break encryption.

In particular, to waive the obligation for companies to cooperate in the reform of the Federal Law on the Protection of the Constitution, which would make companies the extended arm of the intelligence services and significantly jeopardize cybersecurity.

Not to rush the adaptation of the constitutional protection law with the duty to cooperate through the parliamentary procedure, but to involve the business community and civil society. This requires a dialog with citizens, civil society and industry.

In addition, we call on the federal government and the [national parliament] to strengthen encryption to protect private and professional communications in the medium and long term.

It's good to see such a united front against this terrible idea. But the German government's love of state trojans is probably too ingrained now for an open letter to have much effect.


Posted on Techdirt - 26 May 2021 @ 1:37pm

Upload Filters Will Always Be A Bad Idea, But Germany's New Implementation Of Article 17 Is An Attempt To Provide Some Protection For Users, Which Others May Follow

from the but-what-will-the-CJEU-say? dept

The EU's Copyright Directive was passed back in 2019, and the two-year period for implementing the law in national legislation is almost up. The text's contradictory requirements to stop infringing material from being posted online without imposing a general requirement to monitor users, which is not permitted under EU law, have proved difficult for governments to deal with. France aims to solve this by ignoring even the limited user protections laid down by the Directive. Germany has been having a rather fraught debate about how exactly Article 17, which implicitly requires upload filters, should be implemented. One good idea to allow users to "pre-flag" as legal the material they upload was jettisoned. That led to fears that the country's implementation of Article 17 would be as bad as France's. But the final version of the law does attempt to ensure that automated filters -- now admitted as indispensable, despite earlier assurances they were optional -- do not block user uploads that do not infringe copyright. Communia explains:

the German implementation relies on the concept of "uses presumably authorised by law", which must not be blocked automatically. For an upload to qualify as "presumably authorised by law", it needs to fulfil the following cumulative criteria:

The use must consist of less than 50% of the original protected work,
The use must combine the parts of the work with other content, and
The use must be minor (a non-commercial use of less than 15 seconds of audio or video, 160 characters of text or 125 kB of graphics) or, if it generates significant revenues or exceeds these thresholds, the user must flag it as being covered by an exception.
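As a purely illustrative sketch, the cumulative test above can be written as a small check. Only the numeric thresholds come from the law as quoted; the function, parameter names and media-type labels are hypothetical:

```python
# Illustrative sketch of the cumulative "presumably authorised by law" test
# in the German Article 17 implementation. Only the thresholds are taken
# from the quoted text; all names here are hypothetical.

AUDIO_VIDEO_LIMIT_SECONDS = 15
TEXT_LIMIT_CHARS = 160
IMAGE_LIMIT_KB = 125

def presumably_authorised(fraction_of_work: float,
                          combined_with_other_content: bool,
                          media_type: str,
                          extent: float,
                          commercial: bool,
                          flagged_as_exception: bool) -> bool:
    # 1. The use must consist of less than 50% of the protected work.
    if fraction_of_work >= 0.5:
        return False
    # 2. The excerpt must be combined with other content.
    if not combined_with_other_content:
        return False
    # 3. The use must be "minor": non-commercial and under the threshold...
    under_threshold = (
        (media_type in ("audio", "video") and extent < AUDIO_VIDEO_LIMIT_SECONDS)
        or (media_type == "text" and extent < TEXT_LIMIT_CHARS)
        or (media_type == "image" and extent < IMAGE_LIMIT_KB)
    )
    minor = not commercial and under_threshold
    # ...or, above the thresholds or with significant revenues, the user
    # must have flagged the upload as covered by a copyright exception.
    return minor or flagged_as_exception

# A 10-second non-commercial video clip mixed into a remix qualifies;
# a 30-second clip only qualifies if the user flags an exception.
assert presumably_authorised(0.2, True, "video", 10, False, False)
assert not presumably_authorised(0.2, True, "video", 30, False, False)
assert presumably_authorised(0.2, True, "video", 30, False, True)
```

An upload passing such a test may not be blocked automatically; it must stay online while any rights holder complaint is reviewed by a human.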

Although it's good that this "presumably authorised by law" use is enshrined in law, the actual limits are absurd. For example, the name alone of the EU Copyright Directive in German is longer than 160 characters. Copyright holders can still challenge the legality of material, but platforms have to keep the uploads online until the complaints have been reviewed by a human. As former Pirate Party MEP Julia Reda writes on the Gesellschaft für Freiheitsrechte (Society for Civil Liberties) site, there's another feature of the new German law that might help keep upload filters in check, at least a little (original in German, translation via DeepL):

In order to enforce freedom of expression and artistic freedom against upload filters gone wild, the draft law provides for a right of association action for non-commercial associations dedicated to protecting the interests of users. These associations can take legal action against platforms that repeatedly block legal content. In addition, users will be able to claim damages against false copyright claims. The Society for Civil Liberties will use these possibilities if it becomes necessary.

Those are important new options for getting material back online, and discouraging over-blocking. Reda notes that there is also good news for popular online activities such as caricature, parody and pastiche, which will be permitted without restrictions:

This copyright exception includes memes, remixes, mashups, fan fiction and fan art. The German government's draft proposal was to restrict this right to remix in the German copyright reform, although it was adopted as mandatory at the EU level as part of Article 17. The German government's proposal that these uses should only be allowed "provided that the extent of the use is justified by the specific purpose" generated an outcry in the fan fiction community. This has apparently had an effect, because the [German parliament's] legal committee has removed this restriction. Fans in Germany can now look forward to a solid legal basis for fan art and remix culture.

The new German copyright law also brings in copyright exceptions for online teaching, and text and data mining. Cultural institutions such as libraries or archives will be permitted to make their collections available online if these works are no longer commercially available. Those are all welcome, but it is the implementation of Article 17 that is likely to have the most impact. As the Communia blog post notes:

the German implementation law sets a new standard for the implementation of the [Digital Single Market] directive. This is especially true for the implementation of Article 17. With the Commission having missed the chance to issue guidance for the member states in a timely manner [discussed previously on Techdirt], other member states who seek to implement Article 17 in a fundamental rights-compliant way should look at the German law for guidance.

Since the new German copyright law could become the model for other EU nations as they implement Article 17 -- except for France, of course -- it's good news that it has a number of positive elements. Those are likely to prove crucial if -- or rather when -- the EU Copyright Directive faces another legal challenge at the CJEU, the Court of Justice of the European Union (Poland has already lodged one). The complaint is likely to be that Article 17 cannot be implemented without violating the fundamental rights of users. In that case, the CJEU will have to decide whether Germany's innovative approach goes far enough in preserving them.


Posted on Techdirt - 20 May 2021 @ 1:53pm

Redditors Launch A 'Rescue Mission' For Embattled Sci-Hub, With The Ultimate Aim Of Building A Decentralized Version

from the but-what-about-protecting-Elbakyan? dept

Techdirt has just written about belated news that the FBI gained access two years ago to the Apple account of Alexandra Elbakyan, the founder of Sci-Hub. This is part of a continuing attempt to stop the widespread sharing of academic papers, mostly paid for by the public, and currently trapped behind expensive paywalls. You might think somebody helping scholars spread their work to a wider audience would be rewarded with prizes and grants, not pursued by the FBI and DOJ. But of course not, because, well, copyright. It's easy to feel angry but helpless when confronted with this kind of bullying by publishing giants like Elsevier, but a group of publicly spirited Redditors aim to do something about it:

It's time we sent Elsevier and the USDOJ a clearer message about the fate of Sci-Hub and open science: we are the library, we do not get silenced, we do not shut down our computers, and we are many.

They have initiated what they term a "Rescue Mission for Sci-Hub", in order to prepare for a possible shutdown of the site:

A handful of Library Genesis seeders are currently seeding the Sci-Hub torrents. There are 850 scihub torrents, each containing 100,000 scientific articles, to a total of 85 million scientific articles: 77TB. This is the complete Sci-Hub database. We need to protect this.

The Redditors are calling for "85 datahoarders to store and seed 1TB of articles each, 10 torrents each". The idea is to download 10 random torrents, then seed them for as long as possible. Once enough people start downloading random torrents using these seeds, the Sci-Hub holdings will be safe. That would then lead to the "final wave":

Development for an open source Sci-Hub. freereadorg/awesome-libgen is a collection of open source achievements based on the Sci-Hub and Library Genesis databases. Open source de-centralization of Sci-Hub is the ultimate goal here, and this begins with the data, but it is going to take years of developer sweat to carry these libraries into the future.

The centralized nature of Sci-Hub is certainly its greatest weakness, since it provides publishers with just a few targets to aim for, both legally and technically. A truly decentralized version would solve that problem, but requires a lot of work, as the Reddit post notes. Still, at least this "rescue plan" means people can do something practical to help Sci-Hub; sadly, protecting Elbakyan is harder.
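The distribution arithmetic behind the rescue plan -- 850 torrents, 85 volunteers, 10 random torrents apiece -- can be sketched as a tiny helper script. This is purely illustrative, not from the Reddit post, and it assumes the torrents are simply numbered 1 to 850:

```python
import random

TOTAL_TORRENTS = 850       # Sci-Hub torrents, ~100,000 articles each
TORRENTS_PER_SEEDER = 10   # ~1 TB per volunteer at ~90 GB per torrent

def pick_torrents(rng: random.Random = None) -> list[int]:
    """Pick a duplicate-free random set of torrent numbers to seed.

    With 85 volunteers independently choosing 10 of the 850 torrents at
    random, coverage of the whole archive evens out without any central
    coordinator deciding who seeds what.
    """
    rng = rng or random.Random()
    return sorted(rng.sample(range(1, TOTAL_TORRENTS + 1), TORRENTS_PER_SEEDER))

print(pick_torrents())  # ten distinct torrent numbers between 1 and 850
```

The random choice is the point of the design: no registry of who holds what is needed, which suits a decentralized rescue effort.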


Posted on Techdirt - 17 May 2021 @ 3:30pm

South Korean Real-Time Video 'Social Discovery' App Might Be The New ChatRoulette -- If It Can Keep Out The Lettuce Fornicators

from the because-content-moderation-is-easy,-right? dept

Remember ChatRoulette? Eleven years ago, the Web site that pairs random people together for webcam interactions was as hot as today's Clubhouse. A 2010 piece in New York Magazine has a perfect distillation of the ChatRoulette experience at the time:

There was a man who wore a deer head and opened every conversation with "What up DOE!?" A guy from Sweden was reportedly speed-drawing strangers' portraits. Someone with a guitar was improvising songs for anyone who'd give him a topic. One man popped up on people's screens in the act of fornicating with a head of lettuce. Others dressed like ninjas, tried to persuade women to expose themselves, and played spontaneous transcontinental games of Connect Four. Occasionally, people even made nonvirtual connections: One punk-music blogger met a group of people from Michigan who ended up driving eleven hours to crash at his house for a concert in New York. And then, of course, fairly often, there was this kind of thing: "I saw some hot chicks then all of a sudden there was a man with a glass in his butthole."

As a Techdirt post explored, more recently ChatRoulette has been trying to find a way to keep the best elements of the idea without it degenerating into a peepshow for exhibitionists and worse. A real-time video meetup service from South Korea, founded in 2014 and called Azar, is grappling with the same issue. An article on the Rest of the World site explains:

Because it's an app rather than a website, it benefits from tying users to their smartphones, making it harder for banned accounts to come back online under new names. The company says it also uses artificial intelligence to moderate inappropriate content and allows users to easily report violations themselves.

As an approach, it seems to be popular with users: there have been over 540 million cumulative downloads. Its owners claim it is the "highest grossing 1-on-1 live video chat app globally". Here's where the money comes from:

Much of that revenue is likely driven by in-app purchases. When users tap through Azar, they’re greeted by a barrage of prompts encouraging them to buy Gems -- tokens used to acquire everything from stickers and virtual gifts to extra daily matches. Users can also pay $14.99 to gain "VIP" status, which allows them to narrow matches down according to stated gender and country (the cost may vary in different markets).

In February, the original owner Hyperconnect was bought for $1.725 billion by Match Group. The latter already owns many similar dating services, including Tinder, Match, Meetic, OKCupid, Hinge, Pairs, PlentyOfFish and OurTime. One reason for the acquisition (pdf) may be that 77% of Hyperconnect's users are in Asia, with only 17% in Europe, and 6% in the US. An obvious move for Match Group would be to promote Hyperconnect's products outside Asia, where there seems plenty of room for growth. Azar may not be well-known in the West today, but that could change if its app-based approach and AI moderation allows it to catch on like ChatRoulette a decade ago, but without the lettuce fornicators.


Posted on Techdirt - 7 May 2021 @ 12:17pm

Uganda Said It Would Ban VPNs To Prevent Users From Dodging Its Absurd New Social Media Tax: Guess How That Worked Out?

from the OK,-we'll-just-tax-everything-instead dept

Three years ago we wrote about African countries that thought taxing blogs and social media was an easy way to raise money -- and to muzzle inconvenient voices. A year later, Techdirt was reporting on a sudden boom in VPN use among Ugandans keen to avoid that country's levy on social media use. As Karl Bode reported, back then the authorities were pressuring ISPs to ban the use of VPNs. A post on the Rest of the World site has a useful update on how things have worked out since then. First, the money:

after three years, the tax, which amounts to about 5 cents (200 Ugandan shillings) per day to access any of more than 60 social media platforms, has failed. It has neither helped the government raise significant revenue nor curtailed lively online discussions by young Ugandans.

In its first fiscal year, the Ugandan government was projected to collect about $77.8 million (248 billion Ugandan shillings) from social media tax. Instead, it raised only about $13.5 million (49.5 billion shillings). In the next fiscal year, Uganda lowered its expectations and projected to collect $16.5 million but only just slightly beat its target by raising $18.7 million.

The reason, as expected, is that people are turning to VPNs, often the free ones, despite the intrusive pop-up ads and questionable security. It turns out that it's harder to ban VPNs than the government thought. So the Ugandan authorities have come up with Plan B:

Thanks to VPNs users have found a way around the social media tax. That's why, on April 29, the government replaced the social media tax with a 12% excise duty on internet data that will likely hike the cost of internet access in the landlocked country that already has some of the highest internet costs in the region.

The idea here is presumably that even if people use VPNs, they will have to pay the data tax. That will probably work, but the move brings with it a huge problem. It will make using the Internet for any purpose in Uganda more expensive, which will not only discourage ordinary people from taking advantage of it, but will also throttle Ugandan online startups. However much the new tax brings in, it is likely to be far less than the deeper economic harm this short-sighted move will inflict.
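As a quick sanity check on the revenue figures quoted above (the percentages are our own arithmetic, not from the Rest of the World piece):

```python
# Uganda's social media tax: projected vs actual revenue, in USD millions,
# using the figures quoted above.
year1_projected, year1_actual = 77.8, 13.5
year2_projected, year2_actual = 16.5, 18.7

shortfall = 100 * (1 - year1_actual / year1_projected)
print(f"Year 1: {shortfall:.0f}% below projection")          # ~83% shortfall

overshoot = 100 * (year2_actual / year2_projected - 1)
print(f"Year 2: {overshoot:.0f}% above the lowered target")  # ~13% over
```

In other words, the first-year take was barely a sixth of what was projected, and even the "success" of year two came only after the target itself had been cut to roughly a fifth of the original projection.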


Posted on Techdirt - 6 May 2021 @ 5:28am

Putin's Crackdown On Demonstrators Adds A Sadistic Twist: Using Surveillance Cameras To Identify People, But To Arrest Them Only Days Or Months Later

from the waiting-for-the-knock-on-the-door dept

It's hardly news that Vladimir Putin is cracking down on supporters of Alexey Navalny, or on the journalists who are brave enough to report on the wave of protests in support of the imprisoned opposition leader. But there are some interesting wrinkles to how this is happening. For example, in a move that will not surprise Techdirt readers, Moscow's massive facial recognition camera network -- supposedly set up to enforce quarantine restrictions, and to catch criminals -- has been re-purposed, as Bloomberg reports:

Police tapped the surveillance system to identify and detain dozens of people who attended last week's protests in the Russian capital in support of jailed Kremlin foe Alexey Navalny. More than 50 were picked up over the following days, including several journalists, according to OVD-Info, an independent human-rights monitoring group that gathers information on detentions.

Nothing too surprising there, perhaps. But the site points out an important shift in the Russian authorities' tactics. In the past, the police detained thousands of people who had participated in unsanctioned demonstrations. This time, a token two to three percent of the protesters at a rally were arrested, apparently allowing the rest to go free. However, this is actually part of a new and even more cruel approach:

in recent days, Russian police have unveiled a new strategy, using surveillance-camera footage and other techniques to identify demonstrators and track them down, days after the event.

The opposition politician and political analyst Leonid Gozman explains:

"Now we have a different situation," he continued. "They are signaling to everyone: 'Go ahead and march, guys, but a year from now you can expect we'll come, expect a knock at your door. And we'll come or not as we wish....' Now they have placed everyone in that position."

It's a clever approach. It means anyone coming away from attending a demo is unsure whether they have been identified there. The absence of any immediate action by the authorities no longer means protesters have escaped notice. Instead, a kind of digital sword of Damocles hangs over them, waiting to fall at some future, unknown date. The painful uncertainty this generates will probably be enough to dissuade many people from taking part in future demos -- a big win for the authorities, obtained at very low cost.

This cat-and-mouse game with protesters is only possible thanks to Moscow's blanket surveillance cameras and advanced facial recognition systems. Where, in the past, police could only arrest people at a demonstration on the spot, because there was no sure way to find them afterwards, now their faces on CCTV are enough. Once photographed and identified, there is no need to arrest them immediately, which allows the authorities to create this new and debilitating anxiety among protesters that one day there will be that dreaded knock on the door.


Posted on Techdirt - 29 April 2021 @ 12:19pm

It Took Four Months And Thousands Of Dollars To Overturn One Manifestly Stupid Upload Block: Imagine How Bad It Will Soon Be With EU Copyright Directive's Blanket Use Of Filters

from the this-is-gonna-be-bad dept

The upload filters required by the EU's Copyright Directive are not yet in operation -- even though France seems keen to bring them in as soon as possible. So we have been spared for the moment the inevitable harm to freedom of speech and loss of online users' rights that this ill-conceived and dishonest legislation will cause. But a minor case in the Czech Republic provides a foretaste of what is to come. It concerns the Czech file-sharing and hosting site Ulož.to. TorrentFreak has the details:

Late last year, the Municipal Court in Prague ruled that Ulož must filter and block files that reference the word "Šarlatán" ('Charlatan') which is also the name of a Czech movie.

Blocking files that merely reference a particular word is a ridiculously crude approach: all kinds of material will be caught and blocked. Fortunately, the stupidity of this move, requested by the movie distributor Cinemart, was understood by the High Court in Prague when Ulož appealed against the order. However, it took four months to overturn the preliminary filtering order, during which time Ulož was obliged to comply with the lower court's instructions. Not unreasonably, it is now seeking compensation for the unnecessary work this entailed, as well as for its legal costs:

Ulož is seeking 585,000 Czech Koruna (~$27,320) to compensate for the filtering and monitoring costs, and another 200,000 (~$9,340) to cover the legal costs and fees.

In itself, it's hardly a ground-breaking result. But even for this minor case, it required considerable amounts of time and money before a manifestly unjust ruling was thrown out. Imagine how things will be once the EU's new upload filters start to operate. They will give rise to many cases -- hundreds? thousands? more? -- where material is wrongly blocked, but where sites are unwilling to allow it to be posted on appeal. Most members of the public will give up at this point, deterred by the prospect of unknown costs for what are likely to be far more complex legal questions than the simple one considered in Prague. All in all, the isolated case of Ulož does not bode well for what will soon be the painful everyday reality of copyright across the whole of the EU.


Posted on Techdirt - 23 April 2021 @ 1:47pm

Irony Alert: US Could Block Personal Data Transfers To Ireland, European Home Of Digital Giants, Because GDPR Is Not Being Enforced Properly

from the biter-bit dept

Last year, the EU's top court threw out the Privacy Shield framework for transferring personal data between the EU and US. The court decided that the NSA's surveillance practices meant that the personal data of EU citizens was not protected to the degree required by the GDPR when it was sent to the US. This was the second time that such an agreement had been struck down: before, there was Safe Harbor, which failed for similar reasons. The absence of a simple procedure for sending EU personal data to the US is bad news for companies that need to do this on a regular basis. No wonder, then, that the US and EU are trying to come up with a new legal framework to allow it, as this CNBC story notes:

Officials from the EU and U.S. are "intensifying negotiations" on a new pact for transatlantic data transfers, trying to solve the messy issue of personal information that is transferred between the two regions.

Even if they manage to come up with one, there's no guarantee that it won't be shot down yet again by the courts, unless the underlying issues of NSA surveillance are addressed in some way -- no easy task. Meanwhile, there's been a fascinating development on the US side, reported here by The Irish Times:

The US Senate is to debate a proposal to limit foreign countries' access to US citizens' personal data and to introduce a licence requirement for foreign companies that trade in this information.

The draft "Protecting Americans' Data From Foreign Surveillance Act", presented on Thursday by Democratic Senator Ron Wyden of Oregon, is aimed primarily at curbing the sale and theft of data by "shady data brokers" to "hostile" foreign governments such as China.

The law may be aimed primarily at China, but its reach is wide, and it could hit an unlikely target. As the Irish Council for Civil Liberties (ICCL) explains, the new Bill (pdf) aims to stop the personal data of US citizens being transferred to locations with inadequate data protection -- just as the EU's GDPR does. But according to the ICCL, one country that may fall into this category of dodgy data handling is Ireland:

ICCL understands from those who wrote the draft Bill that Ireland's failure to enforce the GDPR is of particular concern. The Bill intentionally uses language from the GDPR, and targets this enforcement failure. The draft Bill makes clear that merely enacting strong data protection law such as the GDPR is not enough. That law must be enforced.

Most digital giants have their European headquarters in Ireland. Under the GDPR, it is Ireland's Data Protection Commission (DPC) that must investigate and ultimately fine these companies for their GDPR infringements anywhere in the EU. The DPC has opened many data privacy inquiries (pdf), but has so far failed to impose serious fines. Without strict enforcement by the Irish authorities, there is a growing feeling that the GDPR could be fatally undermined. Hence the risk that the US might not allow personal data to be transferred to Ireland, if the new "Protecting Americans' Data From Foreign Surveillance Act" becomes law. Given the long-standing concerns over the protection of personal data flows from the EU to the US, that would be a rather ironic turn of events.



Posted on Techdirt - 22 April 2021 @ 12:10pm

European Commission's Attempt To Backtrack On Its Promise To Defend Fundamental Rights In Upload Filter Implementations May Backfire Badly

from the well-played,-Advocate-General dept

One of the most disgraceful aspects of the EU Copyright Directive saga was the shameless way its supporters swore that upload filters would not be required -- despite the evident fact that there was no other way to implement the new law's requirements. And indeed, once the legislation was passed, France lost no time in pushing for upload filters. Worse, its own implementation ignored what few protections there were for users' fundamental rights. Fortunately, back in 2019, the Polish government made a formal request for the EU's top court, the Court of Justice of the European Union (CJEU), to throw out upload filters. That case is still grinding its way through the EU's legal system, but its mere existence could play an important role as EU member states grapple with the impossible task of passing national laws to implement the EU Copyright Directive.

To help them do that, the European Commission said it would release guidance on how to reconcile the contradictory requirements of upload filters and user rights. As a post by Communia accompanying an open letter to the Commission (pdf) explains, the first draft made it clear that national implementations of Article 17 -- upload filters -- must contain built-in protection that limits the automated blocking of uploads to situations where material is "manifestly" infringing. Since that promising start, there has been no sign of the final guidance. Instead, as Communia notes, there has been lots of lobbying against user rights:

Over the past few months the final version of the guidance has been the object of intense behind-the-scenes political wrangling between different parts of the Commission. In February, MEPs critical of the principles expressed in the draft guidance held a closed door meeting with Commission representatives and select Member States opposing the Commission's position. In the following week a high ranking member of the Cabinet of Executive Vice President Margrethe Vestager -- who oversees this file -- received a delegation of rightholder organisations who have been rallying against the principles underpinning the Commission's draft guidance to discuss the Copyright Directive.

It looks increasingly likely that the European Commission is planning to cave in to the usual copyright bullies. But it has a major problem in the form of the Polish action at the CJEU. If its guidance fails to defend fundamental user rights, and national implementations ignore them, it is more likely that the CJEU will strike down Article 17:

a weakened version of the guidance would also undermine the Commission's credibility with the CJEU, who ultimately needs to decide on the fundamental rights compliance of Article 17. Having argued that the upcoming guidance would signal a strong commitment to protecting users' fundamental rights, any weakening of this position by the Commission would give the Court additional reasons to annul Article 17 (as requested by the Republic of Poland).

It seems the European Commission has been unwilling to release its long-awaited and watered-down guidance because it wanted the CJEU's adviser, the Advocate General, to publish his opinion before the bad news came. But the Advocate General has outplayed the European Commission. He has announced that he won't be releasing his opinion until July, rather than this week, as originally planned. By then, the draft guidance will have been issued, because EU member states must implement the Copyright Directive soon. So either the European Commission defends user rights in its official implementation guidance, or else it risks having the Advocate General and CJEU throw out upload filters completely.



Posted on Techdirt - 15 April 2021 @ 1:43pm

France Plans To Repeat Hadopi's Costly Mistakes By Turning It Into An Even Bigger, Even More Wrong-headed Anti-Piracy Body Called Arcom

from the will-they-ever-learn? dept

Techdirt covered the story of France's "three strikes" law, later known as Hadopi after the body overseeing it, for over ten years. What became a long-running farce eventually cost French taxpayers €82 million, and generated just €87,000 in fines. A rational government might draw the obvious conclusion that trying to stamp out unauthorized downloads using the crude instrument of fines and threats was the wrong approach. Oddly, though, the French government has decided that Hadopi was such a stunning and embarrassing failure that it wants to do it again, but on an even grander scale, as a story on Euractiv reports:

A new super-regulator, the Autorité de régulation de la communication audiovisuelle et numérique (ARCOM) is to be created from the merger of the Haute Autorité pour la diffusion des œuvres et la protection des droits sur Internet (HADOPI) and the Conseil supérieur de l’audiovisuel (CSA) in order to "step up the fight against pirate sites and to include this action in a broader policy of regulating online content", according to the Ministry of Culture website.

The merger is part of a wide-ranging new law (original in French) that seeks to regulate many aspects of the online world in France, mostly in wrong-headed ways. Next Inpact has an excellent run-down on what is included in the proposed text (original in French). The main elements include tackling unauthorized downloads; propaganda aimed at convincing young people to love copyright; encouraging new services offering material (about the only sensible idea in the bill); and a mission to monitor the use of "technical protection measures" like DRM. In addition, the new law aims to combat sites with infringing material by using blacklists, to tackle mirror sites, and to shut down unauthorized services offering sports content.

Given the French lawmakers' willingness to grant lazy copyright companies whatever new legal options they want, however unbalanced or disproportionate in terms of basic rights and freedoms, there seems little chance the bill will be thrown out or even substantially modified. France's Ministry of Culture is certainly fully behind it. In a press release, it went so far as to claim (original in French):

This ambitious bill is fundamental for the defense of French creativity.

It really isn't. Moreover, the French government said similar things about Hadopi, and look what happened there.


