Glyn Moody's Techdirt Profile

Posted on Techdirt - 24 September 2018 @ 10:44am

China Actively Collecting Zero-Days For Use By Its Intelligence Agencies -- Just Like The West

from the no-moral-high-ground-there,-then dept

It all seems so far away now, but in 2013, during the early days of the Snowden revelations, a story about the NSA's activities emerged that apparently came from a different source. Bloomberg reported (behind a paywall, summarized by Ars Technica) that Microsoft was providing the NSA with information about newly-discovered bugs in the company's software before it patched them. It gave the NSA a window of opportunity during which it could take advantage of those flaws in order to gain access to computer systems of interest. Later that year, the Washington Post reported that the NSA was spending millions of dollars per year to acquire other zero-days from malware vendors.

A stockpile of vulnerabilities and hacking tools is great -- until its contents leak out, which is precisely what seems to have happened several times with the NSA's collection. The harm that lapse can cause was vividly demonstrated by the WannaCry ransomware. It was built on EternalBlue, an exploit of a Windows flaw that was part of the NSA's toolkit, and it caused very serious problems for companies -- and hospitals -- around the world.

The other big problem with the NSA -- or the UK's GCHQ, or Germany's BND -- taking advantage of zero-days in this way is that it makes it inevitable that other actors will do the same. An article on the Access Now site confirms that China is indeed seeking out software flaws that it can use for attacking other systems:

In November 2017, Recorded Future published research on the publication speed for China's National Vulnerability Database (with the memorable acronym CNNVD). When they initially conducted this research, they concluded that China actually evaluates and reports vulnerabilities faster than the U.S. However, when they revisited their findings at a later date, they discovered that a majority of the figures had been altered to hide a much longer processing period during which the Chinese government could assess whether a vulnerability would be useful in intelligence operations.
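
To make that measurement concrete, here is a minimal Python sketch of the kind of publication-lag analysis Recorded Future describes. The CVE identifiers, dates and 30-day threshold are all invented for illustration; a real analysis would pull disclosure dates from the NVD and CNNVD feeds.

```python
# Minimal sketch of a publication-lag analysis; all data is invented.
from datetime import date

# (vulnerability id, first public disclosure, CNNVD publication date)
SAMPLE = [
    ("CVE-2017-0001", date(2017, 3, 1), date(2017, 3, 10)),
    ("CVE-2017-0002", date(2017, 4, 5), date(2017, 8, 20)),  # suspiciously long gap
    ("CVE-2017-0003", date(2017, 5, 2), date(2017, 5, 4)),
]

def publication_lags(entries):
    """Map each vulnerability id to its processing period in days."""
    return {vid: (published - disclosed).days
            for vid, disclosed, published in entries}

def flag_outliers(lags, threshold_days=30):
    """Flag entries whose processing period exceeds the threshold --
    the pattern the research associated with intelligence review."""
    return [vid for vid, lag in lags.items() if lag > threshold_days]

lags = publication_lags(SAMPLE)
print(lags)                 # {'CVE-2017-0001': 9, 'CVE-2017-0002': 137, 'CVE-2017-0003': 2}
print(flag_outliers(lags))  # ['CVE-2017-0002']
```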

As the Access Now article explains, the Chinese authorities have gone beyond simply keeping zero-days quiet for as long as possible. They are actively discouraging Chinese white hats from participating in international hacking competitions, because doing so would help Western companies learn about bugs that might otherwise be exploitable by China's intelligence services. This is really bad news for the rest of us. It means that China's huge and growing pool of expert coders is now unlikely to report bugs to software companies; instead, those bugs will be passed to the CNNVD for assessment. Not only will bug fixes take longer to appear, exposing users to security risks, but the Chinese may even weaponize the zero-days in order to break into other systems.

Another regrettable aspect of this development is that Western countries like the US and UK can hardly point fingers here, since they have been using zero-days in precisely this way for years. The fact that China -- and presumably Russia, North Korea and Iran amongst others -- have joined the club underlines what a stupid move this was. It may have provided a short-term advantage for the West, but now that it's become the norm for intelligence agencies, the long-term effect is to reduce the security of computer systems everywhere by leaving known vulnerabilities unpatched. It's an unwinnable digital arms race that will be hard to stop now. It also underlines why adding any kind of weakness to cryptographic systems would be an incredibly reckless escalation of an approach that has already put lives at risk.

Posted on Free Speech - 19 September 2018 @ 11:59am

Tanzania Plans To Outlaw Fact-Checking Of Government Statistics

from the dodgy-data dept

Back in April, Techdirt wrote about a set of regulations brought in by the Tanzanian government that required people there to pay around $900 per year for a license to blog. Despite the very high costs it imposes on people -- Tanzania's GDP per capita was under $900 in 2016 -- it seems the authorities are serious about enforcing the law. The iAfrikan site reported in June:

Popular Tanzanian forums and "leaks" website, Jamii Forums, has been temporarily shut down by government as it has not complied with the new regulations and license fees required of online content creators in Tanzania. This comes after Tanzania Communications Regulatory Authority (TCRA) issued a notice to Jamii Forums reminding them that it is a legal offense to publish content on the Internet without having registered and paid for a license.

The Swahili-language site Jamii Forums is back online now. But the Tanzanian authorities are not resting on their laurels when it comes to introducing ridiculous laws. Here's another one that's arguably worse than charging bloggers to post:

[President John] Magufuli and his colleagues are now looking to outlaw fact checking thanks to proposed amendments to the Statistics Act, 2015.

"The principal Act is amended by adding immediately after section 24 the following: 24A.-(1) Any person who is authorised by the Bureau to process any official statistics, shall before publishing or communicating such information to the public, obtain an authorisation from the Bureau. (2) A person shall not disseminate or otherwise communicate to the public any statistical information which is intended to invalidate, distort, or discredit official statistics," reads the proposed amendments to Tanzania's Statistics Act, 2015 as published in the Gazette of the United Republic of Tanzania No. 23 Vol. 99.

As the iAfrikan article points out, the amendments will mean that statistics published by the Tanzanian government must be regarded as correct, however absurd or obviously erroneous they might be. Moreover, it will be illegal for independent researchers to publish any other figures that contradict, or even simply call into question, official statistics.

This is presumably born of a thin-skinned government that wants to avoid even the mildest criticism of its policies or plans. But it seems certain to backfire badly. If statistics are wrong, but no one can correct them, there is the risk that Tanzanian businesses, organizations and citizens will make bad decisions based on this dodgy data. That could lead to harmful consequences for the economy and society, which the Tanzanian government might well be tempted to cover up by issuing yet more incorrect statistics. Without open and honest feedback to correct this behavior, there could be an ever-worsening cascade of misinformation and lies until public trust in the government collapses completely. Does President Magufuli really want that?

Posted on Techdirt - 17 September 2018 @ 7:49pm

Software Patch Claimed To Allow Aadhaar's Security To Be Bypassed, Calling Into Question Biometric Database's Integrity

from the but-it's-ok,-we-already-blacklisted-the-50,000-rogue-operators-that-we-found dept

Earlier this year, we wrote about what seemed to be a fairly serious breach of security at the world's largest biometric database, India's Aadhaar. The Indian edition of Huffington Post now reports on what looks like an even more grave problem:

The authenticity of the data stored in India's controversial Aadhaar identity database, which contains the biometrics and personal information of over 1 billion Indians, has been compromised by a software patch that disables critical security features of the software used to enrol new Aadhaar users, a three month-long investigation by HuffPost India reveals.

According to the article, the patch can be bought for just Rs 2,500 (around $35). The easy-to-install software removes three critical security features of Aadhaar:

The patch lets a user bypass critical security features such as biometric authentication of enrolment operators to generate unauthorised Aadhaar numbers.

The patch disables the enrolment software's in-built GPS security feature (used to identify the physical location of every enrolment centre), which means anyone anywhere in the world -- say, Beijing, Karachi or Kabul -- can use the software to enrol users.

The patch reduces the sensitivity of the enrolment software's iris-recognition system, making it easier to spoof the software with a photograph of a registered operator, rather than requiring the operator to be present in person.

As the Huffington Post article explains, creating a patch that is able to circumvent the main security features in this way was possible thanks to design choices made early on in the project. The unprecedented scale of the Aadhaar enrollment process -- so far around 1.2 billion people have been given an Aadhaar number and added to the database -- meant that a large number of private agencies and village-level computer kiosks were used for registration. Since connectivity was often poor, the main software was installed on local computers, rather than being run in the cloud. The patch can be used by anyone with local access to the computer system, and simply involves replacing a folder of Java libraries with versions lacking the security checks.
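
To see why software running on local machines is so hard to defend, consider the following deliberately simplified sketch. None of the names below come from the real Aadhaar client -- they are invented for illustration -- but it shows how swapping one locally loaded module for a stubbed version can neutralize every check that runs on the operator's own computer, which is why such checks need to be re-verified server-side.

```python
# Hypothetical sketch of why client-side security checks are fragile.
# Nothing here reflects the actual Aadhaar codebase; all names are invented.

REGISTERED_CENTRES = {(28.61, 77.21)}  # toy coordinates of an enrolment centre

class SecurityChecks:
    """Stands in for the replaceable module holding the security checks."""

    def operator_biometric_ok(self, operator: dict) -> bool:
        # Genuine check: the registered operator must be present in person.
        return operator.get("iris_match", False)

    def location_ok(self, gps_fix: tuple) -> bool:
        # Genuine check: enrolment must happen at a registered centre.
        return gps_fix in REGISTERED_CENTRES

class PatchedSecurityChecks(SecurityChecks):
    """What a malicious drop-in replacement amounts to: always say yes."""

    def operator_biometric_ok(self, operator: dict) -> bool:
        return True

    def location_ok(self, gps_fix: tuple) -> bool:
        return True

def enrolment_allowed(operator: dict, gps_fix: tuple, checks: SecurityChecks) -> bool:
    # The flow trusts whichever 'checks' module is loaded locally.
    return checks.operator_biometric_ok(operator) and checks.location_ok(gps_fix)

bogus_operator, bogus_location = {"iris_match": False}, (0.0, 0.0)
print(enrolment_allowed(bogus_operator, bogus_location, SecurityChecks()))         # False
print(enrolment_allowed(bogus_operator, bogus_location, PatchedSecurityChecks()))  # True
```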

The Unique Identification Authority of India (UIDAI), the government body responsible for the Aadhaar project, has responded to the Huffington Post article, but in a rather odd way: as a Donald Trump-like stream of tweets. The Huffington Post points out: "[the UIDAI] has simply stated that its systems are completely secure without any supporting evidence." One of the UIDAI's tweets is as follows:

It is because of this stringent and robust system that as on date more that 50,000 operators have been blacklisted, UIDAI added.

The need to throw 50,000 operators off the system hardly inspires confidence in its overall security. What makes things worse is that the Indian government seems determined to make Aadhaar indispensable for any Indian citizen who wants to deal with the state, and to encourage businesses to do the same. Given the continuing questions about Aadhaar's overall security and integrity, that seems unwise, to say the least.

Posted on Techdirt - 13 September 2018 @ 12:17am

Corporate Sovereignty On The Wane, As Governments Realize It's More Trouble Than It's Worth

from the but-not-dead-yet dept

A few years ago, corporate sovereignty -- officially known as "investor-state dispute settlement" (ISDS) -- was treated as an indispensable element of trade deals. As a result, it would crop up on Techdirt quite often. But the world is finally moving on, and old-style corporate sovereignty is losing its appeal. As we reported last year, the US Trade Representative, Robert Lighthizer, hinted that the US might not support ISDS in future trade deals, but it was not clear what that might mean in practice. The Canadian Broadcasting Corporation (CBC) site has an interesting article that explores the new contours of corporate sovereignty:

The preliminary trade agreement the U.S. recently reached with Mexico may offer a glimpse of what could happen with NAFTA's Chapter 11 [governing ISDS].

A U.S. official said the two countries wanted ISDS to be "limited" to cases of expropriation, bias against foreign companies or failure to treat all trading partners equally.

The new US thinking places Canada in a tricky position, because the country is involved in several trade deals that take different approaches to corporate sovereignty. As well as the US-dominated NAFTA, there is CETA, the trade deal with Europe. For that, Canada is acquiescing to the EU's request to replace ISDS with the new Investment Court System (ICS). In TPP, however -- still lumbering on, despite the US withdrawal -- Canada seems to be going along with the traditional corporate sovereignty approach.

A willingness to move on from traditional ISDS can be seen in the often overlooked, but important, Regional Comprehensive Economic Partnership (RCEP) trade deal. India's Business Standard reports:

Despite treading diametrically opposite paths on tariffs and market access, India and China, along with other nations, have hit it off on talks regarding investment norms in the proposed Regional Comprehensive Economic Partnership (RCEP) pact.

In a bid to fast-track the deal, most nations have agreed to ease the investor-state-dispute settlement (ISDS) clauses.

As with NAFTA and CETA, it seems that the nations involved in RCEP no longer regard corporate sovereignty as a priority, and are willing to weaken its powers in order to reach agreement on other areas. Once the principle has been established that ISDS can be watered down, there's nothing to stop nations proposing that it should be dropped altogether. Given the astonishing awards and abuses that corporate sovereignty has led to in the past, that's a welcome development.

Posted on Techdirt - 10 September 2018 @ 8:01pm

Europe's New 'Plan S' For Open Access: Daft Name, Great News

from the admirably-strong dept

The journey towards open access has been a long one, with many disappointments along the way. But occasionally there are unequivocal and major victories. One such victory is the new "Plan S" from the inelegantly-named cOAlition S:

On 4 September 2018, 11 national research funding organisations, with the support of the European Commission including the European Research Council (ERC), announced the launch of cOAlition S, an initiative to make full and immediate Open Access to research publications a reality. It is built around Plan S, which consists of one target and 10 principles.

cOAlition S signals the commitment to implement, by 1 January 2020, the necessary measures to fulfil its main principle: "By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms."

The plan and its ten principles (pdf) are usefully summed up by Peter Suber, one of the earliest and most influential open access advocates, as follows:

The plan is admirably strong. It aims to cover all European research, in the sciences and in the humanities, at the EU level and the member-state level. It's a plan for a mandate, not just an exhortation or encouragement. It keeps copyright in the hands of authors. It requires open licenses and prefers CC-BY. It abolishes or phases out embargoes. It does not support hybrid journals except as stepping stones to full-OA journals. It's willing to pay APCs [Article Processing Charges] but wants to cap them, and wants funders and universities to pay them, not authors. It will monitor compliance and sanction non-compliance. It's already backed by a dozen powerful, national funding agencies and calls for other funders and other stakeholders to join the coalition.

Keeping copyright in the hands of authors is crucial: too often, academics have been cajoled or bullied into handing over copyright for their articles to publishers, thus losing the ability to determine who can read them, and under what conditions. Similarly, the CC-BY license would allow commercial use by anyone -- many publishers try to release so-called open access articles under restrictive licenses like CC-BY-NC, which stop other publishers from distributing them.

Embargo periods are routinely used by publishers to delay the appearance of open access versions of articles; under Plan S, that would no longer be allowed. Finally, the new initiative discourages the use of "hybrid" journals that have often enabled publishers to "double dip". That is, they charge researchers who want to release their work as open access, but also require libraries to take out full-price subscriptions for journals that include these freely-available articles.
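
A toy calculation, using entirely invented figures, shows why the hybrid model is so attractive to publishers, and why Plan S targets it:

```python
# Invented figures illustrating hybrid-journal "double dipping":
# the same journal collects library subscriptions and per-article APCs.
subscription_income = 400 * 3000  # 400 library subscriptions at $3,000 each
apc_income = 150 * 2500           # 150 hybrid OA articles at $2,500 each

hybrid_revenue = subscription_income + apc_income
pure_oa_revenue = apc_income      # a full-OA journal earns APCs only

print(hybrid_revenue)   # 1575000 -- paid twice for the same journal
print(pure_oa_revenue)  # 375000
```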

Suber has a number of (relatively minor) criticisms of Plan S, which are well worth reading. All in all, though, this is a major breakthrough for open access in Europe, and thus the world. Once "admirably strong" open access mandates like Plan S have been established in one region, others tend to follow in due course. Let's just hope they choose better names.

Posted on Techdirt - 5 September 2018 @ 7:35pm

Leading Biomedical Funders Call For Open Peer Review Of Academic Research

from the nothing-to-hide dept

Techdirt has written many posts about open access -- the movement to make digital versions of academic research freely available to everyone. Open access is about how research is disseminated once it has been selected for publication. So far, there has been less emphasis on changing how academic work is selected in the first place, which is based on the time-honored approach of peer review. That is, papers submitted to journals are sent out to experts in the same or a similar field, who are invited to comment on ways of improving the work, and on whether the research should be published. Traditionally, the process is shrouded in secrecy. The reviewers are generally anonymous, and the reports they make on the submissions are not made public. Now, however, the idea of making peer review more transparent as part of the general process of becoming more open is gaining momentum.

A couple of weeks ago, representatives of two leading biomedical funders -- the UK Wellcome Trust and the Howard Hughes Medical Institute -- together with ASAPbio, a non-profit organization that encourages innovation in life-sciences publishing, wrote a commentary in Nature. In it, they called for "open review", which, they point out, encompasses two distinct forms of transparency:

'Open identities' means disclosing reviewers' names; 'open reports' (also called transparent reviews or published peer review) means publishing the content of reviews. Journals might offer one or the other, neither or both.

In a 2016 survey, 59% of 3,062 respondents were in favour of open reports. Only 31% favoured open identities, which they feared could cause reviewers to weaken their criticisms or could lead to retaliation from authors. Here, we advocate for open reports as the default and for open identities to be optional, not mandatory.

The authors of the commentary believe that there are a number of advantages to open reports:

The scientific community would learn from reviewers' and editors' insights. Social scientists could collect data (for example, on biases among reviewers or the efficiency of error identification by reviewers) that might improve the process. Early-career researchers could learn by example. And the public would not be asked to place its faith in hidden assessments.

There are, of course, risks. One concern mentioned is that published reviews might be used unfairly in subsequent evaluation of the authors for grants, jobs, awards or promotions. Another possibility is the 'weaponization' of reviewer reports:

Opponents of certain types of research (for example, on genetically modified organisms, climate change and vaccines) could take critical remarks in peer reviews out of context or mischaracterize disagreements to undermine public trust in the paper, the field or science as a whole.

Despite these and other concerns mentioned in the Nature commentary, an open letter published on the ASAPbio site lists dozens of major titles that have already instituted open reports, or promise to do so next year. As well as that indication that open reports are passing from concept to reality, it's worth bearing in mind that the UK Wellcome Trust and the Howard Hughes Medical Institute are major funders of biomedical research. It would be a relatively straightforward step for them to make the adoption of open reports a condition of receiving their grants -- something that would doubtless encourage uptake of the idea.

Posted on Techdirt - 17 August 2018 @ 7:39pm

As Academic Publishers Fight And Subvert Open Access, Preprints Offer An Alternative Approach For Sharing Knowledge Widely

from the this-is-the-future dept

The key idea behind open access is that everyone with an Internet connection should be able to read academic papers without needing to pay for them. Or rather without needing to pay again, since most research is funded using taxpayers' money. It's hard to argue against that proposition, or to deny that making information available in this way is likely to increase the rate at which medical and scientific discoveries are made for the benefit of all. And yet, as Techdirt has reported, academic publishers that often enjoy profit margins of 30-40% have adopted a range of approaches to undermine open access and its aims -- and with considerable success. A recent opinion column in the Canadian journal University Affairs explains how traditional publishers have managed to subvert open access for their own benefit:

An ironic twist to the open-access movement is that it has actually made the publishers richer. They've jumped on the bandwagon by offering authors the option of paying article processing charges (APCs) in order to make their articles open access, while continuing to increase subscription charges to libraries at the institutions where those authors work. So, in many cases, the publishers are being paid twice for the same content -- often charging APCs higher than purely open access journals.

Another serious problem is the rise of so-called "predatory" open access publishers that have distorted the original ideas behind the movement even more. The Guardian reported recently:

More than 175,000 scientific articles have been produced by five of the largest "predatory open-access publishers", including India-based Omics publishing group and the Turkish World Academy of Science, Engineering and Technology, or Waset.

But the vast majority of those articles skip almost all of the traditional checks and balances of scientific publishing, from peer review to an editorial board. Instead, most journals run by those companies will publish anything submitted to them -- provided the required fee is paid.

These issues will be hard, if not impossible, to solve. As a result, many are now looking for a different solution to the problem of providing easy and cost-free access to academic knowledge, this time in the form of preprints. Techdirt reported earlier this year that there is evidence that the published version of a paper adds very little to the early preprint version placed online directly by the authors. The negligible barriers to entry, the speed at which work can be published, and the extremely low costs involved have led many to see preprints as the best solution to providing open access to academic papers without needing to go through publishers at all.

Inevitably, perhaps, criticisms of the idea are starting to appear. Recently, Tom Sheldon, who is a senior press manager at the Science Media Centre in London, published a commentary in one of the leading academic journals, Nature, under the headline: "Preprints could promote confusion and distortion". As he noted, this grew out of an earlier discussion paper that he published on the Science Media Centre's blog. The Science Media Centre describes itself as "an independent press office helping to ensure that the public have access to the best scientific evidence and expertise through the news media when science hits the headlines." Its funding comes from "scientific institutions, science-based companies, charities, media organisations and government". Sheldon's concerns are not so much about preprints themselves, but their impact on how science is reported:

I am a big fan of bold and disruptive changes which can lead to fundamental culture change. My reading around work on reproducibility, open access and preprint make me proud to be part of a scientific community intent on finding ways to make science better. But I am concerned about how this change might affect the bit of science publication that we are involved with at the Science Media Centre. The bit which is all about the way scientific findings find their way to the wider public and policymakers via the mass media.

One of his concerns is the lack of embargoes for preprints. At the moment, when researchers have what they think is an important result or discovery appearing in a paper, they typically offer trusted journalists a chance to read it in advance on the understanding that they won't write about it until the paper is officially released. This has a number of advantages. It creates a level playing field for those journalists, who all get to see the paper at the same time. Crucially, it allows journalists to contact other experts to ask their opinion of the results, which helps to catch rogue papers, and also provides much-needed context. Sheldon writes:

Contrast this with preprints. As soon as research is in the public domain, there is nothing to stop a journalist writing about it, and rushing to be the first to do so. Imagine early findings that seem to show that climate change is natural or that a common vaccine is unsafe. Preprints on subjects such as those could, if they become a story that goes viral, end up misleading millions, whether or not that was the intention of the authors.

That's certainly true, but it is easy to remedy. Academics who plan to publish a preprint could offer a copy of the paper to the group of trusted journalists under embargo -- just as they would with traditional papers. One sentence describing why it would be worth reading is all that is required by way of introduction. To the extent that the system works for today's published papers, it will also work for preprints. Some authors may publish without giving journalists time to check with other experts, but that's also true for current papers. Similarly, some journalists may hanker after full press releases that spoon-feed them the results, but if they can't be bothered working it out for themselves, or contacting the researchers and asking for an explanation, they probably wouldn't write a very good article anyway.

The other concern relates to the quality of preprints. One of the key differences between a preprint and a paper published in a journal is that the latter usually goes through the process of "peer review", whereby fellow academics read and critique it. But it is widely agreed that the peer review process has serious flaws, as many have pointed out for years -- and as Sheldon himself admits.

Indeed, as defenders note, preprints allow far more scrutiny to be applied than with traditional peer review, because they are open for all to read and spot mistakes. There are some new and interesting projects to formalize this kind of open review. Sheldon rightly has particular concerns about papers on public health matters, where lives might be put at risk by erroneous or misleading results. But major preprint sites like bioRxiv (for biology) and the upcoming medRxiv (for medicine and health sciences) are already trying to reduce that problem by actively screening preprints before they are posted.

Sheldon certainly raises some valid questions about the impact of preprints on the communication of science to a general audience. None of the issues is insurmountable, but it may require journalists as well as scientists to adapt to the changed landscape. However, changing how things are done is precisely the point about preprints. The present academic publishing system does not promote general access to knowledge that is largely funded by the taxpayer. The attempt by the open access movement to make that happen has arguably been neutered by shrewd moves on the part of traditional publishers, helped by complaisant politicians. Preprints are probably the best hope we have now for achieving a more equitable and efficient way of sharing knowledge and building on it more effectively.

Posted on Techdirt - 8 August 2018 @ 7:59pm

ICANN Loses Yet Again In Its Quixotic Quest To Obtain A Special Exemption From The EU's GDPR

from the oh,-do-give-it-a-rest dept

Back in May, we wrote about the bizarre attempt by the Internet Corporation for Assigned Names and Numbers (ICANN) to exempt itself from the EU's new privacy legislation, the GDPR. ICANN sought an injunction to force EPAG, a Tucows-owned registrar based in Bonn, Germany, to collect administrative and technical contacts as part of the domain name registration process. EPAG had refused, because it felt doing so would fall foul of the GDPR. A German court turned down ICANN's request, but without addressing the question whether gathering that information would breach the GDPR.

As the organization's timeline of the case indicates, ICANN then appealed to the Higher Regional Court of Cologne, Germany, against the ruling. Meanwhile, the lower court that issued the original judgment decided to re-visit the case, which it has the option to do upon receipt of an appeal. However, it did not change its view, and referred the matter to the upper Court. The Appellate Court of Cologne has issued its judgment (pdf), with a comprehensive smackdown of ICANN, yet again (via The Register):

Regardless of the fact that already in view of the convincing remarks of the Regional Court in its orders of 29 May 2018 and 16 July 2018 the existence of a claim for a preliminary injunction (Verfügungsanspruch) is doubtful, at least with regard to the main application, the granting the sought interim injunction fails in any case because the Applicant has not sufficiently explained and made credible a reason for a preliminary injunction (Verfügungsgrund).

The Appellate Court pointed out that ICANN could hardly claim it would suffer "irreparable harm" if it were not granted an injunction forcing EPAG to gather the additional data. If necessary, ICANN could collect that information at a later date, without any serious consequences. ICANN's case was further undermined by the fact that gathering administrative and technical contacts in the past had always been on a voluntary basis, so not doing so could hardly cause great damage.

Once more, then, the question of whether collecting this extra personal information was forbidden under the GDPR was not addressed, since ICANN's argument was found wanting irrespective of that privacy issue. And because no interpretation of the GDPR was required for the case, the Appellate Court also ruled there were no grounds for referring the question to the EU's highest court, the Court of Justice of the European Union.

ICANN says that it is "considering its next steps", but it's hard to see what those might be, given the unanimous verdict of the courts. Maybe it's time for ICANN to comply with the EU law like everybody else, and for it to stop wasting money in its forlorn attempts to get EU courts to grant it a special exemption from the GDPR's rules.

Posted on Techdirt - 2 August 2018 @ 3:26am

Facebook Granted 'Unprecedented' Leave To Appeal Over Referral Of Privacy Shield Case To Top EU Court

from the never-a-dull-moment dept

Back in April, we wrote about the latest development in the long, long saga of Max Schrems' legal challenge to Facebook's data transfers from the EU to the US. The Irish High Court referred the case to the EU's top court, asking the Court of Justice of the European Union (CJEU) to rule on eleven issues that the judge raised. Facebook tried to appeal against the Irish High Court's decision, but the received wisdom was that no appeal was possible against CJEU referrals of this kind. But as the Irish Times reports, to everyone's surprise, it seems the received wisdom was wrong:

The [Irish] Supreme Court has agreed to hear an unprecedented appeal by Facebook over a High Court judge's decision to refer to the European Court of Justice (CJEU) key issues concerning the validity of EU-US data transfer channels.

The Irish Chief Justice rejected arguments by the Irish Data Protection Commissioner and Schrems that Facebook could not seek to have the Supreme Court reverse certain disputed findings of fact by the High Court. The judge said that it was "at least arguable" Facebook could persuade the Supreme Court that some or all of the facts under challenge should be reversed. On that basis, the appeal could go ahead. Among the facts that would be considered were the following key points:

The chief justice said Facebook was essentially seeking that the Supreme Court "correct" the alleged errors, including the High Court findings of "mass indiscriminate" processing, that surveillance is legal unless forbidden, on the doctrine of legal standing in US law and in the consideration of other issues including safeguards.

Facebook also argues the High Court erred in finding the laws and practices of the US did not provide EU citizens with an effective remedy, as required under the Charter of Fundamental Rights of the EU, for breach of data privacy rights.

Those are crucial issues not just for Facebook, but also for the validity of the entire Privacy Shield framework, which is currently under pressure in the EU. It's not clear whether the Irish Supreme Court is really prepared to overrule the High Court judge, and to what extent the CJEU will take note anyway. One thing that is certain is that a complex and important case just took yet another surprising twist.

Posted on Techdirt - 26 July 2018 @ 8:06pm

EU And Japan Agree To Free Data Flows, Just As Tottering Privacy Shield Framework Threatens Transatlantic Transfers

from the cooperation-not-confrontation dept

The EU's strong data protection laws affect not only how personal data is handled within the European Union, but also where it can flow to. Under the GDPR, just as was the case with the preceding EU data protection directive, the personal data of EU citizens can only be sent to countries whose privacy laws meet the standard of "essential equivalence". That is, there may be differences in detail, but the overall effect has to be similar to the GDPR, something established as part of what is called an "adequacy decision". Just such an adequacy ruling by the European Commission has been agreed in favor of Japan:

This mutual adequacy arrangement will create the world's largest area of safe transfers of data based on a high level of protection for personal data. Europeans will benefit from strong protection of their personal data in line with EU privacy standards when their data is transferred to Japan. This arrangement will also complement the EU-Japan Economic Partnership Agreement, European companies will benefit from uninhibited flow of data with this key commercial partner, as well as from privileged access to the 127 million Japanese consumers. With this agreement, the EU and Japan affirm that, in the digital era, promoting high privacy standards and facilitating international trade go hand in hand. Under the GDPR, an adequacy decision is the most straightforward way to ensure secure and stable data flows.

Before the European Commission formally adopts the latest adequacy decision, Japan has agreed to tighten up certain aspects of its data protection laws by implementing the following:

A set of rules providing individuals in the EU whose personal data are transferred to Japan, with additional safeguards that will bridge several differences between the two data protection systems. These additional safeguards will strengthen, for example, the protection of sensitive data, the conditions under which EU data can be further transferred from Japan to another third country, the exercise of individual rights to access and rectification. These rules will be binding on Japanese companies importing data from the EU and enforceable by the Japanese independent data protection authority (PPC) and courts.

A complaint-handling mechanism to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities. This new mechanism will be administered and supervised by the Japanese independent data protection authority.

It is precisely these areas that are proving so problematic for the data flow agreement between the EU and the US, known as the Privacy Shield framework. As Techdirt has reported, the European Commission is under increasing pressure to suspend Privacy Shield unless the US implements it fully -- something it has failed to do so far, despite repeated EU requests. Granting adequacy to Japan is an effective way to flag up that other major economies don't have any problems with the GDPR, and that the EU can turn its attention elsewhere if the US refuses to comply with the terms of the Privacy Shield agreement.

The new data deal with Japan still has several hurdles to overcome before it goes into effect. For example, the European Data Protection Board, the EU body in charge of applying the GDPR, must give its view on the adequacy ruling, as must the civil liberties committee of the European Parliament -- the one that has just called for Privacy Shield to be halted. Nonetheless, the European Commission will be keen to adopt the adequacy decision, not least to show that countries are still willing to reduce trade barriers, rather than to impose them, as the US is currently doing.

Posted on Techdirt - 25 July 2018 @ 3:27am

South Africa's Proposed Fair Use Right In Copyright Bill Is Surprisingly Good -- At The Moment

from the stand-back-for-the-lobbyist-attacks dept

Too often Techdirt writes about changes in copyright law that are only for the benefit of the big publishing and recording companies, and offer little to individual creators or the public. So it makes a pleasant change to be able to report that South Africa's efforts to update its creaking copyright laws seem, for the moment, to be bucking that trend. Specifically, those drafting the text seem to have listened to the calls for intelligent fair use rights fit for the digital world. As a post on infojustice.org explains, a key aspect of copyright reform is enshrining exceptions that give permission to Internet users to do all the usual online stuff -- things like sharing photos on social media, or making and distributing memes. The South African text does a good job in this respect:

A key benefit of the Bill is that its new exceptions are generally framed to be open to all works, uses, and users. Research shows that providing exceptions that are open to purposes, uses, works and users is correlated with both information technology industry growth and to increased production of works of knowledge creation.

The solution adopted for the draft of the new copyright law is a hybrid approach that contains both a set of specific modern exceptions for various purposes, along with an open general exception that can be used to assess any use not specifically authorized:

The key change is the addition of "such as" before the list of purposes covered by the right, making the provision applicable to a use for any purpose, as long as that use is fair to the author.

In order to test whether a use is fair, the standard four factors are to be considered (a schematic code sketch follows the list):

(i) the nature of the work in question;

(ii) the amount and substantiality of the part of the work affected by the act in relation to the whole of the work;

(iii) the purpose and character of the use, including whether --

(aa) such use serves a purpose different from that of the work affected; and
(bb) it is of a commercial nature or for non-profit research, library or educational purposes; and

(iv) the substitution effect of the act upon the potential market for the work in question.
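
The sketch below renders that four-factor test as a schematic scoring function. The weights and the data structure are invented for illustration; real fair use analysis is a holistic legal judgment, not arithmetic, but the code shows how an open-ended balancing test differs from a rigid checklist.

```python
# Schematic sketch of the four-factor balancing test; the scoring is
# invented -- courts weigh these factors holistically, not numerically.
from dataclasses import dataclass

@dataclass
class Use:
    work_is_factual: bool    # (i) nature of the work
    portion_taken: float     # (ii) amount/substantiality, 0.0 to 1.0
    transformative: bool     # (iii)(aa) serves a different purpose
    non_commercial: bool     # (iii)(bb) research/library/educational use
    market_substitute: bool  # (iv) substitutes for the original work

def weigh_factors(use: Use) -> int:
    """Each factor tilts the scale for or against fairness."""
    score = 0
    score += 1 if use.work_is_factual else -1
    score += 1 if use.portion_taken < 0.1 else -1
    score += 1 if use.transformative else -1
    score += 1 if use.non_commercial else -1
    score += -2 if use.market_substitute else 1  # market harm weighs heavily
    return score

meme = Use(work_is_factual=False, portion_taken=0.05, transformative=True,
           non_commercial=True, market_substitute=False)
print(weigh_factors(meme))  # 3: the factors, on balance, favour fairness
```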

Crucially, the legislators rejected calls by some to include a fifth factor that would look at whether licenses for the intended use were available. As the infojustice.org post points out, had that factor been included, it would have made it considerably harder to claim fair use. That's one reason why the copyright world has been pushing so hard for licensing as the solution to everything -- whether it's orphan works, text and data mining, or the EU's revised copyright directive. That rejection sends an important signal to other politicians looking to update their copyright laws, and makes the South African text particularly welcome, as the infojustice.org post underlines:

We commend its Parliament on both the openness of this process and on the excellent drafting of the proposed fair use clause. We are confident it will become a model for other countries around the world that seek to modernize their copyright laws for the digital age.

However, for that very reason, the fair use proposal is likely to come under heavy attack from the copyright companies and their lobbyists. It remains to be seen whether the good things in the present Bill will still be there in the final law.

Posted on Techdirt - 23 July 2018 @ 7:49pm

Applicant For Major EU Open Access Publishing Contract Proposes Open Source, Open Data And Open Peer Review As Solution

from the Elsevier-not-invited dept

We've just written about growing discontent among open access advocates over the role that the publishing giant Elsevier will play in monitoring open science in the EU. That unhappiness probably just went up a notch as a result of the following development, reported here by Nature:

Elsevier last week stopped thousands of scientists in Germany from reading its recent journal articles, as a row escalates over the cost of a nationwide open-access agreement.

The move comes just two weeks after researchers in Sweden lost access to the most recent Elsevier research papers, when negotiations on its contract broke down over the same issue.

The open science monitoring project involving Elsevier is only a very minor part of the EU's overall open science strategy, which itself is part of the €80 billion research and innovation program called Horizon 2020. A new post on the blog of the open access publisher Hindawi reveals that it has put in a bid in response to the European Commission's call for tenders to launch a major new open research publishing platform:

The Commission's aim is to build on their progressive Open Science agenda to provide an optional Open Access publishing platform for the articles of all researchers with Horizon 2020 grants. The platform will also provide incentives for researchers to adopt Open Science practices, such as publishing preprints, sharing data, and open peer review. The potential for this initiative to lead a systemic transformation in research practice and scholarly communication in Europe and more widely should not be underestimated.

That last sentence makes a bold claim. Hindawi's blog post provides some good analysis of why the transition to open access and open science is proving so hard. Hindawi's proposed solution is based on open source code, and openness in general:

Our proposal to the Commission involves the development of an end-to-end publishing platform that is fully Open Source, with an editorial model that incentivises Open Science practices including preprints, data sharing, and objective open peer review. Data about the impact of published outputs would also be fully open and available for independent scrutiny, and the policies and governance of the platform would be managed by the research community. In particular, researchers who are currently disenfranchised by the current academic reward system, including early career researchers and researchers whose primary research outputs include data and software code, would have a key role in developing the policies of the platform.

Recognizing the flaws in the current system of assessment and rewards is key here. Open access has been around for two decades, but the reliance on near-meaningless impact factors to judge the alleged influence of titles, and thus of the work published there, has barely changed at all. As the Hindawi blog post notes:

As long as journal rank and journal impact factor remain the currency used to judge academic quality, no amount of technological change or economic support for open outputs and open infrastructure will make research and researchers more open

Unfortunately, even a major project like the Horizon 2020 open research publishing platform -- whichever company wins the contract -- will not be able to change that culture on its own, however welcome it might be in itself. Core changes must come from within the academic world. Sadly, there are still precious few signs that those in positions of power are willing to embrace not just open access and even open science, but also a radical openness that extends to every aspect of the academic world, including evaluation and recognition.

Posted on Techdirt - 16 July 2018 @ 9:36am

Guy In Charge Of Pushing Draconian EU Copyright Directive, Evasive About His Own Use Of Copyright Protected Images

from the do-as-I-say,-not-as-I-do? dept

There's one person who wields more power than anyone to shape the awful EU Copyright Directive: the MEP Axel Voss. He is the rapporteur for the file in the European Parliament's main Legal Affairs Committee (JURI), which is steering the Directive through the Parliament. Voss took over from the previous rapporteur, Therese Comodini Cachia, after she decided to return to her native Malta as a member of the national parliament. Her draft version of the Directive was certainly not perfect, but it did possess the virtue of being broadly acceptable to all sides of the debate. When Voss took over last year, the text took a dramatic turn for the worse thanks to the infamous "snippet tax" (Article 11 of the proposed Directive) and the "upload filter" (Article 13).

As Mike reported a couple of weeks ago, Voss offered a pretty poor defense of his proposals, showing little understanding of the Internet. But he made clear that he thinks respecting copyright law is really important. For example, he said he was particularly concerned that material is being placed online, where "there is no remuneration of the concerned author." Given that background, it will probably come as no surprise to Techdirt readers to learn that questions are now being asked whether Voss himself has paid creators for material that he has used on his social media accounts:

BuzzFeed News Germany ... looked at the posts from the past 24 months on Voss's social media channels. In the two years, BuzzFeed News has found at least 17 copyrighted images from at least eight different image agencies, including the German press agency dpa.

As good journalists, BuzzFeed News Germany naturally contacted Axel Voss to ask whether he had paid to use all these copyrighted images:

Since last Thursday, 5 July, BuzzFeed News Germany has asked Voss's office and his personal assistant a total of five times in writing and several times over the phone whether Axel Voss or his social media team has paid for the use of these copyrighted photos. Voss's staff responded evasively five times. Asked if the office could provide us with licensing evidence, the Voss office responded: "We do not provide invoices to uninvolved third parties."

Such a simple question -- had Voss paid for the images he used? -- and yet one that seemed so hard for the Voss team to answer, even with the single word "yes". The article (original in German) took screenshots of the images the BuzzFeed Germany journalists had found. That's just as well, because shortly afterwards, 12 of the 17 posts with copyrighted images had been deleted. The journalists contacted Axel Voss once more, and asked why they had disappeared (original in German). To which Axel Voss's office replied: anyone can add and remove posts, if they wish. Which is true, but fails to answer the question, yet again. However, Axel Voss's office did offer an additional "explanation":

according to the current legal situation (...), if the right-holder informs us that we have violated their rights, we remove the image in question according to the notice and takedown procedure of the e-commerce directive.

That is, Axel Voss, or his office, seems to believe it's fine to post copyrighted material online provided you take it down if someone complains. But that's not how it works at all. The EU notice and takedown procedure applies to the Internet services hosting material, not to the individual users of those services. The fact that the team of the senior MEP responsible for pushing the deeply-flawed Copyright Directive through the European Parliament is ignorant of the current laws is bad enough. That he may have posted copyrighted material without paying for it, while claiming to be worried that creators aren't being remunerated for their work, is beyond ridiculous.

Posted on Techdirt - 11 July 2018 @ 7:37pm

European Parliament Turns Up The Pressure On US-EU Privacy Shield Data Transfer Deal A Little More

from the how-much-longer-can-it-last? dept

Many stories on Techdirt seem to grind on forever, with new twists and turns constantly appearing, including unexpected developments -- or small, incremental changes. The transatlantic data transfer saga has seen a bit of both. Back in 2015, the EU's top court ruled that the existing legal framework for moving data across the Atlantic, Safe Harbor, was "invalid". That sounds mild, but it isn't. Safe Harbor was necessary in order for data transfers across the Atlantic to comply with EU data protection laws. A declaration that it was "invalid" meant that it could no longer be used to provide legal cover for huge numbers of commercial data flows that keep the Internet and e-commerce ticking over. The solution was to come up with a replacement, Privacy Shield, that supposedly addressed the shortcomings cited by the EU court.

The problem is that a growing number of influential voices don't believe that Privacy Shield does, in fact, solve the problems of the Safe Harbor deal. For example, in March last year, two leading civil liberties groups -- the American Civil Liberties Union and Human Rights Watch -- sent a joint letter to the EU's Commissioner for Justice, Consumers and Gender Equality, and other leading members of the European Commission and Parliament, urging the EU to re-examine the Privacy Shield agreement. In December, an obscure but influential advisory group of EU data protection officials asked the US to fix problems of Privacy Shield or expect the EU's top court to be asked to rule on its validity. In April of this year, the Irish High Court made just such a referral as a result of a complaint by the Austrian privacy expert Max Schrems. Since he was instrumental in getting Safe Harbor struck down, that's not something to be taken lightly.

Lastly, one of the European Parliament's powerful committees, which helps determine policy related to civil liberties, added its voice to the discussion. It called on the European Commission to suspend the Privacy Shield agreement unless the US fixed the problems that the committee discerned in its current implementation. At that point, it was just a committee making the call. However, in a recent plenary session, the European Parliament itself voted to back the idea, and by a healthy margin:

MEPs call on the EU Commission to suspend the EU-US Privacy Shield as it fails to provide enough data protection for EU citizens.

The data exchange deal should be suspended unless the US complies with EU data protection rules by 1 September 2018, say MEPs in a resolution passed on Thursday by 303 votes to 223, with 29 abstentions. MEPs add that the deal should remain suspended until the US authorities comply with its terms in full.

It's important to note that this vote is largely symbolic: if the US refuses to improve the data protection of EU citizens, there's nothing to force the European Commission to comply with the demand of the European Parliament. That said, the call by arguably the most democratic part of the EU -- MEPs are directly elected by European citizens -- piles more pressure on the European Commission, which is appointed by EU governments, not elected. If nothing else, this latest move adds to the general impression that Privacy Shield is not likely to survive in its present form much longer.

Posted on Techdirt - 9 July 2018 @ 7:36pm

Elsevier Will Monitor Open Science In EU Using Measurement System That Favors Its Own Titles

from the conflict-of-interest?-I've-heard-of-it dept

Back in April, we wrote about a curious decision to give the widely-hated publisher Elsevier the job of monitoring open science in the EU. That would include open access too, an area where the company has major investments. The fact that the European Commission seemed untroubled by that clear conflict of interest stunned supporters of open access. Now one of them -- the paleontologist Jon Tennant -- is calling on the European Commission to remove Elsevier, and to find another company with no conflicts of interest. As Tennant writes in the Guardian:

How is it reasonable for a multi-billion dollar publishing corporation to not only produce metrics that evaluate publishing impact [of scientific articles], but also to use them to monitor Open Science and help to define its future direction? Elsevier will be providing data through the monitor that will be used to help facilitate future policy making in the EU that it inevitably will benefit from. That's like having McDonald's monitor the eating habits of a nation and then using that to guide policy decisions.

Elsevier responded with a blog post challenging what it calls "misinformation" in Tennant's article:

We are one of the leading open access publishers, and we make more articles openly available than any other publisher. We make freely available open science products and services we have developed and acquired to enable scientists to collaborate, post their early findings, store their data and showcase their output.

It added:

We have co-developed CiteScore and Snowball Metrics with the research community -- all of which are open, transparent, and free indicators.

CiteScore may be "open, transparent, and free", but Tennant writes:

Consider Elsevier's CiteScore metric, a measure of the apparent impact of journals that competes with the impact factor based on citation data from Scopus. An independent analysis showed that titles owned by Springer Nature, perhaps Elsevier's biggest competitor, scored 40% lower and Elsevier titles 25% higher when using CiteScore rather than previous journal impact factors.

In other words, one of the core metrics that Elsevier will be applying as part of the Open Science Monitor appears to show bias in favor of Elsevier's own titles. One result of that bias could be that when the Open Science Monitor publishes its results based on Elsevier's metrics, the European Commission and other institutions will start favoring Elsevier's academic journals over those of its competitors. The use of CiteScore creates yet another conflict of interest for Elsevier.
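
For background on how two citation metrics can diverge so sharply: CiteScore, as commonly described, counts every document type in its denominator, while the traditional impact factor divides only by "citable items" such as research articles and reviews. The toy comparison below uses invented numbers and simplified definitions to show how a journal that publishes a lot of editorial front matter scores much lower under a CiteScore-style calculation.

```python
# Toy comparison of impact-factor-style and CiteScore-style metrics.
# All numbers are invented and both definitions are simplified.

def jif_like(citations: int, citable_items: int) -> float:
    # Impact-factor style: only "citable items" (articles, reviews)
    # appear in the denominator.
    return citations / citable_items

def citescore_like(citations: int, all_documents: int) -> float:
    # CiteScore style: every document type (editorials, news items,
    # letters) counts in the denominator.
    return citations / all_documents

# A journal with heavy front matter: 200 articles plus 300 editorials.
citations = 1000
print(jif_like(citations, 200))        # 5.0
print(citescore_like(citations, 500))  # 2.0 -- same citations, lower score
```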

As well as writing about his concerns, Tennant is also making a formal complaint to the European Commission Ombudsman regarding the relationship between Elsevier and the Open Science Monitor:

The reason we are pursuing this route is due to the fact that the opportunity to raise a formal appeal was denied to us. In the tender award statement, it states that "Within 2 months of the notification of the award decision you may lodge an appeal to the body referred to in VI.4.1.", which is the General Court in Luxembourg. The notification of the award was on January 11, 2018, and it was exactly 2 months and 1 day later when the role of Elsevier as subcontracted was first publicly disclosed. Due to this timing, we were unable to lodge an appeal.

In other words, it was only revealed that Elsevier was the sub-contractor when it was too late to appeal against that choice. A cynic might almost think those behind the move knew people would object, and kept it quiet until it was impossible under the rules to appeal. Open science? Not so much…


Posted on Techdirt - 3 July 2018 @ 11:51am

Copyright Industries Reveal Their Ultimate Goal: An Internet Where Everything Online Requires A License From Them

from the now-would-be-a-good-time-to-stop-it-happening dept

Yesterday, Mike took apart an extraordinarily weak attempt by the UK's music collection society, PRS for Music, to counter what it claimed were "myths" about the deeply-harmful Article 13 of the proposed EU Copyright Directive. On the same day, the Guardian published a letter from the PRS and related organizations entitled "How the EU can make the internet play fair with musicians". It is essentially a condensed version of the "myth-busting" article, and repeats many of the same fallacious arguments. It also contains some extremely telling passages that are worth highlighting for the insights that they provide into the copyright industries' thinking and ultimate goal. Here is the main thrust of the letter:

This is not about censorship of the internet, as the likes of Google and Facebook would have you believe. The primary focus of this legislation is concerned with whether or not the internet functions as a fair and efficient marketplace -- and currently it doesn't.

Once again, there is no attempt to demonstrate that Article 13 is not about censorship, merely an assertion that it isn't, together with the usual claim that it's all being orchestrated by big US Internet companies. The fact that over two-thirds of a million people have signed an online petition calling for the "censorship machine" of Article 13 to be stopped rather punctures that tired argument.

More interesting is the second sentence, which essentially confirms that for the recording industry, the Copyright Directive -- and, indeed, the Internet itself -- is purely about getting as much money as possible. There is no sense that there are other important aspects -- like encouraging ordinary people to express themselves, and to be creative for the sheer joy of creating, or in order to amuse and engage with friends and strangers. The fact that all these non-commercial uses will be adversely affected by Article 13 is irrelevant to the recording industry, which seems to believe that making a profit takes precedence over everything else. However, even if they choose to ignore this side of the Internet, the signatories of the letter are well aware that there is a huge backlash against the proposed law precisely because it is a threat to this kind of everyday online use. Attempting to counter this, they go on:

It is important to recognise that article 13 of the proposed EU copyright directive imposes no obligation on users. The obligations relate only to platforms and rightsholders. Contrary to some sensationalist headlines, internet memes will not be affected, as they are already covered by exceptions to copyright, and nothing in the proposed article will allow rightsholders to block the use of them.

Techdirt pointed out yesterday why the first part of that is intellectually dishonest. The Copyright Directive won't impose obligations on users directly, but on the platforms that people use, which amounts to the same thing in practice. The letter then trots out the claim that Internet memes will not be affected, and specifically says this is because they are already covered by EU exceptions to copyright.

This is simply not true. Article 5 of the EU's 2001 Directive on the "harmonisation of certain aspects of copyright and related rights in the information society" lays down that "Member States may provide for exceptions or limitations", including "for the purpose of caricature, parody or pastiche". However, that provision is optional, not compulsory. In fact, nineteen EU Member States -- including the EU's most populous country, Germany -- have chosen not to provide an exception for parody. Even assuming that memes would be covered by parody exceptions -- by no means guaranteed -- they therefore remain illegal in those nineteen Member States.

Licensing is not an option here. There are many diverse sources for the material used in memes, most of which have no kind of organization that could give a license. The only way for online companies to comply with Article 13 would be to block all memes using any kind of pre-existing material in those 19 countries without a parody exception. Worse: because it will be hard to apply different censorship rules for each EU nation, it is likely that the upload filters will block all such memes in the whole EU, erring on the side of caution. It will then be up to the person whose meme has been censored to appeal against that decision, using an as-yet undefined appeals mechanism. The chilling effect this "guilty until proven innocent" approach will have on memes and much else is clear.
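It is worth spelling out why upload filters must err this way. A filter can match an upload against a database of protected works -- sketched below with an ordinary hash standing in for the robust content fingerprints real systems use, and with all names and data invented -- but matching only establishes that protected material is present, not why, so parody and quotation get blocked along with piracy:

```python
# Sketch of why an upload filter blocks regardless of context. A plain
# cryptographic hash stands in for the robust perceptual fingerprints
# real systems use; all names and data here are invented.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Database of fingerprints supplied by rightsholders.
protected = {fingerprint(b"<frame from a protected film>")}

def allow_upload(chunks):
    # The filter can only ask "is protected material present?" -- never
    # "is this parody, quotation or a meme?" -- so it blocks them all.
    return not any(fingerprint(chunk) in protected for chunk in chunks)

meme = [b"<frame from a protected film>", b"<user's joke caption>"]
print(allow_upload(meme))  # False: blocked, even though it's a meme
```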

The blatant misinformation about whether memes would be blocked is bad enough. But in many ways, the most shocking phrase in the letter is the following:

Actually, article 13 makes it easier for users to create, post and share content online, as it requires platforms to get licences, and rightsholders to ensure these licences cover the acts of all individual users acting in a non-commercial capacity.

There, in black and white, is the end-game the recording industry is seeking: every act by individual users on the major platforms, even a non-commercial one, must be licensed. But the desire to control the online world, and to dictate who may do what there, is not limited to the recording companies: it's what all the copyright industries want. That can be seen in Article 11 of the Copyright Directive -- the so-called "snippet tax" -- which will require licensing for the use by online sites of even small excerpts of news material.

It's also at the root of the core problem with Article 3 of the proposed EU law. This section deals with the important new field of text and data mining (TDM), which takes existing texts and data, and seeks to extract new information by collating them and analyzing them using powerful computers. The current Copyright Directive text allows TDM to be carried out freely by non-profit research organisations, on material to which they have lawful access. However, companies must pay publishers for a new, additional, license to carry out TDM, even on material they have already licensed for traditional uses like reading. That short-sighted double-licensing approach pretty much guarantees that AI startups, which typically require frictionless access to large amounts of training data, won't choose to set up shop in the EU. But the publishing industry never cares about the collateral damage it inflicts, provided it attains its purely selfish goals.
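To make concrete what TDM actually involves: at its simplest, it is counting and correlating. The toy sketch below (with an invented three-line corpus) mines a few "documents" for term pairs that co-occur across them -- the kind of association-finding that real pipelines run over millions of licensed articles:

```python
# A toy text-and-data-mining pass: find term pairs that recur across a
# corpus. Real TDM pipelines run this kind of association-finding over
# millions of licensed articles; the three "documents" here are invented.
from collections import Counter
from itertools import combinations

corpus = [
    "gene expression in tumour cells responds to protein inhibitors",
    "protein folding errors linked to tumour growth",
    "inhibitors of gene regulation alter protein synthesis",
]

pair_counts = Counter()
for doc in corpus:
    terms = set(doc.split())                      # de-duplicate per document
    pair_counts.update(combinations(sorted(terms), 2))

# Pairs appearing in more than one document hint at an association.
for pair, n in pair_counts.most_common():
    if n > 1:
        print(pair, n)
```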

Although it's rather breathtaking to see the copyright world openly admit that its ultimate aim is to turn the Internet into a space where everything is licensed, we shouldn't be surprised. Back in 2013, Techdirt wrote about the first stages of the EU's revision of its copyright law. One preliminary initiative was called "Licences for Europe", and its stated aim was to "explore the potential and limits of innovative licensing and technological solutions in making EU copyright law and practice fit for the digital age". What we are seeing now in the proposed Copyright Directive is simply a fulfillment of these ambitions, long-cherished by the copyright industries. If you aren't happy about that, now would be a good time to tell the EU Parliament to Save Your Internet. It may be your last chance.


Posted on Techdirt - 2 July 2018 @ 7:48pm

Researchers Reveal Details Of Printer Tracking Dots, Develop Free Software To Defeat It

from the whistleblowers-of-the-world,-rejoice,-but-still-be-careful dept

As Techdirt has reported previously in the case of Reality Leigh Winner, most modern color laser printers place tiny yellow tracking dots on every page printed -- what Wikipedia calls "printer steganography". The Electronic Frontier Foundation (EFF) first started warning about this sneaky form of surveillance back in 2005, publishing a list of printers that indicated which models were known to use tracking dots. In 2017, the EFF stopped updating the list, and wrote:

It appears likely that all recent commercial color laser printers print some kind of forensic tracking codes, not necessarily using yellow dots. This is true whether or not those codes are visible to the eye and whether or not the printer models are listed here. This also includes the printers that are listed here as not producing yellow dots.

Despite the EFF's early work in exposing the practice, there has been limited information available about the various tracking systems. Two German researchers at the Technical University of Dresden, Timo Richter and Stephan Escher, have now greatly extended our knowledge of the yellow dot code (via Netzpolitik.org). As the published paper on the work explains, the researchers examined 1,286 printed pages from 141 printers, produced by 18 different manufacturers. They discovered four different encoding systems, including one that was hitherto unknown. The yellow dots formed grids of 48, 64, 69 or 98 points; the grids encoded binary data, and the hidden information was repeated multiple times across the printed page. In all cases the researchers were able to extract the manufacturer's name and the printer's serial number, and for some printers the date and time of printing too.
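The paper describes vendor-specific layouts, so the sketch below is purely illustrative: it shows the general mechanism -- a fixed-size dot grid carrying a binary payload, tiled repeatedly across the page -- with the field sizes, ordering and values all invented rather than taken from any real printer:

```python
# Illustrative only: the general mechanism of a tracking-dot grid. The
# four real encodings are vendor-specific; the 8x8 layout, field sizes
# and values below are invented, not taken from any actual printer.

ROWS, COLS = 8, 8  # a 64-point grid, one of the sizes the paper reports

def encode(payload_bits):
    """Lay 64 payload bits out as an 8x8 grid (1 = print a yellow dot)."""
    assert len(payload_bits) == ROWS * COLS
    return [payload_bits[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

def decode(grid):
    """Flatten a scanned grid back into its payload bits."""
    return [bit for row in grid for bit in row]

# Hypothetical payload: 32-bit serial number plus 32-bit timestamp.
serial, timestamp = 0x00C0FFEE, 0x02A5D10B
bits = [(serial >> i) & 1 for i in range(31, -1, -1)] + \
       [(timestamp >> i) & 1 for i in range(31, -1, -1)]

recovered = decode(encode(bits))
print(hex(int("".join(map(str, recovered[:32])), 2)))  # 0xc0ffee

# On a real page the same grid is tiled many times, which is why even a
# torn or partial page can still identify the printer.
```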

It's obviously good to have all this new information about tracking dots, but arguably even more important is a software tool that the researchers have written, and made freely available. It can be used to obfuscate tracking information that a printer places in one of the four grid patterns, thus ensuring that the hard copy documents cannot easily be used to trace who printed them. Printer manufacturers will doubtless come up with new ways of tracking documents, and may already be using some we don't know about, but this latest work at least makes it harder with existing models.
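One straightforward way to defeat a dot grid is to add extra yellow dots of your own: once every candidate position carries a dot, presence or absence no longer encodes anything. Here's a toy version of that idea (grid geometry and contents invented):

```python
# Toy version of the obfuscation idea, on an invented 8x8 grid. The
# researchers' actual tool works on scans of real pages and knows the
# four grid geometries their paper identifies.

ROWS, COLS = 8, 8

# A hypothetical scanned tracking grid (1 = yellow dot present).
scanned = [[1 if (r + c) % 3 == 0 else 0 for c in range(COLS)]
           for r in range(ROWS)]

def obfuscate(grid):
    """Add dots until every cell is filled: once every position carries
    a dot, presence versus absence no longer encodes anything."""
    return [[1] * len(row) for row in grid]

masked = obfuscate(scanned)
assert all(cell == 1 for row in masked for cell in row)
print("grid saturated -- serial number and timestamp unrecoverable")
```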


Posted on Techdirt - 2 July 2018 @ 3:23am

Over 60 Organizations Want Sanctions For EU Nations' Failure To Repeal 'Invalid' Data Retention Laws

from the attacking-the-data-retention-zombie dept

We recently wrote about a slight setback in the fight against mass surveillance in Europe. But in general, there have been some good decisions in this field by the EU's top court. In 2014, the Court of Justice of the European Union (CJEU) ruled that the region's Data Retention Directive was "invalid", in what is generally known as the "Digital Rights Ireland" case. In 2016, the CJEU took a similarly dim view of the UK's Data Retention and Investigatory Powers Act (DRIPA), in the "Tele-2/Watson" judgment. Under EU law, those decisions had to be implemented by all the EU Member States. But a report by Privacy International published in September last year showed that compliance has been dismal (pdf):

in an alarmingly large number of Member States (roughly 40% of all countries surveyed in this report) the pre-Digital Rights Ireland regime transposing Directive 2006/24 is still in place.

That is, the national laws that implemented the Data Retention Directive had not been repealed, despite the CJEU's ruling that they were invalid, nor had new legislation been passed. The research also showed something interesting about the other countries that had repealed or amended their data retention laws:

What has emerged from our analysis is that as a rule of thumb repeal or amendments to data retention legislation have mainly occurred as a result of challenges in national courts, predominately by human rights NGOs, while Governments and legislators have been largely inactive.

In other words, governments have to be kicked into doing something, otherwise they just ignore the CJEU's ruling. Based on that fact, dozens of NGOs, community networks, academics and activists have decided to increase the pressure on Member States that are slacking:

60 organisations, community networks and academics in 19 EU Member States are sharing their concerns to the European Commission, to demand action, and to stand for the protection of fundamental rights enshrined in Articles 7, 8 and 11 of the Charter of Fundamental Rights of the European Union, as interpreted by the Grand Chamber of the CJEU. We call for the application of sanctions for non-compliant Member States by referring to the CJEU, which should logically strike down all current data retention national frameworks.

As the dedicated web site stopdataretention.eu indicates, there are now over 60 organizations backing the move as signatories to the formal letter of complaint sent to the European Commission (pdf). Given the CJEU's clear ruling against the earlier data retention frameworks, it seems likely that it will also strike down the national implementations of them. Whether the European Commission will refer these cases to the CJEU, and how long that will take if it decides to do so, is less clear. If nothing else, the latest move underlines just how important it is for digital rights organizations to keep up the pressure -- and how hard it is to kill off bad EU laws once they are passed.


Posted on Techdirt - 21 June 2018 @ 11:58am

In A Surprising Decision, European Court Of Human Rights Says Sweden's Mass Surveillance Is Fine

from the but-top-EU-court's-views-may-matter-more dept

In the wake of Snowden's revelations of the scale of mass surveillance around the world, various cases have been brought before the courts in an attempt to stop or at least limit this activity. One involved Sweden's use of bulk interception for gathering foreign intelligence. A public interest law firm filed a complaint at the European Court of Human Rights (ECtHR). It alleged that governmental spying breached its privacy rights under Article 8 of the European Convention on Human Rights (pdf). The complaint said that the system of secret surveillance potentially affected all users of the Internet and mobile phones in Sweden, and pointed out that there was no system for citizens to use if they suspected their communications had been intercepted. The ECtHR has just ruled that "although there were some areas for improvement, overall the Swedish system of bulk interception provided adequate and sufficient guarantees against arbitrariness and the risk of abuse":

In particular, the scope of the signals intelligence measures and the treatment of intercepted data were clearly defined in law, permission for interception had to be by court order after a detailed examination, it was only permitted for communications crossing the Swedish border and not within Sweden itself, it could only be for a maximum of six months, and any renewal required a review. Furthermore, there were several independent bodies, in particular an inspectorate, tasked with the supervision and review of the system. Lastly, the lack of notification of surveillance measures was compensated for by the fact that there were a number of complaint mechanisms available, in particular via the inspectorate, the Parliamentary Ombudsmen and the Chancellor of Justice.

When coming to that conclusion, the Court took into account the State's discretionary powers in protecting national security, especially given the present-day threats of global terrorism and serious cross-border crime.

One expert in this area, TJ McIntyre, expressed on Twitter his disappointment with the judgment:

It might have been too much to expect bulk intercept ruled out in principle, but it is surprising to see a retreat from existing standards on safeguards.

McIntyre played a leading role in one of the key cases brought against mass surveillance, the 2014 Digital Rights Ireland case. It resulted in the EU's top court, the Court of Justice of the European Union (CJEU), ruling the EU's Data Retention Directive was "invalid". As McIntyre notes, the detailed ECtHR analysis mentions that CJEU decision, but not the CJEU's more recent ruling striking down the "Safe Harbor" framework because of mass surveillance by the NSA.

The judgment significantly waters down safeguards previously developed by the ECtHR in relation to notification and possibility of a remedy against unlawful surveillance.

For example, McIntyre points out the ECtHR accepted that it is necessary for the Swedish signals intelligence service to store raw material before it can be manually processed:

Remarkably weak controls on storage and downstream use of intercept material were accepted by the ECtHR -- in particular, it was satisfied with retention of bulk intercept "raw material" for one year!

The latest judgment is something of a setback in terms of limiting mass surveillance, and it goes against the general trend of decisions by the arguably more important CJEU. In 2014 that court effectively ruled that its own decisions should take precedence over those of the ECtHR if the two came into conflict. Such a conflict is now more likely, given the CJEU's hardening position against mass surveillance and this diverging, somewhat softer, judgment from the ECtHR.


Posted on Techdirt - 20 June 2018 @ 7:38pm

China's Latest Censorship Crackdown Target: Videos Of Women Rubbing, Kissing And Licking Binaural Microphones

from the whisper-sweet-nothings-in-my-ear dept

A few weeks back, we wrote about some unpublished censorship guidelines that provided insights into what the Chinese government is trying to stamp out online. One of the more curious activities whose depiction was forbidden was "vulgar use of a microphone controller" -- a phrase at once surprisingly specific and tantalizingly vague. A new post on Abacus News may explain what was meant by it. It reports on yet another censorship move by the Chinese authorities:

the country's anti-pornography office ordered a number of platforms to remove a lot of ASMR content -- because they say some are akin to softcore porn.

Autonomous sensory meridian response (ASMR) is defined by Wikipedia as follows:

a term used for an experience characterized by a static-like or tingling sensation on the skin that typically begins on the scalp and moves down the back of the neck and upper spine. It has been compared with auditory-tactile synesthesia. ASMR signifies the subjective experience of "low-grade euphoria" characterized by "a combination of positive feelings and a distinct static-like tingling sensation on the skin". It is most commonly triggered by specific auditory or visual stimuli, and less commonly by intentional attention control.

The banned videos in China typically show people -- well, nearly always young women -- whispering into special high-quality binaural microphones that aim to capture audio the same way our ears hear sounds. As well as producing extremely realistic results, the microphones allow sounds to move from one ear to the other -- an effect best experienced with headphones -- as if the person speaking were right next to you, moving around very close by.

The women in the videos whisper, rather than speak, because whispering has been found to be the most effective way to produce ASMR's characteristic "tingling" sensation. But ASMR videos also include the sounds of people licking, kissing, and rubbing the microphones in various ways -- which may explain the "vulgar use of a microphone controller" the Chinese authorities want to censor. As a representative example, Abacus News points to a two-hour-long YouTube video by one of China's ASMR stars, Xuanzi Giant 2 Rabbit:

In the video, she speaks softly into an ear-shaped microphone, taps it, covers it in plastic, even rubs a Q-tip inside it, creating a variety of sounds to trigger ASMR.

But she does it while dressed in the revealing outfit of Mai Shiranui from The King of Fighters, and whispers things like "Husband, your highness, do you have any instructions?" In another clip, wearing the same outfit, she strikes a provocative pose on the bed.

ASMR is even referred to as "in-skull orgasm" by many Chinese internet users, highlighting the sexual image of some videos.

It's not hard to see why China's anti-pornography department might want to block this kind of thing. However, as a short video by The New York Times exploring the phenomenon makes clear, mainstream ASMR is rather different from these Chinese variants. The aim is to relax rather than excite, and to tap into what may be a calming physiological response similar to that produced when animals groom each other. In any case, the Chinese attempt to censor ASMR videos seems pretty hopeless:

After hearing about this crackdown, we tried to search by the keyword "ASMR" on some of China's biggest streaming platforms, like Bilibili and Douyu. The searches yielded no results. But the videos still appear if you go directly to the playlists of many ASMR hosts. And since they're not banned in the West, many are available on YouTube.

This probably means we can expect yet another Chinese crackdown on ASMR videos at some point in the future, and yet another failure to eradicate that "vulgar use of a microphone controller".

