Glyn Moody’s Techdirt Profile


About Glyn Moody, Techdirt Insider

Posted on Techdirt - 10 January 2019 @ 6:59pm

PLOS ONE Topic Pages: Peer-Reviewed Articles That Are Also Wikipedia Entries: What's Not To Like?

from the good-for-the-academic-career,-too dept

It is hard to imagine life without Wikipedia. That's especially the case if you have school-age children, who now turn to it by default for information when working on homework. Less well-known is the importance of Wikipedia for scientists, who often use its pages for reliable explanations of basic concepts:

Physicists -- researchers, professors, and students -- use Wikipedia daily. When I need the transition temperature for a Bose-Einstein condensate (prefactor and all), or when I want to learn about the details of an unfamiliar quantum algorithm, Wikipedia is my first stop. When a graduate student sends me research notes that rely on unfamiliar algebraic structures, they reference Wikipedia.
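The "transition temperature (prefactor and all)" mentioned in that quote is exactly the kind of standard result Wikipedia carries. For illustration, the textbook formula for an ideal uniform Bose gas is:

```latex
T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
\qquad \zeta(3/2) \approx 2.612
```

where n is the particle density, m the particle mass, and the Riemann zeta value supplies the prefactor that is easy to forget and handy to look up.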

That's from a blog post on the open access publisher Public Library of Science (PLOS) Web site. It's an announcement of an interesting new initiative to bolster the number of physicists contributing to Wikipedia by writing not just new articles for the online encyclopedia, but peer-reviewed ones. The additional element aims to ensure that the information provided is of the highest quality -- not always the case for Wikipedia articles, whatever their other merits. As the PLOS post explains, the new pages have two aspects:

A peer-reviewed 'article' in [the flagship online publication] PLOS ONE, which is fixed, peer-reviewed openly via the PLOS Wiki and citable, giving information about that particular topic.

That finalized article is then submitted to Wikipedia, which becomes a living version of the document that the community can refine, build on, and keep up to date.

The two-pronged approach of these "Topic Pages" has a number of benefits. It means that Wikipedia gains high-quality, peer-reviewed articles, written by experts; scientists just starting out gain an important new resource with accessible explanations of often highly-technical topics; and the scientists writing Topic Pages can add them to their list of citable publications -- an important consideration for their careers, and an added incentive to produce them.

Other PLOS titles such as PLOS Computational Biology and PLOS Genetics have produced a few Topic Pages previously, but the latest move represents a major extension of the idea. As the blog post notes, PLOS ONE is initially welcoming articles on topics in quantum physics, but over time it plans to expand to all of physics. Let's hope it's an idea that catches on and spreads across all academic disciplines, since everyone gains from the approach -- not least students researching their homework.

Follow me @glynmoody on Twitter and +glynmoody on Google+


Posted on Techdirt - 7 January 2019 @ 7:33pm

China Starts Using Facial Recognition-Enabled 'Smart' Locks In Its Public Housing

from the just-wait-until-they-know-your-citizen-score-too dept

Surveillance using facial recognition is sweeping the world. That's partly for the usual reason that the underlying digital technology continues to become cheaper, more powerful and thus more cost-effective. But it's also because facial recognition can happen unobtrusively, at a distance, without people being aware of its deployment. In any case, many users of modern smartphones have been conditioned to accept it unthinkingly, because it's a quick and easy way to unlock their device. This normalization of facial recognition is potentially bad news for privacy and freedom, as this story in the South China Morning Post indicates:

Beijing is speeding up the adoption of facial recognition-enabled smart locks in its public housing programmes as part of efforts to clamp down on tenancy abuse, such as illegal subletting.

The face-scanning system is expected to cover all of Beijing's public housing projects, involving a total of 120,000 tenants, by the end of June 2019.

Although a desire to stop tenancy abuses sounds reasonable enough, it's important to put the move in a broader context. As Techdirt reported back in 2017, China is creating a system storing the facial images of every Chinese citizen, with the ability to identify any one of them in three seconds. Although the latest use of facial recognition with "smart" locks is being run by the Beijing authorities, such systems don't exist in isolation. Everything is being cross-referenced and linked together to ensure a complete picture is built up of every citizen's activities -- resulting in what is called the "citizen score" or "social credit" of an individual. China said last year that it would start banning people with "bad" citizen scores from using planes and trains for up to a year. Once the "smart" locks are in place, it would be straightforward to make them part of the social credit system and its punishments -- for example by imposing a curfew on those living at an address, or only allowing certain "approved" visitors.

Even without using "smart" locks in this more extreme way, the facial recognition system could record everyone who came visiting, and how long they stayed, and transmit that data to a central monitoring station. The scope for abuse by the authorities is wide. If nothing else, it's a further reminder that if you are not living in China, where you may not have a choice, installing "smart" Internet of things devices voluntarily may not be that smart.


Posted on Techdirt - 19 December 2018 @ 7:59pm

German City Wants Names And Addresses Of Airbnb Hosts; Chinese Province Demands Full Details Of Every Guest Too

from the sharing,-but-not-like-that dept

Online services like Airbnb and Uber like to style themselves as part of the "sharing economy". In truth, they are just new twists on the rental sector, taking advantage of the Internet's widespread availability to broaden participation and ease negotiation. This has led to a tension between the online services and traditional local regulators, something Techdirt noted in the US, back in 2016. Similar battles are still being fought around the world. Here's what is happening in Germany:

The City of Munich asked Airbnb to provide it with all advertisements for rooms in the city which exceeded the permissible maximum lease period [of eight weeks in a calendar year]. Specifically, for the period from January 2017 to July 2018, it wanted Airbnb to disclose the addresses of the apartments offered as well as the names and addresses of the hosts.

Airbnb challenged the request before the administrative court in Munich, which has just ruled that the US company must comply with German laws, even though its European office is based in Ireland. It said that the request was lawful, and did not conflict with the EU's privacy regulations. Finally, it ruled that the City of Munich's threat to impose a €300,000 fine on Airbnb if it did not comply with its information request was also perfectly OK. Presumably Airbnb will appeal against the decision, but if it is confirmed it could encourage other cities in Germany to make similar requests. At least things there aren't as bad as in China. According to a post from TechNode:

The eastern Chinese province of Zhejiang will require online home-sharing platforms, including Airbnb, to report owner and guest information to the province's Public Security Department. The platforms will need to check, register, and report the identity of both parties, including the time the guest plans to arrive and leave the property.

That information provides a very handy way of keeping tabs on people travelling around the province who stay in Airbnb properties and the like. It's yet another example of how the Chinese authorities are forcing digital services to help keep an eye on every aspect of citizens' lives.


Posted on Techdirt - 14 December 2018 @ 3:40am

Top EU Court's Advocate General Says German Link Tax Should Not Be Applied -- But On A Technicality

from the nice,-but-it-won't-stop-article-11 dept

As numerous Techdirt posts have explained, there are two really problematic areas in the EU's proposed copyright directive: Article 13, which will require pretty much every major online site to filter uploaded content, and Article 11, the so-called "link tax", more formally known as an "ancillary copyright". It's yet another example of the copyright ratchet -- the fact that laws governing copyright only ever get stronger, in favor of the industry, never in the other direction, in favor of the public. We know for sure that Article 11 will be a disaster because it's already been tried twice -- in Germany and Spain -- and failed both times.

Despite that fact, the German and Spanish laws remain on the books in their respective countries. VG Media, the German collective management organization handling copyright on behalf of press publishers and others, lost no time in bringing a case against Google. It alleged that the US Internet company had used text excerpts, images and videos from press and media material produced by VG Media's members without paying a fee.

Alongside the issue of whether Google did indeed infringe on the new law, there is another consideration arising out of some fairly obscure EU legislation. If the new German ancillary copyright law is "a technical regulation specifically aimed at a particular information society service", then it would require prior notification to the European Commission in order to be applicable. The German court considering VG Media's case asked the Court of Justice of the European Union, (CJEU), the EU's top court, to decide whether or not the link tax law is indeed a "technical regulation" of that kind. As is usual for CJEU cases, one of the court's Advocates General has offered a preliminary opinion before the main ruling is handed down (pdf). It concludes:

the Court should rule that national provisions such as those at issue, which prohibit only commercial operators of search engines and commercial service providers which edit content, but not other users, including commercial users, from making press products or parts thereof (excluding individual words and very short text excerpts) available to the public constitute rules specifically aimed at information society services. Further, national provisions such as those at issue constitute a technical regulation, subject to the notification obligation under that Directive.

It follows therefore, that in the absence of notification of these national provisions to the [European] Commission, these new German copyright rules cannot be applied by the German courts.

Although that sounds great, there are two caveats. One is that the CJEU is not obliged to follow the Advocate General's reasoning, although it often does. This means it is quite likely that the top EU court will rule that Germany's link tax cannot be applied, and thus that Google has not infringed by displaying snippets produced by VG Media's members. The more important caveat is that even if the CJEU does take that view, it won't affect Article 11, which is EU, not national, legislation, and is not yet finalized. So we are still facing the dire prospect of an EU-wide ancillary copyright that not only won't work, but is also something that many publishers don't even want.


Posted on Techdirt - 11 December 2018 @ 3:31pm

How Bike-Sharing Services And Electric Vehicles Are Sending Personal Data To The Chinese Government

from the why-we-can't-have-nice-things dept

A year ago, Techdirt wrote about the interesting economics of bike-sharing services in China. As the post noted, competition is fierce, and the profit margins slim. The real money may be coming from gathering information about where people riding these bikes go, and what they may be doing, and selling it to companies and government departments. As we warned, this was something that customers in the West might like to bear in mind as these Chinese bike-sharing startups expand abroad. And now, the privacy expert Alexander Hanff has come across exactly this problem with the Berlin service of the world's largest bike-sharing operator, Mobike:

data [from the associated Mobike smartphone app] is sent back to Mobike's servers in China, it is shared with multiple third parties (the privacy policy limits this sharing in no way whatsoever) and they are using what is effectively a social credit system to decrease your "score" if you prop the bike against a lamp post to go and buy a loaf of bread.

Detailed location data of this kind is far from innocuous. It can be mined to provide a disconcertingly complete picture of your habits and life:

through the collection and analysis of this data the Chinese Government now likely have access to your name, address (yes it will track your address based on the location data it collects), where you work, what devices you use, who your friends are (yes it will track the places you regularly stop and if they are residential it is likely they will be friends and family). They also buy data from other sources to find out more information by combining this data with the data they collect directly. They know what your routines are such as when you are likely to be out of the house either at work, shopping or engaging in social activities; and for how long.

As Hanff points out, most of this is likely to be illegal under the EU's GDPR. But Mobike's services are available around the world, including in the US. Although Mobike's practices can be challenged in the EU, elsewhere there may be little that can be done.

And if you think the surveillance made possible by bike sharing is bad, wait till you see what can be done with larger vehicles. As many people have noted, today's complex devices no longer have computers built in: they are, essentially, computers with specialized capabilities. For example, electric cars are computers with an engine and wheels. That means they are constantly producing large quantities of highly-detailed data about every aspect of the vehicle's activity. As such, the data from electric cars is a powerful tool for surveillance even deeper than that offered by bike sharing. According to a recent article from Associated Press, it is an opportunity that the authorities have been quick to seize in China:

More than 200 manufacturers, including Tesla, Volkswagen, BMW, Daimler, Ford, General Motors, Nissan, Mitsubishi and U.S.-listed electric vehicle start-up NIO, transmit position information and dozens of other data points to [Chinese] government-backed monitoring centers, The Associated Press has found. Generally, it happens without car owners' knowledge.

What both these stories reveal is how the addition of digital capabilities to everyday objects -- either indirectly through smartphone apps, as with Mobike, or directly in the case of computerized electric vehicles -- brings with it the risk of pervasive monitoring by companies and the authorities. It's part of a much larger problem of how to enjoy the benefits of amazing technology without paying an unacceptably high price in terms of sacrificing privacy.


Posted on Techdirt - 5 December 2018 @ 7:36pm

Some EU Nations Still Haven't Implemented The 2013 Marrakesh Treaty For The Blind

from the copyright-trumps-compassion dept

The annals of copyright are littered with acts of extraordinary stupidity and selfishness on the part of the publishers, recording industry and film studios. But few can match the refusal by the publishing industry to make it easier for the blind to gain access to reading material that would otherwise be blocked by copyright laws. Indeed, the fact that it took so long for what came to be known as the Marrakesh Treaty to be adopted is a shameful testimony to the publishing industry's belief that copyright maximalism is more important than the rights of the visually impaired. As James Love, Director of Knowledge Ecology International (KEI), wrote in 2013, when the treaty was finally adopted:

It is difficult to comprehend why this treaty generated so much opposition from publishers and patent holders, and why it took five years to achieve this result. As we celebrate and savor this moment, we should thank all of those who resisted the constant calls to lower expectations and accept an outcome far less important than what was achieved today.

Even once the treaty was agreed, the publishing industry continued to fight against making it easier for the visually impaired to enjoy better access to books. In 2016, Techdirt reported that the Association of American Publishers was still lobbying to water down the US ratification package. Fortunately, as an international treaty, the Marrakesh Treaty came into force around the world anyway, despite the US foot-dragging.

Thanks to heavy lobbying by the region's publishers, the EU has been just as bad. It only formally ratified the Marrakesh Treaty in October of this year. As an article on the IPKat blog explains, the EU has the authority to sign and ratify treaties on behalf of the EU Member States, but it then requires the treaty to be implemented in national law:

In this case, the EU asked that national legislators reform their domestic copyright law by transposing the 2017/1564 Directive of 13 September 2017. The Directive requires that all necessary national measures be implemented by 12 October 2018. Not all member states complied by this deadline, whereby the EU Commission introduced infringement procedures against them for non-compliance. The list of the non-compliant countries is as follows:

Belgium, Cyprus, Czech Republic, Germany, Estonia, Greece, Finland, France, Italy, Lithuania, Luxembourg, Latvia, Poland, Portugal, Romania, Slovenia, UK

The IPKat post points out that some of the countries listed there, such as the UK and France, have in fact introduced exceptions to copyright to enable the making of accessible copies for the visually impaired. It's still a bit of a mystery why they are on the list:

At the moment, the Commission has not published details regarding the claimed non-compliance by the countries listed. We cannot assume that the non-compliance proceedings were launched because the countries failed to introduce the exceptions in full, because countries can also be sanctioned if the scope of the exception implemented is too broad, so much so that it is disproportionately harmful to the interest of rightsholders. So we will have to wait and see what part of the implementation was deemed not up to scratch by the Commission.

As that indicates, it's possible that some of the countries mentioned are being criticized for non-compliance because they were too generous to the visually impaired. If it turns out that industry lobbyists are behind this, it would be yet another astonishing demonstration of selfishness from publishers whose behavior in connection with the Marrakesh Treaty has been nothing short of disgusting.


Posted on Techdirt - 27 November 2018 @ 2:15am

New GDPR Ruling In France Could Dramatically Re-shape Online Advertising

from the not-going-with-the-consent-flow dept

The EU's General Data Protection Regulation only came into force in May of this year. Since then, privacy regulators across the EU have been trying to work out what it means in practice. As Techdirt has reported, some of the judgments that have emerged were pretty bad. A new GDPR ruling from France has just appeared that looks likely to have a major impact on how online advertising works in the EU, and therefore probably further afield, given the global nature of the Internet.

The original decision in French is rather dense, although it does include the use of the delightful word "mobinaute", which is apparently the French term for someone accessing the Internet on a mobile device. If you'd like to read something in English, TechCrunch has a long and clear explanation. There's also a good, shorter take from Johnny Ryan of the browser company Brave, which is particularly interesting for reasons I'll explain below.

First, the facts of the case. The small French company Vectaury gathers personal information, including geolocation, about millions of users of thousands of mobile apps on behalf of the companies that created them. It analyzes the data to create user profiles that companies might want to advertise to:

We continuously analyse, classify and enrich hundreds of thousands of profiles in order to offer you big data predictive models and actionable audience segments at any time. Our geo-profiling algorithm relies on a framework of more than 80 million points of interest around the world, grouped into 450 categories.

Vectaury sells access to those profiles using a standard industry technique known as "real-time bidding" (RTB). This really does happen in real-time: advertisers can bid to display their ads on Web pages as they are loading on a user's mobile. The key benefit is that it allows ads to be tightly targeted to audiences that are more likely to respond to them. However, to do this, personal information has to be sent to many potential advertisers so that they can submit their (automated) bids.
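The auction mechanics can be illustrated with a toy model. This is not the real OpenRTB protocol or any particular exchange's implementation, just a minimal sketch of a second-price auction, the mechanism commonly used in RTB; the advertiser names and bid amounts are invented:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount_cents: int  # bid for this single ad impression

def run_auction(bids):
    """Second-price auction: the highest bidder wins the impression
    but pays only the second-highest bid (or its own bid if alone)."""
    if not bids:
        return None  # no demand for this impression
    ranked = sorted(bids, key=lambda b: b.amount_cents, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount_cents if len(ranked) > 1 else winner.amount_cents
    return winner.advertiser, price

# Bids arrive while the page is still loading on the user's device
bids = [Bid("ad-a", 120), Bid("ad-b", 250), Bid("ad-c", 90)]
print(run_auction(bids))  # prints ('ad-b', 120)
```

The privacy issue arises before this step: to decide how much to bid, every participating advertiser first receives the user's profile data, whether or not it wins.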

That's a problem under the GDPR, since users are supposed to give their consent before personal data is transmitted to companies in this way. To get around that problem, the industry has developed what are known as consent management platforms (CMP). In theory, these allow users to pick and choose exactly what kind of information is sent to which advertisers. But in practice they usually amount to a top-level button marked "I accept", which everyone clicks on because it's too much effort going through the subsidiary pages that lie underneath. The top-level acceptance grants permission to all the bundled advertisers, hidden in lower levels of the CMP, to use personal data as they wish.

When the French data protection authority CNIL carried out an on-site inspection of Vectaury, it found the company was holding the personal data of 67.6 million people. However, it did not accept that Vectaury had been given meaningful permission to use that data through the use of the bundled permission system. In a trail-blazing decision, CNIL said that Vectaury couldn't simply point to contracts that required its partners to ask users for permission to share personal data: Vectaury had to be able to show that it had checked it really did have permission from everyone whose data it had acquired.

That ruling is not just a big problem for Vectaury -- it's hard to see how it could possibly confirm consent for the 67.6 million people whose data it holds. It's also a problem for the online advertising industry in Europe, which uses a framework for GDPR "consent flow" that has been created by industry trade association and standards body, IAB Europe. Vectaury's system is essentially the same as IAB Europe's, so it would seem that the latest ruling by the French data protection authority also calls into question the industry standard technique for obtaining consent that is vital for the RTB process. Without that "consent flow", it is not possible to share personal data so that automated real-time bids can be submitted.

If that interpretation is correct, it would mean that RTB as currently practiced in the EU will no longer be allowed. In fact, the RTB system was already under threat because of a GDPR complaint filed a couple of months ago with the Irish Data Protection Commissioner and the UK Information Commissioner, which notes:

Every time a person visits a website and is shown a "behavioural" ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers' bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a "bid request" in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The three complainants are Jim Killock, Executive Director of the Open Rights Group, Michael Veale of University College London, and Johnny Ryan of Brave, mentioned above. His blog post about the new French GDPR ruling concludes:

This is the latest in a series of decisions published by CNIL against adtech companies. ... What marks this decision apart are the broad implications for RTB, and for the IAB consent framework.

It could also be a problem for Google, which relies on a similar approach for its own real-time ad bidding system. The potential implications of the CNIL ruling across the EU are a further indication of the massive long-term impact the GDPR will have on the Internet, perhaps in multiple and unexpected ways.


Posted on Techdirt - 8 November 2018 @ 7:01pm

Leading Open Access Supporters Ask EU To Investigate Elsevier's Alleged 'Anti-Competitive Practices'

from the are-you-listening,-Commissioner-Vestager? dept

Back in the summer, we wrote about the paleontologist Jon Tennant, who had submitted a formal complaint to the European Commission regarding the relationship between the publishing giant Elsevier and the EU's Open Science Monitor. Now Tennant has joined with another leading supporter of open access, Björn Brembs, in an even more direct attack on the company and its practices, reported here by the site Research Europe:

Two academics have demanded the European Commission investigate the academic publisher Elsevier for what they say is a breach of EU competition rules that is harming research.

Palaeontologist Jon Tennant and neuroscientist Björn Brembs, who are both advocates for making research results openly available, say the academic publishing market "is clearly not functioning well" in an official complaint about Elsevier's parent company RELX Group.

The pair claim RELX and Elsevier are in breach of EU rules both due to general problems with the academic publishing market and "abuse of a dominant position within this market".

The 22-page complaint spells out what the problem is. It makes the following important point about the unusual economics of the academic publishing market:

For research to progress, access to all available relevant sources is required, which means that there is no ability to transfer or substitute products, and there is little to no inter-brand competition from the viewpoint of consumers. If a research team requires access to knowledge contained within a journal, they must have access to that specific journal, and cannot substitute it for a similar one published by a competitor. Indeed, the entire corpus of research knowledge is built on this vital and fundamental process of building on previously published works, which drives up demand for all relevant published content. As such, publishers do not realistically compete with each other, as all their products are fundamentally unique (i.e., each publisher has a 100% market share for each journal or article), and unequivocally in high demand due to the way scholarly research works. The result of this is that consumers (i.e., research institutions and libraries) have little power to make cost-benefit evaluations to decide whether or not to purchase, and have no choice but to pay whatever price the publisher asks with little transparency over costs, which we believe is a primary factor that has contributed to more than a 300% rise in journal prices above inflation since 1986. Thus, we believe that a functional and competitive market is not currently able to form due to the practices of dominant players, like Elsevier, in this sector.

Most of the complaint is a detailed analysis of why academic publishing has become so dysfunctional, and is well worth reading by anyone interested in understanding the background to open access and its struggles.

As to what the complaint might realistically achieve, Tennant told Techdirt that there are three main possibilities. The European Commission can simply ignore it. It can respond and say that it doesn't think there is a case to answer, in which case Tennant says he will push the Commission to explain why. Finally, in the most optimistic outcome, the EU could initiate a formal investigation of Elsevier and the wider academic publishing market. Although that might seem too much to hope for, it's worth noting that the EU Competition Authority is ultimately under the Competition Commissioner, Margrethe Vestager. She has been very energetic in her pursuit of Internet giants like Google. It could certainly be a hugely significant moment for open access if she started to take an interest in Elsevier in the same way.


Posted on Techdirt - 6 November 2018 @ 3:37pm

Big Boost For Open Access As Wellcome And Bill & Melinda Gates Foundation Back EU's 'Plan S'

from the no-embargoes,-and-cc-by dept

Back in September, Techdirt wrote about the oddly-named 'Plan S', which was nonetheless an important step forward for open access in Europe. As we remarked then, the hope was that others would support the initiative, and that has now happened, with two of the biggest names in the science funding world signing up to the approach:

To ensure that research findings are shared widely and are made freely available at the time of publication, Wellcome and the Bill & Melinda Gates Foundation have today (Monday) joined cOAlition S and endorsed the principles of Plan S.

An article in Nature on the move notes that Wellcome gave out $1.4 billion in grants in 2016–17, while the Gates Foundation spent $4.7 billion in 2017, although not all of that was on science. So the backing of these two organizations is a massive vote of confidence in Plan S and its requirements. Wellcome has also unveiled its new, more stringent open access policy, which includes a number of important changes, including the following:

All Wellcome-funded research articles must be made freely available through PubMed Central (PMC) and Europe PMC at the time of publication. We previously allowed a six-month embargo period. This change will make sure that the peer-reviewed version is freely available to everyone at the time of publication.

This move finally rectifies one of the biggest blunders by academic funding organizations: allowing publishers to impose an embargo -- typically six or even 12 months -- before publicly-funded research work was freely available as open access. There was absolutely no reason to allow this. After all, the funding organizations could simply have said to publishers: "if you want to publish work we paid for, you must follow our rules". But in a moment of weakness, they allowed themselves to be bamboozled by publishers, granting an unnecessary monopoly on published papers, and slowing down the dissemination of research.

All articles must be published under a Creative Commons attribution licence (CC-BY). We previously only required this licence when an article processing charge (APC) was paid. This change will make sure that others -- including commercial entities and AI/text-data mining services -- can reuse our funded research to discover new knowledge.

Although a more subtle change, it's an important one. It establishes unequivocally that anyone, including companies, may build on research financed by Wellcome. In particular, it explicitly allows anyone to carry out text and data mining (TDM), and to use papers and their data for training machine-learning systems. That's particularly important in the light of the EU's stupid decision to prevent companies in Europe from carrying out either TDM or training machine-learning systems on material to which they do not have legal access unless they pay an additional licensing fee to publishers. This pretty much guarantees that the EU will become a backwater for AI compared to the US and China, where no such obstacles are placed in the way of companies.

Like Plan S, Wellcome's open access policy no longer supports double-dipping "hybrid journals", which charge researchers who want to release their work as open access, but also require libraries to take out full-price subscriptions for journals that include these freely-available articles. An innovative aspect of the new policy is that it will require some research to be published as preprints in advance of formal publication in journals:

Where there is a significant public health benefit to preprints being shared widely and rapidly, such as a disease outbreak, these preprints must be published:

before peer review

on an approved platform that supports immediate publication of the complete manuscript under a CC-BY licence.

That's eminently sensible -- in the event of public health emergencies, you want the latest research to be out there in the hands of health workers as soon as possible. It's also a nice boost for preprints, which are rapidly emerging as an important way of sharing knowledge.

The Gates Foundation has said that it will update its open access policy, which in any case is already broadly in line with the principles of Plan S, over the next 12 months. Even without that revision, the latest announcement by these two funding heavyweights is highly significant, and will make it hard for similar organizations around the world to resist aligning their open access policies with Plan S. We can therefore probably expect more to join cOAlition S and help bring the world closer to the long-cherished dream of full open access to the world's research, with no embargoes, and under a permissive CC-BY license.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 25 October 2018 @ 10:49am

EU Copyright Directive Update: Fresh (But Slim) Hope Of Stopping Link Taxes And Upload Filters

from the and-ways-to-make-them-less-awful-if-we-can't dept

The awful EU Copyright Directive is not done and dusted. As Techdirt reported last month, the European Parliament may have failed to do its duty and protect the EU Internet for the region's citizens, but the proposed law has not yet passed. Instead, it has entered the so-called "trilogue" discussions. Pirate Party MEP Julia Reda explains:

In this series of closed-door meetings, the European Parliament and the Council (representing the member state governments) hammer out a final text acceptable to both institutions. It's the last chance to make changes before the Directive gets adopted. Meetings are currently scheduled until Christmas, although whether the process will be concluded by then is up in the air.

The General Court of the European Union recently ruled that the European Parliament can no longer deny the public access to trilogue documents (pdf). As a result, Reda has promised to provide updates on what is happening in those hitherto secretive meetings. She just published her report on the second trilogue negotiation, and there's good and bad news. The good news is that a change of government in Italy has led to that country shifting its stance: it is now against the worst parts of the EU Copyright Directive. An EFF post explains the implications of that important development:

There may now be sufficiently large opposition to the articles [11 and 13] to create a blocking minority if they all vote together, but the new bloc has not settled on a united answer. Other countries are suspicious of Italy's no-compromise approach. They want to add extra safeguards to the two articles, not kill them entirely. That includes some of the countries that were originally opposed in May, including Germany.

In other words, there is now at least a slim chance that Article 11 and Article 13 could be dropped entirely, or at least improved in terms of the safeguards they contain. Against that, there is some unexpected bad news, explained here by Reda:

Council, on the other hand, has now completely out of the blue proposed a new Article 17a that says that existing exceptions for education, text and data mining or preservation can only be maintained if they don't contradict the rules of the newly introduced mandatory exceptions. In the case of teaching, this would mean that national teaching exceptions that don't require limiting access to the educational material by using a "secure electronic environment" would no longer apply!

This is outrageous given that the whole stated purpose of the new mandatory exceptions was to make research and education easier, not to erect new barriers. If as a consequence of the new mandatory teaching exception, teaching activities in some countries that have been legal all along would no longer be legal, then the reform would have spectacularly failed at even its most modest goal of facilitating research and education.

Since this is a completely new proposal, it's not clear how the European Parliament will respond. As Reda writes, the European Parliament ought to insist that any copyright exception that is legal under existing EU copyright law remains legal under the new Directive, once passed. Otherwise the exercise of "making copyright fit for the digital age" -- the supposed justification for the new law -- will have been even more of a fiasco than it currently is.

There are two other pieces of good news. Yet another proposed extension of EU copyright, this time to create a special new form of copyright for sporting events, seems to have zero support among the EU's Member States, and thus is likely to be dropped. Reda also notes that Belgium, Finland, Germany, the Netherlands, Italy, Estonia and the Czech Republic are in favor of expanding the scope of the proposed copyright exception for text and data mining to include businesses. That's something that the AI industry in Europe desperately needs if it is to keep up with the US and China in using massive text and data stores to train AI systems.

The important message to take away here is that the EU Copyright Directive is certainly a potential disaster for the Internet in Europe, but it's not over yet. It's still worth trying to make the politicians understand how harmful it would be in its present form, and to improve the law before it's too late. That's precisely what the EFF is attempting to do with a note that it has sent to every member of the EU bodies negotiating the final text in the trilogue meetings. It has two suggestions, both addressing serious flaws in the current versions. One concerns the fact that there are zero penalties for making false copyright claims that could result in material being filtered by Article 13:

Based on EFF's decades-long experience with notice-and-takedown regimes in the United States, and private copyright filters such as YouTube's ContentID, we know that the low evidentiary standards required for copyright complaints, coupled with the lack of consequences for false copyright claims, are a form of moral hazard that results in illegitimate acts of censorship from both knowing and inadvertent false copyright claims.

The EFF goes on to make several sensible proposals for ways to minimize this problem. The other suggestion concerns Article 11, the so-called "link tax". Here the issue is that the proposed measure is very poorly worded:

The existing Article 11 language does not define when quotation amounts to a use that must be licensed, though proponents have argued that quoting more than a single word requires a license.

Again, the EFF offers concrete suggestions for at least making the law less ambiguous and slightly less harmful. However, as the EFF rightly notes, tinkering with the text of these sections is not the right solution:

In closing, we would like to reiterate that the flaws enumerated above are merely those elements of Articles 11 and 13 that are incoherent or not fit for purpose. At root, however, Articles 11 and 13 are bad ideas that have no place in the Directive. Instead of effecting some piecemeal fixes to the most glaring problems in these Articles, the Trilogue should take a simpler approach, and cut them from the Directive altogether.

Although that seems a long shot, there is still hope, not least because Italy's reversal of position on parts of the proposed directive makes the arithmetic of the voting considerably less certain than it seemed before. In particular, it's still worth contacting the ministries responsible in EU Member States for copyright matters to explain why Articles 11 and 13 need to go if the Internet in the EU is to thrive.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 19 October 2018 @ 3:35pm

Whistleblowing About Swiss Banks' Bad Behavior Just Became Safer

from the terms-and-conditions-apply dept

Whistleblowers play a vital role in releasing information the powerful would rather keep secret. But the former pay a high price for their bravery, as the experiences of recent whistleblowers such as Chelsea Manning and Edward Snowden make plain. Another whistleblower whose life has become very difficult after leaking is Rudolf Elmer. He has a Web site about his actions and his subsequent problems, but it's not the easiest to navigate. Here's Wikipedia's summary of who he is and what he did:

In 2008, Elmer illegally disclosed confidential bank documents to WikiLeaks detailing the activities of [the Swiss multinational private bank] Julius Bär in the Cayman Islands and its role in alleged tax evasion. In January 2011, he was convicted in Switzerland of breaching secrecy laws and other offenses. He was rearrested immediately thereafter for having again distributed illegally obtained data to WikiLeaks. Julius Bär as well as select Swiss and German newspapers allege that Elmer has doctored evidence to suggest the bank engaged in tax evasion.

According to a new article about him in the Economist, Elmer has undergone no less than 48 prosecutorial interrogations, spent six months in solitary confinement and faced 70 court rulings. The good news is that he has finally won an important court case at Switzerland's Supreme Court. The court ruled that since Elmer was employed by the Cayman Islands affiliate of the Zurich-based Julius Bär bank, he was not bound by Switzerland's strict secrecy laws when he passed information to WikiLeaks. Here's why that is a big deal, and not just for Elmer:

The ruling matters because Swiss banks are among the world's most international. They employ thousands of private bankers offshore, and many more in outsourcing operations in countries like India and Poland. Many foreign employees are involved in creating structures comprising overseas companies and trusts linked to a Swiss bank account. Thanks to the ruling, as long as their employment contract is local they can now leak information on suspected tax evasion or other shenanigans without fear of falling under Switzerland's draconian secrecy law, which imposes jail terms of up to five years on whistleblowers.

Sadly, Elmer's problems aren't over. According to the Economist article, he was found guilty of forging a letter and making a threat, and has been ordered to pay SFr320,000 ($325,000) towards the costs of the case. He maintains this was imposed on him as "revenge" for prevailing in the main part of his case. Certainly, in the light of the Supreme Court's ruling in favor of whistleblowing, he is unlikely to have won any new friends in the world of Swiss banking.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 12 October 2018 @ 3:43am

Politicians Start To Push For Autonomous Vehicle Data To Be Protected By Copyright Or Database Rights

from the battle-for-the-internet-of-things dept

Autonomous vehicles are much in the news these days, and seem poised to enter the mainstream soon. One of their key aspects is that they are digital systems -- essentially, computers with wheels. As such they gather and generate huge amounts of data as they move around and interact with their surroundings. This kind of data is increasingly valuable, so an important question poses itself: what should happen to all that information from autonomous vehicles?

The issue came up recently in a meeting of the European Parliament's legal affairs committee, which was drawing up a document to summarize its views on autonomous driving in the EU (pdf). It's an area now being explored by the EU with a view to bringing in relevant regulations where they are needed. Topics under consideration include civil liability, data protection, and who gets access to the data produced by autonomous vehicles. On that topic, the Swedish Greens MEP Max Andersson suggested the following amendment (pdf) to the committee's proposed text:

Notes that data generated during autonomous transport are automatically generated and are by nature not creative, thus making copyright protection or the right on databases inapplicable.

Pretty inoffensive stuff, you might think. But not for the center-right EPP politicians present. They demanded a vote on Andersson's amendment, and then proceeded to block its inclusion in the committee's final report.

This is a classic example of the copyright ratchet in action: copyright only ever gets longer, stronger and broader. Here a signal is being sent that copyright or a database right should be extended to apply not just to works created by people, but also to the data streams generated by autonomous vehicles. Given their political leanings, it is highly unlikely that the EPP politicians believe that data belongs to the owner of the vehicle. They presumably think that the manufacturer retains rights to it, even after the vehicle has left the factory and been sold.

That's bad enough, but there's a bigger threat here. Autonomous vehicles are just part of a much larger wave of connected digital devices that generate huge quantities of data, what is generally called the Internet of Things. The next major front in the copyright wars -- the next upward move of the copyright ratchet -- will be over what happens to all that data, and who, if anyone, owns it.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 10 October 2018 @ 12:00pm

As Everyone Knows, In The Age Of The Internet, Privacy Is Dead -- Which Is Awkward If You Are A Russian Spy

from the not-just-here-for-the-medieval-church-architecture dept

Judging by the headlines, there are Russian spies everywhere these days. Of course, Russia routinely denies everything, but its attempts at deflection are growing a little feeble. For example, the UK government identified two men it claimed were responsible for the novichok attack on the Skripals in Salisbury. It said they were agents from GRU, Russia's largest military intelligence agency, and one of several groups authorized to spy for the Russian government. The two men appeared later on Russian television, where they denied they were spies, and insisted they were just lovers of English medieval architecture who were in Salisbury to admire the cathedral's 123-meter spire.

More recently, Dutch military intelligence claimed that four officers from GRU had flown into the Netherlands in order to carry out an online attack on the headquarters of the international chemical weapons watchdog that was investigating the Salisbury poisoning. In this case, the Russian government didn't even bother insisting that the men were actually in town to look at Amsterdam's canals. That was probably wise, since a variety of information available online seems to confirm their links to GRU, as the Guardian explained:

One of the suspected agents, tipped as a "human intelligence source" by Dutch investigators, had registered five vehicles at a north-western Moscow address better known as the Aquarium, the GRU finishing school for military attaches and elite spies. According to online listings, which are not official but are publicly available to anyone on Google, he drove a Honda Civic, then moved on to an Alfa Romeo. In case the address did not tip investigators off, he also listed the base number of the Military-Diplomatic Academy.

One of the men, Aleksei Morenets, an alleged hacker, appeared to have set up a dating profile.

Another played for an amateur Moscow football team "known as the security services team", a current player told the Moscow Times. "Almost everyone works for an intelligence agency." The team rosters are publicly available.

The "open source intelligence" group Bellingcat came up with even more astonishing details when they started digging online. Bellingcat found one of the four Russians named by the Dutch authorities in Russia's vehicle ownership database. The car was registered to Komsomolsky Prospekt 20, which happens to be the address of military unit 26165, described by Dutch and US law enforcement agencies as GRU's digital warfare department. By searching the database for other vehicles registered at the same address, Bellingcat came up with a list of 305 individuals linked with the GRU division. The database entries included their full names and passport numbers, as well as mobile phone numbers in most cases. Bellingcat points out that if these are indeed GRU operatives, this discovery would be one of the largest breaches of personal data of an intelligence agency in recent years.

An interesting thread on Twitter by Alexander Gabuev, Senior Fellow and Chair of Russia in Asia-Pacific Program at Carnegie Moscow Center, explains why Bellingcat was able to find such sensitive information online. He says:

the Russian Traffic Authority is notoriously corrupt even by Russian standards, it's inexhaustible source of dark Russian humor. No surprise its database is very easy to buy in the black market since 1990s

In the 1990s, black market information was mostly of interest to specialists, hard to find, and had limited circulation. Today, even sensitive data almost inevitably ends up posted online somewhere, because everything digital has a tendency to end up online once it's available. It's then only a matter of time before groups like Bellingcat find it as they follow up their leads. Combine that with a wealth of information contained in social media posts or on Web sites, and spies have a problem keeping in the shadows. Techdirt has written many stories about how the privacy of ordinary people has been compromised by leaks of personal information that is later made available online. There's no doubt that can be embarrassing and inconvenient for those affected. But if it's any consolation, it's even worse when you are a Russian spy.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 5 October 2018 @ 1:42pm

Broad Alliance Calls For Australian Government To Listen To Experts' Warnings About Flaws In New Compelled Access Legislation

from the nah,-we're-ramming-it-through-anyway dept

The battle against encryption is being waged around the world by numerous governments, no matter how often experts explain, often quite slowly, that it's a really bad idea. As Techdirt reported back in August, Australia is mounting its own attack against privacy and security in the form of a compelled access law. The pushback there has just taken an interesting turn with the formation of the Alliance for a Safe and Secure Internet:

The Alliance is campaigning for the Government to slow down, stop ignoring the concerns of technology experts, and listen to its citizens when they raise legitimate concerns. For a piece of legislation that could have such far ranging impacts, a proper and transparent dialogue is needed, and care taken to ensure it does not have the unintended consequence of making all Australians less safe.

The Alliance for a Safe and Secure Internet represents an unusually wide range of interests. It includes Amnesty International and the well-known local group Digital Rights Watch, the Communications Alliance, the main industry body for Australian telecoms, and DIGI, which counts Facebook, Google, Twitter and Yahoo among its members. One disturbing development since we last wrote about the proposed law is the following:

The draft Bill was made public in mid-August and, following a three week consultation process, a large number of submissions from concerned citizens and organisations were received by the Department of Home Affairs. Only a week after the consultation closed the Bill was rushed into Parliament with only very minor amendments, meaning that almost all the expert recommendations for changes to the Bill were ignored by Government.

The Bill has now been referred to the Parliamentary Joint Committee on Intelligence and Security (PJCIS), where again processes have been truncated, setting the stage for it to be passed into law within months.

That's a clear indication that the Australian government intends to ram this law through the legislative process as quickly as possible, and that it has little intention of taking any notice of what the experts say on the matter -- yet again.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 5 October 2018 @ 3:06am

Most Chinese Patents Are Being Discarded By Their Owners Because They Are Not Worth The Maintenance Fees To Keep Them

from the more-patents-do-not-mean-more-innovation dept

Techdirt has been writing about China and patents for years. One recurrent theme is that the West is foolish to encourage China to embrace patents more enthusiastically, since the inevitable result will be more Chinese companies suing Western ones for alleged infringement. The second theme -- related to the first -- is that the Chinese government is unwise to use patents as proxies for innovation by offering incentives to its researchers and companies to file for patents. That leads people to file as much as possible, regardless of whether the ideas are original enough to warrant patent protection. One of the surest guides to the value of a patent is whether those who filed for it are willing to pay maintenance fees. Clearly, if patents were really as valuable as many claim they are, there would be no question about paying. An article in Bloomberg reveals how that is working out in China:

Despite huge numbers of filings, most patents are discarded by their fifth year as licensees balk at paying escalating fees. When it comes to design, more than nine out of every ten lapse -- almost the mirror opposite of the U.S.

The high attrition rate is a symptom of the way China has pushed universities, companies and backyard inventors to transform the country into a self-sufficient powerhouse. Subsidies and other incentives are geared toward making patent filings, rather than making sure those claims are useful. So the volume doesn't translate into quality, with the country still dependent on others for innovative ideas, such as modern smartphones.

The discard rate varies according to the patent type. China issues patents for three different categories: invention, utility model and design. Invention patents are "classical" patents, and require a notable breakthrough of some kind, at least in theory. A design patent could be just the shape of a product, while a utility model would include something as minor as sliding to unlock a smartphone. According to the Bloomberg article, 91% of design patents granted in 2013 had been discarded because people stopped paying to maintain them, while 61% of utility patents lapsed within five years. Even the relatively rigorous invention patents saw 37% dumped, compared to around 15% of US patents that were not maintained after five years.

This latest news usefully confirms that the simplistic equation "more patents = more innovation" is false, as Techdirt has been warning for years. It also suggests that China still has some way to go before it can match the West in real inventiveness, rather than the sham kind based purely on meaningless patent statistics.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 3 October 2018 @ 7:28pm

African Countries Shooting Themselves In The Digital Foot By Imposing Taxes And Levies On Internet Use

from the how-not-to-do-it dept

Techdirt has written a number of stories recently about unfortunate developments taking place in the African digital world. The Alliance for Affordable Internet (A4AI) site has usefully pulled together what's been happening across the continent -- and it doesn't look good:

A4AI's recent mobile broadband pricing update shows that Africans face the highest cost to connect to the internet -- just 1GB of mobile data costs the average user in Africa nearly 9% of their monthly income, while their counterparts in the Asia-Pacific region pay one-fifth of that price (around 1.5% of monthly income). Despite this already high cost to connect, we're seeing a worrying trend of governments across Africa imposing a variety of taxes on some of the most popular internet applications and services.

The article goes on to list the following examples:

Uganda imposes a daily fee of UGX 200 ($0.05) to access social media sites and many common Internet-based messaging and voice applications, as well as a tax on mobile money transactions.

Zambia has announced it will levy a 30 ngwee ($0.03) daily tax on social network use.

Tanzania requires bloggers to pay a government license fee roughly equivalent to the average annual income for the country.

Kenya aims to impose additional taxation on the Internet, with proposed levies on telecommunications and on mobile money transfers.

Benin imposed a 5 CFCA ($0.01) per megabyte fee to access social media sites, messaging, and Voice-over-IP applications, causing a 250% increase in the price for 1GB of mobile data.
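A quick back-of-the-envelope calculation shows why a flat per-megabyte levy like the last one above (Benin's, since rescinded) is so dramatic. Here is a minimal sketch: the $0.01/MB rate comes from the article, but the $4 baseline price for a 1GB bundle is an assumed figure chosen purely for illustration.

```python
# Back-of-the-envelope effect of a flat per-megabyte levy on a 1GB
# data bundle. The $0.01/MB rate is from the article; the baseline
# bundle price below is an assumed figure, not a reported one.

MB_PER_GB = 1000  # operators typically bill decimal megabytes

def price_with_levy(base_price_usd, levy_per_mb_usd, gb=1.0):
    """Return (new_price, percent_increase) once the levy is added."""
    levy = levy_per_mb_usd * MB_PER_GB * gb
    return base_price_usd + levy, 100 * levy / base_price_usd

# An assumed $4.00 bundle for 1GB, with a $0.01/MB levy:
new_price, pct = price_with_levy(4.00, 0.01)
print(f"${new_price:.2f}, +{pct:.0f}%")  # $14.00, +250%
```

On these assumed numbers the levy alone adds $10 to the bundle, which is consistent with the 250% increase the article reports; the exact percentage obviously depends on the real pre-levy price.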

The article explains that the last of these was rescinded within days because of public pressure, while Kenya's tax is currently on hold thanks to a court order. Nonetheless, there is a clear tendency among some African governments to see the Internet as a handy new source of tax income. That's clearly a very short-sighted move. At a time when the digital world in Africa is advancing rapidly, with innovation hubs and startups appearing all over the continent, making it more expensive and thus harder for ordinary people to access the Internet threatens to throttle this growth. Whatever the short-term benefits from the moves listed above, countries imposing taxes and levies of whatever kind risk cutting their citizens off from the exciting digital future being forged elsewhere in Africa. As the A4AI post rightly says:

Africa, with the largest digital divide of any geographic region, has the greatest untapped potential with regards to improving affordable access and meaningful use of the internet. With affordable internet access, African economies can grow sustainably and inclusively.

Sadly, in certain African countries, that seems unlikely to happen.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 26 September 2018 @ 3:43pm

Indian Supreme Court Rules Aadhaar Does Not Violate Privacy Rights, But Places Limits On Its Use

from the mixed-result dept

Techdirt wrote recently about what seems to be yet another problem with India's massive Aadhaar biometric identity system. Alongside these specific security issues, there is the larger question of whether Aadhaar as a whole is a violation of Indian citizens' fundamental privacy rights. That question was made all the more pertinent in the light of the country's Supreme Court ruling last year that "Privacy is the constitutional core of human dignity." It led many to hope that the same court would strike down Aadhaar completely following constitutional challenges to the project. However, in a mixed result for both privacy organizations and Aadhaar proponents, India's Supreme Court has handed down a judgment that the identity system does not fundamentally violate privacy rights, but that its use must be strictly circumscribed. As The New York Times explains:

The five-judge panel limited the use of the program, called Aadhaar, to the distribution of certain benefits. It struck down the government's use of the system for unrelated issues like identifying students taking school exams. The court also said that private companies like banks and cellphone providers could not require users to prove their identities with Aadhaar.

The majority opinion of the court said that an Indian's Aadhaar identity was unique and "unparalleled" and empowered marginalized people, such as those who are illiterate.

The decision affects everything from government welfare programs, such as food aid and pensions, to private businesses, which have used the digital ID as a fast, efficient way to verify customers' identities. Some states, such as Andhra Pradesh, had also planned to integrate the ID system into far-reaching surveillance programs, raising the specter of widespread government spying.

In essence, the Supreme Court seems to have felt that although Aadhaar's problems were undeniable, its advantages, particularly for India's poorest citizens, outweighed those concerns. However, its ruling also sought to limit function creep by stipulating that Aadhaar's compulsory use had to be restricted to the original aim of distributing government benefits. Although that seems a reasonable compromise, it may not be quite as clear-cut as it seems. The Guardian writes that it still may be possible to use Aadhaar for commercial purposes:

Sharad Sharma, the co-founder of a Bangalore-based technology think tank which has worked closely with Aadhaar's administrators, said Wednesday's judgment did not totally eliminate that vision for the future of the scheme, but that private use of Aadhaar details would now need to be voluntary.

"Nothing has been said [by the court] about voluntary usage and nothing has been said about regulating bodies mandating it for services," Sharma said. "So access to private parties for voluntary use is permitted."

That looks to be a potentially large loophole in the Supreme Court's attempt to keep the benefits of Aadhaar while stopping it turning into a compulsory identity system for accessing all government and business services. No doubt in the coming years we will see companies exploring just how far they can go in demanding a "voluntary" use of Aadhaar, as well as legal action by privacy advocates trying to stop them from doing so.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 24 September 2018 @ 10:44am

China Actively Collecting Zero-Days For Use By Its Intelligence Agencies -- Just Like The West

from the no-moral-high-ground-there,-then dept

It all seems so far away now, but in 2013, during the early days of the Snowden revelations, a story about the NSA's activities emerged that apparently came from a different source. Bloomberg reported (behind a paywall, summarized by Ars Technica) that Microsoft was providing the NSA with information about newly-discovered bugs in the company's software before it patched them. It gave the NSA a window of opportunity during which it could take advantage of those flaws in order to gain access to computer systems of interest. Later that year, the Washington Post reported that the NSA was spending millions of dollars per year to acquire other zero-days from malware vendors.

A stockpile of vulnerabilities and hacking tools is great -- until they leak out, which is precisely what seems to have happened several times with the NSA's collection. The harm that lapse can cause was vividly demonstrated by the WannaCry ransomware. It was built on a Microsoft zero-day that was part of the NSA's toolkit, and caused very serious problems to companies -- and hospitals -- around the world.

The other big problem with the NSA -- or the UK's GCHQ, or Germany's BND -- taking advantage of zero-days in this way is that it makes it inevitable that other actors will do the same. An article on the Access Now site confirms that China is indeed seeking out software flaws that it can use for attacking other systems:

In November 2017, Recorded Future published research on the publication speed for China's National Vulnerability Database (with the memorable acronym CNNVD). When they initially conducted this research, they concluded that China actually evaluates and reports vulnerabilities faster than the U.S. However, when they revisited their findings at a later date, they discovered that a majority of the figures had been altered to hide a much longer processing period during which the Chinese government could assess whether a vulnerability would be useful in intelligence operations.

As the Access Now article explains, the Chinese authorities have gone beyond simply keeping zero-days quiet for as long as possible. They are actively discouraging Chinese white hats from participating in international hacking competitions, because those contests would help Western companies learn about bugs that might otherwise be exploitable by China's intelligence services. This is really bad news for the rest of us. It means that China's huge and growing pool of expert coders is no longer likely to report bugs to software companies; instead, the bugs will be passed to the CNNVD for assessment. Not only will fixes take longer to appear, exposing users to security risks, but the Chinese may even weaponize the zero-days in order to break into other systems.

Another regrettable aspect of this development is that Western countries like the US and UK can hardly point fingers here, since they have been using zero-days in precisely this way for years. The fact that China -- and presumably Russia, North Korea and Iran amongst others -- have joined the club underlines what a stupid move this was. It may have provided a short-term advantage for the West, but now that it's become the norm for intelligence agencies, the long-term effect is to reduce the security of computer systems everywhere by leaving known vulnerabilities unpatched. It's an unwinnable digital arms race that will be hard to stop now. It also underlines why adding any kind of weakness to cryptographic systems would be an incredibly reckless escalation of an approach that has already put lives at risk.

Posted on Free Speech - 19 September 2018 @ 11:59am

Tanzania Plans To Outlaw Fact-Checking Of Government Statistics

from the dodgy-data dept

Back in April, Techdirt wrote about a set of regulations brought in by the Tanzanian government that required people there to pay around $900 per year for a license to blog. Despite the very high costs it imposes on people -- Tanzania's GDP per capita was under $900 in 2016 -- it seems the authorities are serious about enforcing the law. The iAfrikan site reported in June:

Popular Tanzanian forums and "leaks" website, Jamii Forums, has been temporarily shut down by government as it has not complied with the new regulations and license fees required of online content creators in Tanzania. This comes after Tanzania Communications Regulatory Authority (TCRA) issued a notice to Jamii Forums reminding them that it is a legal offense to publish content on the Internet without having registered and paid for a license.

The Swahili-language site Jamii Forums is back online now. But the Tanzanian authorities are not resting on their laurels when it comes to introducing ridiculous laws. Here's another one that's arguably worse than charging bloggers to post:

[President John] Magufuli and his colleagues are now looking to outlaw fact checking thanks to proposed amendments to the Statistics Act, 2015.

"The principal Act is amended by adding immediately after section 24 the following: 24A.-(1) Any person who is authorised by the Bureau to process any official statistics, shall before publishing or communicating such information to the public, obtain an authorisation from the Bureau. (2) A person shall not disseminate or otherwise communicate to the public any statistical information which is intended to invalidate, distort, or discredit official statistics," reads the proposed amendments to Tanzania's Statistics Act, 2015 as published in the Gazette of the United Republic of Tanzania No. 23 Vol. 99.

As the iAfrikan article points out, the amendments will mean that statistics published by the Tanzanian government must be regarded as correct, however absurd or obviously erroneous they might be. Moreover, it will be illegal for independent researchers to publish any other figures that contradict, or even simply call into question, official statistics.

This is presumably born of a thin-skinned government that wants to avoid even the mildest criticism of its policies or plans. But it seems certain to backfire badly. If statistics are wrong, but no one can correct them, there is the risk that Tanzanian businesses, organizations and citizens will make bad decisions based on this dodgy data. That could lead to harmful consequences for the economy and society, which the Tanzanian government might well be tempted to cover up by issuing yet more incorrect statistics. Without open and honest feedback to correct this behavior, there could be an ever-worsening cascade of misinformation and lies until public trust in the government collapses completely. Does President Magufuli really want that?

Posted on Techdirt - 17 September 2018 @ 7:49pm

Software Patch Claimed To Allow Aadhaar's Security To Be Bypassed, Calling Into Question Biometric Database's Integrity

from the but-it's-ok,-we-already-blacklisted-the-50,000-rogue-operators-that-we-found dept

Earlier this year, we wrote about what seemed to be a fairly serious breach of security at the world's largest biometric database, India's Aadhaar. The Indian edition of Huffington Post now reports on what looks like an even more grave problem:

The authenticity of the data stored in India's controversial Aadhaar identity database, which contains the biometrics and personal information of over 1 billion Indians, has been compromised by a software patch that disables critical security features of the software used to enrol new Aadhaar users, a three-month-long investigation by HuffPost India reveals.

According to the article, the patch can be bought for just Rs 2,500 (around $35). The easy-to-install software removes three critical security features of Aadhaar:

The patch lets a user bypass critical security features such as biometric authentication of enrolment operators to generate unauthorised Aadhaar numbers.

The patch disables the enrolment software's in-built GPS security feature (used to identify the physical location of every enrolment centre), which means anyone anywhere in the world -- say, Beijing, Karachi or Kabul -- can use the software to enrol users.

The patch reduces the sensitivity of the enrolment software's iris-recognition system, making it easier to spoof the software with a photograph of a registered operator, rather than requiring the operator to be present in person.

As the Huffington Post article explains, creating a patch that is able to circumvent the main security features in this way was possible thanks to design choices made early on in the project. The unprecedented scale of the Aadhaar enrollment process -- so far around 1.2 billion people have been given an Aadhaar number and added to the database -- meant that a large number of private agencies and village-level computer kiosks were used for registration. Since connectivity was often poor, the main software was installed on local computers, rather than being run in the cloud. The patch can be used by anyone with local access to the computer system, and simply involves replacing a folder of Java libraries with versions lacking the security checks.
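The weakness described here is a general one, not unique to Aadhaar: if a security check lives entirely in a library shipped to the client machine, anyone who controls that machine can swap the library for one that skips the check, without touching the application itself. A minimal Java sketch of the idea (the class names and interface are hypothetical illustrations, not Aadhaar's actual code, which is not public):

```java
// Illustrative sketch of client-side security checks being bypassed by a
// library swap. The application code is identical either way; only the
// implementation found on the classpath differs.

interface BiometricCheck {
    boolean verifyOperator(String operatorId);
}

// The legitimate library would perform a real biometric comparison.
class StrictCheck implements BiometricCheck {
    public boolean verifyOperator(String operatorId) {
        // ...real verification would happen here...
        return false; // deny unless verified
    }
}

// A swapped-in replacement keeps the same interface but approves everyone,
// which is what replacing the Java library folder achieves.
class PatchedCheck implements BiometricCheck {
    public boolean verifyOperator(String operatorId) {
        return true; // security check effectively removed
    }
}

public class EnrolmentClient {
    public static void main(String[] args) {
        // With the patched library on the classpath, every operator passes.
        BiometricCheck check = new PatchedCheck();
        System.out.println(check.verifyOperator("op-123"));
    }
}
```

This is also why the article's point about the cloud matters: checks enforced on a server the operator cannot modify are not vulnerable to this kind of substitution, whereas checks running on a local kiosk machine are only as trustworthy as whoever has access to that machine.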

The Unique Identification Authority of India (UIDAI), the government body responsible for the Aadhaar project, has responded to the Huffington Post article, but in a rather odd way: as a Donald Trump-like stream of tweets. The Huffington Post points out: "[the UIDAI] has simply stated that its systems are completely secure without any supporting evidence." One of the Aadhaar tweets is as follows:

It is because of this stringent and robust system that as on date more that 50,000 operators have been blacklisted, UIDAI added.

The need to throw 50,000 operators off the system hardly inspires confidence in its overall security. What makes things worse is that the Indian government seems determined to make Aadhaar indispensable for Indian citizens who want to deal with it in any way, and to encourage business to do the same. Given the continuing questions about Aadhaar's overall security and integrity, that seems unwise, to say the least.
