Glyn Moody’s Techdirt Profile





Posted on Techdirt - 8 November 2018 @ 7:01pm

Leading Open Access Supporters Ask EU To Investigate Elsevier's Alleged 'Anti-Competitive Practices'

from the are-you-listening,-Commissioner-Vestager? dept

Back in the summer, we wrote about the paleontologist Jon Tennant, who had submitted a formal complaint to the European Commission regarding the relationship between the publishing giant Elsevier and the EU's Open Science Monitor. Now Tennant has joined with another leading supporter of open access, Björn Brembs, in an even more direct attack on the company and its practices, reported here by the site Research Europe:

Two academics have demanded the European Commission investigate the academic publisher Elsevier for what they say is a breach of EU competition rules that is harming research.

Palaeontologist Jon Tennant and neuroscientist Björn Brembs, who are both advocates for making research results openly available, say the academic publishing market "is clearly not functioning well" in an official complaint about Elsevier's parent company RELX Group.

The pair claim RELX and Elsevier are in breach of EU rules both due to general problems with the academic publishing market and "abuse of a dominant position within this market".

The 22-page complaint spells out what the problem is. It makes the following important point about the unusual economics of the academic publishing market:

For research to progress, access to all available relevant sources is required, which means that there is no ability to transfer or substitute products, and there is little to no inter-brand competition from the viewpoint of consumers. If a research team requires access to knowledge contained within a journal, they must have access to that specific journal, and cannot substitute it for a similar one published by a competitor. Indeed, the entire corpus of research knowledge is built on this vital and fundamental process of building on previously published works, which drives up demand for all relevant published content. As such, publishers do not realistically compete with each other, as all their products are fundamentally unique (i.e., each publisher has a 100% market share for each journal or article), and unequivocally in high demand due to the way scholarly research works. The result of this is that consumers (i.e., research institutions and libraries) have little power to make cost-benefit evaluations to decide whether or not to purchase, and have no choice but to pay whatever price the publishers asks with little transparency over costs, which we believe is a primary factor that has contributed to more than a 300% rise in journal prices above inflation since 1986. Thus, we believe that a functional and competitive market is not currently able to form due to the practices of dominant players, like Elsevier, in this sector.

Most of the complaint is a detailed analysis of why academic publishing has become so dysfunctional, and is well worth reading by anyone interested in understanding the background to open access and its struggles.

As to what the complaint might realistically achieve, Tennant told Techdirt that there are three main possibilities. The European Commission can simply ignore it. It can respond and say that it doesn't think there is a case to answer, in which case Tennant says he will push the Commission to explain why. Finally, in the most optimistic outcome, the EU could initiate a formal investigation of Elsevier and the wider academic publishing market. Although that might seem too much to hope for, it's worth noting that the EU Competition Authority is ultimately under the Competition Commissioner, Margrethe Vestager. She has been very energetic in her pursuit of Internet giants like Google. It could certainly be a hugely significant moment for open access if she started to take an interest in Elsevier in the same way.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 6 November 2018 @ 3:37pm

Big Boost For Open Access As Wellcome And Bill & Melinda Gates Foundation Back EU's 'Plan S'

from the no-embargoes,-and-cc-by dept

Back in September, Techdirt wrote about the oddly-named 'Plan S', which was nonetheless an important step forward for open access in Europe. As we remarked then, the hope was that others would support the initiative, and that has now happened, with two of the biggest names in the science funding world signing up to the approach:

To ensure that research findings are shared widely and are made freely available at the time of publication, Wellcome and the Bill & Melinda Gates Foundation have today (Monday) joined cOAlition S and endorsed the principles of Plan S.

An article in Nature on the move notes that Wellcome gave out $1.4 billion in grants in 2016–17, while the Gates Foundation spent $4.7 billion in 2017, although not all of that was on science. So the backing of these two organizations is a massive vote of confidence in Plan S and its requirements. Wellcome has also unveiled its new, more stringent open access policy, which includes a number of important changes, including the following:

All Wellcome-funded research articles must be made freely available through PubMed Central (PMC) and Europe PMC at the time of publication. We previously allowed a six-month embargo period. This change will make sure that the peer-reviewed version is freely available to everyone at the time of publication.

This move finally rectifies one of the biggest blunders by academic funding organizations: allowing publishers to impose an embargo -- typically six or even 12 months -- before publicly-funded research was made freely available as open access. There was absolutely no reason to allow this. After all, the funding organizations could simply have said to publishers: "if you want to publish work we paid for, you must follow our rules". But in a moment of weakness, they allowed themselves to be bamboozled by publishers, granting an unnecessary monopoly on published papers, and slowing down the dissemination of research.

All articles must be published under a Creative Commons attribution licence (CC-BY). We previously only required this licence when an article processing charge (APC) was paid. This change will make sure that others -- including commercial entities and AI/text-data mining services -- can reuse our funded research to discover new knowledge.

Although a more subtle change, it's an important one. It establishes unequivocally that anyone, including companies, may build on research financed by Wellcome. In particular, it explicitly allows anyone to carry out text and data mining (TDM), and to use papers and their data for training machine-learning systems. That's particularly important in the light of the EU's stupid decision to prevent companies in Europe from carrying out either TDM or the training of machine-learning systems on material to which they have legal access, unless they pay an additional licensing fee to publishers. This pretty much guarantees that the EU will become a backwater for AI compared to the US and China, where no such obstacles are placed in the way of companies.

Like Plan S, Wellcome's open access policy no longer supports double-dipping "hybrid journals", which charge researchers who want to release their work as open access, but also require libraries to take out full-price subscriptions for journals that include these freely-available articles. An innovative aspect of the new policy is that it will require some research to be published as preprints in advance of formal publication in journals:

Where there is a significant public health benefit to preprints being shared widely and rapidly, such as a disease outbreak, these preprints must be published:

before peer review

on an approved platform that supports immediate publication of the complete manuscript under a CC-BY licence.

That's eminently sensible -- in the event of public health emergencies, you want the latest research to be out there in the hands of health workers as soon as possible. It's also a nice boost for preprints, which are rapidly emerging as an important way of sharing knowledge.

The Gates Foundation has said that it will update its open access policy, which in any case is already broadly in line with the principles of Plan S, over the next 12 months. Even without that revision, the latest announcement by these two funding heavyweights is highly significant, and is likely to make it hard for similar organizations around the world to resist aligning their open access policies with Plan S. We can therefore probably expect more to join cOAlition S and help bring the world closer to the long-cherished dream of full open access to the world's research, with no embargoes, and under a permissive CC-BY license.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 25 October 2018 @ 10:49am

EU Copyright Directive Update: Fresh (But Slim) Hope Of Stopping Link Taxes And Upload Filters

from the and-ways-to-make-them-less-awful-if-we-can't dept

The awful EU Copyright Directive is not done and dusted. As Techdirt reported last month, the European Parliament may have failed to do its duty and protect the EU Internet for the region's citizens, but the proposed law has not yet passed. Instead, it has entered the so-called "trilogue" discussions. Pirate Party MEP Julia Reda explains:

In this series of closed-door meetings, the European Parliament and the Council (representing the member state governments) hammer out a final text acceptable to both institutions. It's the last chance to make changes before the Directive gets adopted. Meetings are currently scheduled until Christmas, although whether the process will be concluded by then is up in the air.

The General Court of the European Union recently ruled that the European Parliament can no longer deny the public access to trilogue documents (pdf). As a result, Reda has promised to provide updates on what is happening in those hitherto secretive meetings. She has just published her report on the second trilogue negotiation, and there's good and bad news. The good news is that a change of government in Italy has led to that country shifting its stance: it is now against the worst parts of the EU Copyright Directive. An EFF post explains the implications of that important development:

There may now be sufficiently large opposition to the articles [11 and 13] to create a blocking minority if they all vote together, but the new bloc has not settled on a united answer. Other countries are suspicious of Italy's no-compromise approach. They want to add extra safeguards to the two articles, not kill them entirely. That includes some of the countries that were originally opposed in May, including Germany.

In other words, there is now a slim chance that Articles 11 and 13 could be dropped entirely, or at least improved in terms of the safeguards they contain. Against that, there is some unexpected bad news, explained here by Reda:

Council, on the other hand, has now completely out of the blue proposed a new Article 17a that says that existing exceptions for education, text and data mining or preservation can only be maintained if they don't contradict the rules of the newly introduced mandatory exceptions. In the case of teaching, this would mean that national teaching exceptions that don't require limiting access to the educational material by using a "secure electronic environment" would no longer apply!

This is outrageous given that the whole stated purpose of the new mandatory exceptions was to make research and education easier, not to erect new barriers. If as a consequence of the new mandatory teaching exception, teaching activities in some countries that have been legal all along would no longer be legal, then the reform would have spectacularly failed at even its most modest goal of facilitating research and education.

Since this is a completely new proposal, it's not clear how the European Parliament will respond. As Reda writes, the European Parliament ought to insist that any copyright exception that is legal under existing EU copyright law remains legal under the new Directive, once passed. Otherwise the exercise of "making copyright fit for the digital age" -- the supposed justification for the new law -- will have been even more of a fiasco than it currently is.

There are two other pieces of good news. Yet another proposed extension of EU copyright, this time to create a special new form of copyright for sporting events, seems to have zero support among the EU's Member States, and thus is likely to be dropped. Reda also notes that Belgium, Finland, Germany, the Netherlands, Italy, Estonia and the Czech Republic are in favor of expanding the scope of the proposed copyright exception for text and data mining to include businesses. That's something that the AI industry in Europe desperately needs if it is to keep up with the US and China in using massive text and data stores to train AI systems.

The important message to take away here is that the EU Copyright Directive is certainly a potential disaster for the Internet in Europe, but it's not over yet. It's still worth trying to make the politicians understand how harmful it would be in its present form, and to improve the law before it's too late. That's precisely what the EFF is attempting to do with a note that it has sent to every member of the EU bodies negotiating the final text in the trilogue meetings. It has two suggestions, both addressing serious flaws in the current versions. One concerns the fact that there are zero penalties for making false copyright claims that could result in material being filtered by Article 13:

Based on EFF's decades-long experience with notice-and-takedown regimes in the United States, and private copyright filters such as YouTube's ContentID, we know that the low evidentiary standards required for copyright complaints, coupled with the lack of consequences for false copyright claims, are a form of moral hazard that results in illegitimate acts of censorship from both knowing and inadvertent false copyright claims.

The EFF goes on to make several sensible proposals for ways to minimize this problem. The other suggestion concerns Article 11, the so-called "link tax". Here the issue is that the proposed measure is very poorly worded:

The existing Article 11 language does not define when quotation amounts to a use that must be licensed, though proponents have argued that quoting more than a single word requires a license.

Again, the EFF offers concrete suggestions for at least making the law less ambiguous and slightly less harmful. However, as the EFF rightly notes, tinkering with the text of these sections is not the right solution:

In closing, we would like to reiterate that the flaws enumerated above are merely those elements of Articles 11 and 13 that are incoherent or not fit for purpose. At root, however, Articles 11 and 13 are bad ideas that have no place in the Directive. Instead of effecting some piecemeal fixes to the most glaring problems in these Articles, the Trilogue [should] take a simpler approach, and cut them from the Directive altogether.

Although that seems a long shot, there is still hope, not least because Italy's reversal of position on parts of the proposed directive makes the arithmetic of the voting considerably less certain than it seemed before. In particular, it's still worth contacting the ministries responsible in EU Member States for copyright matters to explain why Articles 11 and 13 need to go if the Internet in the EU is to thrive.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 19 October 2018 @ 3:35pm

Whistleblowing About Swiss Banks' Bad Behavior Just Became Safer

from the terms-and-conditions-apply dept

Whistleblowers play a vital role in releasing information the powerful would rather keep secret. But the former pay a high price for their bravery, as the experiences of recent whistleblowers such as Chelsea Manning and Edward Snowden make plain. Another whistleblower whose life has become very difficult after leaking is Rudolf Elmer. He has a Web site about his actions and his subsequent problems, but it's not the easiest to navigate. Here's Wikipedia's summary of who he is and what he did:

In 2008, Elmer illegally disclosed confidential bank documents to WikiLeaks detailing the activities of [the Swiss multinational private bank] Julius Bär in the Cayman Islands and its role in alleged tax evasion. In January 2011, he was convicted in Switzerland of breaching secrecy laws and other offenses. He was rearrested immediately thereafter for having again distributed illegally obtained data to WikiLeaks. Julius Bär as well as select Swiss and German newspapers alleges that Elmer has doctored evidence to suggest the bank engaged in tax evasion.

According to a new article about him in the Economist, Elmer has undergone no fewer than 48 prosecutorial interrogations, spent six months in solitary confinement and faced 70 court rulings. The good news is that he has finally won an important case before Switzerland's Supreme Court. The court ruled that since Elmer was employed by the Cayman Islands affiliate of the Zurich-based Julius Bär bank, he was not bound by Switzerland's strict secrecy laws when he passed information to WikiLeaks. Here's why that is a big deal, and not just for Elmer:

The ruling matters because Swiss banks are among the world's most international. They employ thousands of private bankers offshore, and many more in outsourcing operations in countries like India and Poland. Many foreign employees are involved in creating structures comprising overseas companies and trusts linked to a Swiss bank account. Thanks to the ruling, as long as their employment contract is local they can now leak information on suspected tax evasion or other shenanigans without fear of falling under Switzerland's draconian secrecy law, which imposes jail terms of up to five years on whistleblowers.

Sadly, Elmer's problems aren't over. According to the Economist article, he was found guilty of forging a letter and making a threat, and has been ordered to pay SFr320,000 ($325,000) towards the costs of the case. He maintains this was imposed on him as "revenge" for prevailing in the main part of his case. Certainly, in the light of the Supreme Court's ruling in favor of whistleblowing, he is unlikely to have won any new friends in the world of Swiss banking.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 12 October 2018 @ 3:43am

Politicians Start To Push For Autonomous Vehicle Data To Be Protected By Copyright Or Database Rights

from the battle-for-the-internet-of-things dept

Autonomous vehicles are much in the news these days, and seem poised to enter the mainstream soon. One of their key aspects is that they are digital systems -- essentially, computers with wheels. As such they gather and generate huge amounts of data as they move around and interact with their surroundings. This kind of data is increasingly valuable, so an important question poses itself: what should happen to all that information from autonomous vehicles?

The issue came up recently in a meeting of the European Parliament's legal affairs committee, which was drawing up a document to summarize its views on autonomous driving in the EU (pdf). It's an area now being explored by the EU with a view to bringing in relevant regulations where they are needed. Topics under consideration include civil liability, data protection, and who gets access to the data produced by autonomous vehicles. On that topic, the Swedish Greens MEP Max Andersson suggested the following amendment (pdf) to the committee's proposed text:

Notes that data generated during autonomous transport are automatically generated and are by nature not creative, thus making copyright protection or the right on databases inapplicable.

Pretty inoffensive stuff, you might think. But not for the center-right EPP politicians present. They demanded a vote on Andersson's amendment, and then proceeded to block its inclusion in the committee's final report.

This is a classic example of the copyright ratchet in action: copyright only ever gets longer, stronger and broader. Here a signal is being sent that copyright or a database right should be extended to apply not just to works created by people, but also to the data streams generated by autonomous vehicles. Given their political leanings, it is highly unlikely that the EPP politicians believe that data belongs to the owner of the vehicle. They presumably think that the manufacturer retains rights to it, even after the vehicle has left the factory and been sold.

That's bad enough, but there's a bigger threat here. Autonomous vehicles are just part of a much larger wave of connected digital devices that generate huge quantities of data, what is generally called the Internet of Things. The next major front in the copyright wars -- the next upward move of the copyright ratchet -- will be over what happens to all that data, and who, if anyone, owns it.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 10 October 2018 @ 12:00pm

As Everyone Knows, In The Age Of The Internet, Privacy Is Dead -- Which Is Awkward If You Are A Russian Spy

from the not-just-here-for-the-medieval-church-architecture dept

Judging by the headlines, there are Russian spies everywhere these days. Of course, Russia routinely denies everything, but its attempts at deflection are growing a little feeble. For example, the UK government identified two men it claimed were responsible for the novichok attack on the Skripals in Salisbury. It said they were agents from GRU, Russia's largest military intelligence agency, and one of several groups authorized to spy for the Russian government. The two men appeared later on Russian television, where they denied they were spies, and insisted they were just lovers of English medieval architecture who were in Salisbury to admire the cathedral's 123-meter spire.

More recently, Dutch military intelligence claimed that four officers from GRU had flown into the Netherlands in order to carry out an online attack on the headquarters of the international chemical weapons watchdog that was investigating the Salisbury poisoning. In this case, the Russian government didn't even bother insisting that the men were actually in town to look at Amsterdam's canals. That was probably wise, since a variety of information available online seems to confirm their links to GRU, as the Guardian explained:

One of the suspected agents, tipped as a "human intelligence source" by Dutch investigators, had registered five vehicles at a north-western Moscow address better known as the Aquarium, the GRU finishing school for military attaches and elite spies. According to online listings, which are not official but are publicly available to anyone on Google, he drove a Honda Civic, then moved on to an Alfa Romeo. In case the address did not tip investigators off, he also listed the base number of the Military-Diplomatic Academy.

One of the men, Aleksei Morenets, an alleged hacker, appeared to have set up a dating profile.

Another played for an amateur Moscow football team "known as the security services team," a current player told the Moscow Times. "Almost everyone works for an intelligence agency." The team rosters are publicly available.

The "open source intelligence" group Bellingcat came up with even more astonishing details when they started digging online. Bellingcat found one of the four Russians named by the Dutch authorities in Russia's vehicle ownership database. The car was registered to Komsomolsky Prospekt 20, which happens to be the address of military unit 26165, described by Dutch and US law enforcement agencies as GRU's digital warfare department. By searching the database for other vehicles registered at the same address, Bellingcat came up with a list of 305 individuals linked with the GRU division. The database entries included their full names and passport numbers, as well as mobile phone numbers in most cases. Bellingcat points out that if these are indeed GRU operatives, this discovery would be one of the largest breaches of personal data of an intelligence agency in recent years.

An interesting thread on Twitter by Alexander Gabuev, Senior Fellow and Chair of the Russia in the Asia-Pacific Program at the Carnegie Moscow Center, explains why Bellingcat was able to find such sensitive information online. He says:

the Russian Traffic Authority is notoriously corrupt even by Russian standards, it's inexhaustible source of dark Russian humor. No surprise its database is very easy to buy in the black market since 1990s

In the 1990s, black market information was mostly of interest to specialists, hard to find, and limited in circulation. Today, even sensitive data almost inevitably ends up posted somewhere, because everything digital has a tendency to find its way online once it's available. It's then only a matter of time before groups like Bellingcat find it as they follow up their leads. Combine that with the wealth of information contained in social media posts and on Web sites, and spies have a problem staying in the shadows. Techdirt has written many stories about how the privacy of ordinary people has been compromised by leaks of personal information that is later made available online. There's no doubt that can be embarrassing and inconvenient for those affected. But if it's any consolation, it's even worse when you are a Russian spy.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 5 October 2018 @ 1:42pm

Broad Alliance Calls For Australian Government To Listen To Experts' Warnings About Flaws In New Compelled Access Legislation

from the nah,-we're-ramming-it-through-anyway dept

The battle against encryption is being waged around the world by numerous governments, no matter how often experts explain, often quite slowly, that it's a really bad idea. As Techdirt reported back in August, Australia is mounting its own attack against privacy and security in the form of a compelled access law. The pushback there has just taken an interesting turn with the formation of the Alliance for a Safe and Secure Internet:

The Alliance is campaigning for the Government to slow down, stop ignoring the concerns of technology experts, and listen to its citizens when they raise legitimate concerns. For a piece of legislation that could have such far ranging impacts, a proper and transparent dialogue is needed, and care taken to ensure it does not have the unintended consequence of making all Australians less safe.

The Alliance for a Safe and Secure Internet represents an unusually wide range of interests. It includes Amnesty International and the well-known local group Digital Rights Watch; the Communications Alliance, the main industry body for Australian telecoms; and DIGI, which counts Facebook, Google, Twitter and Yahoo among its members. One disturbing development since we last wrote about the proposed law is the following:

The draft Bill was made public in mid-August and, following a three week consultation process, a large number of submissions from concerned citizens and organisation were received by the Department of Home Affairs. Only a week after the consultation closed the Bill was rushed into Parliament with only very minor amendments, meaning that almost all the expert recommendations for changes to the Bill were ignored by Government.

The Bill has now been referred to the Parliamentary Joint Committee on Intelligence and Security (PJCIS), where again processes have been truncated, setting the stage for it to be passed into law within months.

That's a clear indication that the Australian government intends to ram this law through the legislative process as quickly as possible, and that it has little intention of taking any notice of what the experts say on the matter -- yet again.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 5 October 2018 @ 3:06am

Most Chinese Patents Are Being Discarded By Their Owners Because They Are Not Worth The Maintenance Fees To Keep Them

from the more-patents-do-not-mean-more-innovation dept

Techdirt has been writing about China and patents for years. One recurrent theme is that the West is foolish to encourage China to embrace patents more enthusiastically, since the inevitable result will be more Chinese companies suing Western ones for alleged infringement. The second theme -- related to the first -- is that the Chinese government is unwise to use patents as proxies for innovation by offering incentives to its researchers and companies to file for patents. That leads people to file as much as possible, regardless of whether the ideas are original enough to warrant patent protection. One of the surest guides to the value of a patent is whether those who filed for it are willing to pay maintenance fees. Clearly, if patents were really as valuable as many claim they are, there would be no question about paying. An article in Bloomberg reveals how that is working out in China:

Despite huge numbers of filings, most patents are discarded by their fifth year as licensees balk at paying escalating fees. When it comes to design, more than nine out of every ten lapses -- almost the mirror opposite of the U.S.

The high attrition rate is a symptom of the way China has pushed universities, companies and backyard inventors to transform the country into a self-sufficient powerhouse. Subsidies and other incentives are geared toward making patent filings, rather than making sure those claims are useful. So the volume doesn't translate into quality, with the country still dependent on others for innovative ideas, such as modern smartphones.

The discard rate varies according to the patent type. China issues patents in three different categories: invention, utility model and design. Invention patents are "classical" patents, and require a notable breakthrough of some kind, at least in theory. A design patent could cover just the shape of a product, while a utility model might include something as minor as sliding to unlock a smartphone. According to the Bloomberg article, 91% of the design patents granted in 2013 had been discarded because their owners stopped paying to maintain them, while 61% of utility model patents lapsed within five years. Even the relatively rigorous invention patents saw 37% dumped within five years, compared with around 15% of US patents that are not maintained over the same period.

This latest news usefully confirms that the simplistic equation "more patents = more innovation" is false, as Techdirt has been warning for years. It also suggests that China still has some way to go before it can match the West in real inventiveness, rather than the sham kind based purely on meaningless patent statistics.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 3 October 2018 @ 7:28pm

African Countries Shooting Themselves In The Digital Foot By Imposing Taxes And Levies On Internet Use

from the how-not-to-do-it dept

Techdirt has written a number of stories recently about unfortunate developments taking place in the African digital world. The Alliance for Affordable Internet (A4AI) site has usefully pulled together what's been happening across the continent -- and it doesn't look good:

A4AI's recent mobile broadband pricing update shows that Africans face the highest cost to connect to the internet -- just 1GB of mobile data costs the average user in Africa nearly 9% of their monthly income, while their counterparts in the Asia-Pacific region pay one-fifth of that price (around 1.5% of monthly income). Despite this already high cost to connect, we're seeing a worrying trend of governments across Africa imposing a variety of taxes on some of the most popular internet applications and services.

The article goes on to list the following examples.

Uganda

imposes a daily fee of UGX 200 ($0.05) to access social media sites and many common Internet-based messaging and voice applications, as well as a tax on mobile money transactions.

Zambia

has announced it will levy a 30 ngwee ($0.03) daily tax on social network use.

Tanzania

requires bloggers to pay a government license fee roughly equivalent to the average annual income for the country.

Kenya

aims to impose additional taxation on the Internet, with proposed levies on telecommunications and on mobile money transfers.

Benin

imposed a 5 CFA franc ($0.01) per megabyte fee to access social media sites, messaging, and Voice-over-IP applications, causing a 250% increase in the price for 1GB of mobile data.

The article explains that the last of these was rescinded within days because of public pressure, while Kenya's tax is currently on hold thanks to a court order. Nonetheless, there is a clear tendency among some African governments to see the Internet as a handy new source of tax income. That's a very short-sighted move. At a time when the digital world in Africa is advancing rapidly, with innovation hubs and startups appearing all over the continent, making it more expensive and thus harder for ordinary people to access the Internet threatens to throttle this growth. Whatever the short-term benefits of the moves listed above, countries imposing such taxes and levies risk cutting their citizens off from the exciting digital future being forged elsewhere in Africa. As the A4AI post rightly says:

Africa, with the largest digital divide of any geographic region, has the greatest untapped potential with regards to improving affordable access and meaningful use of the internet. With affordable internet access, African economies can grow sustainably and inclusively.

Sadly, in certain African countries, that seems unlikely to happen.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 26 September 2018 @ 3:43pm

Indian Supreme Court Rules Aadhaar Does Not Violate Privacy Rights, But Places Limits On Its Use

from the mixed-result dept

Techdirt wrote recently about what seems to be yet another problem with India's massive Aadhaar biometric identity system. Alongside these specific security issues, there is the larger question of whether Aadhaar as a whole is a violation of Indian citizens' fundamental privacy rights. That question was made all the more pertinent in the light of the country's Supreme Court ruling last year that "Privacy is the constitutional core of human dignity." It led many to hope that the same court would strike down Aadhaar completely following constitutional challenges to the project. However, in a mixed result for both privacy organizations and Aadhaar proponents, India's Supreme Court has handed down a judgment that the identity system does not fundamentally violate privacy rights, but that its use must be strictly circumscribed. As The New York Times explains:

The five-judge panel limited the use of the program, called Aadhaar, to the distribution of certain benefits. It struck down the government's use of the system for unrelated issues like identifying students taking school exams. The court also said that private companies like banks and cellphone providers could not require users to prove their identities with Aadhaar.

The majority opinion of the court said that an Indian's Aadhaar identity was unique and "unparalleled" and empowered marginalized people, such as those who are illiterate.

The decision affects everything from government welfare programs, such as food aid and pensions, to private businesses, which have used the digital ID as a fast, efficient way to verify customers' identities. Some states, such as Andhra Pradesh, had also planned to integrate the ID system into far-reaching surveillance programs, raising the specter of widespread government spying.

In essence, the Supreme Court seems to have felt that although Aadhaar's problems were undeniable, its advantages, particularly for India's poorest citizens, outweighed those concerns. However, its ruling also sought to limit function creep by stipulating that Aadhaar's compulsory use had to be restricted to the original aim of distributing government benefits. Although that seems a reasonable compromise, it may not be quite as clear-cut as it seems. The Guardian writes that it still may be possible to use Aadhaar for commercial purposes:

Sharad Sharma, the co-founder of a Bangalore-based technology think tank which has worked closely with Aadhaar's administrators, said Wednesday's judgment did not totally eliminate that vision for the future of the scheme, but that private use of Aadhaar details would now need to be voluntary.

"Nothing has been said [by the court] about voluntary usage and nothing has been said about regulating bodies mandating it for services," Sharma said. "So access to private parties for voluntary use is permitted."

That looks to be a potentially large loophole in the Supreme Court's attempt to keep the benefits of Aadhaar while stopping it turning into a compulsory identity system for accessing all government and business services. No doubt in the coming years we will see companies exploring just how far they can go in demanding a "voluntary" use of Aadhaar, as well as legal action by privacy advocates trying to stop them from doing so.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 24 September 2018 @ 10:44am

China Actively Collecting Zero-Days For Use By Its Intelligence Agencies -- Just Like The West

from the no-moral-high-ground-there,-then dept

It all seems so far away now, but in 2013, during the early days of the Snowden revelations, a story about the NSA's activities emerged that apparently came from a different source. Bloomberg reported (behind a paywall, summarized by Ars Technica) that Microsoft was providing the NSA with information about newly-discovered bugs in the company's software before it patched them. It gave the NSA a window of opportunity during which it could take advantage of those flaws in order to gain access to computer systems of interest. Later that year, the Washington Post reported that the NSA was spending millions of dollars per year to acquire other zero-days from malware vendors.

A stockpile of vulnerabilities and hacking tools is great -- until they leak out, which is precisely what seems to have happened several times with the NSA's collection. The harm that lapse can cause was vividly demonstrated by the WannaCry ransomware. It was built on a Microsoft zero-day that was part of the NSA's toolkit, and caused very serious problems for companies -- and hospitals -- around the world.

The other big problem with the NSA -- or the UK's GCHQ, or Germany's BND -- taking advantage of zero-days in this way is that it makes it inevitable that other actors will do the same. An article on the Access Now site confirms that China is indeed seeking out software flaws that it can use for attacking other systems:

In November 2017, Recorded Future published research on the publication speed for China's National Vulnerability Database (with the memorable acronym CNNVD). When they initially conducted this research, they concluded that China actually evaluates and reports vulnerabilities faster than the U.S. However, when they revisited their findings at a later date, they discovered that a majority of the figures had been altered to hide a much longer processing period during which the Chinese government could assess whether a vulnerability would be useful in intelligence operations.

As the Access Now article explains, the Chinese authorities have gone beyond simply keeping zero-days quiet for as long as possible. They are actively discouraging Chinese white hats from participating in international hacking competitions because this would help Western companies learn about bugs that might otherwise be exploitable by China's intelligence services. This is really bad news for the rest of us. It means that China's huge and growing pool of expert coders are no longer likely to report bugs to software companies when they find them. Instead, they will be passed to the CNNVD for assessment. Not only will bug fixes take longer to appear, exposing users to security risks, but the Chinese may even weaponize the zero-days in order to break into other systems.

Another regrettable aspect of this development is that Western countries like the US and UK can hardly point fingers here, since they have been using zero-days in precisely this way for years. The fact that China -- and presumably Russia, North Korea and Iran, amongst others -- has joined the club underlines what a stupid move this was. It may have provided a short-term advantage for the West, but now that it's become the norm for intelligence agencies, the long-term effect is to reduce the security of computer systems everywhere by leaving known vulnerabilities unpatched. It's an unwinnable digital arms race that will be hard to stop now. It also underscores why adding any kind of weakness to cryptographic systems would be an incredibly reckless escalation of an approach that has already put lives at risk.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Free Speech - 19 September 2018 @ 11:59am

Tanzania Plans To Outlaw Fact-Checking Of Government Statistics

from the dodgy-data dept

Back in April, Techdirt wrote about a set of regulations brought in by the Tanzanian government that required people there to pay around $900 per year for a license to blog. Despite the very high costs it imposes on people -- Tanzania's GDP per capita was under $900 in 2016 -- it seems the authorities are serious about enforcing the law. The iAfrikan site reported in June:

Popular Tanzanian forums and "leaks" website, Jamii Forums, has been temporarily shut down by government as it has not complied with the new regulations and license fees required of online content creators in Tanzania. This comes after Tanzania Communications Regulatory Authority (TCRA) issued a notice to Jamii Forums reminding them that it is a legal offense to publish content on the Internet without having registered and paid for a license.

The Swahili-language site Jamii Forums is back online now. But the Tanzanian authorities are not resting on their laurels when it comes to introducing ridiculous laws. Here's another one that's arguably worse than charging bloggers to post:

[President John] Magufuli and his colleagues are now looking to outlaw fact checking thanks to proposed amendments to the Statistics Act, 2015.

"The principal Act is amended by adding immediately after section 24 the following: 24A.-(1) Any person who is authorised by the Bureau to process any official statistics, shall before publishing or communicating such information to the public, obtain an authorisation from the Bureau. (2) A person shall not disseminate or otherwise communicate to the public any statistical information which is intended to invalidate, distort, or discredit official statistics," reads the proposed amendments to Tanzania's Statistics Act, 2015 as published in the Gazette of the United Republic of Tanzania No. 23 Vol. 99.

As the iAfrikan article points out, the amendments will mean that statistics published by the Tanzanian government must be regarded as correct, however absurd or obviously erroneous they might be. Moreover, it will be illegal for independent researchers to publish any other figures that contradict, or even simply call into question, official statistics.

This is presumably born of a thin-skinned government that wants to avoid even the mildest criticism of its policies or plans. But it seems certain to backfire badly. If statistics are wrong, but no one can correct them, there is the risk that Tanzanian businesses, organizations and citizens will make bad decisions based on this dodgy data. That could lead to harmful consequences for the economy and society, which the Tanzanian government might well be tempted to cover up by issuing yet more incorrect statistics. Without open and honest feedback to correct this behavior, there could be an ever-worsening cascade of misinformation and lies until public trust in the government collapses completely. Does President Magufuli really want that?

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 17 September 2018 @ 7:49pm

Software Patch Claimed To Allow Aadhaar's Security To Be Bypassed, Calling Into Question Biometric Database's Integrity

from the but-it's-ok,-we-already-blacklisted-the-50,000-rogue-operators-that-we-found dept

Earlier this year, we wrote about what seemed to be a fairly serious breach of security at the world's largest biometric database, India's Aadhaar. The Indian edition of Huffington Post now reports on what looks like an even graver problem:

The authenticity of the data stored in India's controversial Aadhaar identity database, which contains the biometrics and personal information of over 1 billion Indians, has been compromised by a software patch that disables critical security features of the software used to enrol new Aadhaar users, a three month-long investigation by HuffPost India reveals.

According to the article, the patch can be bought for just Rs 2,500 (around $35). The easy-to-install software removes three critical security features of Aadhaar:

The patch lets a user bypass critical security features such as biometric authentication of enrolment operators to generate unauthorised Aadhaar numbers.

The patch disables the enrolment software's in-built GPS security feature (used to identify the physical location of every enrolment centre), which means anyone anywhere in the world -- say, Beijing, Karachi or Kabul -- can use the software to enrol users.

The patch reduces the sensitivity of the enrolment software's iris-recognition system, making it easier to spoof the software with a photograph of a registered operator, rather than requiring the operator to be present in person.

As the Huffington Post article explains, creating a patch that is able to circumvent the main security features in this way was possible thanks to design choices made early on in the project. The unprecedented scale of the Aadhaar enrollment process -- so far around 1.2 billion people have been given an Aadhaar number and added to the database -- meant that a large number of private agencies and village-level computer kiosks were used for registration. Since connectivity was often poor, the main software was installed on local computers, rather than being run in the cloud. The patch can be used by anyone with local access to the computer system, and simply involves replacing a folder of Java libraries with versions lacking the security checks.

The Unique Identification Authority of India (UIDAI), the government body responsible for the Aadhaar project, has responded to the Huffington Post article, but in a rather odd way: as a Donald Trump-like stream of tweets. The Huffington Post points out: "[the UIDAI] has simply stated that its systems are completely secure without any supporting evidence." One of the Aadhaar tweets is as follows:

It is because of this stringent and robust system that as on date more that 50,000 operators have been blacklisted, UIDAI added.

The need to throw 50,000 operators off the system hardly inspires confidence in its overall security. What makes things worse is that the Indian government seems determined to make Aadhaar indispensable for Indian citizens who want to deal with it in any way, and to encourage business to do the same. Given the continuing questions about Aadhaar's overall security and integrity, that seems unwise, to say the least.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 13 September 2018 @ 12:17am

Corporate Sovereignty On The Wane, As Governments Realize It's More Trouble Than It's Worth

from the but-not-dead-yet dept

A few years ago, corporate sovereignty -- officially known as "investor-state dispute settlement" (ISDS) -- was an indispensable element of trade deals. As a result, it would crop up on Techdirt quite often. But the world is finally moving on, and old-style corporate sovereignty is losing its appeal. As we reported last year, the US Trade Representative, Robert Lighthizer, hinted that the US might not support ISDS in future trade deals, but it was not clear what that might mean in practice. The Canadian Broadcasting Corporation (CBC) site has an interesting article that explores the new contours of corporate sovereignty:

The preliminary trade agreement the U.S. recently reached with Mexico may offer a glimpse of what could happen with NAFTA's Chapter 11 [governing ISDS].

A U.S. official said the two countries wanted ISDS to be "limited" to cases of expropriation, bias against foreign companies or failure to treat all trading partners equally.

The new US thinking places Canada in a tricky position, because the country is involved in several trade deals that take different approaches to corporate sovereignty. As well as the US-dominated NAFTA, there is CETA, the trade deal with Europe. For that, Canada is acquiescing to the EU's request to replace ISDS with the new Investment Court System (ICS). In TPP, however -- still lumbering on, despite the US withdrawal -- Canada seems to be going along with the traditional corporate sovereignty approach.

A willingness to move on from traditional ISDS can be seen in the often overlooked, but important, Regional Comprehensive Economic Partnership (RCEP) trade deal. India's Business Standard reports:

Despite treading diametrically opposite paths on tariffs and market access, India and China, along with other nations, have hit it off on talks regarding investment norms in the proposed Regional Comprehensive Economic Partnership (RCEP) pact.

In a bid to fast-track the deal, most nations have agreed to ease the investor-state-dispute settlement (ISDS) clauses.

As with NAFTA and CETA, it seems that the nations involved in RCEP no longer regard corporate sovereignty as a priority, and are willing to weaken its powers in order to reach agreement on other areas. Once the principle has been established that ISDS can be watered down, there's nothing to stop nations proposing that it should be dropped altogether. Given the astonishing awards and abuses that corporate sovereignty has led to in the past, that's a welcome development.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 10 September 2018 @ 8:01pm

Europe's New 'Plan S' For Open Access: Daft Name, Great News

from the admirably-strong dept

The journey towards open access has been a long one, with many disappointments along the way. But occasionally there are unequivocal and major victories. One such is the new "Plan S" from the inelegantly-named cOAlition S:

On 4 September 2018, 11 national research funding organisations, with the support of the European Commission including the European Research Council (ERC), announced the launch of cOAlition S, an initiative to make full and immediate Open Access to research publications a reality. It is built around Plan S, which consists of one target and 10 principles.

cOAlition S signals the commitment to implement, by 1 January 2020, the necessary measures to fulfil its main principle: "By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms."

The plan and its ten principles (pdf) are usefully summed up by Peter Suber, one of the earliest and most influential open access advocates, as follows:

The plan is admirably strong. It aims to cover all European research, in the sciences and in the humanities, at the EU level and the member-state level. It's a plan for a mandate, not just an exhortation or encouragement. It keeps copyright in the hands of authors. It requires open licenses and prefers CC-BY. It abolishes or phases out embargoes. It does not support hybrid journals except as stepping stones to full-OA journals. It's willing to pay APCs [Article Processing Charges] but wants to cap them, and wants funders and universities to pay them, not authors. It will monitor compliance and sanction non-compliance. It's already backed by a dozen powerful, national funding agencies and calls for other funders and other stakeholders to join the coalition.

Keeping copyright in the hands of authors is crucial: too often, academics have been cajoled or bullied into handing over copyright for their articles to publishers, thus losing the ability to determine who can read them, and under what conditions. Similarly, the CC-BY license would allow commercial use by anyone -- many publishers try to release so-called open access articles under restrictive licenses like CC-BY-NC, which stop other publishers from distributing them.

Embargo periods are routinely used by publishers to delay the appearance of open access versions of articles; under Plan S, that would no longer be allowed. Finally, the new initiative discourages the use of "hybrid" journals that have often enabled publishers to "double dip". That is, they charge researchers who want to release their work as open access, but also require libraries to take out full-price subscriptions for journals that include these freely-available articles.

Suber has a number of (relatively minor) criticisms of Plan S, which are well worth reading. All in all, though, this is a major breakthrough for open access in Europe, and thus the world. Once "admirably strong" open access mandates like Plan S have been established in one region, others tend to follow in due course. Let's just hope they choose better names.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 5 September 2018 @ 7:35pm

Leading Biomedical Funders Call For Open Peer Review Of Academic Research

from the nothing-to-hide dept

Techdirt has written many posts about open access -- the movement to make digital versions of academic research freely available to everyone. Open access is about how research is disseminated once it has been selected for publication. So far, there has been less emphasis on changing how academic work is selected in the first place, which is based on the time-honored approach of peer review. That is, papers submitted to journals are sent out to experts in the same or similar field, who are invited to comment on ways of improving the work, and on whether the research should be published. Traditionally, the process is shrouded in secrecy. The reviewers are generally anonymous, and the reports they make on the submissions are not made public. Now, however, the idea of making peer review more transparent as part of the general process of becoming more open is gaining increasing impetus.

A couple of weeks ago, representatives of two leading biomedical funders -- the UK Wellcome Trust and the Howard Hughes Medical Institute -- together with ASAPbio, a non-profit organization that encourages innovation in life-sciences publishing, wrote a commentary in Nature. In it, they called for "open review", which, they point out, encompasses two distinct forms of transparency:

'Open identities' means disclosing reviewers' names; 'open reports' (also called transparent reviews or published peer review) means publishing the content of reviews. Journals might offer one or the other, neither or both.

In a 2016 survey, 59% of 3,062 respondents were in favour of open reports. Only 31% favoured open identities, which they feared could cause reviewers to weaken their criticisms or could lead to retaliation from authors. Here, we advocate for open reports as the default and for open identities to be optional, not mandatory.

The authors of the commentary believe that there are a number of advantages to open reports:

The scientific community would learn from reviewers' and editors’ insights. Social scientists could collect data (for example, on biases among reviewers or the efficiency of error identification by reviewers) that might improve the process. Early-career researchers could learn by example. And the public would not be asked to place its faith in hidden assessments.

There are, of course, risks. One concern mentioned is that published reviews might be used unfairly in subsequent evaluation of the authors for grants, jobs, awards or promotions. Another possibility is the 'weaponization' of reviewer reports:

Opponents of certain types of research (for example, on genetically modified organisms, climate change and vaccines) could take critical remarks in peer reviews out of context or mischaracterize disagreements to undermine public trust in the paper, the field or science as a whole.

Despite these and other concerns mentioned in the Nature commentary, an open letter published on the ASAPbio site lists dozens of major titles that have already instituted open reports, or promise to do so next year. That is a clear sign that open reports are passing from concept to reality. It's also worth bearing in mind that the UK Wellcome Trust and the Howard Hughes Medical Institute are major funders of biomedical research: it would be a relatively straightforward step for them to make the adoption of open reports a condition of receiving their grants -- something that would doubtless encourage uptake of the idea.


Posted on Techdirt - 17 August 2018 @ 7:39pm

As Academic Publishers Fight And Subvert Open Access, Preprints Offer An Alternative Approach For Sharing Knowledge Widely

from the this-is-the-future dept

The key idea behind open access is that everyone with an Internet connection should be able to read academic papers without needing to pay for them. Or rather without needing to pay again, since most research is funded using taxpayers' money. It's hard to argue against that proposition, or against the idea that making research available in this way is likely to increase the rate at which medical and scientific discoveries are made for the benefit of all. And yet, as Techdirt has reported, academic publishers that often enjoy profit margins of 30-40% have adopted a range of approaches to undermine open access and its aims -- and with considerable success. A recent opinion column in the Canadian journal University Affairs explains how traditional publishers have managed to subvert open access for their own benefit:

An ironic twist to the open-access movement is that it has actually made the publishers richer. They've jumped on the bandwagon by offering authors the option of paying article processing charges (APCs) in order to make their articles open access, while continuing to increase subscription charges to libraries at the institutions where those authors work. So, in many cases, the publishers are being paid twice for the same content -- often charging APCs higher than purely open access journals.

Another serious problem is the rise of so-called "predatory" open access publishers that have distorted the original ideas behind the movement even more. The Guardian reported recently:

More than 175,000 scientific articles have been produced by five of the largest "predatory open-access publishers", including India-based Omics publishing group and the Turkish World Academy of Science, Engineering and Technology, or Waset.

But the vast majority of those articles skip almost all of the traditional checks and balances of scientific publishing, from peer review to an editorial board. Instead, most journals run by those companies will publish anything submitted to them -- provided the required fee is paid.

These issues will be hard, if not impossible, to solve. As a result, many are now looking for a different solution to the problem of providing easy and cost-free access to academic knowledge, this time in the form of preprints. Techdirt reported earlier this year that there is evidence the published versions of papers add very little to the early preprint versions that are placed online directly by the authors. The negligible barriers to entry, the speed at which work can be published, and the extremely low costs involved have led many to see preprints as the best solution to providing open access to academic papers without needing to go through publishers at all.

Inevitably, perhaps, criticisms of the idea are starting to appear. Recently, Tom Sheldon, who is a senior press manager at the Science Media Centre in London, published a commentary in one of the leading academic journals, Nature, under the headline: "Preprints could promote confusion and distortion". As he noted, this grew out of an earlier discussion paper that he published on the Science Media Centre's blog. The Science Media Centre describes itself as "an independent press office helping to ensure that the public have access to the best scientific evidence and expertise through the news media when science hits the headlines." Its funding comes from "scientific institutions, science-based companies, charities, media organisations and government". Sheldon's concerns are not so much about preprints themselves, but their impact on how science is reported:

I am a big fan of bold and disruptive changes which can lead to fundamental culture change. My reading around work on reproducibility, open access and preprint make me proud to be part of a scientific community intent on finding ways to make science better. But I am concerned about how this change might affect the bit of science publication that we are involved with at the Science Media Centre. The bit which is all about the way scientific findings find their way to the wider public and policymakers via the mass media.

One of his concerns is the lack of embargoes for preprints. At the moment, when researchers have what they think is an important result or discovery appearing in a paper, they typically offer trusted journalists a chance to read it in advance on the understanding that they won't write about it until the paper is officially released. This has a number of advantages. It creates a level playing field for those journalists, who all get to see the paper at the same time. Crucially, it allows journalists to contact other experts to ask their opinion of the results, which helps to catch rogue papers, and also provides much-needed context. Sheldon writes:

Contrast this with preprints. As soon as research is in the public domain, there is nothing to stop a journalist writing about it, and rushing to be the first to do so. Imagine early findings that seem to show that climate change is natural or that a common vaccine is unsafe. Preprints on subjects such as those could, if they become a story that goes viral, end up misleading millions, whether or not that was the intention of the authors.

That's certainly true, but it is easy to remedy. Academics who plan to publish a preprint could offer a copy of the paper to the group of trusted journalists under embargo -- just as they would with traditional papers. One sentence describing why it would be worth reading is all that is required by way of introduction. To the extent that the system works for today's published papers, it will also work for preprints. Some authors may publish without giving journalists time to check with other experts, but that's also true for current papers. Similarly, some journalists may hanker after full press releases that spoon-feed them the results, but if they can't be bothered to work it out for themselves, or to contact the researchers and ask for an explanation, they probably wouldn't write a very good article anyway.

The other concern relates to the quality of preprints. One of the key differences between a preprint and a paper published in a journal is that the latter usually goes through the process of "peer review", whereby fellow academics read and critique it. But it is widely agreed that the peer review process has serious flaws, as many have pointed out for years -- and as Sheldon himself admits.

Indeed, as defenders note, preprints allow far more scrutiny than traditional peer review does, because they are open for everyone to read and to check for mistakes. There are some new and interesting projects to formalize this kind of open review. Sheldon rightly has particular concerns about papers on public health matters, where lives might be put at risk by erroneous or misleading results. But major preprint sites like bioRxiv (for biology) and the upcoming medRxiv (for medicine and health sciences) are already trying to reduce that problem by actively screening preprints before they are posted.

Sheldon certainly raises some valid questions about the impact of preprints on the communication of science to a general audience. None of the issues is insurmountable, but addressing them may require journalists as well as scientists to adapt to the changed landscape. However, changing how things are done is precisely the point of preprints. The present academic publishing system does not promote general access to knowledge that is largely funded by the taxpayer. The attempt by the open access movement to make that happen has arguably been neutered by shrewd moves on the part of traditional publishers, helped by complaisant politicians. Preprints are probably the best hope we have now for achieving a more equitable and efficient way of sharing knowledge and building on it more effectively.


Posted on Techdirt - 8 August 2018 @ 7:59pm

ICANN Loses Yet Again In Its Quixotic Quest To Obtain A Special Exemption From The EU's GDPR

from the oh,-do-give-it-a-rest dept

Back in May, we wrote about the bizarre attempt by the Internet Corporation for Assigned Names and Numbers (ICANN) to exempt itself from the EU's new privacy legislation, the GDPR. ICANN sought an injunction to force EPAG, a Tucows-owned registrar based in Bonn, Germany, to collect administrative and technical contacts as part of the domain name registration process. EPAG had refused, because it felt doing so would fall foul of the GDPR. A German court turned down ICANN's request, but without addressing the question of whether gathering that information would breach the GDPR.

As the organization's timeline of the case indicates, ICANN then appealed against that ruling to the Higher Regional Court of Cologne, Germany. Meanwhile, the lower court that issued the original judgment decided to revisit the case, which it has the option to do upon receipt of an appeal. However, it did not change its view, and referred the matter to the higher court. The Appellate Court of Cologne has now issued its judgment (pdf), delivering a comprehensive smackdown of ICANN yet again (via The Register):

Regardless of the fact that already in view of the convincing remarks of the Regional Court in its orders of 29 May 2018 and 16 July 2018 the existence of a claim for a preliminary injunction (Verfügungsanspruch) is doubtful, at least with regard to the main application, the granting the sought interim injunction fails in any case because the Applicant has not sufficiently explained and made credible a reason for a preliminary injunction (Verfügungsgrund).

The Appellate Court pointed out that ICANN could hardly claim it would suffer "irreparable harm" if it were not granted an injunction forcing EPAG to gather the additional data. If necessary, ICANN could collect that information at a later date, without any serious consequences. ICANN's case was further undermined by the fact that gathering administrative and technical contacts in the past had always been on a voluntary basis, so not doing so could hardly cause great damage.

Once more, then, the question of whether collecting this extra personal information was forbidden under the GDPR was not addressed, since ICANN's argument was found wanting irrespective of that privacy issue. And because no interpretation of the GDPR was required for the case, the Appellate Court also ruled there were no grounds for referring the question to the EU's highest court, the Court of Justice of the European Union.

ICANN says that it is "considering its next steps", but it's hard to see what those might be, given the unanimous verdict of the courts. Maybe it's time for ICANN to comply with EU law like everybody else, and to stop wasting money on its forlorn attempts to get EU courts to grant it a special exemption from the GDPR's rules.


Posted on Techdirt - 2 August 2018 @ 3:26am

Facebook Granted 'Unprecedented' Leave To Appeal Over Referral Of Privacy Shield Case To Top EU Court

from the never-a-dull-moment dept

Back in April, we wrote about the latest development in the long, long saga of Max Schrems' legal challenge to Facebook's data transfers from the EU to the US. The Irish High Court referred the case to the EU's top court, asking the Court of Justice of the European Union (CJEU) to rule on eleven issues that the judge raised. Facebook tried to appeal against the Irish High Court's decision, but the received wisdom was that appealing was not an option for CJEU referrals of this kind. As the Irish Times reports, to everyone's surprise, it seems the received wisdom was wrong:

The [Irish] Supreme Court has agreed to hear an unprecedented appeal by Facebook over a High Court judge's decision to refer to the European Court of Justice (CJEU) key issues concerning the validity of EU-US data transfer channels.

The Irish Chief Justice rejected arguments by the Irish Data Protection Commissioner and Schrems that Facebook could not seek to have the Supreme Court reverse certain disputed findings of fact by the High Court. The judge said that it was "at least arguable" Facebook could persuade the Supreme Court that some or all of the facts under challenge should be reversed. On that basis, the appeal could go ahead. Among the facts that would be considered were the following key points:

The chief justice said Facebook was essentially seeking that the Supreme Court "correct" the alleged errors, including the High Court findings of "mass indiscriminate" processing, that surveillance is legal unless forbidden, on the doctrine of legal standing in US law and in the consideration of other issues including safeguards.

Facebook also argues the High Court erred in finding the laws and practices of the US did not provide EU citizens with an effective remedy, as required under the Charter of Fundamental Rights of the EU, for breach of data privacy rights.

Those are crucial issues not just for Facebook, but also for the validity of the entire Privacy Shield framework, which is currently under pressure in the EU. It's not clear whether the Irish Supreme Court is really prepared to overrule the High Court judge, and to what extent the CJEU will take note anyway. One thing that is certain is that a complex and important case just took yet another surprising twist.


Posted on Techdirt - 26 July 2018 @ 8:06pm

EU And Japan Agree To Free Data Flows, Just As Tottering Privacy Shield Framework Threatens Transatlantic Transfers

from the cooperation-not-confrontation dept

The EU's strong data protection laws affect not only how personal data is handled within the European Union, but also where it can flow to. Under the GDPR, just as was the case with the preceding EU data protection directive, the personal data of EU citizens can only be sent to countries whose privacy laws meet the standard of "essential equivalence". That is, there may be differences in detail, but the overall effect has to be similar to the GDPR, something established as part of what is called an "adequacy decision". Just such an adequacy ruling by the European Commission has been agreed in favor of Japan:

This mutual adequacy arrangement will create the world's largest area of safe transfers of data based on a high level of protection for personal data. Europeans will benefit from strong protection of their personal data in line with EU privacy standards when their data is transferred to Japan. This arrangement will also complement the EU-Japan Economic Partnership Agreement, European companies will benefit from uninhibited flow of data with this key commercial partner, as well as from privileged access to the 127 million Japanese consumers. With this agreement, the EU and Japan affirm that, in the digital era, promoting high privacy standards and facilitating international trade go hand in hand. Under the GDPR, an adequacy decision is the most straightforward way to ensure secure and stable data flows.

Before the European Commission formally adopts the latest adequacy decision, Japan has agreed to tighten up certain aspects of its data protection laws by implementing the following:

A set of rules providing individuals in the EU whose personal data are transferred to Japan, with additional safeguards that will bridge several differences between the two data protection systems. These additional safeguards will strengthen, for example, the protection of sensitive data, the conditions under which EU data can be further transferred from Japan to another third country, the exercise of individual rights to access and rectification. These rules will be binding on Japanese companies importing data from the EU and enforceable by the Japanese independent data protection authority (PPC) and courts.

A complaint-handling mechanism to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities. This new mechanism will be administered and supervised by the Japanese independent data protection authority.

It is precisely these areas that are proving so problematic for the data flow agreement between the EU and the US, known as the Privacy Shield framework. As Techdirt has reported, the European Commission is under increasing pressure to suspend Privacy Shield unless the US implements it fully -- something it has failed to do so far, despite repeated EU requests. Granting adequacy to Japan is an effective way to flag up that other major economies don't have any problems with the GDPR, and that the EU can turn its attention elsewhere if the US refuses to comply with the terms of the Privacy Shield agreement.

The new data deal with Japan still has several hurdles to overcome before it goes into effect. For example, the European Data Protection Board, the EU body in charge of applying the GDPR, must give its view on the adequacy ruling, as must the civil liberties committee of the European Parliament -- the one that has just called for Privacy Shield to be halted. Nonetheless, the European Commission will be keen to adopt the adequacy decision, not least to show that countries are still willing to reduce trade barriers, rather than to impose them, as the US is currently doing.

