Glyn Moody’s Techdirt Profile

Posted on Techdirt - 19 October 2018 @ 3:35pm

Whistleblowing About Swiss Banks' Bad Behavior Just Became Safer

from the terms-and-conditions-apply dept

Whistleblowers play a vital role in releasing information the powerful would rather keep secret. But they pay a high price for their bravery, as the experiences of recent whistleblowers such as Chelsea Manning and Edward Snowden make plain. Another whistleblower whose life has become very difficult after leaking is Rudolf Elmer. He has a Web site about his actions and his subsequent problems, but it's not the easiest to navigate. Here's Wikipedia's summary of who he is and what he did:

In 2008, Elmer illegally disclosed confidential bank documents to WikiLeaks detailing the activities of [the Swiss multinational private bank] Julius Bär in the Cayman Islands and its role in alleged tax evasion. In January 2011, he was convicted in Switzerland of breaching secrecy laws and other offenses. He was rearrested immediately thereafter for having again distributed illegally obtained data to WikiLeaks. Julius Bär as well as select Swiss and German newspapers alleges that Elmer has doctored evidence to suggest the bank engaged in tax evasion.

According to a new article about him in the Economist, Elmer has undergone no fewer than 48 prosecutorial interrogations, spent six months in solitary confinement and faced 70 court rulings. The good news is that he has finally won an important court case at Switzerland's Supreme Court. The court ruled that since Elmer was employed by the Cayman Islands affiliate of the Zurich-based Julius Bär bank, he was not bound by Switzerland's strict secrecy laws when he passed information to WikiLeaks. Here's why that is a big deal, and not just for Elmer:

The ruling matters because Swiss banks are among the world's most international. They employ thousands of private bankers offshore, and many more in outsourcing operations in countries like India and Poland. Many foreign employees are involved in creating structures comprising overseas companies and trusts linked to a Swiss bank account. Thanks to the ruling, as long as their employment contract is local they can now leak information on suspected tax evasion or other shenanigans without fear of falling under Switzerland's draconian secrecy law, which imposes jail terms of up to five years on whistleblowers.

Sadly, Elmer's problems aren't over. According to the Economist article, he was found guilty of forging a letter and making a threat, and has been ordered to pay SFr320,000 ($325,000) towards the costs of the case. He maintains this was imposed on him as "revenge" for prevailing in the main part of his case. Certainly, in the light of the Supreme Court's ruling in favor of whistleblowing, he is unlikely to have won any new friends in the world of Swiss banking.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 12 October 2018 @ 3:43am

Politicians Start To Push For Autonomous Vehicle Data To Be Protected By Copyright Or Database Rights

from the battle-for-the-internet-of-things dept

Autonomous vehicles are much in the news these days, and seem poised to enter the mainstream soon. One of their key aspects is that they are digital systems -- essentially, computers with wheels. As such they gather and generate huge amounts of data as they move around and interact with their surroundings. This kind of data is increasingly valuable, so an important question arises: what should happen to all the information that autonomous vehicles produce?

The issue came up recently in a meeting of the European Parliament's legal affairs committee, which was drawing up a document to summarize its views on autonomous driving in the EU (pdf). It's an area now being explored by the EU with a view to bringing in relevant regulations where they are needed. Topics under consideration include civil liability, data protection, and who gets access to the data produced by autonomous vehicles. On that topic, the Swedish Greens MEP Max Andersson suggested the following amendment (pdf) to the committee's proposed text:

Notes that data generated during autonomous transport are automatically generated and are by nature not creative, thus making copyright protection or the right on databases inapplicable.

Pretty inoffensive stuff, you might think. But not for the center-right EPP politicians present. They demanded a vote on Andersson's amendment, and then proceeded to block its inclusion in the committee's final report.

This is a classic example of the copyright ratchet in action: copyright only ever gets longer, stronger and broader. Here a signal is being sent that copyright or a database right should be extended to apply not just to works created by people, but also to the data streams generated by autonomous vehicles. Given their political leanings, it is highly unlikely that the EPP politicians believe that data belongs to the owner of the vehicle. They presumably think that the manufacturer retains rights to it, even after the vehicle has left the factory and been sold.

That's bad enough, but there's a bigger threat here. Autonomous vehicles are just part of a much larger wave of connected digital devices that generate huge quantities of data, what is generally called the Internet of Things. The next major front in the copyright wars -- the next upward move of the copyright ratchet -- will be over what happens to all that data, and who, if anyone, owns it.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 10 October 2018 @ 12:00pm

As Everyone Knows, In The Age Of The Internet, Privacy Is Dead -- Which Is Awkward If You Are A Russian Spy

from the not-just-here-for-the-medieval-church-architecture dept

Judging by the headlines, there are Russian spies everywhere these days. Of course, Russia routinely denies everything, but its attempts at deflection are growing a little feeble. For example, the UK government identified two men it claimed were responsible for the novichok attack on the Skripals in Salisbury. It said they were agents from GRU, Russia's largest military intelligence agency, and one of several groups authorized to spy for the Russian government. The two men appeared later on Russian television, where they denied they were spies, and insisted they were just lovers of English medieval architecture who were in Salisbury to admire the cathedral's 123-meter spire.

More recently, Dutch military intelligence claimed that four officers from GRU had flown into the Netherlands in order to carry out an online attack on the headquarters of the international chemical weapons watchdog that was investigating the Salisbury poisoning. In this case, the Russian government didn't even bother insisting that the men were actually in town to look at Amsterdam's canals. That was probably wise, since a variety of information available online seems to confirm their links to GRU, as the Guardian explained:

One of the suspected agents, tipped as a "human intelligence source" by Dutch investigators, had registered five vehicles at a north-western Moscow address better known as the Aquarium, the GRU finishing school for military attaches and elite spies. According to online listings, which are not official but are publicly available to anyone on Google, he drove a Honda Civic, then moved on to an Alfa Romeo. In case the address did not tip investigators off, he also listed the base number of the Military-Diplomatic Academy.

One of the men, Aleksei Morenets, an alleged hacker, appeared to have set up a dating profile.

Another played for an amateur Moscow football team "known as the security services team," a current player told the Moscow Times. "Almost everyone works for an intelligence agency." The team rosters are publicly available.

The "open source intelligence" group Bellingcat came up with even more astonishing details when they started digging online. Bellingcat found one of the four Russians named by the Dutch authorities in Russia's vehicle ownership database. The car was registered to Komsomolsky Prospekt 20, which happens to be the address of military unit 26165, described by Dutch and US law enforcement agencies as GRU's digital warfare department. By searching the database for other vehicles registered at the same address, Bellingcat came up with a list of 305 individuals linked with the GRU division. The database entries included their full names and passport numbers, as well as mobile phone numbers in most cases. Bellingcat points out that if these are indeed GRU operatives, this discovery would be one of the largest breaches of personal data of an intelligence agency in recent years.

An interesting thread on Twitter by Alexander Gabuev, Senior Fellow and Chair of Russia in Asia-Pacific Program at Carnegie Moscow Center, explains why Bellingcat was able to find such sensitive information online. He says:

the Russian Traffic Authority is notoriously corrupt even by Russian standards, it's inexhaustible source of dark Russian humor. No surprise its database is very easy to buy in the black market since 1990s

In the 1990s, black market information was mostly of interest to specialists, hard to find, and had limited circulation. Today, even sensitive data almost inevitably ends up posted online somewhere, because anything in digital form tends to leak onto the Internet sooner or later. It's then only a matter of time before groups like Bellingcat find it as they follow up their leads. Combine that with the wealth of information contained in social media posts or on Web sites, and spies have a problem keeping to the shadows. Techdirt has written many stories about how the privacy of ordinary people has been compromised by leaks of personal information that is later made available online. There's no doubt that can be embarrassing and inconvenient for those affected. But if it's any consolation, it's even worse when you are a Russian spy.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 5 October 2018 @ 1:42pm

Broad Alliance Calls For Australian Government To Listen To Experts' Warnings About Flaws In New Compelled Access Legislation

from the nah,-we're-ramming-it-through-anyway dept

The battle against encryption is being waged around the world by numerous governments, no matter how often experts explain, often quite slowly, that it's a really bad idea. As Techdirt reported back in August, Australia is mounting its own attack against privacy and security in the form of a compelled access law. The pushback there has just taken an interesting turn with the formation of the Alliance for a Safe and Secure Internet:

The Alliance is campaigning for the Government to slow down, stop ignoring the concerns of technology experts, and listen to its citizens when they raise legitimate concerns. For a piece of legislation that could have such far ranging impacts, a proper and transparent dialogue is needed, and care taken to ensure it does not have the unintended consequence of making all Australians less safe.

The Alliance for a Safe and Secure Internet represents an unusually wide range of interests. It includes Amnesty International and the well-known local group Digital Rights Watch, the Communications Alliance, the main industry body for Australian telecoms, and DIGI, which counts Facebook, Google, Twitter and Yahoo among its members. One disturbing development since we last wrote about the proposed law is the following:

The draft Bill was made public in mid-August and, following a three week consultation process, a large number of submissions from concerned citizens and organisations were received by the Department of Home Affairs. Only a week after the consultation closed, the Bill was rushed into Parliament with only very minor amendments, meaning that almost all the expert recommendations for changes to the Bill were ignored by Government.

The Bill has now been referred to the Parliamentary Joint Committee on Intelligence and Security (PJCIS), where again processes have been truncated, setting the stage for it to be passed into law within months.

That's a clear indication that the Australian government intends to ram this law through the legislative process as quickly as possible, and that it has little intention of taking any notice of what the experts say on the matter -- yet again.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 5 October 2018 @ 3:06am

Most Chinese Patents Are Being Discarded By Their Owners Because They Are Not Worth The Maintenance Fees To Keep Them

from the more-patents-do-not-mean-more-innovation dept

Techdirt has been writing about China and patents for years. One recurrent theme is that the West is foolish to encourage China to embrace patents more enthusiastically, since the inevitable result will be more Chinese companies suing Western ones for alleged infringement. The second theme -- related to the first -- is that the Chinese government is unwise to use patents as proxies for innovation by offering incentives to its researchers and companies to file for patents. That leads people to file as much as possible, regardless of whether the ideas are original enough to warrant patent protection. One of the surest guides to the value of a patent is whether those who filed for it are willing to pay maintenance fees. Clearly, if patents were really as valuable as many claim they are, there would be no question about paying. An article in Bloomberg reveals how that is working out in China:

Despite huge numbers of filings, most patents are discarded by their fifth year as licensees balk at paying escalating fees. When it comes to design, more than nine out of every ten lapses -- almost the mirror opposite of the U.S.

The high attrition rate is a symptom of the way China has pushed universities, companies and backyard inventors to transform the country into a self-sufficient powerhouse. Subsidies and other incentives are geared toward making patent filings, rather than making sure those claims are useful. So the volume doesn't translate into quality, with the country still dependent on others for innovative ideas, such as modern smartphones.

The discard rate varies according to the patent type. China issues patents for three different categories: invention, utility model and design. Invention patents are "classical" patents, and require a notable breakthrough of some kind, at least in theory. A design patent could be just the shape of a product, while a utility model would include something as minor as sliding to unlock a smartphone. According to the Bloomberg article, 91% of design patents granted in 2013 had been discarded because people stopped paying to maintain them, while 61% of utility patents lapsed within five years. Even the relatively rigorous invention patents saw 37% dumped, compared to around 15% of US patents that were not maintained after five years.

This latest news usefully confirms that the simplistic equation "more patents = more innovation" is false, as Techdirt has been warning for years. It also suggests that China still has some way to go before it can match the West in real inventiveness, rather than the sham kind based purely on meaningless patent statistics.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 3 October 2018 @ 7:28pm

African Countries Shooting Themselves In The Digital Foot By Imposing Taxes And Levies On Internet Use

from the how-not-to-do-it dept

Techdirt has written a number of stories recently about unfortunate developments taking place in the African digital world. The Alliance for Affordable Internet (A4AI) site has usefully pulled together what's been happening across the continent -- and it doesn't look good:

A4AI's recent mobile broadband pricing update shows that Africans face the highest cost to connect to the internet -- just 1GB of mobile data costs the average user in Africa nearly 9% of their monthly income, while their counterparts in the Asia-Pacific region pay one-fifth of that price (around 1.5% of monthly income). Despite this already high cost to connect, we're seeing a worrying trend of governments across Africa imposing a variety of taxes on some of the most popular internet applications and services.

The article goes on to list the following examples.

Uganda imposes a daily fee of UGX 200 ($0.05) to access social media sites and many common Internet-based messaging and voice applications, as well as a tax on mobile money transactions.

Zambia has announced it will levy a 30 ngwee ($0.03) daily tax on social network use.

Tanzania requires bloggers to pay a government license fee roughly equivalent to the average annual income for the country.

Kenya aims to impose additional taxation on the Internet, with proposed levies on telecommunications and on mobile money transfers.

Benin imposed a 5 CFA franc ($0.01) per megabyte fee to access social media sites, messaging, and Voice-over-IP applications, causing a 250% increase in the price for 1GB of mobile data.

The article explains that the last of these was rescinded within days because of public pressure, while Kenya's tax is currently on hold thanks to a court order. Nonetheless, there is a clear tendency among some African governments to see the Internet as a handy new source of tax income. That's a very short-sighted move. At a time when the digital world in Africa is advancing rapidly, with innovation hubs and startups appearing all over the continent, making it more expensive and thus harder for ordinary people to access the Internet threatens to throttle this growth. Whatever the short-term revenue gains from the moves listed above, countries imposing such taxes and levies risk cutting their citizens off from the exciting digital future being forged elsewhere in Africa. As the A4AI post rightly says:

Africa, with the largest digital divide of any geographic region, has the greatest untapped potential with regards to improving affordable access and meaningful use of the internet. With affordable internet access, African economies can grow sustainably and inclusively.

Sadly, in certain African countries, that seems unlikely to happen.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 26 September 2018 @ 3:43pm

Indian Supreme Court Rules Aadhaar Does Not Violate Privacy Rights, But Places Limits On Its Use

from the mixed-result dept

Techdirt wrote recently about what seems to be yet another problem with India's massive Aadhaar biometric identity system. Alongside these specific security issues, there is the larger question of whether Aadhaar as a whole is a violation of Indian citizens' fundamental privacy rights. That question was made all the more pertinent in the light of the country's Supreme Court ruling last year that "Privacy is the constitutional core of human dignity." It led many to hope that the same court would strike down Aadhaar completely following constitutional challenges to the project. However, in a mixed result for both privacy organizations and Aadhaar proponents, India's Supreme Court has handed down a judgment that the identity system does not fundamentally violate privacy rights, but that its use must be strictly circumscribed. As The New York Times explains:

The five-judge panel limited the use of the program, called Aadhaar, to the distribution of certain benefits. It struck down the government's use of the system for unrelated issues like identifying students taking school exams. The court also said that private companies like banks and cellphone providers could not require users to prove their identities with Aadhaar.

The majority opinion of the court said that an Indian's Aadhaar identity was unique and "unparalleled" and empowered marginalized people, such as those who are illiterate.

The decision affects everything from government welfare programs, such as food aid and pensions, to private businesses, which have used the digital ID as a fast, efficient way to verify customers' identities. Some states, such as Andhra Pradesh, had also planned to integrate the ID system into far-reaching surveillance programs, raising the specter of widespread government spying.

In essence, the Supreme Court seems to have felt that although Aadhaar's problems were undeniable, its advantages, particularly for India's poorest citizens, outweighed those concerns. However, its ruling also sought to limit function creep by stipulating that Aadhaar's compulsory use had to be restricted to the original aim of distributing government benefits. Although that seems a reasonable compromise, it may not be quite as clear-cut as it seems. The Guardian writes that it still may be possible to use Aadhaar for commercial purposes:

Sharad Sharma, the co-founder of a Bangalore-based technology think tank which has worked closely with Aadhaar's administrators, said Wednesday's judgment did not totally eliminate that vision for the future of the scheme, but that private use of Aadhaar details would now need to be voluntary.

"Nothing has been said [by the court] about voluntary usage and nothing has been said about regulating bodies mandating it for services," Sharma said. "So access to private parties for voluntary use is permitted."

That looks to be a potentially large loophole in the Supreme Court's attempt to keep the benefits of Aadhaar while preventing it from turning into a compulsory identity system for accessing all government and business services. No doubt in the coming years we will see companies exploring just how far they can go in demanding a "voluntary" use of Aadhaar, as well as legal action by privacy advocates trying to stop them from doing so.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 24 September 2018 @ 10:44am

China Actively Collecting Zero-Days For Use By Its Intelligence Agencies -- Just Like The West

from the no-moral-high-ground-there,-then dept

It all seems so far away now, but in 2013, during the early days of the Snowden revelations, a story about the NSA's activities emerged that apparently came from a different source. Bloomberg reported (behind a paywall, summarized by Ars Technica) that Microsoft was providing the NSA with information about newly-discovered bugs in the company's software before it patched them. It gave the NSA a window of opportunity during which it could take advantage of those flaws in order to gain access to computer systems of interest. Later that year, the Washington Post reported that the NSA was spending millions of dollars per year to acquire other zero-days from malware vendors.

A stockpile of vulnerabilities and hacking tools is great -- until it leaks out, which is precisely what seems to have happened several times with the NSA's collection. The harm such a lapse can cause was vividly demonstrated by the WannaCry ransomware. It was built on a Windows exploit from the NSA's toolkit, and caused very serious problems for companies -- and hospitals -- around the world.

The other big problem with the NSA -- or the UK's GCHQ, or Germany's BND -- taking advantage of zero-days in this way is that it makes it inevitable that other actors will do the same. An article on the Access Now site confirms that China is indeed seeking out software flaws that it can use for attacking other systems:

In November 2017, Recorded Future published research on the publication speed for China's National Vulnerability Database (with the memorable acronym CNNVD). When they initially conducted this research, they concluded that China actually evaluates and reports vulnerabilities faster than the U.S. However, when they revisited their findings at a later date, they discovered that a majority of the figures had been altered to hide a much longer processing period during which the Chinese government could assess whether a vulnerability would be useful in intelligence operations.

As the Access Now article explains, the Chinese authorities have gone beyond simply keeping zero-days quiet for as long as possible. They are actively discouraging Chinese white hats from participating in international hacking competitions, because this would help Western companies learn about bugs that might otherwise be exploitable by China's intelligence services. This is really bad news for the rest of us. It means that the coders in China's huge and growing talent pool are no longer likely to report bugs to software companies when they find them. Instead, those bugs will be passed to the CNNVD for assessment. Not only will bug fixes take longer to appear, exposing users to security risks, but the Chinese may even weaponize the zero-days in order to break into other systems.

Another regrettable aspect of this development is that Western countries like the US and UK can hardly point fingers here, since they have been using zero-days in precisely this way for years. The fact that China -- and presumably Russia, North Korea and Iran, among others -- has now joined the club underlines what a stupid move this was. It may have provided a short-term advantage for the West, but now that it's become the norm for intelligence agencies, the long-term effect is to reduce the security of computer systems everywhere by leaving known vulnerabilities unpatched. It's an unwinnable digital arms race that will be hard to stop now. It also shows why adding any kind of weakness to cryptographic systems would be an incredibly reckless escalation of an approach that has already put lives at risk.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Free Speech - 19 September 2018 @ 11:59am

Tanzania Plans To Outlaw Fact-Checking Of Government Statistics

from the dodgy-data dept

Back in April, Techdirt wrote about a set of regulations brought in by the Tanzanian government that required people there to pay around $900 per year for a license to blog. Despite the very high costs it imposes on people -- Tanzania's GDP per capita was under $900 in 2016 -- it seems the authorities are serious about enforcing the law. The iAfrikan site reported in June:

Popular Tanzanian forums and "leaks" website, Jamii Forums, has been temporarily shut down by government as it has not complied with the new regulations and license fees required of online content creators in Tanzania. This comes after Tanzania Communications Regulatory Authority (TCRA) issued a notice to Jamii Forums reminding them that it is a legal offense to publish content on the Internet without having registered and paid for a license.

The Swahili-language site Jamii Forums is back online now. But the Tanzanian authorities are not resting on their laurels when it comes to introducing ridiculous laws. Here's another one that's arguably worse than charging bloggers to post:

[President John] Magufuli and his colleagues are now looking to outlaw fact checking thanks to proposed amendments to the Statistics Act, 2015.

"The principal Act is amended by adding immediately after section 24 the following: 24A.-(1) Any person who is authorised by the Bureau to process any official statistics, shall before publishing or communicating such information to the public, obtain an authorisation from the Bureau. (2) A person shall not disseminate or otherwise communicate to the public any statistical information which is intended to invalidate, distort, or discredit official statistics," reads the proposed amendments to Tanzania's Statistics Act, 2015 as published in the Gazette of the United Republic of Tanzania No. 23 Vol. 99.

As the iAfrikan article points out, the amendments will mean that statistics published by the Tanzanian government must be regarded as correct, however absurd or obviously erroneous they might be. Moreover, it will be illegal for independent researchers to publish any other figures that contradict, or even simply call into question, official statistics.

This is presumably born of a thin-skinned government that wants to avoid even the mildest criticism of its policies or plans. But it seems certain to backfire badly. If statistics are wrong, but no one can correct them, there is the risk that Tanzanian businesses, organizations and citizens will make bad decisions based on this dodgy data. That could lead to harmful consequences for the economy and society, which the Tanzanian government might well be tempted to cover up by issuing yet more incorrect statistics. Without open and honest feedback to correct this behavior, there could be an ever-worsening cascade of misinformation and lies until public trust in the government collapses completely. Does President Magufuli really want that?

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 17 September 2018 @ 7:49pm

Software Patch Claimed To Allow Aadhaar's Security To Be Bypassed, Calling Into Question Biometric Database's Integrity

from the but-it's-ok,-we-already-blacklisted-the-50,000-rogue-operators-that-we-found dept

Earlier this year, we wrote about what seemed to be a fairly serious breach of security at the world's largest biometric database, India's Aadhaar. The Indian edition of Huffington Post now reports on what looks like an even graver problem:

The authenticity of the data stored in India's controversial Aadhaar identity database, which contains the biometrics and personal information of over 1 billion Indians, has been compromised by a software patch that disables critical security features of the software used to enrol new Aadhaar users, a three month-long investigation by HuffPost India reveals.

According to the article, the patch can be bought for just Rs 2,500 (around $35). The easy-to-install software removes three critical security features of Aadhaar:

The patch lets a user bypass critical security features such as biometric authentication of enrolment operators to generate unauthorised Aadhaar numbers.

The patch disables the enrolment software's in-built GPS security feature (used to identify the physical location of every enrolment centre), which means anyone anywhere in the world -- say, Beijing, Karachi or Kabul -- can use the software to enrol users.

The patch reduces the sensitivity of the enrolment software's iris-recognition system, making it easier to spoof the software with a photograph of a registered operator, rather than requiring the operator to be present in person.

As the Huffington Post article explains, creating a patch that is able to circumvent the main security features in this way was possible thanks to design choices made early on in the project. The unprecedented scale of the Aadhaar enrollment process -- so far around 1.2 billion people have been given an Aadhaar number and added to the database -- meant that a large number of private agencies and village-level computer kiosks were used for registration. Since connectivity was often poor, the main software was installed on local computers, rather than being run in the cloud. The patch can be used by anyone with local access to the computer system, and simply involves replacing a folder of Java libraries with versions lacking the security checks.

The Unique Identification Authority of India (UIDAI), the government body responsible for the Aadhaar project, has responded to the Huffington Post article, but in a rather odd way: as a Donald Trump-like stream of tweets. The Huffington Post points out: "[the UIDAI] has simply stated that its systems are completely secure without any supporting evidence." One of the Aadhaar tweets is as follows:

It is because of this stringent and robust system that as on date more that 50,000 operators have been blacklisted, UIDAI added.

The need to throw 50,000 operators off the system hardly inspires confidence. What makes things worse is that the Indian government seems determined to make Aadhaar indispensable for citizens who want to deal with the state in any way, and to encourage businesses to do the same. Given the continuing questions about Aadhaar's overall security and integrity, that seems unwise, to say the least.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 13 September 2018 @ 12:17am

Corporate Sovereignty On The Wane, As Governments Realize It's More Trouble Than It's Worth

from the but-not-dead-yet dept

A few years ago, corporate sovereignty -- officially known as "investor-state dispute settlement" (ISDS) -- was regarded as an indispensable element of trade deals. As a result, it would crop up on Techdirt quite often. But the world is finally moving on, and old-style corporate sovereignty is losing its appeal. As we reported last year, the US Trade Representative, Robert Lighthizer, hinted that the US might not support ISDS in future trade deals, but it was not clear what that might mean in practice. The Canadian Broadcasting Corporation (CBC) site has an interesting article that explores the new contours of corporate sovereignty:

The preliminary trade agreement the U.S. recently reached with Mexico may offer a glimpse of what could happen with NAFTA's Chapter 11 [governing ISDS].

A U.S. official said the two countries wanted ISDS to be "limited" to cases of expropriation, bias against foreign companies or failure to treat all trading partners equally.

The new US thinking places Canada in a tricky position, because Canada is involved in several trade deals that take different approaches to corporate sovereignty. As well as the US-dominated NAFTA, there is CETA, the trade deal with Europe. For that, Canada is acquiescing to the EU's request to replace ISDS with the new Investment Court System (ICS). In TPP, however -- still lumbering on, despite the US withdrawal -- Canada seems to be going along with the traditional corporate sovereignty approach.

A willingness to move on from traditional ISDS can be seen in the often overlooked, but important, Regional Comprehensive Economic Partnership (RCEP) trade deal. India's Business Standard reports:

Despite treading diametrically opposite paths on tariffs and market access, India and China, along with other nations, have hit it off on talks regarding investment norms in the proposed Regional Comprehensive Economic Partnership (RCEP) pact.

In a bid to fast-track the deal, most nations have agreed to ease the investor-state-dispute settlement (ISDS) clauses.

As with NAFTA and CETA, it seems that the nations involved in RCEP no longer regard corporate sovereignty as a priority, and are willing to weaken its powers in order to reach agreement on other areas. Once the principle has been established that ISDS can be watered down, there's nothing to stop nations proposing that it should be dropped altogether. Given the astonishing awards and abuses that corporate sovereignty has led to in the past, that's a welcome development.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 10 September 2018 @ 8:01pm

Europe's New 'Plan S' For Open Access: Daft Name, Great News

from the admirably-strong dept

The journey towards open access has been a long one, with many disappointments along the way. But occasionally there are unequivocal and major victories. One such is the new "Plan S" from the inelegantly-named cOAlition S:

On 4 September 2018, 11 national research funding organisations, with the support of the European Commission including the European Research Council (ERC), announced the launch of cOAlition S, an initiative to make full and immediate Open Access to research publications a reality. It is built around Plan S, which consists of one target and 10 principles.

cOAlition S signals the commitment to implement, by 1 January 2020, the necessary measures to fulfil its main principle: "By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms."

The plan and its ten principles (pdf) are usefully summed up by Peter Suber, one of the earliest and most influential open access advocates, as follows:

The plan is admirably strong. It aims to cover all European research, in the sciences and in the humanities, at the EU level and the member-state level. It's a plan for a mandate, not just an exhortation or encouragement. It keeps copyright in the hands of authors. It requires open licenses and prefers CC-BY. It abolishes or phases out embargoes. It does not support hybrid journals except as stepping stones to full-OA journals. It's willing to pay APCs [Article Processing Charges] but wants to cap them, and wants funders and universities to pay them, not authors. It will monitor compliance and sanction non-compliance. It's already backed by a dozen powerful, national funding agencies and calls for other funders and other stakeholders to join the coalition.

Keeping copyright in the hands of authors is crucial: too often, academics have been cajoled or bullied into handing over copyright for their articles to publishers, thus losing the ability to determine who can read them, and under what conditions. Similarly, the CC-BY license would allow commercial use by anyone -- many publishers try to release so-called open access articles under restrictive licenses like CC-BY-NC, which stop other publishers from distributing them.

Embargo periods are routinely used by publishers to delay the appearance of open access versions of articles; under Plan S, that would no longer be allowed. Finally, the new initiative discourages the use of "hybrid" journals that have often enabled publishers to "double dip". That is, they charge researchers who want to release their work as open access, but also require libraries to take out full-price subscriptions for journals that include these freely-available articles.

Suber has a number of (relatively minor) criticisms of Plan S, which are well worth reading. All in all, though, this is a major breakthrough for open access in Europe, and thus the world. Once "admirably strong" open access mandates like Plan S have been established in one region, others tend to follow in due course. Let's just hope they choose better names.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 5 September 2018 @ 7:35pm

Leading Biomedical Funders Call For Open Peer Review Of Academic Research

from the nothing-to-hide dept

Techdirt has written many posts about open access -- the movement to make digital versions of academic research freely available to everyone. Open access is about how research is disseminated once it has been selected for publication. So far, there has been less emphasis on changing how academic work is selected in the first place, which is based on the time-honored approach of peer review. That is, papers submitted to journals are sent out to experts in the same or similar field, who are invited to comment on ways of improving the work, and on whether the research should be published. Traditionally, the process is shrouded in secrecy. The reviewers are generally anonymous, and the reports they make on the submissions are not made public. Now, however, the idea of making peer review more transparent as part of the general process of becoming more open is gaining increasing impetus.

A couple of weeks ago, representatives of two leading biomedical funders -- the UK Wellcome Trust and the Howard Hughes Medical Institute -- together with ASAPbio, a non-profit organization that encourages innovation in life-sciences publishing, wrote a commentary in Nature. In it, they called for "open review", which, they point out, encompasses two distinct forms of transparency:

'Open identities' means disclosing reviewers' names; 'open reports' (also called transparent reviews or published peer review) means publishing the content of reviews. Journals might offer one or the other, neither or both.

In a 2016 survey, 59% of 3,062 respondents were in favour of open reports. Only 31% favoured open identities, which they feared could cause reviewers to weaken their criticisms or could lead to retaliation from authors. Here, we advocate for open reports as the default and for open identities to be optional, not mandatory.

The authors of the commentary believe that there are a number of advantages to open reports:

The scientific community would learn from reviewers' and editors' insights. Social scientists could collect data (for example, on biases among reviewers or the efficiency of error identification by reviewers) that might improve the process. Early-career researchers could learn by example. And the public would not be asked to place its faith in hidden assessments.

There are, of course, risks. One concern mentioned is that published reviews might be used unfairly in subsequent evaluation of the authors for grants, jobs, awards or promotions. Another possibility is the 'weaponization' of reviewer reports:

Opponents of certain types of research (for example, on genetically modified organisms, climate change and vaccines) could take critical remarks in peer reviews out of context or mischaracterize disagreements to undermine public trust in the paper, the field or science as a whole.

Despite these and other concerns mentioned in the Nature commentary, an open letter published on the ASAPbio site lists dozens of major titles that have already instituted open reports, or promise to do so next year. As well as that indication that open reports are passing from concept to reality, it's worth bearing in mind that the UK Wellcome Trust and the Howard Hughes Medical Institute are major funders of biomedical research. It would be a relatively straightforward step for them to make the adoption of open reports a condition of receiving their grants -- something that would doubtless encourage uptake of the idea.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 17 August 2018 @ 7:39pm

As Academic Publishers Fight And Subvert Open Access, Preprints Offer An Alternative Approach For Sharing Knowledge Widely

from the this-is-the-future dept

The key idea behind open access is that everyone with an Internet connection should be able to read academic papers without needing to pay for them. Or rather, without needing to pay again, since most research is funded using taxpayers' money. It's hard to argue against that proposition, or to deny that making information available in this way is likely to increase the rate at which medical and scientific discoveries are made for the benefit of all. And yet, as Techdirt has reported, academic publishers that often enjoy profit margins of 30-40% have adopted a range of approaches to undermine open access and its aims -- and with considerable success. A recent opinion column in the Canadian journal University Affairs explains how traditional publishers have managed to subvert open access for their own benefit:

An ironic twist to the open-access movement is that it has actually made the publishers richer. They've jumped on the bandwagon by offering authors the option of paying article processing charges (APCs) in order to make their articles open access, while continuing to increase subscription charges to libraries at the institutions where those authors work. So, in many cases, the publishers are being paid twice for the same content -- often charging APCs higher than purely open access journals.

Another serious problem is the rise of so-called "predatory" open access publishers that have distorted the original ideas behind the movement even more. The Guardian reported recently:

More than 175,000 scientific articles have been produced by five of the largest "predatory open-access publishers", including India-based Omics publishing group and the Turkish World Academy of Science, Engineering and Technology, or Waset.

But the vast majority of those articles skip almost all of the traditional checks and balances of scientific publishing, from peer review to an editorial board. Instead, most journals run by those companies will publish anything submitted to them -- provided the required fee is paid.

These issues will be hard, if not impossible, to solve. As a result, many are now looking for a different solution to the problem of providing easy and cost-free access to academic knowledge, this time in the form of preprints. Techdirt reported earlier this year that there is evidence the published versions of papers add very little to the early preprint versions that are placed online directly by the authors. The negligible barriers to entry, the speed at which work can be published, and the extremely low costs involved have led many to see preprints as the best solution to providing open access to academic papers without needing to go through publishers at all.

Inevitably, perhaps, criticisms of the idea are starting to appear. Recently, Tom Sheldon, who is a senior press manager at the Science Media Centre in London, published a commentary in one of the leading academic journals, Nature, under the headline: "Preprints could promote confusion and distortion". As he noted, this grew out of an earlier discussion paper that he published on the Science Media Centre's blog. The Science Media Centre describes itself as "an independent press office helping to ensure that the public have access to the best scientific evidence and expertise through the news media when science hits the headlines." Its funding comes from "scientific institutions, science-based companies, charities, media organisations and government". Sheldon's concerns are not so much about preprints themselves, but their impact on how science is reported:

I am a big fan of bold and disruptive changes which can lead to fundamental culture change. My reading around work on reproducibility, open access and preprint make me proud to be part of a scientific community intent on finding ways to make science better. But I am concerned about how this change might affect the bit of science publication that we are involved with at the Science Media Centre. The bit which is all about the way scientific findings find their way to the wider public and policymakers via the mass media.

One of his concerns is the lack of embargoes for preprints. At the moment, when researchers have what they think is an important result or discovery appearing in a paper, they typically offer trusted journalists a chance to read it in advance on the understanding that they won't write about it until the paper is officially released. This has a number of advantages. It creates a level playing field for those journalists, who all get to see the paper at the same time. Crucially, it allows journalists to contact other experts to ask their opinion of the results, which helps to catch rogue papers, and also provides much-needed context. Sheldon writes:

Contrast this with preprints. As soon as research is in the public domain, there is nothing to stop a journalist writing about it, and rushing to be the first to do so. Imagine early findings that seem to show that climate change is natural or that a common vaccine is unsafe. Preprints on subjects such as those could, if they become a story that goes viral, end up misleading millions, whether or not that was the intention of the authors.

That's certainly true, but it is easy to remedy. Academics who plan to publish a preprint could offer a copy of the paper to the group of trusted journalists under embargo -- just as they would with traditional papers. One sentence describing why it would be worth reading is all that is required by way of introduction. To the extent that the system works for today's published papers, it will also work for preprints. Some authors may publish without giving journalists time to check with other experts, but that's also true for current papers. Similarly, some journalists may hanker after full press releases that spoon-feed them the results, but if they can't be bothered working it out for themselves, or contacting the researchers and asking for an explanation, they probably wouldn't write a very good article anyway.

The other concern relates to the quality of preprints. One of the key differences between a preprint and a paper published in a journal is that the latter usually goes through the process of "peer review", whereby fellow academics read and critique it. But it is widely agreed that the peer review process has serious flaws, as many have pointed out for years -- and as Sheldon himself admits.

Indeed, as defenders note, preprints allow far more scrutiny to be applied than with traditional peer review, because they are open for all to read and spot mistakes. There are some new and interesting projects to formalize this kind of open review. Sheldon rightly has particular concerns about papers on public health matters, where lives might be put at risk by erroneous or misleading results. But major preprint sites like bioRxiv (for biology) and the upcoming medRxiv (for medicine and health sciences) are already trying to reduce that problem by actively screening preprints before they are posted.

Sheldon certainly raises some valid questions about the impact of preprints on the communication of science to a general audience. None of the issues is insurmountable, but it may require journalists as well as scientists to adapt to the changed landscape. However, changing how things are done is precisely the point about preprints. The present academic publishing system does not promote general access to knowledge that is largely funded by the taxpayer. The attempt by the open access movement to make that happen has arguably been neutered by shrewd moves on the part of traditional publishers, helped by complaisant politicians. Preprints are probably the best hope we have now for achieving a more equitable and efficient way of sharing knowledge and building on it more effectively.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 8 August 2018 @ 7:59pm

ICANN Loses Yet Again In Its Quixotic Quest To Obtain A Special Exemption From The EU's GDPR

from the oh,-do-give-it-a-rest dept

Back in May, we wrote about the bizarre attempt by the Internet Corporation for Assigned Names and Numbers (ICANN) to exempt itself from the EU's new privacy legislation, the GDPR. ICANN sought an injunction to force EPAG, a Tucows-owned registrar based in Bonn, Germany, to collect administrative and technical contacts as part of the domain name registration process. EPAG had refused, because it felt doing so would fall foul of the GDPR. A German court turned down ICANN's request, but without addressing the question whether gathering that information would breach the GDPR.

As the organization's timeline of the case indicates, ICANN then appealed against the ruling to the Higher Regional Court of Cologne, Germany. Meanwhile, the lower court that issued the original judgment decided to revisit the case, which it has the option to do upon receipt of an appeal. However, it did not change its view, and referred the matter to the higher court. The Appellate Court of Cologne has now issued its judgment (pdf), which amounts to yet another comprehensive smackdown of ICANN (via The Register):

Regardless of the fact that already in view of the convincing remarks of the Regional Court in its orders of 29 May 2018 and 16 July 2018 the existence of a claim for a preliminary injunction (Verfügungsanspruch) is doubtful, at least with regard to the main application, the granting the sought interim injunction fails in any case because the Applicant has not sufficiently explained and made credible a reason for a preliminary injunction (Verfügungsgrund).

The Appellate Court pointed out that ICANN could hardly claim it would suffer "irreparable harm" if it were not granted an injunction forcing EPAG to gather the additional data. If necessary, ICANN could collect that information at a later date, without any serious consequences. ICANN's case was further undermined by the fact that gathering administrative and technical contacts in the past had always been on a voluntary basis, so not doing so could hardly cause great damage.

Once more, then, the question of whether collecting this extra personal information was forbidden under the GDPR was not addressed, since ICANN's argument was found wanting irrespective of that privacy issue. And because no interpretation of the GDPR was required for the case, the Appellate Court also ruled there were no grounds for referring the question to the EU's highest court, the Court of Justice of the European Union.

ICANN says that it is "considering its next steps", but it's hard to see what those might be, given the unanimous verdict of the courts. Maybe it's time for ICANN to comply with EU law like everybody else, and to stop wasting money on forlorn attempts to get EU courts to grant it a special exemption from the GDPR's rules.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 2 August 2018 @ 3:26am

Facebook Granted 'Unprecedented' Leave To Appeal Over Referral Of Privacy Shield Case To Top EU Court

from the never-a-dull-moment dept

Back in April, we wrote about the latest development in the long, long saga of Max Schrems' legal challenge to Facebook's data transfers from the EU to the US. The Irish High Court referred the case to the EU's top court, asking the Court of Justice of the European Union (CJEU) to rule on eleven issues that the judge raised. Facebook tried to appeal against the Irish High Court's decision, but the received wisdom was that appeals were not an option for CJEU referrals of this kind. But as the Irish Times reports, to everyone's surprise, it seems the received wisdom was wrong:

The [Irish] Supreme Court has agreed to hear an unprecedented appeal by Facebook over a High Court judge's decision to refer to the European Court of Justice (CJEU) key issues concerning the validity of EU-US data transfer channels.

The Irish Chief Justice rejected arguments by the Irish Data Protection Commissioner and Schrems that Facebook could not seek to have the Supreme Court reverse certain disputed findings of fact by the High Court. The judge said that it was "at least arguable" Facebook could persuade the Supreme Court that some or all of the facts under challenge should be reversed. On that basis, the appeal could go ahead. Among the facts that would be considered were the following key points:

The chief justice said Facebook was essentially seeking that the Supreme Court "correct" the alleged errors, including the High Court findings of "mass indiscriminate" processing, that surveillance is legal unless forbidden, on the doctrine of legal standing in US law and in the consideration of other issues including safeguards.

Facebook also argues the High Court erred in finding the laws and practices of the US did not provide EU citizens with an effective remedy, as required under the Charter of Fundamental Rights of the EU, for breach of data privacy rights.

Those are crucial issues not just for Facebook, but also for the validity of the entire Privacy Shield framework, which is currently under pressure in the EU. It's not clear whether the Irish Supreme Court is really prepared to overrule the High Court judge, and to what extent the CJEU will take note anyway. One thing that is certain is that a complex and important case just took yet another surprising twist.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 26 July 2018 @ 8:06pm

EU And Japan Agree To Free Data Flows, Just As Tottering Privacy Shield Framework Threatens Transatlantic Transfers

from the cooperation-not-confrontation dept

The EU's strong data protection laws affect not only how personal data is handled within the European Union, but also where it can flow to. Under the GDPR, just as was the case with the preceding EU data protection directive, the personal data of EU citizens can only be sent to countries whose privacy laws meet the standard of "essential equivalence". That is, there may be differences in detail, but the overall effect has to be similar to the GDPR, something established as part of what is called an "adequacy decision". Just such an adequacy ruling by the European Commission has been agreed in favor of Japan:

This mutual adequacy arrangement will create the world's largest area of safe transfers of data based on a high level of protection for personal data. Europeans will benefit from strong protection of their personal data in line with EU privacy standards when their data is transferred to Japan. This arrangement will also complement the EU-Japan Economic Partnership Agreement, European companies will benefit from uninhibited flow of data with this key commercial partner, as well as from privileged access to the 127 million Japanese consumers. With this agreement, the EU and Japan affirm that, in the digital era, promoting high privacy standards and facilitating international trade go hand in hand. Under the GDPR, an adequacy decision is the most straightforward way to ensure secure and stable data flows.

Before the European Commission formally adopts the latest adequacy decision, Japan has agreed to tighten up certain aspects of its data protection laws by implementing the following:

A set of rules providing individuals in the EU whose personal data are transferred to Japan, with additional safeguards that will bridge several differences between the two data protection systems. These additional safeguards will strengthen, for example, the protection of sensitive data, the conditions under which EU data can be further transferred from Japan to another third country, the exercise of individual rights to access and rectification. These rules will be binding on Japanese companies importing data from the EU and enforceable by the Japanese independent data protection authority (PPC) and courts.

A complaint-handling mechanism to investigate and resolve complaints from Europeans regarding access to their data by Japanese public authorities. This new mechanism will be administered and supervised by the Japanese independent data protection authority.

It is precisely these areas that are proving so problematic for the data flow agreement between the EU and the US, known as the Privacy Shield framework. As Techdirt has reported, the European Commission is under increasing pressure to suspend Privacy Shield unless the US implements it fully -- something it has failed to do so far, despite repeated EU requests. Granting adequacy to Japan is an effective way to flag up that other major economies don't have any problems with the GDPR, and that the EU can turn its attention elsewhere if the US refuses to comply with the terms of the Privacy Shield agreement.

The new data deal with Japan still has several hurdles to overcome before it goes into effect. For example, the European Data Protection Board, the EU body in charge of applying the GDPR, must give its view on the adequacy ruling, as must the civil liberties committee of the European Parliament -- the one that has just called for Privacy Shield to be halted. Nonetheless, the European Commission will be keen to adopt the adequacy decision, not least to show that countries are still willing to reduce trade barriers, rather than to impose them, as the US is currently doing.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 25 July 2018 @ 3:27am

South Africa's Proposed Fair Use Right In Copyright Bill Is Surprisingly Good -- At The Moment

from the stand-back-for-the-lobbyist-attacks dept

Too often Techdirt writes about changes in copyright law that are only for the benefit of the big publishing and recording companies, and offer little to individual creators or the public. So it makes a pleasant change to be able to report that South Africa's efforts to update its creaking copyright laws seem, for the moment, to be bucking that trend. Specifically, those drafting the text seem to have listened to the calls for intelligent fair use rights fit for the digital world. As a post on infojustice.org explains, a key aspect of copyright reform is enshrining exceptions that give permission to Internet users to do all the usual online stuff -- things like sharing photos on social media, or making and distributing memes. The South African text does a good job in this respect:

A key benefit of the Bill is that its new exceptions are generally framed to be open to all works, uses, and users. Research shows that providing exceptions that are open to purposes, uses, works and users is correlated with both information technology industry growth and to increased production of works of knowledge creation.

The solution adopted in the draft of the new copyright law is a hybrid approach that combines a set of specific modern exceptions for various purposes with an open general exception that can be used to assess any use not specifically authorized:

The key change is the addition of "such as" before the list of purposes covered by the right, making the provision applicable to a use for any purpose, as long as that use is fair to the author.

In order to test whether a use is fair, the standard four factors are to be considered:

(i) the nature of the work in question;

(ii) the amount and substantiality of the part of the work affected by the act in relation to the whole of the work;

(iii) the purpose and character of the use, including whether --

(aa) such use serves a purpose different from that of the work affected; and
(bb) it is of a commercial nature or for non-profit research, library or educational purposes; and

(iv) the substitution effect of the act upon the potential market for the work in question.

Crucially, the legislators rejected calls by some to include a fifth factor that would look at whether licenses for the intended use were available. As the infojustice.org post points out, had that factor been included, it would have made it considerably harder to claim fair use. That's one reason why the copyright world has been pushing so hard for licensing as the solution to everything -- whether it's orphan works, text and data mining, or the EU's revised copyright directive. That rejection sends an important signal to other politicians looking to update their copyright laws, and makes the South African text particularly welcome, as the infojustice.org post underlines:

We commend its Parliament on both the openness of this process and on the excellent drafting of the proposed fair use clause. We are confident it will become a model for other countries around the world that seek to modernize their copyright laws for the digital age.

However, for that very reason, the fair use proposal is likely to come under heavy attack from the copyright companies and their lobbyists. It remains to be seen whether the good things in the present Bill will still be there in the final law.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 23 July 2018 @ 7:49pm

Applicant For Major EU Open Access Publishing Contract Proposes Open Source, Open Data And Open Peer Review As Solution

from the Elsevier-not-invited dept

We've just written about growing discontent among open access advocates with the role that the publishing giant Elsevier will play in monitoring open science in the EU. That unhappiness probably just went up a notch as a result of the following development, reported here by Nature:

Elsevier last week stopped thousands of scientists in Germany from reading its recent journal articles, as a row escalates over the cost of a nationwide open-access agreement.

The move comes just two weeks after researchers in Sweden lost access to the most recent Elsevier research papers, when negotiations on its contract broke down over the same issue.

The open science monitoring project involving Elsevier is only a very minor part of the EU's overall open science strategy, which itself is part of the €80 billion research and innovation program called Horizon 2020. A new post on the blog of the open access publisher Hindawi reveals that it has put in a bid in response to the European Commission's call for tenders to launch a major new open research publishing platform:

The Commission's aim is to build on their progressive Open Science agenda to provide an optional Open Access publishing platform for the articles of all researchers with Horizon 2020 grants. The platform will also provide incentives for researchers to adopt Open Science practices, such as publishing preprints, sharing data, and open peer review. The potential for this initiative to lead a systemic transformation in research practice and scholarly communication in Europe and more widely should not be underestimated.

That last sentence makes a bold claim. Hindawi's blog post provides some good analysis of why the transition to open access and open science is proving so hard. Hindawi's proposed solution is based on open source code, and openness in general:

Our proposal to the Commission involves the development of an end-to-end publishing platform that is fully Open Source, with an editorial model that incentivises Open Science practices including preprints, data sharing, and objective open peer review. Data about the impact of published outputs would also be fully open and available for independent scrutiny, and the policies and governance of the platform would be managed by the research community. In particular, researchers who are currently disenfranchised by the current academic reward system, including early career researchers and researchers whose primary research outputs include data and software code, would have a key role in developing the policies of the platform.

Recognizing the flaws in the current system of assessment and rewards is key here. Open access has been around for two decades, but the reliance on near-meaningless impact factors to judge the alleged influence of titles, and thus of the work published in them, has barely changed at all. As the Hindawi blog post notes:

As long as journal rank and journal impact factor remain the currency used to judge academic quality, no amount of technological change or economic support for open outputs and open infrastructure will make research and researchers more open

Unfortunately, even a major project like the Horizon 2020 open research publishing platform -- whichever company wins the contract -- will not be able to change that culture on its own, however welcome it might be in itself. Core changes must come from within the academic world. Sadly, there are still precious few signs that those in positions of power are willing to embrace not just open access and even open science, but also a radical openness that extends to every aspect of the academic world, including evaluation and recognition.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 16 July 2018 @ 9:36am

Guy In Charge Of Pushing Draconian EU Copyright Directive, Evasive About His Own Use Of Copyright Protected Images

from the do-as-I-say,-not-as-I-do? dept

There's one person who wields more power than anyone to shape the awful EU Copyright Directive: the MEP Axel Voss. He's the rapporteur for the main Legal Affairs Committee (JURI) that is steering the Directive through the European Parliament. Voss took over as rapporteur from Therese Comodini Cachia after she decided to return to her native Malta as a member of the national parliament. Her draft version of the Directive was certainly not perfect, but it did possess the virtue of being broadly acceptable to all sides of the debate. When Voss took over last year, the text took a dramatic turn for the worse thanks to the infamous "snippet tax" (Article 11 of the proposed Directive) and the "upload filter" (Article 13).

As Mike reported a couple of weeks ago, Voss offered a pretty poor defense of his proposals, showing little understanding of the Internet. But he made clear that he thinks respecting copyright law is really important. For example, he said he was particularly concerned that material is being placed online where "there is no remuneration of the concerned author." Given that background, it will probably come as no surprise to Techdirt readers to learn that questions are now being asked about whether Voss himself has paid creators for material that he has used on his social media accounts:

BuzzFeed News Germany ... looked at the posts from the past 24 months on Voss's social media channels. In the two years, BuzzFeed News has found at least 17 copyrighted images from at least eight different image agencies, including the German press agency dpa.

As good journalists, BuzzFeed News Germany naturally contacted Axel Voss to ask whether he had paid to use all these copyrighted images:

Since last Thursday, 5 July, BuzzFeed News Germany has asked Voss's office and his personal assistant a total of five times in writing and several times over the phone whether Axel Voss or his social media team has paid for the use of these copyrighted photos. Voss's staff responded evasively five times. Asked if the office could provide us with licensing evidence, the Voss office responded: "We do not provide invoices to uninvolved third parties."

Such a simple question -- had Voss paid for the images he used? -- and yet one that the Voss team seemed to find so hard to answer, even with the single word "yes". The article (original in German) includes screenshots of the images the BuzzFeed News Germany journalists had found. That's just as well, because shortly afterwards 12 of the 17 posts containing copyrighted images were deleted. The journalists contacted Axel Voss once more and asked why the posts had disappeared (original in German). Axel Voss's office replied that anyone can add and remove posts if they wish. Which is true, but once again fails to answer the question. However, Axel Voss's office did offer an additional "explanation":

according to the current legal situation (...), if the right-holder informs us that we have violated their rights, we remove the image in question according to the notice and takedown procedure of the e-commerce directive.

That is, Axel Voss, or his office, seems to believe it's fine to post copyrighted material online provided you take it down if someone complains. But that's not how it works at all. The EU notice and takedown procedure applies to the Internet services hosting material, not to the individual users of those services. The fact that the team of the senior MEP responsible for pushing the deeply-flawed Copyright Directive through the European Parliament is ignorant of the current laws is bad enough. That he may have posted copyrighted material without paying for it, while claiming to be worried that creators aren't being remunerated for their work, is beyond ridiculous.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

