Glyn Moody’s Techdirt Profile

Posted on Techdirt - 20 November 2017 @ 7:44pm

Top German Judges Slam EU Plans To Create Global Court To Enforce Corporate Sovereignty

from the let's-just-make-up-the-laws-as-we-go-along dept

A few weeks ago, we wrote about how many people -- even the US Trade Representative, Robert Lighthizer -- seem to think it's time for corporate sovereignty, also called "investor-state dispute settlement" (ISDS), to go. For some reason the European Commission disagrees. As Techdirt readers may recall, after receiving a bloody nose in a public consultation about corporate sovereignty, the Commission announced to great fanfare that it was "replacing" ISDS with something called the Investment Court System (ICS). In fact, this amounted to little more than putting lipstick on the ISDS pig, since ICS suffered from the same fundamental flaw: it gave companies unique rights to sue countries in a supra-national court. The EU is still plugging away at the ICS idea, and it now wants to go further by creating a truly global corporate sovereignty system enforced by a new Multilateral Investment Court (pdf), an initiative formally launched a couple of months ago:

the [EU's] approach since 2015 has been to institutionalise the system for the resolution of investment disputes in EU trade and investment agreements through the inclusion of the Investment Court System (ICS). However, due to its bilateral nature, the ICS cannot fully address all the aforementioned problems. Moreover, the inclusion of ICSs in [EU] agreements has costs in terms of administrative complexity and budgetary impact.

The multilateral investment court initiative aims at setting up a framework for the resolution of international investment disputes that is permanent, independent and legitimate; predictable in delivering consistent case-law; allowing for an appeal of decisions; cost-effective; transparent and efficient proceedings and allowing for third party interventions (including for example interested environmental or labour organisations).

When the ICS was first proposed, the German Association of Judges, which Wikipedia describes as "the largest professional organization of judges and public prosecutors in Germany", ripped it to shreds. The same august body has just meted out similar treatment to the Multilateral Investment Court, and has asked the German government "to deny the European Commission the required mandate to negotiate the establishment of a Multinational Investment Court (MIC)."

The document, originally in German, and available in an unofficial translation by EuroMinds Linguistics (pdf), contains a devastating analysis of the MIC and its flaws. For example, it points out that international investment protection law is characterized by a "lack of substantive law principles". That is, there are no global investment laws that the MIC could apply when deciding cases. The MIC would effectively be making it up as it went along. The German Association of Judges points out why the situation would be even worse for the MIC than for the ICS or ISDS tribunals:

Because of [the arbitration courts'] position, they can override decisions of national administrations and courts in favour of an investor. This exercise of power, exercised by an arbitral tribunal, has thus far been limited to the enforcement of individual arbitral awards. However, it would be considerably strengthened if the arbitral tribunals were upgraded to an MIC with permanent jurisdiction, which would operate under an international convention. Together with the investment protection agreements, as part of European law, the MIC Convention will be recognised by international law and can thus bind national courts. This will make the MIC a standard-setting organization.

In other words, the MIC would be able to create what amount to global laws, without any democratic input or scrutiny. The document also explains -- as many have before -- why special investor courts are unnecessary:

The protection of individual goods, including those of investors, is the daily work of the judges of all judicial courts and instances. In principle, these rights can also be claimed by foreign investors.

...

the best investor protection is a functioning, uncorrupted administration and jurisdiction and a democratic legislative process. It is the task of every investor to determine this; they can avoid investments in countries that do not fulfil these standards. If they, nonetheless, take the risk, no special protection is necessary.

Obvious really.

Recognizing that the German government and European Commission will probably try to go ahead with the MIC initiative anyway, the German Association of Judges makes a number of sensible suggestions for improving the idea, and limiting the possible damage. However, the real solution would be for the EU to join other, wiser nations and abolish the system completely.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 15 November 2017 @ 3:43pm

Professor Says Threats Of Retaliation By China Stopped Publication Of His Book Revealing Chinese Influence In Australia

from the expect-much-more-of-this dept

We've just written about how the Chinese government wanted to censor articles published by two academic publishers, Cambridge University Press (CUP) and Springer. After an initial wobble, CUP ultimately refused, while Springer by contrast decided to kowtow to the authorities. Those incidents concerned the publication in China of articles the Chinese didn't like. Now it seems the Chinese authorities are extending their campaign against inconvenient facts to other countries, in this case Australia:

Prominent Charles Sturt University academic Clive Hamilton said Allen & Unwin was ready to publish his manuscript Silent Invasion, but last week informed him it could no longer proceed because it was worried about defamation action.

"Allen & Unwin said that they were worried about retaliation from Beijing through a number of possible avenues including legal threats, orchestrated by Beijing, and they decided it was too big a risk and so therefore pulled the plug and returned the rights to me," Professor Hamilton said.

As the article on ABC News explains, "Silent Invasion" is about the Chinese Communist Party's activities and growing influence in Australia -- obviously a highly sensitive topic for China. In an email to the company, obtained by ABC News, Professor Hamilton's former publishers, Allen & Unwin, wrote about what it saw as "potential threats" if it published his book:

The most serious of these threats was the very high chance of a vexatious defamation action against Allen & Unwin, and possibly against you personally as well.

It's a little hard to see how an entire nation might sue successfully for defamation, but that's not the point. Once again, the mere threat of litigation was enough to cause someone -- in this case a publisher -- to self-censor. Interestingly, the ABC News article notes that the Australian government is expected soon to unveil new legislation to counter foreign interference in the country, which suggests that it is becoming a serious problem. We can expect more such attempts by an increasingly self-confident and intransigent China to censor overseas sources of information it doesn't like.


Posted on Techdirt - 15 November 2017 @ 3:23am

A Great Use For Artificial Intelligence: Scamming Scammers By Wasting Their Time

from the I,-for-one,-welcome-our-new-AI-chatbot-overlords dept

As artificial intelligence (AI) finally begins to deliver on the field's broken promises of the last forty years, there's been some high-profile hand-wringing about the risks, from the likes of Stephen Hawking and Elon Musk, among others. It's always wise to be cautious, but surely even AI's fiercest critics would find it hard not to like the following small-scale application of the technology to tackle the problem of phishing scams. Instead of simply deleting the phishing email, you forward it to a new service called Re:Scam, and the AI takes over. The aim is to waste the time of scammers by engaging them with AI chatbots, so as to reduce the volume of phishing emails that they can send and follow up:

When you forward an email, you believe to be a scam to me@rescam.org a check is done to make sure it is a scam attempt, and then a proxy email address is used to engage the scammer. This will flood their inboxes with responses without any way for them to tell who is a chat-bot, and who is a real vulnerable target. Once you've forwarded an email nothing more is required on your part, but the more you send through, the more effective it will be.

Here's how the AI is applied:

Re:scam can take on multiple personas, imitating real human tendencies with humour and grammatical errors, and can engage with infinite scammers at once, meaning it can continue an email conversation for as long as possible. Re:scam will turn the table on scammers by wasting their time, and ultimately damage the profits for scammers.
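The time-wasting loop described above could be sketched roughly as follows. This is purely an illustration: the persona names, reply templates, and typo-injection logic are invented here, since Re:Scam's actual internals are not public.

```python
import random

# Hypothetical sketch of a Re:Scam-style stalling responder.
# Personas and templates below are invented for illustration.
PERSONAS = {
    "forgetful_retiree": [
        "Hello dear, I tried to send the money but the machine ate my card.",
        "Which bank was it again? I have wrote it down somewhere.",
    ],
    "overeager_investor": [
        "This sounds amazing!! How soon can I double my deposit?",
        "My accountant needs the routing number one more time, sorry!",
    ],
}

def add_typos(text, rate=0.05, rng=None):
    """Imitate human sloppiness by randomly swapping adjacent letters."""
    rng = rng or random.Random(0)  # fixed seed keeps the sketch deterministic
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def next_reply(persona, turn, rng=None):
    """Pick the next stalling reply for a given conversation turn."""
    templates = PERSONAS[persona]
    return add_typos(templates[turn % len(templates)], rng=rng)

print(next_reply("forgetful_retiree", 0))
```

The point of the design is asymmetry: each reply costs the bot nothing, but every message the scammer reads and answers is time not spent on a real victim.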

When you send emails to Re:Scam, it not only ties up the scammers in fruitless conversations, it also helps to train the underlying AI system. The service doesn't require any sign-up -- you just forward the phishing email to me@rescam.org -- and there's no charge. Re:Scam comes from Netsafe, a well-established non-profit online safety organization based in New Zealand, which is supported by government bodies there. It's a nice idea, and it would be interesting to see it applied in other situations. That way we could enjoy the benefits of AI for a while, before it decides to kill us all.


Posted on Techdirt - 9 November 2017 @ 10:44pm

Recent Intel Chipsets Have A Built-In Hidden Computer, Running Minix With A Networking Stack And A Web Server

from the what-could-possibly-go-wrong? dept

One way of looking at the history of computing is as the story of how the engineering focus rose gradually up the stack, from the creation of the first hardware, through operating systems, and then applications, and focusing now on platform-independent Net-based services. Underneath it all, there's still the processor, even if most people don't pay much attention to it these days. Unregarded it may be, but the world of the chip continues to move on. For example, for some years now, Intel has incorporated something called the Management Engine into its chipsets:

Built into many Intel Chipset–based platforms is a small, low-power computer subsystem called the Intel Management Engine (Intel ME). The Intel ME performs various tasks while the system is in sleep, during the boot process, and when your system is running. This subsystem must function correctly to get the most performance and capability from your PC.

That is, inside recent Intel-based systems, there is a separate computer within a computer -- one the end user never sees and has no control over. Although present for some time, it's been one of Intel's better-kept secrets, with details only emerging slowly. For example, a recent article on Network World pointed out that earlier this year, Dmitry Sklyarov (presumably, that Dmitry Sklyarov) worked out that Intel's ME is probably running a variant of the Minix operating system (yes, that Minix). The Network World article notes that a Google project has found out more about the ME system:

According to Google, which is actively working to remove Intel's Management Engine (MINIX) from their internal servers (for obvious security reasons), the following features exist within Ring -3:

Full networking stack
File systems
Many drivers (including USB, networking, etc.)
A web server

That’s right. A web server. Your CPU has a secret web server that you are not allowed to access, and, apparently, Intel does not want you to know about.

Why on this green Earth is there a web server in a hidden part of my CPU? WHY?

The "Ring-3" mentioned there refers to the level of privileges granted to the ME system. As a Google presentation about ME (pdf) explains, operating systems like GNU/Linux run on Intel chips at Ring 0 level; Ring-3 ("minus 3") trumps everything above -- including the operating system -- and has total control over the hardware. Throwing a Web server and a networking stack in there too seems like a really bad idea. Suppose there was some bug in the ME system that allowed an attacker to take control? Funny you should ask; here's what we learned earlier this year:

Intel says that three of its ME services -- Active Management Technology, Small Business Technology, and Intel Standard Manageability -- were all affected [by a critical bug]. These features are meant to let network administrators remotely manage a large number of devices, like servers and PCs. If attackers can access them improperly they potentially can manipulate the vulnerable computer as well as others on the network. And since the Management Engine is a standalone microprocessor, an attacker could exploit it without the operating system detecting anything.

As the Wired story points out, that critical bug went unnoticed for seven years. Because of the risks a non-controllable computer within a computer brings with it, Google is looking to remove ME from all its servers, and there's also an open source project doing something similar. But that's difficult: without ME, modern systems based on Intel chipsets may not boot. The problems of ME have led the EFF to call on Intel to make a number of changes to the technology, including:

Provide a way for their customers to audit ME code for vulnerabilities. That is presently impossible because the code is kept secret.

Offer a supported way to disable the ME. If that's literally impossible, users should be able to flash an absolutely minimal, community-auditable ME firmware image.

Those don't seem unreasonable requests given how serious the flaws in the ME system have been, and probably will be again in the future. It also seems only fair that people should be able to fully control a computer that they own -- and that ought to include the Minix-based computer hidden within.
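To recap the privilege hierarchy at play here, the x86 "ring" numbering works counter-intuitively: a lower ring number means more privilege, and the negative rings are informal labels for firmware layers that sit beneath the operating system entirely. A minimal sketch:

```python
# Illustrative only: x86 protection-ring numbering, where a LOWER
# number means MORE privilege. Negative "rings" are informal labels
# for layers that sit below the OS and are invisible to it.
RINGS = {
    3: "user applications",
    0: "operating system kernel (e.g. GNU/Linux)",
    -1: "hypervisor",
    -2: "System Management Mode (SMM)",
    -3: "Intel Management Engine",
}

def outranks(a, b):
    """True if ring a has strictly more privilege than ring b."""
    return a < b

assert outranks(-3, 0)      # the ME layer can override the OS...
assert not outranks(3, 0)   # ...but user applications cannot.
print(RINGS[-3])
```

This is why a bug at Ring -3 is so much worse than an ordinary kernel bug: nothing running at Ring 0 or above can even observe it, let alone stop it.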


Posted on Techdirt - 9 November 2017 @ 4:25pm

Algorithmic Videos Are Making YouTube Unsuitable For Young Children, And Google's 'Revenue Architecture' Is To Blame

from the so-how-do-we-fix-it? dept

There's an interesting article on Medium by James Bridle that's generating plenty of discussion at the moment. It has the title "Something is wrong on the internet", which is certainly true. Specifically, what the article is concerned about is the following:

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

I recommend reading the article so that you can decide whether it is a perspicacious analysis of what's wrong with the Internet today, or merely another of the hyperbolic "the Internet is corrupting innocent children" screeds that come along from time to time. As an alternative -- or in addition -- you might want to read this somewhat more measured piece from the New York Times, which raises many similar points:

the [YouTube Kids] app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.

In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes.

The piece on Medium explores a particular class of YouTube Kids videos that share certain characteristics. They have bizarre, keyword-strewn titles like "Bad Baby with Tantrum and Crying for Lollipops Little Babies Learn Colors Finger Family Song 2 " or "Angry Baby vs Spiderman vs Frozen Elsa BABY DROWNING w/ Maleficent Car Pink Spidergirl Superhero IRL". They have massive numbers of views: 110 million for "Bad Baby" and 75 million for "Angry Baby". In total, there seem to be thousands of them with similar, strange titles, and similar, disturbing content, which collectively are racking up billions of views.

As Bridle rightly notes, the sheer scale and downright oddness of the videos suggests that some are being generated, at least in part, by automated algorithms that churn out increasingly deranged variations on themes that are already popular on the YouTube Kids channel. The aim is to garner as many views as possible, and to get children to watch yet more of the many similar videos. More views means more revenue from advertising: alongside the video, before it, or even in it -- some feature blatant product placement. Young children are the perfect audience for this kind of material: they are inexperienced, and therefore are less likely to dismiss episodes as poor quality; they are curious, and so will probably watch closely to see what happens, no matter how absurd and vacuous the storyline; and they probably don't use ad blockers. As Bridle says in his Medium post:

right now, right here, YouTube and Google are complicit in that system [of psychological abuse]. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale.

That may be overstating it, but it is certainly true that YouTube's "revenue architecture", based on how many views videos achieve, tends to produce a race to the bottom in terms of quality, and a shift to automated production of endless variations on popular themes -- both with the aim of maximizing the audience.

YouTube has just announced that it will try to restrict access by young children to this type of video, a move that it rather improbably claims has nothing to do with the recent articles. But given the potential harm that inappropriate material could produce when viewed by young children, there's a strong argument that Google should apply other criteria in order to de-emphasize such offerings. A possible approach would be to allow adults to rate the material their children see, using a mechanism separate from the current "like" and "dislike". Google could then use adverse parental ratings to scale back payments it makes to channels, while good ratings from adults would cause income to be boosted. Parents would need to sign up before rating material, but that's unlikely to be a significant barrier to participation for those who care about what their children watch.
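The rating-to-payment idea described above could work something like the following sketch. To be clear, the formula, the weight, and the clamping behaviour are all invented here for illustration; this is not a real YouTube mechanism.

```python
def revenue_multiplier(good_ratings, bad_ratings, weight=2.0):
    """
    Hypothetical sketch of the payment-scaling idea: adverse parental
    ratings scale a channel's per-view payment down, positive ratings
    scale it up. Formula and weight are invented for illustration.
    """
    total = good_ratings + bad_ratings
    if total == 0:
        return 1.0  # no parental signal: pay the normal rate
    score = (good_ratings - bad_ratings) / total  # in [-1, 1]
    # Clamp so a channel is never paid a negative amount
    return max(0.0, 1.0 + weight * score / 2)

print(revenue_multiplier(900, 100))  # mostly positive ratings: payment boosted
print(revenue_multiplier(100, 900))  # mostly adverse ratings: payment scaled back
```

The design choice that matters is that the signal comes from parents who opted in, not from raw view counts, so the incentive points toward content adults actually approve of rather than content that merely holds a child's attention.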

Although there is always a risk of such systems being gamed, the sheer scale of the audience involved -- millions of views for a video -- makes it much harder than for material that has smaller reach, where bogus votes skew results more easily. Google would anyway need to develop systems that can detect attempts to use large-scale bots to boost ratings. The fact that the company has become quite adept at spotting and blocking spam at scale on Gmail suggests it could create such a system if there were enough pressure from parents to do so.

If Google adopted such a reward system, Darwinian dynamics are likely to lead to better-quality content for children, where "better" is defined by the broad consensus of what adults want their children to see. Other ways that Google could encourage such content to be produced would be to allow parents to boost further what they regard as valuable content with one-off donations or regular subscriptions. Techdirt readers can doubtless come up with other ways of providing incentives to YouTube channels to move away from the automated and often disturbing material many are increasingly filled with.


Posted on Techdirt - 6 November 2017 @ 3:23am

Top Academic Publisher Kowtows To China: Censors Thousands Of Papers, Denies It Is Censorship

from the comments-that-insult-our-intelligence dept

It's no secret that the Chinese authorities wish to have control over every aspect of life in China, including what people say and do online. Here they are laying down what academic papers people can read, as reported by a new story in the New York Times:

One of the world's largest academic publishers was criticized on Wednesday for bowing to pressure from the Chinese government to block access to hundreds of articles on its Chinese website.

Springer Nature, whose publications include Nature and Scientific American, acknowledged that at the government's request, it had removed articles from its mainland site that touch on topics the ruling Communist Party considers sensitive, including Taiwan, Tibet, human rights and elite politics.

The publisher defended its decision, saying that only 1 percent of its content was inaccessible in mainland China.

And if you think that its comment is ridiculous -- "only" one percent is over 7000 articles -- wait till you read what Springer said in its official statement on the move, reported by the Fresno Bee:

"This action is deeply regrettable but has been taken to prevent a much greater impact on our customers and authors and is in compliance with our published policy," the statement said. "This is not editorial censorship and does not affect the content we publish or make accessible elsewhere in the world."

According to Springer, it is not really censoring articles in China, because people outside can still read them. That insults both Chinese researchers, whom Springer clearly thinks don't count, and our intelligence.

What makes Springer's pusillanimity even more reprehensible is that another leading academic publisher was also told to censor articles in China, but took a different course of action. Back in August, Cambridge University Press (CUP) was ordered by the Chinese authorities to censor 300 articles from its journal China Quarterly. Initially, like Springer, it complied, but came to its senses a couple of days later:

It said the academic leadership of the university had reviewed the publisher's decision and agreed to reinstate the blocked content with immediate effect to "uphold the principle of academic freedom on which the university’s work is founded".

If Springer fails to do the same, researchers will be justified in concluding that, unlike CUP, it does not uphold that principle of academic freedom. In which case, they may decide to publish their future work elsewhere.


Posted on Techdirt - 1 November 2017 @ 3:24am

Time To Get Rid Of Corporate Sovereignty? USTR Robert Lighthizer Seems To Think So

from the you-either-are-in-the-market,-or-you're-not-in-the-market dept

As we noted a couple of months ago, the topic of corporate sovereignty -- also known as investor-state dispute settlement (ISDS) -- has rather dropped out of the public eye. One post on the subject from earlier this year pointed out that an editorial in the Financial Times had called for ISDS to be "ditched". That was welcome but surprising. At the time, it seemed like an outlier, but it now looks more as if it was simply ahead of the field, as many more have started to call for the same. For example, 230 law and economics professors are urging President Trump to remove corporate sovereignty from NAFTA and other trade deals (pdf). From a rather different viewpoint, here's Dan Ikenson, a director at the Cato Institute, calling for ISDS to be absent from a re-negotiated NAFTA:

U.S. negotiators should offer to drop their rules-of-origin and sunset provision demands in exchange for agreement to expunge the controversial dispute settlement provisions under Chapters 11 and 19. These provisions are unnecessary, raise fundamental questions about sovereignty and constitutionality, and fuel trade agreement opposition on both the political left and right.

It's all very well for professors and pundits to call for corporate sovereignty to go, but what do the people who have the power -- the politicians -- think? Well, here's the newly-elected prime minister of New Zealand, Jacinda Ardern, speaking on the topic:

We remain determined to do our utmost to amend the ISDS provisions of TPP. In addition, Cabinet has today instructed trade negotiation officials to oppose ISDS in any future free trade agreements.

Finally, and arguably most importantly, this is what the US Trade Representative, Robert Lighthizer, said recently (reported on Forbes):

It's always odd to me when the business people come around and say, 'Oh, we just want our investments protected.' … I mean, don't we all? I would love to have my investments guaranteed. But unfortunately, it doesn't work that way in the market. … I've had people come in and say, literally, to me: 'Oh, but you can't do this: you can't change ISDS. … You can't do that because we wouldn't have made the investment otherwise.' I’m thinking, 'Well, then why is it a good policy of the United States government to encourage investment in Mexico?' … The bottom line is, business says: 'We want to make decisions and have markets decide. But! We would like to have political risk insurance paid for by the United States' government.' And to me that's absurd. You either are in the market, or you're not in the market.

Whether that extraordinarily sensible analysis is ultimately converted into action remains to be seen: there will be plenty of lobbying against the idea. But the fact that so many are now making the call for corporate sovereignty to be dropped from existing and future trade deals does, at least, make it much more likely that it will happen soon.


Posted on Techdirt - 30 October 2017 @ 7:53pm

Move By Top Chinese University Could Mean Journal Impact Factors Begin To Lose Their Influence

from the and-no-bad-thing,-either dept

The so-called "impact factors" of journals play a major role in the academic world. And yet people have been warning about their deep flaws for many years. Here, for example, is Professor Stephen Curry, a leading advocate of open access, writing on the topic back in 2012:

I am sick of impact factors and so is science.

The impact factor might have started out as a good idea, but its time has come and gone. Conceived by Eugene Garfield in the 1970s as a useful tool for research libraries to judge the relative merits of journals when allocating their subscription budgets, the impact factor is calculated annually as the mean number of citations to articles published in any given journal in the two preceding years.
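The formula quoted above is simple enough to state directly: citations received this year to a journal's articles from the two preceding years, divided by the number of citable items the journal published in those two years. A minimal sketch, with made-up numbers:

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """
    Journal impact factor as described above: citations received this
    year to articles published in the two preceding years, divided by
    the number of citable items from those two years.
    """
    return citations_this_year / items_prev_two_years

# A journal that published 200 citable articles across the previous
# two years and drew 500 citations to them this year:
print(impact_factor(500, 200))  # 2.5
```

The formula's simplicity is part of the problem Curry describes: a single mean over a skewed citation distribution says little about any individual paper or author, yet it is routinely used to judge both.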

The rest of that article and the 233 comments that follow it explain in detail why impact factors are a problem, and why they need to be discarded. The hard part is coming up with other ways of gauging the influence of people who write in high-profile publications -- one of the main reasons why many academics cling to the impact factor system. A story in Nature reports on a bold idea from a top Chinese university in this area:

One of China's most prestigious universities plans to give some articles in newspapers and posts on major social-media outlets the same weight as peer-reviewed publications when it evaluates researchers.

It will work like this:

articles have to be original, written by the researcher and at least 1,000 words long; they need to be picked up by major news outlets and widely disseminated through social media; and they need to have been seen by a large number of people. The policy requires an article to be viewed more than 100,000 times on WeChat, China's most popular instant-messaging service, or 400,000 times on news aggregators such as Toutiao. Articles that meet the criteria will be considered publications, alongside papers in peer-reviewed journals.

The university has also established a publication hierarchy, with official media outlets such as the People's Daily considered most important, regional newspapers and magazines occupying a second tier, and online news sites such as Sina, NetEase or Sohu ranking third.
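The quoted criteria amount to a simple qualification test, sketched below. The field names and structure are invented here; the thresholds (1,000 words; 100,000 WeChat views; 400,000 views on news aggregators) come from the Nature report quoted above.

```python
def qualifies(word_count, platform, views, original=True, by_researcher=True):
    """
    Sketch of the quoted criteria (field names invented): the article
    must be original, written by the researcher, at least 1,000 words
    long, and exceed the platform-specific view threshold.
    """
    thresholds = {"wechat": 100_000, "news_aggregator": 400_000}
    return (original and by_researcher
            and word_count >= 1000
            and views > thresholds[platform])

print(qualifies(1500, "wechat", 120_000))           # counts as a publication
print(qualifies(1500, "news_aggregator", 120_000))  # falls short of threshold
```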

One of the advantages of this idea is that it recognizes that publishing in non-academic titles can be just as valid as appearing in conventional peer-reviewed journals. It also has the big benefit of encouraging academics to communicate with the public -- something that happens too rarely at the moment. That, in its turn, might help experts learn how to explain their often complex work in simple terms. At the same time, it would allow non-experts to hear about exciting new ideas straight from the top people in the field, rather than mediated through journalists, who may misunderstand or distort various aspects.

However, there are clear risks, too. For example, there is a danger that newspapers and magazines will be unwilling to accept articles about difficult work, or from controversial academics. Equally, mediocre researchers that hew to the government line may benefit from increased exposure, even resulting in them being promoted ahead of other, more independent-minded academics. Those are certainly issues. But what's interesting here is not just the details of the policy itself, but the fact that it was devised and is being tried in China. That's another sign that the country is increasingly a leader in many areas, and no longer a follower.


Posted on Techdirt - 30 October 2017 @ 3:39am

European Parliament Agrees Text For Key ePrivacy Regulation; Online Advertising Industry Hates It

from the how-dare-people-refuse-to-be-tracked-online dept

Techdirt has mentioned a couple of times the EU's important ePrivacy Regulation that is currently working its way through the legislative process. It's designed to complement the EU's new General Data Protection Regulation (GDPR), which comes into force next year, and which is likely to have far-reaching effects. Where the GDPR is concerned with personal data "at rest" -- how it is stored and processed -- the ePrivacy Regulation can be thought of as dealing with personal data in motion. That is, how it is gathered and flows across networks. Since that goes to the heart of how the Internet works, it will arguably have an even bigger impact than the GDPR on the online world -- not just in the EU, but globally too.

That's led to lobbying on an unprecedented scale. A recent report on the Regulation by Corporate Europe Observatory quoted a source in the European Parliament as saying it was "one of the worst lobby campaigns I have ever seen". Despite that pressure, and a last-minute attempt to derail proceedings, the European Parliament has just agreed a text for the ePrivacy Regulation. That's not the end of the story -- the other parts of the European Union legislative machine will weigh in with their views, and seek to make changes, but it's an important milestone.

The European Parliament has produced an excellent briefing on the background to the ePrivacy Regulation (pdf), and on its main elements. A key feature is that it will apply to every business supplying Internet-based services, not just telecom companies. It will also regulate any service provided to end-users in the EU, no matter where the company offering it may be based. There are strict new rules on tracking services -- including, but not limited to, cookies. Consent to tracking "must be freely given and unambiguous" -- it cannot be assumed by default or hidden away on a Web page that no one ever reads. Cookie walls, which only grant access to a site if the visitor agrees to be tracked online, will be forbidden under the new ePrivacy rules.
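The consent rule described above has a notably simple logical shape: tracking requires a freely given, unambiguous opt-in, and anything else defaults to no tracking. A toy sketch of the principle (an illustration only, not an implementation of the Regulation's legal text):

```python
def may_set_tracking_cookie(consent):
    """
    Sketch of the consent principle described above: tracking requires
    a freely given, unambiguous opt-in. Anything else -- no answer,
    a pre-ticked default, or an explicit refusal -- means no tracking.
    Illustration of the principle only, not legal advice.
    """
    return consent == "explicit_opt_in"

for signal in ("explicit_opt_in", "preticked_default", None, "refused"):
    print(signal, may_set_tracking_cookie(signal))
```

The important design point is the default: under the Parliament's text the absence of a signal must be treated as refusal, which is the opposite of how most cookie banners behave today.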

IAB Europe, the main European-level association for the digital media and advertising industry, says giving the public the right to refuse to be tracked amounts to "expropriation":

"The European Parliament's text on the ePrivacy Regulation would essentially expropriate advertising-funded businesses by banning them from restricting or refusing access to users who do not agree to the data collection underpinning data-driven advertising," warned Townsend Feehan, CEO of IAB Europe.

The press release then goes on to make the claim that online advertising simply must use tracking, and that visitors to a site are somehow morally obliged to give up their privacy in order to preserve the advertiser's "fundamental rights":

"Data-driven advertising isn't an optional extra; it is online advertising," explained Feehan. "Forcing businesses to grant access to ad-funded content or services even when users reject the proposed advertising value exchange, basically deprives ad-funded businesses of their fundamental rights to their own property. They would be forced to give something in return for nothing."

However, IAB Europe graciously goes on to say it "will continue to engage constructively with the EU institutions in hopes of meaningfully improving the draft law in the remaining legislative process." Translated, that means it will lobby even harder to get the cookie wall ban removed from the text during the final negotiations. IAB Europe is naturally most concerned with the issues that affect its members. But the European Parliament's text -- not the final one, remember, so things could still change -- includes some other extremely welcome elements. For example, the Regulation in its present form would require EU Member States to promote and even make mandatory the use of end-to-end encryption. Moreover, crypto backdoors would be explicitly banned:

In order to safeguard the security and integrity of networks and services, the use of end-to-end encryption should be promoted and, where necessary, be mandatory in accordance with the principles of security and privacy by design. Member States should not impose any obligation on encryption providers, on providers of electronic communications services or on any other organisations (at any level of the supply chain) that would result in the weakening of the security of their networks and services, such as the creation or facilitation of "backdoors".

As the above extracts indicate, the European Parliament's text offers strong support for the user's right to both encryption and privacy online. For that reason, we can expect it to be attacked fiercely from a number of quarters as haggling over the final text takes place within the EU. Unfortunately, unlike the European Parliament's discussions, these negotiations will take place behind closed doors.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 26 October 2017 @ 3:23am

The Good News: You Can Download Hawking's PhD For Free; The Bad News: It Took 50 Years To Make It Happen

from the why-are-we-waiting? dept

Techdirt has been writing about the (slow but steady) rise of open access for a decade. That's as long as the Annual International Open Access Week has been running. Cambridge University came up with quite a striking way to join in the celebrations:

Stephen Hawking's PhD thesis, 'Properties of expanding universes', has been made freely available to anyone, anywhere in the world, after being made accessible via the University of Cambridge's Open Access repository, Apollo.

The 1966 doctoral thesis by the world's most recognisable scientist is the most requested item in Apollo with the catalogue record alone attracting hundreds of views per month. In just the past few months, the University has received hundreds of requests from readers wishing to download Professor Hawking's thesis in full.

The idea has been quite a hit -- literally, since the demand for Hawking's thesis was so great on Monday that it hit the Apollo server hard enough to take it offline for a while. The Guardian reported:

A University of Cambridge spokesperson said: "We have had a huge response to Prof Hawking's decision to make his PhD thesis publicly available to download, with almost 60,000 downloads in less than 24 hours.

"As a result, visitors to our Open Access site may find that it is performing slower than usual and may at times be temporarily unavailable."

Popular as the 1966 PhD has proved, the point of the exercise was to spread the word about open access. Hawking is quoted as saying:

Anyone, anywhere in the world should have free, unhindered access to not just my research, but to the research of every great and enquiring mind across the spectrum of human understanding.

Cambridge University made a further announcement to mark Open Access Week. Dr Arthur Smith, Deputy Head of Scholarly Communication, said:

From October 2017 onwards, all PhD students graduating from the University of Cambridge will be required to deposit an electronic copy of their doctoral work for future preservation. And like Professor Hawking, we hope that many students will also take the opportunity to freely distribute their work online by making their thesis Open Access. We would also invite former University alumni to consider making their theses Open Access, too.

That's great, as is the free availability of Hawking's PhD. But the question for both has to be: why has it taken so long -- 50 years in the case of the thesis? Even allowing for the fact that the Internet was not a mass medium for 30 of those 50 years, there was nothing stopping Cambridge University putting PhDs online from the mid-1990s. Similarly, why make open access deposit of theses optional? The University would be quite justified in requiring the thesis of any PhD it grants to be online and freely downloadable immediately under a suitable CC license. The moment to make that happen is now, not in another 10 years' time.


Posted on Techdirt - 23 October 2017 @ 3:31pm

How To Avoid Future Krack-Like Failures: Create Well-Maintained 'Fat' Protocols Using Initial Coin Offerings

from the blockchain-cryptocurrency-fashionable-moi? dept

It came as something of a shock to learn recently that several hugely-popular security protocols for Wi-Fi, including WPA (Wi-Fi Protected Access) and WPA2, were vulnerable to a key re-installation attack (pdf). A useful introduction from the EFF puts things in context, while more technical details can be found on the krackattacks.com site, and in a great post by Matthew Green. As well as the obvious security implications, there's another angle to the Krack incident that Techdirt readers may find noteworthy. It turns out that one important reason such a fairly simple flaw was not spotted earlier is that the main documentation was not easily accessible. As Wired explains:

The WPA2 protocol was developed by the Wi-Fi Alliance and the Institute of Electrical and Electronics Engineers (IEEE), which acts as a standards body for numerous technical industries, including wireless security. But unlike, say, Transport Layer Security [TLS], the popular cryptographic protocol used in web encryption, WPA2 doesn't make its specifications widely available. IEEE wireless security standards carry a retail cost of hundreds of dollars to access, and costs to review multiple interoperable standards can quickly add up to thousands of dollars.
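The core weakness Krack exploits -- a re-installed key resetting the transmit nonce -- is easy to sketch. The toy stream cipher below is purely illustrative (real WPA2 uses AES in CCMP mode, and the key and message names are invented for the example), but it shows why nonce reuse is fatal: once two messages are encrypted under the same key and nonce, an eavesdropper can XOR the ciphertexts and recover the XOR of the plaintexts without ever learning the key.

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Toy keystream derived from key + nonce + block counter.
    # Illustrative only -- not the actual WPA2/CCMP construction.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"session-key"
p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"

# Normal operation: the nonce increments per message, so keystreams differ.
c1 = encrypt(key, 1, p1)
c2 = encrypt(key, 2, p2)
assert c1 != c2

# Key re-installation resets the nonce, so the SAME keystream is reused:
c1r = encrypt(key, 1, p1)
c2r = encrypt(key, 1, p2)

# XOR of the two ciphertexts now equals XOR of the two plaintexts,
# leaking message structure with no knowledge of the key at all.
leak = bytes(a ^ b for a, b in zip(c1r, c2r))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```

This is why the attack matters even though it never recovers the key itself: nonce reuse alone is enough to break confidentiality of the traffic.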

The obvious way to avoid this issue is to ensure that key protocols are all freely available so that they can be scrutinized by the greatest number of people. But the Wired article points out that there's a different problem in that situation:

Even open standards like TLS experience major, damaging bugs at times. Open standards have broad community oversight, but don't have the funding for deep, robust maintenance and vetting

It's another well-known concern: just because protocols and software are open doesn't necessarily mean that people will find even obvious bugs. That's because they may not have the time to look for them, which in turn comes down to incentives and rewards. Peer esteem only goes so far, and even hackers have to eat. If they receive no direct reward for spending hours searching through code for bugs, they may not bother.

So if we want to avoid major failures like the Krack vulnerability, we need to do two things. First, key protocols and software should be open and freely available. That's the easy part, since openness is now a well-accepted approach in the digital world. Secondly, we need to find a way to reward people for looking at all this stuff. As Krack shows, current incentives aren't working. But there's a new approach that some are touting as the way forward. It involves the fashionable idea of Initial Coin Offerings (ICO) of cryptocurrency tokens. A detailed article on qz.com explains how ICOs can be used to fund new software projects by encouraging people to buy tokens speculatively:

The user would pay for a token upfront, providing funds for coders to develop the promised technology. If the technology works as advertised and gains popularity, it should attract more users, thus increasing demand for the token offered at the start. As the token value increases, those early users who bought tokens will benefit from appreciating token prices.

It's that hope of future investment gains that would encourage people to buy ICO tokens from a risky venture. But it's not just the early users who benefit from a technology that takes off. A key idea of this kind of ICO is that the coders behind the technology would own a sizable proportion of the total token offering; as the technology becomes popular, and tokens gain in value, so does their holding.
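The incentive structure described above can be made concrete with a toy calculation; every number here is hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Hypothetical ICO: a fixed token supply, part sold to fund development,
# part retained by the coders as their ongoing stake in the protocol.
total_supply = 1_000_000       # tokens minted at launch
dev_share = 30                 # percent retained by the coders
ico_price_cents = 10           # price per token at the offering

tokens_sold = total_supply * (100 - dev_share) // 100
funds_raised_cents = tokens_sold * ico_price_cents  # pays for development up front

# If the technology gains users, demand for the token rises with it.
# Suppose it later trades at 50 cents: early buyers are up 5x, and the
# coders' retained stake -- their incentive to keep maintaining and
# auditing the protocol -- has appreciated by the same factor.
later_price_cents = 50
dev_tokens = total_supply * dev_share // 100
dev_holding_value_cents = dev_tokens * later_price_cents

print(funds_raised_cents / 100)       # dollars raised at launch
print(dev_holding_value_cents / 100)  # value of the coders' stake later
```

Under these made-up figures the launch raises $70,000, and the developers' retained 30% stake is later worth $150,000 -- which is the whole point of the "fat protocol" argument: ongoing maintenance is funded by the protocol's own appreciation rather than by donations or prestige.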

This novel approach could be applied to protocol development. The hope is that by creating "fat" protocols that can capture more of the value of the ecosystem that is built on top of them, there would be funds available to pay people to look for bugs in the system, which would be totally open. It's an intriguing idea -- one that may be worth trying given the problems with today's approaches.


Posted on Techdirt - 19 October 2017 @ 8:01pm

A Tale of Two Transparencies: Why The EU And Activists Will Always Disagree Over Trade Deal Negotiations

from the TTIP,-remember-that? dept

Although the Transatlantic Trade and Investment Partnership (TTIP) has dropped off the radar completely since Donald Trump's election, for some years it was a key concern of both the US and European governments, and a major theme of Techdirt's posts. One of the key issues was transparency -- or the lack of it. Eventually, the European Commission realized that its refusal to release information about the negotiations was seriously undermining its ability to sell the deal to the EU public, and it began making some changes on this front, as we discussed back in 2015. Since then, transparency has remained a theme of the European Commission's initiatives. Last month, in his annual State of the Union address, President Jean-Claude Juncker unveiled his proposals for trade policy. One of them was all about transparency:

the Commission has decided to publish as of now all its recommendations for negotiating directives for trade agreements (known as negotiating mandates). When they are submitted to the European Parliament and the Council, those documents will in parallel be sent automatically to all national Parliaments and will be made available to the general public. This should allow for a wide and inclusive debate on the planned agreements from the start.

An interesting article on Borderlex explores why moves to open up trade policy by the European Commission did not and probably never will satisfy activists who have been pushing for more transparency, and why in this area there is an unbridgeable gulf between them and the EU politicians. In contrast to Juncker's limited plan to publish negotiating directives in order to allow "a wide and inclusive debate on the planned agreements", this is what activists want, according to the article:

timely release of textual proposals on all negotiating positions, complete lists and minutes of meetings of Commission officials with third parties, consolidated texts, negotiating mandates, and all correspondence between third parties and officials.

Activists are keen to see what is happening in detail throughout the negotiations -- not just a top-level view at the start, or initial textual proposals for each chapter with nothing afterwards. The article suggests that this is not simply a case of civil society wanting more information for its own sake, but rather reflects completely different conceptions of what transparency means. Transparency is intimately bound up with accountability, which raises the key question of: accountability to whom?

These two different views reflect a seminal academic distinction between 'delegation' and 'participation' models of accountability in international politics. In a 'delegation' model, an organisation (such as the Commission) is accountable to those who have granted it a mandate (in the EU: the Council, the [European Parliament] and national parliaments). Transparency and participation should first and foremost be directed to them. Extending managed transparency to the wider public can be instrumentally used to increase trust.

In a 'participation model', in contrast, organisations are accountable to those who bear the burden of the decisions that are taken. If contemporary trade policy impacts people's daily lives, the people -- directly or through civil society organisations that claim to represent them -- should be able to see what is going on, and be able to influence the process. Therefore, there is a presupposition for openness, disclosure, and close participation.

The article's authors suggest that for activists, transparency is a means to an end -- gaining influence through participation -- and it is the European Commission's refusal to allow civil society any meaningful role in trade negotiations that guarantees that token releases of a few policy documents will never be enough.


Posted on Techdirt - 18 October 2017 @ 3:30am

Details Emerge Of World's Biggest Facial Recognition Surveillance System, Aiming To Identify Any Chinese Citizen In Three Seconds

from the but-what-happens-when-the-dataset-leaks-out? dept

Back in July, Techdirt wrote about China's plan to build a massive surveillance system based on 600 million CCTV cameras around the country. Key to the system would be facial recognition technology that would allow Chinese citizens to be identified using a pre-existing centralized image database plus billions more photos found on social networks. Lingering doubts about whether China is going ahead with such an unprecedented surveillance system may be dispelled by an article in the South China Morning Post, which provides additional details:

China is building the world's most powerful facial recognition system with the power to identify any one of its 1.3 billion citizens within three seconds.

The goal is for the system to be able to match someone's face to their ID photo with about 90 per cent accuracy.

The project, launched by the Ministry of Public Security in 2015, is under development in conjunction with a security company based in Shanghai.

The article says that the system will use cloud computing facilities to process images from the millions of CCTV cameras located across the country. The company involved is Isvision, which has been using facial recognition with CCTV cameras since 2003. The earliest deployments were in the highly-sensitive Tiananmen Square area. Other hotspots where its technology has been installed are Tibet and Xinjiang, where surveillance has been at a high level for many years.

However, the report also cautions that the project is encountering "many difficulties" due to the technical limits of facial recognition and the sheer size of the database involved. A Chinese researcher is quoted as saying that some totally unrelated people in China have faces so alike that even their parents cannot tell them apart. Another issue is managing the biometric data, which is around 13 terabytes for the facial information, and 90 terabytes for the full dataset, which includes additional personal details on everyone in China.
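Back-of-the-envelope arithmetic makes those difficulties concrete. The per-comparison error rate below is a hypothetical figure, not one from the report, but it shows how even an apparently excellent matcher produces floods of false matches at a 1.3-billion-person scale:

```python
population = 1_300_000_000

# Storage: 13 TB of facial data spread across the whole population works
# out to only about 10 KB per face template -- which is why the dataset
# will soon fit on a single portable drive.
facial_db_bytes = 13 * 10**12
bytes_per_person = facial_db_bytes // population  # 10,000 bytes

# Matching: suppose (hypothetically) a false-match rate of one in a
# million per comparison. A single query against everyone still throws
# up ~1,300 innocent lookalikes -- the "unrelated people with
# near-identical faces" problem the Chinese researcher describes.
false_match_rate = 1 / 1_000_000
expected_false_matches = population * false_match_rate

print(bytes_per_person)        # bytes stored per citizen
print(expected_false_matches)  # expected wrong candidates per query
```

The base-rate problem, in other words, is intrinsic to the scale: no plausible accuracy figure eliminates it, only shrinking the candidate pool does.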

As the South China Morning Post article rightly notes, it won't be long before 13 terabytes will fit on a single portable USB hard drive, which raises the issue of facial recognition data being copied and used for other unauthorized purposes:

But a network security vendor for the Ministry of Public Security dismissed the possibility.

"To download the whole data set is as difficult as launching a missile with a nuclear warhead. It requires several high-ranking officials to insert and turn their keys at the same time," the vendor said.

Given all that we know about the lamentable state of computer security around the world, even for highly-sensitive data, that claim seems a little hyperbolic. Since the Chinese government is apparently determined to build and operate this huge facial recognition system despite all the challenges, the unnamed network security vendor quoted above may find out the hard way that exfiltrating some or even all of that data really isn't rocket science.


Posted on Techdirt - 11 October 2017 @ 7:38pm

New 'Coalition For Responsible Sharing' About To Send Millions Of Take-Down Notices To Stop Researchers Sharing Their Own Papers

from the how-responsible-is-that? dept

A couple of weeks ago, we wrote about a proposal from the International Association of Scientific Technical and Medical Publishers (STM) to introduce upload filtering on the ResearchGate site in order to stop authors from sharing their own papers without "permission". In its letter to ResearchGate, STM's proposal concluded with a thinly-veiled threat to call in the lawyers if the site refused to implement the upload filters. In the absence of ResearchGate's acquiescence, a newly-formed "Coalition for Responsible Sharing", whose members include the American Chemical Society (ACS), Brill, Elsevier, Wiley and Wolters Kluwer, has issued a statement confirming the move:

Following unsuccessful attempts to jointly find ways for scholarly collaboration network ResearchGate to run its service in a copyright-compliant way, a coalition of information analytics businesses, publishers and societies is now left with no other choice but to take formal steps to remedy the illicit hosting of millions of subscription articles on the ResearchGate site.

Those formal steps include sending "millions of takedown notices for unauthorized content on its site now and in the future." Two Coalition publishers, ACS and Elsevier, have also filed a lawsuit in a German regional court, asking for "clarity and judgement" on the legality of ResearchGate's activities. Justifying these actions, the Coalition's statement says: "ResearchGate acquires volumes of articles each month in violation of agreements between journals and authors" -- and that, in a nutshell, is the problem.

The articles posted on ResearchGate are generally uploaded by the authors; they want them there so that their peers can read them. They also welcome the seamless access to other articles written by their fellow researchers. In other words, academic authors are perfectly happy with ResearchGate and how it uses the papers that they write, because it helps them work better as researchers. A recent post on The Scholarly Kitchen blog noted:

Researchers particularly appreciate ResearchGate because they can easily follow who cites their articles, and they can follow references to find other articles they may find of interest. Researchers do not stop to think about copyright concerns and in fact, the platform encourages them, frequently, to upload their published papers.

The problem lies in the unfair and one-sided contracts academic authors sign with publishers, which often do not allow them to share their own published papers freely. The issues with ResearchGate would disappear if researchers stopped agreeing to these completely unnecessary restrictions -- and if publishers stopped demanding them.

The Coalition for Responsible Sharing's statement makes another significant comment about ResearchGate: that it acquires all these articles "without making any contribution to the production or publication of the intellectual work it hosts." But much the same could be said about publishers, which take papers written by publicly-funded academics for free, chosen by academics for free, and reviewed by academics for free, and then add some editorial polish at the end. Despite their minimal contributions, publishers -- and publishers alone -- enjoy the profits that result. The extremely high margins they continue to achieve are strong evidence that sharing on ResearchGate and similar scholarly collaboration networks has done their business no real harm. The growing popularity and importance of unedited preprints confirms that what publishers add is dispensable. That makes the Coalition for Responsible Sharing's criticism of ResearchGate and its business model deeply hypocritical.

It is also foolish. By sending millions of take-down notices to ResearchGate -- and thus making it harder for researchers to share their own papers on a site they currently find useful -- the Coalition for Responsible Sharing will inevitably push people to use other alternatives, notably Sci-Hub. Unlike ResearchGate, which largely offers articles uploaded by their own authors, Sci-Hub generally sources its papers without the permission of the academics. So, once more, the clumsy actions of publishers desperate to assert control at all costs make it more likely that unauthorized copies will be downloaded and shared, not less. How responsible is that?


Posted on Techdirt - 4 October 2017 @ 7:34pm

Elsevier's Latest Brilliant Idea: Adding Geoblocking To Open Access

from the how-about-no? dept

We've just written about a troubling move by Elsevier to create its own, watered-down version of Wikipedia in the field of science. If you are wondering what other plans it has for the academic world, here's a post from Elsevier's Vice President, Policy and Communications, Gemma Hersh, that offers some clues. She's "responsible for developing and refreshing policies in areas related to open access, open data, text mining and others," and in "Working towards a transition to open access", Hersh meditates upon the two main kinds of open access, "gold" and "green". She observes:

While gold open access offers immediate access to the final published article, the trade-off is cost. For those that can't or don't wish to pay the article publishing charge (APC) for gold open access, green open access -- making a version of the subscription article widely available after a time delay or embargo period -- remains a viable alternative to enabling widespread public access.

She has a suggestion for how the transition from green open access to gold open access might be effected:

Europe is a region where a transition to fully gold open access is likely to be most cost-neutral and, perhaps for this reason, where gold OA currently has the highest policy focus. This is in stark contrast to other research-intensive countries such as the US, China and Japan, which on the whole have pursued the subscription/green open access path. Therefore one possible first step for Europe to explore would be to enable European articles to be available gold open access within Europe and green open access outside of Europe.

Blithely ignoring the technical impossibility of enforcing an online geographical gold/green border, Hersh is proposing to add all the horrors of geoblocking -- a long-standing blight on the video world -- to open access. But gold open access papers that aren't fully accessible outside Europe simply aren't open access at all. The whole point of open access is that it makes academic work freely available to everyone, everywhere, without restriction -- unlike today, where only the privileged few can afford wide access to research that is often paid for by the public.

It's hard to know why Elsevier is putting forward an idea that is self-evidently preposterous. Perhaps it now feels it has such a stranglehold on the entire academic knowledge production process that it doesn't even need to hide its contempt for open access and those who support it.


Posted on Techdirt - 29 September 2017 @ 3:42pm

Elsevier Launching Rival To Wikipedia By Extracting Scientific Definitions Automatically From Authors' Texts

from the don't-do-as-we-do,-do-as-we-say dept

Elsevier is at it again. It has launched a new (free) service that is likely to undermine open access alternatives by providing Wikipedia-like definitions generated automatically from texts it publishes. As an article on the Times Higher Education site explains, the aim is to stop users of the publishing giant's ScienceDirect platform from leaving Elsevier's walled garden and visiting sites like Wikipedia in order to look up definitions of key terms:

Elsevier is hoping to keep researchers on its platform with the launch of a free layer of content called ScienceDirect Topics, offering an initial 80,000 pages of material relating to the life sciences, biomedical sciences and neuroscience. Each offers a quick definition of a key term or topic, details of related terms and relevant excerpts from Elsevier books.

Significantly, this content is not written to order but is extracted from Elsevier's books, in a process that Sumita Singh, managing director of Elsevier Reference Solutions, described as "completely automated, algorithmically generated and machine-learning based".

It's typical of Elsevier's unbridled ambition that instead of supporting a digital commons like Wikipedia, it wants to compete with it by creating its own redundant versions of the same information, which are proprietary. Even worse, it is drawing that information from books written by academics who have given Elsevier a license -- perhaps unwittingly -- that allows it to do that. The fact that a commercial outfit mines what are often publicly-funded texts in this way is deeply hypocritical, since Elsevier's own policy on text and data mining forbids other companies from doing the same. It's another example of how Elsevier uses its near-monopolistic stranglehold over academic publishing for further competitive advantage. Maybe it's time anti-trust authorities around the world took a look at what is going on here.


Posted on Techdirt - 28 September 2017 @ 7:49pm

Chinese High-Tech Startups: Now More Copied Than Copying

from the time-to-wake-up dept

Techdirt has been pointing out for a while that the cliché about Chinese companies being little more than clever copycats, unable to come up with their own ideas, ceased to be true years ago. Anyone clinging to that belief is simply deluding themselves, and is likely to have a rude awakening as Chinese high-tech companies continue to advance in global influence. China's advances in basic research are pretty clear, but what about business innovation? That's an area in which the US has traditionally prided itself on being the world leader. However, an interesting article in the South China Morning Post -- a Hong Kong-based newspaper owned by the Chinese e-commerce giant Alibaba, which has a market capitalization of $400 billion -- explores how it's Chinese ideas that are now being copied:

it's a reflection of a growing trend in which businesses across Southeast Asia look to China for inspiration for everything from e-commerce to mobile payment systems and news apps.

Once derided as a copycat of Western giants, Chinese companies have grown in stature to the point that in many areas they are now seen as the pinnacle of business innovation.

The article mentions dockless bike-sharing, which is huge in China, being copied in California by a startup called Limebike. It notes that Thailand's central bank has introduced a standardized QR code that enables the country's smartphone users to pay for their purchases simply by scanning their devices -- a habit that is well on the way to replacing cash and credit cards in China. In Malaysia, the online second-hand car trading platform Carsome closely based its approach on that of a Chinese company operating in Nanjing. Other copycats of Chinese innovators include:

Orami, Thailand's leading e-commerce business, which started out as a clone of China's online baby product platform Mia; Offpeak, a Malaysian version of the Chinese group buying website Meituan; and BaBe, an Indonesian news app that borrowed the business idea from China's Toutiao and has been downloaded more than 10 million times.

As the article points out, it is perhaps natural that entrepreneurs in Southeast Asia should look to China for ideas given the commonalities of culture. But that kind of creative borrowing can only occur if Chinese companies are producing enough good business ideas that are worth copying. It's evident that they are, and it's time that the West recognized that fact.


Posted on Techdirt - 27 September 2017 @ 7:37pm

Lawyers Gearing Up To Hit UK With Corporate Sovereignty Claims Totalling Billions Of Dollars Over Brexit

from the nobody-painted-that-on-the-side-of-a-bus dept

We're not hearing much about corporate sovereignty -- also known as "investor-state dispute settlement" (ISDS) -- these days. It's definitely still a problem, especially for smaller countries. But the big fights over the inclusion of corporate sovereignty chapters in the two global trade deals -- the Transatlantic Trade and Investment Partnership (TTIP), and the Trans-Pacific Partnership (TPP) agreement -- have been put on hold for the moment. That's for the simple reason that both TPP and TTIP are in a kind of limbo following the election of Donald Trump as US President with his anti-free trade platform.

TTIP seems completely moribund, whereas TPP -- re-branded as TPP11 to reflect the fact that there are only 11 countries now that the US has pulled out -- is showing the odd twitch of life. A recent article in the Canadian newspaper National Post points out that the departure of the US might even allow some of the worst bits of TPP to be jettisoned:

the Americans insisted on longer intellectual property patent terms and stronger copyright regulations than many countries wanted. Canada will now argue for shorter patent terms, in support of its generic drug sector and in an attempt to keep drug costs down.

Canada is also keen to water down the investor-state dispute settlement negotiated by the U.S. in the original deal, and bolster the state's right to regulate in the public interest.

The move by Canada to rein in some of the worst excesses of corporate sovereignty follows the EU's lead in this area. As Techdirt reported, during the TTIP negotiations between the EU and US, the former suggested replacing the old ISDS with a "new" Investment Court System (ICS). Although the US was not interested, Canada later agreed to this slightly watered-down version for the CETA trade deal with the EU.

The ICS still doesn't exist, and it remains something of a mystery how it would work in practice. It was proposed in an attempt to head off massive public concern about corporations being able to sue governments -- and thus taxpayers -- for huge sums, completely outside the normal legal system and subject to few constraints. But even ICS was not enough to stop the Belgian region of Wallonia nearly derailing the CETA deal at the last moment.

Anxious to avoid that happening again, the President of the European Commission, Jean-Claude Juncker, had a rather radical suggestion in his recent State of the Union address. In order to make future trade deals easier to push through the legislative process in the EU, Juncker proposed removing investment protection chapters from them completely, and negotiating a separate deal covering this aspect. An article on Politico.eu explains the thinking behind that move:

Slicing out investment protection will give [Juncker] an immediate legal advantage. Under EU law, a trade deal without investment clauses could be ratified exclusively by the European Parliament and by the member countries as represented at the Council in Brussels. That effectively removes the direct veto powers of the Walloons.

Simon Lester, Trade Policy Analyst at the Cato Institute, thinks the US should follow suit -- an idea that someone from the same group suggested a few years ago:

The Europeans have faced a greater struggle with investment protection and ISDS than has been the case in the United States, but these provisions have been a problem here as well. If we want to make it easier to get trade negotiations completed and trade agreements passed by Congress, we should consider following the EU's lead.

Although removing corporate sovereignty from trade deals does not solve the larger problem of giving companies special protection, it is a step in the right direction. Governments may conclude ISDS-free trade deals quickly in order to enjoy their claimed benefits, intending to negotiate investment protection separately afterwards. But when commercial relations turn out to work perfectly well without corporate sovereignty -- as is already the case for both the US-Australia trade deal and the one between the EU and South Korea, neither of which includes it -- governments may decide to leave things at that, and quietly forget about those further negotiations.

Unfortunately, none of these recent moves is likely to help the UK, currently struggling with the implications of last year's "Brexit" referendum to leave the EU. Ever-inventive lawyers have realized that an unexpected withdrawal of the country from the EU could represent an excellent opportunity for companies that have invested in the UK to claim that they will suffer as a result, and to use corporate sovereignty clauses to claim compensation potentially amounting to billions of dollars. Corporate Europe Observatory has a new post exploring what could happen here:

the UK's impending exit from the European Union may bring new investment arbitration opportunities. The country has 92 investment agreements in force, which investors from other countries could use to file ISDS claims against the UK. In conferences and alerts for their multinational clients, some of the top investment arbitration law firms are already assessing the prospect of such Brexit claims. Depending on how the Brexit negotiations turn out, these lawsuits could be about anything from foreign carmakers or financial companies losing free access to the EU market, to the government scrapping subsidies for certain sectors. One lawyer from UK-based law firm Volterra Fietta has even suggested that "there may be a number of investors that would have come to the UK expecting to have a certain low wage group of employees", which might sue for loss of expected profit if they lose access to underpaid, foreign workers.

But the clever lawyers don't stop there. They see opportunities for corporations to use Brexit as a way to sue other EU countries too:

Several law firms have published briefings suggesting that it would be an advantage for corporations if they structured their foreign investment into the remaining EU member states through the UK. This means that if you are a German company, for instance, and have an investment in Romania you could let this investment 'flow' through a subsidiary -- possibly only a mailbox company -- in the UK. You could then sue Romania via its bilateral investment treaty with the UK -- even if no such treaty was in place between Romania and Germany.

This kind of "creativity" is yet another reason why tweaks to corporate sovereignty of the kind contemplated by the EU and Canada are simply not enough: ISDS needs to be dropped completely from all trade deals -- past, present and future.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 25 September 2017 @ 7:36pm

Scientific Publishers Want Upload Filter To Stop Academics Sharing Their Own Papers Without Permission

from the where-there's-a-gate,-there's-got-to-be-a-gatekeeper dept

Back in March of this year, Techdirt wrote about ResearchGate, a site that allows its members to upload and share academic papers. Although the site says it is the responsibility of the uploaders to make sure that they have the necessary rights to post and share material, it's clear that millions of articles on ResearchGate are unauthorized copies under the restrictive agreements that publishers typically impose on their authors. As we wrote back then, it was interesting that academic publishers were fine with that, but not with Sci-Hub posting and sharing more or less the same number of unauthorized papers.

Somewhat belatedly, the International Association of Scientific, Technical and Medical Publishers (STM) has now announced that it is not fine with authors sharing copies of their own papers on ResearchGate without asking permission. In a letter to the site from its lawyers (pdf), the STM is proposing what it calls "a sustainable way to grow and to continue the important role you play in the research ecosystem". Here's what it wants ResearchGate ("RG") to do:

RG's users could continue "claiming", i.e. agreeing to make public or uploading documents in the way they may have become accustomed to with RG's site. An automated system, utilizing existing technologies and ready to be implemented by STM members, would indicate if the version of the article could be shared publicly or privately. If publicly, then the content could be posted widely. If privately, then the article would remain available only to the co-authors or other private research groups consistent with the STM Voluntary Principles. In addition, a message could be sent to the author showing how to obtain rights to post the article more widely. This system could be implemented within 30-60 days and could then handle this "processing" well within 24 hours.

In other words, an upload filter, of exactly the kind proposed by the European Commission in its new Copyright Directive. There appears to be a concerted push by the copyright industry to bring in upload filters wherever it can: either through legislation, as in the EU, or through "voluntary" agreements, as with ResearchGate. Although the lawyers' letter is couched in the politest terms, it leaves no doubt that if ResearchGate refuses to implement STM's helpful suggestion, things might become less pleasant. It concludes:

On behalf of STM, I urge you therefore to consider this proposal. If you fail to accede to this proposal by 22 September 2017, then STM will be leaving the path open for its individual members to follow up with you separately, whether individually or in groups sharing a similar interest and approach, as they may see fit.

What this latest move shows is that publishers aren't prepared to allow academics to share even their own papers without permission. It underlines that, along with fat profits, what the industry is most concerned about in this struggle is control. Academic publishers will graciously allow ResearchGate to exist, but only if they are recognized unequivocally as the gatekeeper.
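Stripped of the legal politesse, the "automated system" STM's letter describes is just a policy lookup at upload time: match the version of the paper being uploaded against the journal's sharing rules, then route it public or private (with a rights-upsell message in the private case). A toy sketch of that decision logic in Python -- every policy value and name here is invented for illustration, not taken from any real publisher's terms:

```python
# Toy sketch of the kind of automated sharing check STM's letter describes.
# All policies below are hypothetical examples, not real publisher rules.

from dataclasses import dataclass

@dataclass
class SharingPolicy:
    preprint_public: bool    # may the submitted draft be shared publicly?
    accepted_public: bool    # may the accepted manuscript be shared publicly?
    published_public: bool   # may the publisher's final PDF be shared publicly?

def check_upload(version: str, policy: SharingPolicy) -> str:
    """Return 'public', or 'private' (co-authors / private groups only)."""
    allowed = {
        "preprint": policy.preprint_public,
        "accepted": policy.accepted_public,
        "published": policy.published_public,
    }
    if version not in allowed:
        raise ValueError(f"unknown article version: {version}")
    return "public" if allowed[version] else "private"

# A typical subscription-journal stance: preprints circulate freely,
# but the publisher's version of record stays behind the paywall.
policy = SharingPolicy(preprint_public=True,
                       accepted_public=False,
                       published_public=False)
print(check_upload("preprint", policy))   # -> public
print(check_upload("published", policy))  # -> private
```

The technical triviality is rather the point: the hard part of such a filter is not the code but deciding who writes the policy table -- which is exactly the gatekeeping role the publishers are claiming for themselves.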



Posted on Techdirt - 25 September 2017 @ 9:09am

NSA-Developed Crypto Technology No Longer Trusted For Use In Global Standards

from the I-just-can't-think-why dept

One of the most shocking pieces of information to emerge from the Snowden documents was that the NSA had paid RSA $10 million to push a weakened form of crypto in its products. The big advantage for the NSA was that it made it much easier to decrypt messages sent using that flawed technology. A few months after this news, the National Institute of Standards and Technology announced that it would remove the "Dual Elliptic Curve" (Dual EC) algorithm from its recommendations. But of course, that's not the end of the story. Betraying trust is always a bad idea, but in the security field it's an incredibly stupid idea, since trust is a key aspect of the way things work in that shadowy world. So it should come as no surprise that following the Dual EC revelations, the world's security experts no longer trust the NSA:

An international group of cryptography experts has forced the U.S. National Security Agency to back down over two data encryption techniques it wanted set as global industry standards, reflecting deep mistrust among close U.S. allies.

In interviews and emails seen by Reuters, academic and industry experts from countries including Germany, Japan and Israel worried that the U.S. electronic spy agency was pushing the new techniques not because they were good encryption tools, but because it knew how to break them.

The NSA has now agreed to drop all but the most powerful versions of the techniques -- those least likely to be vulnerable to hacks -- to address the concerns.

The Reuters report has interesting comments from security experts explaining why they opposed the new standards. Concerns included the lack of peer-reviewed publication by the creators, the absence of industry adoption, and the lack of any clear need for the new approaches. There's also the intriguing fact that the UK was happy for the NSA algorithms to be adopted. Given the extremely close working relationship GCHQ has with the NSA, you can't help wondering whether the UK's support was because it too knew how to break the proposed encryption techniques, and therefore was keen for them to be rolled out widely. Certainly, the reason its representative gave for backing the two NSA data encryption methods, known as Simon and Speck, was feeble in the extreme:

Chris Mitchell, a member of the British delegation, said he supported Simon and Speck, noting that "no one has succeeded in breaking the algorithms."

Moreover, it was only half-true: the Reuters story says that academics have already had "partial success" in finding weaknesses, which surely calls for a cautious approach and more research, rather than simply accepting the proposal and hoping for the best. And even the British representative had to admit that his NSA mates had totally blown it:

He acknowledged, though, that after the Dual EC revelations, "trust, particularly for U.S. government participants in standardization, is now non-existent."

As the NSA -- and also the W3C, thanks to its blessing of DRM in HTML -- will now find, regaining that lost trust will be a long and difficult process. Maybe others can learn from their (bad) examples.
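For the curious, Simon and Speck are lightweight ARX (add-rotate-xor) block ciphers whose designs the NSA published openly in 2013 -- the dispute above is about trust, not secrecy. As an illustration of how compact Speck is, here is a sketch of the smallest variant, Speck32/64 (16-bit words, rotations 7 and 2, 22 rounds), following the parameters and test vector given in the public design paper. This is for illustration only, not a vetted cryptographic implementation:

```python
# Sketch of Speck32/64 from the public design paper's parameters.
# Illustration only -- do not use for real cryptography.

MASK = 0xFFFF  # 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    # The whole cipher is this: one modular add, two rotates, two XORs.
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(key_words):
    # key_words = (l2, l1, l0, k0), as the paper prints its test vector.
    # The key schedule reuses the round function, with the round index as "key".
    l2, l1, l0, k0 = key_words
    keys, l = [k0], [l0, l1, l2]
    for i in range(21):  # 22 round keys in total
        new_l, new_k = speck_round(l[i], keys[i], i)
        l.append(new_l)
        keys.append(new_k)
    return keys

def encrypt(x, y, key_words):
    for k in expand_key(key_words):
        x, y = speck_round(x, y, k)
    return x, y

# Test vector from the design paper:
# key 1918 1110 0908 0100, plaintext 6574 694c -> ciphertext a868 42f2
assert encrypt(0x6574, 0x694c, (0x1918, 0x1110, 0x0908, 0x0100)) == (0xa868, 0x42f2)
```

The simplicity is exactly what made Simon and Speck attractive for constrained hardware -- and exactly why the standards delegates wanted published cryptanalysis rather than assurances before blessing them.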
