Glyn Moody’s Techdirt Profile

Posted on Techdirt - 14 August 2017 @ 7:13pm

Danish University And Industry Work Together On Open Science Platform Whose Results Will All Be Patent-Free

from the they-said-it-couldn't-be-done dept

Here on Techdirt, we write a lot about patents. Mostly, it's about their huge downsides -- the stupid patents that should never have been awarded, or the parasitic patent trolls that feed off companies doing innovative work. The obvious solution is to get rid of patents, but the idea is always met with howls of derision, as if the entire system of today's research and development would collapse, and a new dark age would be upon us. It's hard to refute that claim with evidence to the contrary because most people -- other than a few brave souls like Elon Musk -- are reluctant to find out what happens if they don't cling to patents. Against that background, it's great to see Aarhus University in Denmark announce a new open science initiative that will eschew patents on researchers' work completely:

The platform has been established with funds from the Danish Industry Foundation and it combines basic research with industrial innovation in a completely new way, ensuring that industry and the universities get greater benefit from each other's knowledge and technology.

University researchers and companies collaborate across the board to create fundamental new knowledge that is constantly made available to everyone -- and which nobody may patent. On the contrary, everyone is subsequently freely able to use the knowledge to develop and patent their own unique products.

According to Aarhus University, Danish industry loves it:

The idea of collaborating in such a patent-free zone has aroused enormous interest in industry and among companies that otherwise use considerable resources on protecting their intellectual property rights.

The attraction seems to be that an open platform will make it easier for companies -- particularly smaller ones -- to gain access to innovative technologies at an early stage, without needing to worry about patents and licensing. Aarhus University hopes that the approach will also allow researchers to take greater risks with their work, rather than sticking with safer, less ambitious projects, as has happened in the past. The first example is already up and running. It is called SPOMAN (Smart Polymer Materials and Nano-Composites), and has a project page hosted on the Open Science Framework site:

In this project, you will find minutes from the Open Science meetings, current status of the initiative, general presentations etc. More importantly, this project has links to the individual activities and research projects under Open Science. In these projects, the research progress, lab journals and more are found.

Combined with the no-patent promise, you don't get much more open than that.


Posted on Techdirt - 14 August 2017 @ 1:23pm

The Ultimate Virus: How Malware Encoded In Synthesized DNA Can Compromise A Computer System

from the digital-code-is-digital-code dept

DNA is a digital code, written not as 0s and 1s (binary) but in the chemical letters A, C, G and T -- a quaternary system. Nature's digital code runs inside the machinery of the cell, which outputs the proteins that are the building blocks of living organisms. The parallels between DNA and computer code are one reason why we speak of computer viruses, since both are sequences of instructions that subvert the hardware meant to run other, more benign programs. Wired reports on new work which brings out those parallels in a rather dramatic fashion:

a group of researchers from the University of Washington has shown for the first time that it's possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer.

A certain amount of cheating was involved in order to obtain this undeniably impressive outcome. For example, the researchers took an open source compression utility, and then intentionally added a buffer overflow bug to it. They crafted a specific set of DNA letters such that when it was synthesized, sequenced and processed in the normal way -- which included compressing the raw digital readout -- it exploited the buffer overflow flaw in the compression program. That, in its turn, allowed the researchers to run arbitrary code on the computer system that was being used for the analysis. In other words, the malware encoded in the synthesized DNA had given them control of a physical system.

While they may have added the buffer overflow exploit to the compression program themselves, the researchers pointed out they found three similar flaws in other commonly-used DNA sequencing and analysis software, so their approach is not completely unrealistic. However, even setting up the system to fail in this way, the researchers encountered considerable practical problems. These included a requirement to keep the DNA malware short, maintaining a certain ratio of Gs and Cs to As and Ts for reasons of DNA stability, and avoiding repeated elements, which caused the DNA strand to fold back on itself.
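To make the encoding step concrete, here is a minimal Python sketch of one possible bytes-to-DNA mapping (two bits per base), together with checks for two of the constraints mentioned above: GC balance and repeated runs. The mapping and thresholds are illustrative assumptions, not the encoding the University of Washington team actually used.

```python
# Minimal sketch: map bytes to DNA two bits at a time and sanity-check the
# constraints mentioned above. The 2-bit mapping is an assumption chosen for
# illustration, not the encoding used in the actual paper.
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def bytes_to_dna(payload: bytes) -> str:
    out = []
    for byte in payload:
        for shift in (6, 4, 2, 0):          # high bits first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def gc_content(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def longest_run(seq: str) -> int:
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

if __name__ == "__main__":
    payload = b"\x41\x41\x41\x41"            # stand-in for exploit bytes
    seq = bytes_to_dna(payload)
    print(seq, f"GC={gc_content(seq):.2f}", f"max run={longest_run(seq)}")
```

Even a toy mapping like this ignores the hard parts, of course: the exploit bytes still have to stay short enough to synthesize and survive sequencing and the subsequent compression step intact.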

Clearly, then, this is more a proof of concept than a serious security threat. Indeed, the researchers themselves write in their paper (pdf):

Our key finding is that it is possible to encode a computer exploit into synthesized DNA strands.

However, in the longer term, as DNA sequencing becomes routine and widespread, there will be greater scope for novel attacks based on the approach:

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest.

If nothing else, this first DNA malware hack confirms that there is no unbridgeable gulf between the programs running in our cells, and those running on our computers. Digital code is digital code.


Posted on Techdirt - 10 August 2017 @ 7:41pm

Elsevier Continues To Build Its Monopoly Solution For All Aspects Of Scholarly Communication

from the but-can-people-be-bothered-to-support-open-alternatives? dept

Techdirt has just written about the amazing achievements of Sci-Hub, and how it now offers the vast majority of academic papers free online. One implication may be that traditional publishing, with high-cost journals hidden behind paywalls, is no longer viable. But as we noted, that doesn't mean that traditional publishers will disappear. For one thing, many are embracing open access, and finding it pretty profitable (some would say too profitable, thanks to things like "double dipping"). But there's another way that academic publishers, particularly the biggest ones with deep pockets, can head off the threat to their profits from developments like Sci-Hub and open access: by diversifying.

Mike wrote about one example last year, when Elsevier bought the preprint service Social Science Research Network (SSRN), arguably the most popular repository of research in the fields of economics, law and the social sciences. Since SSRN deals in preprints, which can be freely downloaded, sites like Sci-Hub are no threat. Similarly, preprints are generally posted before submission to journals, and therefore can flourish whether or not those journals are open access. Now we have yet another significant move by Elsevier, reported here on the Scholarly Kitchen blog:

Elsevier announces its acquisition of bepress. In a move entirely consistent with its strategy to pivot beyond content licensing to preprints, analytics, workflow, and decision-support, Elsevier is now a major if not the foremost single player in the institutional repository landscape. If successful, and there are some risks, this acquisition will position Elsevier as an increasingly dominant player in preprints, continuing its march to adopt and coopt open access.

As that post explains, Bepress is not a publishing company, but seeks to provide key elements of the general infrastructure needed for scholarly communications. That includes things like repositories -- the stores of articles produced by researchers at an institution, or covering a specific field -- and "showcases". Bepress's product in this field is called Digital Commons. It claims to be:

the only comprehensive showcase that lets institutions publish, manage, and increase recognition for everything produced on campus -- and the only institutional repository and publishing platform that integrates with a full faculty research and impact suite.

It's a shrewd acquisition by Elsevier. It continues to move the company beyond the role of a traditional publisher into one that can offer a complete solution for the academic world, with products and services handling every aspect of scholarly work. By acquiring more and more parts of this solution, Elsevier can integrate them ever-more tightly, which will encourage users of one element to adopt others. If this process of integration can be carried out successfully, it will leave Elsevier with almost total control of the sector, beyond even today's already profitable position.

That may be great for Elsevier shareholders, but it limits choices for the academic community. Fortunately, there are ways to counter Elsevier's rise to monopoly power. Techdirt wrote about one of them last year, when a new open preprint repository for the social sciences, SocArXiv, was created soon after Elsevier bought SSRN. There are already a number of open source alternatives to Bepress products, and supporting those rather than moving to Elsevier-owned services is an obvious move for those in the academic community who wish to preserve their independence. The problem is that doing so requires a certain amount of effort, and institutions, libraries and academics may not have the time or energy for it; instead, they may simply sign up to Elsevier's monoculture without worrying too much about the long-term consequences.


Posted on Techdirt - 9 August 2017 @ 1:24pm

Australian Public Servants Warned Against Liking Social Media Posts That Are Critical Of Government Policies

from the couldn't-happen-in-the-US-oh-wait dept

The Internet effectively turns everyone into a publisher, able to promulgate their ideas in a way that was not open to most people before. That's great for the democratization of media -- and terrible for governments that want to control the flow of information to citizens. The Australian government is particularly concerned about what its 150,000 public servants might say. It has issued a "guidance" document that "sets out factors for employees to consider in making decisions about whether and what to post". Here's why:

The speed and reach of online communication means that material posted online is available immediately to a wide audience. It can be difficult to delete and may be replicated endlessly. It may be sent to, or seen by, people the author never intended or expected would see it.

Deciding whether to make a particular comment or post certain material online is a matter of careful judgement rather than a simple formula. This guidance sets out factors for employees to consider in making decisions about whether and what to post.

That sounds reasonable enough. But it turns out that what the policy is really about is muzzling public employees, and stopping them from expressing or supporting views that disagree with government policies. As the Australian organization Digital Rights Watch summarizes:

The new guidelines warn that public servants would be in breach of code of conduct if they "liked" anti-government posts, privately emailing negative material or do not remove "nasty comments" about the government posted by others. The new policies apply to employees even if they use social media in a private capacity outside of work hours.

It also applies to past employment with the Australian government -- and to future jobs there:

it is also worth bearing in mind that comments you make about an agency you've never worked in might be made public and taken into account if you apply for a job there later. Perhaps you haven't breached the Code, but you might have ruled yourself out for that job if the comment could reasonably call into question your capacity to work there impartially.

In other words, if you criticize any aspect of government policy, you'll never work in this town again. What's troubling about this move is not just that it is limiting people's freedom of speech -- something that the guidance freely admits:

The common law recognises an individual right to freedom of expression. This right is subject to limitations such as those imposed by the Public Service Act. In effect, the Code of Conduct operates to limit this right.

It's also that we have seen before where this kind of muzzling leads. Back in 2013, Techdirt wrote about similar rules for public servants in Canada, which were only rescinded last year. One of the most problematic areas was the environment, since the rules meant that even world-leading scientists were unable to point out publicly the evident flaws in the Canadian government's climate policy. It looks like experts employed by the Australian government now find themselves similarly unable to be openly critical of the official line, no matter how misguided or dangerous it may be. There are also signs that a similar muzzling of scientists is starting to take place in the US. Despite unequivocal evidence of "drastic" climate change in a new, but unreleased, US government report, emails obtained by the Guardian reveal the following:

Staff at the US Department of Agriculture (USDA) have been told to avoid using the term climate change in their work, with the officials instructed to reference "weather extremes" instead.

At least they can still like social media posts that are critical of the US government's environmental policies. For now...


Posted on Techdirt - 4 August 2017 @ 3:23am

Georgia To Roll Out Tens Of Thousands Of CCTV Cameras With Real-Time Facial Recognition Capabilities

from the are-you-a-sheep-or-a-goat? dept

Surveillance using CCTV cameras is old hat these days, even for locations outside the world's CCTV capital, London. But there's an important step-change taking place in the sector, as operators move from simply observing and recording to analyzing video feeds automatically using facial recognition software. Techdirt has written about this area a few times, but those examples have all been fairly small-scale and exploratory. News from Georgia -- the one in the Caucasus, not the US state -- shows that things are moving fast in this field:

NEC Corporation today announced that it has provided an advanced surveillance system for cities utilizing facial recognition to the Ministry of Internal Affairs of Georgia, in cooperation with Capital Systems LLC, a leading system developer. The system began operation in June of this year, and works in combination with 400 CCTV surveillance cameras installed in Georgia's major cities, including the capital, Tbilisi.

The system utilizes NeoFace Watch, NEC's real-time facial recognition software for video, featuring the world's highest recognition precision. It checks images captured by CCTV cameras against pictures of suspects and others registered in a watch list, making it possible to identify figures rapidly and accurately.

This system was introduced as part of Georgia's "Safe City, Safe Region, Safe Country" program aiming to improve public safety. Georgia also plans to install tens of thousands of additional cameras nationwide in the future.

It's not clear whether those tens of thousands of CCTV cameras will all be equipped with real-time facial recognition, or only some of them. But even the immediate roll-out of facial recognition to 400 CCTV cameras is substantial, especially for a country with fewer than four million inhabitants. It's hard not to see this as a test-bed for other, much bigger countries, which will doubtless be watching Georgia's experience with interest. Some have already started their own trials: ZDNet reports that at least two of Australia's police forces -- the Northern Territory Police and South Australia Police -- have hundreds of CCTV cameras with real-time facial recognition features. There's also a small-scale trial employing vehicle-mounted cameras with similar capabilities being conducted by UK police in Wales. All of the examples mentioned here use the NeoFace Watch system from NEC, which the company claims is able to process multiple camera feeds, and to extract and match thousands of faces per minute.
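In general terms, the watch-list matching described in the quote above is an embedding comparison: each enrolled face and each captured face is reduced to a vector, and the closest enrolled vector above a similarity threshold counts as a hit. Here is a minimal sketch of that generic approach; it is an assumption for illustration, not NEC's actual implementation, and the names and threshold are invented.

```python
# Illustrative sketch of watch-list matching as commonly done in face
# recognition pipelines: compare an embedding of the captured face against a
# gallery of enrolled embeddings. Generic technique only -- not NeoFace code.
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Return (name, cosine score) of the closest gallery entry, or None if below threshold."""
    probe = probe / np.linalg.norm(probe)
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        score = float(probe @ (ref / np.linalg.norm(ref)))  # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

rng = np.random.default_rng(0)
watch_list = {"suspect_a": rng.normal(size=128), "vip_b": rng.normal(size=128)}
frame_face = watch_list["vip_b"] + 0.1 * rng.normal(size=128)  # noisy re-detection
print(best_match(frame_face, watch_list))   # matches "vip_b" with a high score
```

The engineering challenge in a deployment like Georgia's is doing this across hundreds of live feeds at once, which is exactly what NEC's throughput claims are about.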

NEC also emphasizes that its product is "suitable for the detection of both undesirables and VIPs." That's an important point. CCTV systems are currently fairly egalitarian, spying on and recording everyone equally. But the addition of facial recognition allows a crowd's sheep and goats to be distinguished, and then dealt with appropriately. While beefy security guards preemptively -- and discreetly -- remove the "undesirables" who might lower the tone of a venue, exquisite hospitality experts can meet and greet the VIPs as they approach. One of the unexpected results of adding facial recognition to CCTV is that it brings out the "servile" in "surveillance".


Posted on Techdirt - 1 August 2017 @ 6:41pm

How May 35th Freedoms Have Blossomed With China's Martian Language

from the say-what? dept

In recent years, the Internet news from China has been pretty depressing, as Xi Jinping tightens his control over every aspect of the online world. But the Chinese are a resourceful people, with thousands of years of experience of circumventing imperial oppression. For example, one of the many taboo subjects today is the "June 4th incident", better known in the West as the Tiananmen Square protests of 1989. A New York Times article published in 2011 explains how people in China managed to refer to this forbidden date online:

You might think May 35th is an imaginary date, but in China it's a real one. Here, where references to June 4 -- the date of the Tiananmen incident of 1989 -- are banned from the Internet, people use "May 35th" to circumvent censorship and commemorate the events of that day.

Inevitably, the authorities soon spotted this trick, and blocked references to May 35th too. But as the author of the New York Times piece, Yu Hua, explains:

May 35th freedom is an art form. To evade censorship when expressing their opinions on the Internet, Chinese people give full rein to the rhetorical functions of language, elevating to a sublime level both innuendo and metaphor, parody and hyperbole, conveying sarcasm and scorn through veiled gibes and wily indirection.

The latest, most highly-developed form of that "May 35th freedom" is described in an article on Quartz, which explores an invented Chinese language known as "Martian":

Martian dates back to at least 2004 but its origins are mysterious. Its use appears to have begun among young people in Taiwan for online chatting, and then it spread to the mainland. The characters randomly combine, split, and rebuild traditional Chinese characters, Japanese characters, pinyin, and sometimes English and kaomoji, a mixture of symbols that conveys an emotion (e.g. O(∩_∩)O: Happy).

Martian is an extension of the May 35th approach, but with additional elements, including fairly random ones. That makes it hard for the automated censorship systems to spot forbidden topics, since the Martian elements have to be decoded first. Naturally, though, the human censors eventually work out what the Martian terms mean, and add them to the blacklists for automatic blocking. However, according to the Quartz article, China's censorship system is not monolithic, and just because a post written in Martian is blocked on one service doesn't mean it will be blocked on another.
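A toy example shows why this cat-and-mouse game favors the writers, at least initially: a keyword blacklist only matches the exact strings it knows about, so even trivial re-spellings slip past it until a human censor decodes them and updates the list. The filter and the obfuscation below are deliberately crude stand-ins, for illustration only, for the real censorship systems and for Martian itself.

```python
# Toy illustration of why keyword blacklists miss obfuscated text. The
# "obfuscation" (inserting separator characters) is a crude stand-in for
# Martian's character splitting and recombination.
BLACKLIST = {"六四", "May 35"}          # example banned terms

def is_blocked(post: str) -> bool:
    return any(term in post for term in BLACKLIST)

plain = "remembering May 35 today"
obfuscated = "remembering M|a|y 3|5 today"   # still readable by humans

print(is_blocked(plain))        # True  -- caught by the filter
print(is_blocked(obfuscated))   # False -- slips past until the list is updated
```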

It's the continuing existence of those small spaces for free speech, coupled with the never-ending ingenuity of Chinese Internet users in coming up with Martian-like linguistic camouflage, that allows controversial material to be posted and circulated, despite the massive censorship machine.


Posted on Techdirt - 1 August 2017 @ 9:28am

Streisand Effect Helps Sci-Hub To Acquire Almost All Scholarly Literature, Dooms Traditional Academic Publishing

from the leak-once,-available-perpetually dept

Techdirt has been covering the story of Sci-Hub, which provides unrestricted access to a massive (unauthorized) database of academic papers, for a while now. As several posts have emphasized, the decision by the publishing giant Elsevier to pursue the site through the courts is a classic example of the Streisand Effect: it has simply served to spread the word about a hitherto obscure service. There's a new paper exploring this and other aspects of Sci-Hub, currently available as a PeerJ preprint. Here's what one of the authors says in a related Science interview about the impact of lawsuits on Sci-Hub:

In our paper we have a graph plotting the history of Sci-Hub against Google Trends -- each legal challenge resulted in a spike in Google searches [for the site], which suggests the challenges are basically generating free advertising for Sci-Hub. I think the suits are not going to stop Sci-Hub.

That free advertising provided by Elsevier and others through their high-profile legal assaults on Alexandra Elbakyan, the academic from Kazakhstan who created and runs Sci-Hub pretty much single-handedly, has been highly effective. The surge in searches for Sci-Hub seems to have led to its holdings becoming incredibly comprehensive, as increased numbers of visitors have requested missing articles, which are then added to the collection:

As of March 2017, we find that Sci-Hub's database contains 68.9% of all 81.6 million scholarly articles, which rises to 85.2% for those published in closed access journals. Furthermore, Sci-Hub contains 77.0% of the 5.2 million articles published by inactive journals. Coverage varies by discipline, with 92.8% coverage of articles in chemistry journals compared to 76.3% for computer science. Coverage also varies by publisher, with the coverage of the largest publisher, Elsevier, at 97.3%.

The preprint article has some interesting statistics on user donations, a measure of people's appreciation of Elbakyan's work and the Sci-Hub service:

We find that these [Bitcoin] addresses have received 1,037 donations, totaling 92.63 bitcoins. Using the U.S. dollar value at the time of transaction confirmation, Sci-Hub has received an equivalent of $60,358 in bitcoins. However, since the price of bitcoins has risen, the 67.42 donated bitcoins that remain unspent are now worth approximately $175,000.
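For a sense of what those figures imply, the exchange rates can be backed out directly. This is a rough back-of-the-envelope check rather than a figure from the paper, and the first number is only an average across donations made at many different prices.

```python
# Rough check of the bitcoin prices implied by the quoted figures
# (approximate: the average blends donations made at many different prices).
received_btc, received_usd = 92.63, 60_358
unspent_btc, unspent_usd = 67.42, 175_000

print(f"average price when donated: ${received_usd / received_btc:,.0f}")   # about $652
print(f"implied recent price:       ${unspent_usd / unspent_btc:,.0f}")     # about $2,596
```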

That suggests a fairly healthy financial basis for Sci-Hub, but there is still the risk that its servers and contents could be seized, and the site shut down. As the preprint points out, there are technologies under development that would allow files to be hosted without any central point of failure, which would address this vulnerability. The paper also notes two powerful reasons why old-style academic publishing is probably doomed, and why Sci-Hub has won:

adoption of Sci-Hub and similar sites may accelerate if universities continue canceling increasingly expensive journal subscriptions, leaving researchers with few alternative access options. We can also expect biblioleaks -- bulk releases of closed access corpuses -- to progress despite publisher's best efforts, as articles must only leak once to be perpetually available. In essence, scholarly publishers have already lost the access battle. Publishers will be forced to adapt quickly to open access publishing models.

It's worth noting that this does not mean the end of academic publishing, simply that it makes no sense to put papers behind a paywall, since it is almost inevitable that they will end up on Sci-Hub. However, as the quotation above notes, an open access publishing model, whereby academic institutions pay for their researchers' papers to be made freely available online, can still flourish in this situation. The current analysis finds that people already don't bother to use Sci-Hub so much for open access papers, because they don't need to:

We find strong evidence that Sci-Hub is primarily used to circumvent paywalls. In particular, users requested articles from closed access journals much more frequently than open access journals. Accordingly, many users likely only resort to Sci-Hub when access through a commercial database is cumbersome or costly.

It turns out that the best way to "defeat" Sci-Hub is not through legal threats, which only strengthen it, but by moving to open access, which effectively embraces Elbakyan's vision of all academic literature being made freely available to everyone.


Posted on Techdirt - 24 July 2017 @ 6:42pm

Surveillance Used To Give Poor Students Extra Financial Assistance Discreetly. Is That OK?

from the invisible-subsidies dept

A story about surveillance in China is hardly notable -- Techdirt has run dozens of them. But there are some unusual aspects to this report on the Sixth Tone site that make it a little out of the ordinary:

The University of Science and Technology of China (USTC), in the eastern province of Anhui, collects data from the charge cards of students who frequently eat in the school cafeteria -- usually the cheapest option, thanks to government subsidies -- but spend very little on each meal. The school's student affairs department uses the information for "invisible subsidies," or allowances delivered without drawing attention -- what it calls "a more dignified way for poor students to receive stipends."
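The selection rule described there amounts to a simple query over a month of charge-card records: flag students who eat on campus often but whose average spend per meal is low. Here is a minimal sketch of that rule, with thresholds invented purely for illustration; the article does not give USTC's actual cut-offs.

```python
# Minimal sketch of the selection rule described above: students who eat in the
# cafeteria often but spend little per meal. Thresholds are invented for
# illustration; the article does not state USTC's actual cut-offs.
from collections import defaultdict

MIN_MEALS_PER_MONTH = 45      # assumed: eats most meals on campus
MAX_AVG_SPEND_YUAN = 6.0      # assumed: well below a typical subsidised meal

def flag_for_subsidy(transactions):
    """transactions: iterable of (student_id, amount_in_yuan) for one month."""
    totals = defaultdict(lambda: [0, 0.0])      # student_id -> [meal count, total spend]
    for student_id, amount in transactions:
        totals[student_id][0] += 1
        totals[student_id][1] += amount
    return [
        sid for sid, (count, spend) in totals.items()
        if count >= MIN_MEALS_PER_MONTH and spend / count <= MAX_AVG_SPEND_YUAN
    ]

sample = [("s1", 4.5)] * 60 + [("s2", 15.0)] * 60 + [("s3", 5.0)] * 10
print(flag_for_subsidy(sample))   # ['s1'] -- frequent, low-spend diner
```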

According to the post, the program has been running for many years, but only came to light when a former student posting under the name of "Shannon" wrote an account of being selected in 2005 for additional support, published on the site Zhihu, the Chinese equivalent of Quora. His post has received over 45,000 likes so far, and the number continues to rise. As the Sixth Tone story notes, comments on Shannon's post have been overwhelmingly positive:

One comment that received over 3,000 likes read: "The University of Science and Technology of China has really got the human touch -- they are pretty awesome." Another netizen, meanwhile, described the innovative scheme as "the right way to use big data."

This raises a number of questions. For example, does the widespread use of surveillance in China make people more willing to accept this kind of benevolent spying, as here? Or is it simply that its use is felt to be justified because it led to additional funding that was given in a discreet fashion? More generally, how would Chinese citizens feel about this approach being rolled out to other areas of life? Since that's pretty much what China's rumored "citizen score" system aims to do, we might find out, if it's ever implemented.


Posted on Techdirt - 18 July 2017 @ 3:49pm

From Sans Serif To Sans Sharif: #Fontgate Leads To Calls For Pakistan's Prime Minister To Resign

from the fun-with-fonts dept

Some people get really worked up about fonts. Here, for example, is a thread on Reddit, spotted by Leigh Beadon, about the appearance of the serif font Cambria on the show "Better Call Saul". The problem is that the show is set in the years 2002 and 2003, while Cambria was designed in 2004. The (mock?) outrage about this slip-up is all good fun, but obviously nothing too serious. Unlike in Pakistan, where another apparent font faux pas is leading to calls for the country's prime minister to resign.

As the Guardian explains, the daughter of Pakistan's prime minister is being investigated by the country's supreme court as a result of revelations in the Panama Papers that linked her to expensive properties in London. Documents produced in her defense had a slight problem, as spotted by font aficionados:

Documents claiming that Mariam Nawaz Sharif was only a trustee of the companies that bought the London flats, are dated February 2006, and appear to be typed in Microsoft Calibri.

But the font was only made commercially available in 2007, leading to suspicions that the documents are forged.

Social media users have derided Sharif for this apparent misstep, coining the hashtag #fontgate.

Such is the interest in #fontgate and the humble sans serif Calibri font that traffic to the relevant Wikipedia page has ballooned from 500 visits per day to 150,000 in just two days. As a result of the intense interest and some dubious editing attempts, Wikipedia has been forced to act:

After users seemingly tried to change the article's content to say the font was available from 2004, Wikipedia suspended editing on its Calibri page "until July 18 2017, or until editing disputes have been resolved".

Although you might think this is pretty much at the level of the Reddit discussion of Cambria, rival politicians in Pakistan see it as much more serious -- and an opportunity they can exploit:

Opposition parties have urged prime minister Nawaz Sharif to step down after the investigation found a "significant disparity" between his family's declared wealth and known sources of income.

However things turn out in Pakistan for the country's prime minister and his daughter -- Nawaz Sharif has denied wrongdoing -- #fontgate has already had one positive outcome. It allowed the Indian newspaper Financial Express to use the memorable headline: "Awesome story of Calibri, the font that may leave Pakistan sans Sharif."


Posted on Techdirt - 17 July 2017 @ 6:34pm

When The 'Sharing Economy' Turns Into The 'Missing Or Stolen Economy'

from the anybody-seen-my-300,000-umbrellas-lying-around? dept

The sharing economy -- actually better-described as a rental economy -- is very much in vogue, inspired by the high-profile examples of Airbnb and Uber. But Western enthusiasm pales in comparison to that of Chinese entrepreneurs, who seem to have taken the view that the model will work for anything. For example, alongside the companies that rent out homes and cars, there are now some that will let you pick up an umbrella in a public spot, use it for a short while, and then return it. At least, that's the theory. But the South China Morning Post reports that the Sharing E Umbrella startup ran into a few problems:

Just weeks after making 300,000 brollies available to the public via a rental scheme, Sharing E Umbrella announced that most of them had gone missing, news website Thepaper.cn reported on Thursday.

The company was launched back in April, and is operating in 11 Chinese cities. Customers borrow umbrellas after paying a deposit of about $3, and a fee of 10 cents for every 30 minutes. Undeterred by the fact that each missing umbrella represents a loss of $9, the company's founder says he hopes to proceed on a larger scale by making 30 million of them available across the country by the end of the year. Here's why he's convinced he's on to a winner:

After seeing the launch of bike-sharing schemes across the country, the Shenzhen-based businessman said he "thought that everything on the street can now be shared".
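Taking the article's figures at face value, the scale of the problem is easy to put into numbers; the break-even calculation is a rough back-of-the-envelope addition, not something the company reported.

```python
# Rough arithmetic from the figures quoted in the article; the break-even
# figure is a back-of-the-envelope addition, not something the company reported.
umbrellas_lost = 300_000
loss_per_umbrella = 9.00        # dollars per missing umbrella, per the article
fee_per_half_hour = 0.10        # rental fee, per the article

print(f"total write-off: ${umbrellas_lost * loss_per_umbrella:,.0f}")                           # $2,700,000
print(f"half-hour rentals to recoup one umbrella: {loss_per_umbrella / fee_per_half_hour:.0f}")  # 90
```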

Perhaps he should have waited a little before modelling his business on bike sharing. Caixin reported last month that Wukong, one of the smaller players in this crowded market, has just closed down -- after most of its bikes went missing:

Wukong operated its 1,200 bikes in the southwestern city of Chongqing. But most of the bikes were lost because the firm didn't embed GPS devices in the vehicles. By the time the company decided the devices were necessary, it had run out of money and failed to raise more

Wukong isn't the only rental company that lost track of most of its bikes, as Shanghaiist.com notes:

Wu Shenghua founded Beijing-based 3Vbike in February, using 600,000 RMB ($89,000) of his own money to purchase the first 1,000 bikes. But only four months later, he told the Legal Evening News that there were only dozens left.

Despite those failures, money continues to pour into the Chinese bicycle rental sector: last month, one of the leading startups, Mobike, announced $600 million in new funding, which it will use to expand outside China. Let's hope people there remember to bring the bikes back.


Posted on Techdirt - 13 July 2017 @ 6:46pm

Canada Capitulates: Supreme Court Throws Away Government's Great Pharma Patent Victory

from the who-needs-the-law-when-you-can-bully? dept

Techdirt readers will probably recall a long-running saga involving corporate sovereignty, $500 million, the US pharma company Eli Lilly, and drug patents. In its claim against the Canadian government, made using NAFTA's Chapter 11, Eli Lilly insisted it should have been given some drug patents, despite Canada's courts finding that they had not met the requirements for patentability -- specifically, that there was no evidence that the drugs in question provided the benefits promised in the patents. Eli Lilly said that Canada was being unreasonable in setting a slightly higher bar than other countries by demanding that a patented drug should actually do something useful. As Mike reported back in March, even the lawyers that made up the corporate sovereignty tribunal hearing this case agreed that Canada was within its rights to take this view. They not only dismissed the claim, but ordered Eli Lilly to pay Canada's legal fees.

This was a huge win for Canada in particular, and governments in general. At the time, it all felt a little too good to be true. And now it seems it was: as infojustice.org reports, the Supreme Court of Canada has just overturned decades of precedent -- and implicitly the Eli Lilly ruling -- by making it easier for Big Pharma to gain patents on medicines that don't really work:

This reversal in AstraZeneca Canada Inc. v. Apotex, Inc. is particularly disconcerting because Canada had just won an investor-state arbitration award in the long awaited Eli Lilly v. Canada case upholding its more stringent promise/utility doctrine that had been used successfully to overturn two dozen secondary patents, particularly those claiming new uses of known medicines, where patent claimants failed to present evidence in support of the prediction of therapeutic benefit promised in their patent applications.

Thus Canada's Supreme Court has inexplicably thrown away the government's earlier victory, and undermined the country's more rigorous approach to granting pharma patents. Writing for infojustice.org, Brook K. Baker believes this stunning capitulation is a result of unremitting bullying from the US:

Canada had been under intense pressure from the US, which had placed Canada on its Special 301 Watch List for five years threatening that the promise/utility doctrine unreasonably harmed Big Pharma in the US and from the pharmaceutical industry itself which claimed that the doctrine violated global patentability criteria. President Trump's hardball campaign promise to rewrite or leave the North American Free Trade Agreement because of its failure to adequately protect US intellectual property interests may also have played a role. Likewise, President Trump's more recent assertions that US payers are unreasonably subsidizing biomedical research and development because other countries, like Canada, are paying lower prices for innovator medicines than insurers and other payers in the US may also have increased pressure on the Court.

It's really sad to see the Canadian court kowtowing like this, undermining its own independence and moral authority in the process. Weaker patentability standards will lead to Canadian taxpayers paying higher prices for less-effective drugs. Worst of all, the Big Pharma bullies, aided and abetted by a newly-aggressive US government indifferent to other countries' health problems, will be encouraged to push for even more patent protection all around the world. That will lead not just to higher prices, but to more suffering and avoidable deaths, as crucial medicines become unaffordable for poorer patients.


Posted on Techdirt - 12 July 2017 @ 6:36pm

EU's Brexit Strategy Shows How Aggressive Transparency Can Be Used To Gain The Upper Hand In Negotiations

from the poor-Theresa-May-doesn't-stand-a-chance dept

We're big fans of transparency around here, as you may have noticed. In particular, Techdirt has repeatedly called for trade deals to be negotiated more openly, both to allow greater input from the public and to reduce the backlash when people find out what has been agreed behind closed doors without them. But a fascinating post from the Institute for Government, a UK-based think-tank "working to make government more effective", points out that aggressive transparency can also be used to gain the advantage during high-level political negotiations.

In this case, it is the critical "Brexit" negotiations between the EU and the UK that will determine their future relationship if and when the UK leaves the European Union. The stakes are incredibly high: the financial implications alone run into hundreds of billions of euros. Moreover, the UK's place in the world is also in play, as it extracts itself from the biggest geopolitical bloc in an attempt to go it alone. As the post points out, the approaches taken by the EU and the UK could hardly be more contrasting:

The European Council [one of the key EU bodies setting strategy] has published its "transparency regime" for the Brexit negotiations, committing the EU to a far greater degree of transparency than anything that we have seen in the UK. It sets out the ten classes of documents that could be issued by the Council, the [European] Commission and [EU] member states, along with a default level of public disclosure for each.

The UK government, by contrast, has said rather sniffily it would not be offering a "running commentary on Brexit negotiations", and aims to keep its plans totally under wraps. The Institute for Government points out that this is a big mistake:

The EU wants to be able to control the public narrative around Brexit. Two weeks ago, the EU published its draft negotiating mandate. Its proposals on the prerogative of the European Court of Justice, the rights of EU citizens in the UK and the sequencing of the negotiations were in all the UK papers. Having taken a self-imposed vow of secrecy, Prime Minister Theresa May was unable to respond to any of the issues of substance.

In other words, 500 million Europeans are only hearing the EU's side of the story, and the EU's views on what should happen during Brexit. Theresa May's secrecy means that she cannot rebut any of the assertions, nor offer her own vision (cynics say that is because she has neither a vision nor a plan...). The post points out that the EU's approach is not naïve or simplistic, but carefully planned and nuanced -- open for this aspect, but more reticent elsewhere:

A degree of secrecy is necessary to allow negotiators the space to think innovatively, to propose and weigh potential compromises. So, the EU stops short of a commitment to total transparency. It wants talks to be open, but not wide open.

The UK on the other hand wants to run as much of the negotiations behind closed doors as possible. That may just be the preferred operating style of this government or it may be a conscious decision. Whatever the reason, it will play right into the EU's hands.

It's a perceptive analysis that adds to the already compelling reasons why such high-level talks should be open and transparent as a matter of course. It's a pity that the one person who needs to take heed of that fact -- the UK's Prime Minister -- almost certainly won't. Both she and the country she nominally controls are likely to pay a high price as a result.


Posted on Techdirt - 10 July 2017 @ 3:32pm

Former Head Of GCHQ Says Don't Backdoor End-To-End Encryption, Attack The End Points

from the putting-an-end-to-the-end-to-end-debate dept

When he was head of GCHQ, Robert Hannigan said some pretty clueless things about the Internet and encryption. For example, in 2014, he accused tech companies of 'facilitating murder', and joined in the general demonization of strong crypto. Last year, he called for technical experts to work more closely with governments to come up with some unspecified way around encryption. Nobody really knew what he meant when he said:

"I am not in favor of banning encryption. Nor am I asking for mandatory back doors. … Not everything is a back door, still less a door which can be exploited outside a legal framework."

Now, speaking to the BBC, he has clarified those remarks, and revealed how he thinks governments should be dealing with the issue of end-to-end encryption. As he admits:

"You can't uninvent end-to-end encryption, which is the thing that has particularly annoyed people, and rightly, in recent months. You can't just do away it, you can't legislate it away. The best that you can do with end-to-end encryption is work with the companies in a cooperative way, to find ways around it frankly."

He emphasized that backdoors are not the answer:

"I absolutely don't advocate that. Building in backdoors is a threat to everybody, and it's not a good idea to weaken security for everybody in order to tackle a minority."

So what is the solution? This:

"It's cooperation to target the people who are using it. So obviously the way around encryption is to get to the end point -- a smartphone, or a laptop -- that somebody who is abusing encryption is using. That's the way to do it."

As Techdirt reported earlier this year, this is very much the approach advocated by top security experts Bruce Schneier and Orin Kerr. They published a paper describing ways to circumvent even the strongest encryption. It seems that Hannigan has got the message that methods other than crypto backdoors exist, some of which require cooperation from tech companies, which may or may not be forthcoming. It's a pity that he's no longer head of GCHQ -- he left for "personal reasons" at the beginning of this year. But maybe that has given him a new freedom to speak out against stupid approaches. We just need to hope the UK government still listens to him.


Posted on Techdirt - 6 July 2017 @ 3:21am

China's Surveillance Plans Include 600 Million CCTV Cameras Nationwide, And Pervasive Facial Recognition

from the I-saw-what-you-did-there,-and-know-who-you-are dept

Two of the recurrent themes here on Techdirt recently are China's ever-widening surveillance of its citizens, and the rise of increasingly powerful facial recognition systems. Those two areas are brought together in a fascinating article in the Wall Street Journal that explores China's plans to roll out facial recognition systems on a massive scale. That's made a lot easier by the pre-existing centralized image database of citizens, all of whom must have a government-issued photo ID by the age of 16, together with billions more photos found on social networks, to which the Chinese government presumably has ready access.

As for the CCTV side of things, the article quotes industry research figures according to which China already has 176 million surveillance cameras in public and private hands, and is forecast to add another 450 million by 2020. If those figures are to be believed, that would mean around 600 million CCTV cameras by that date -- around one for every three people in China. According to the Wall Street Journal:

Facial-recognition cameras are being used in China for routine activities such as gaining entrance to a workplace, withdrawing cash from an ATM and unlocking a smartphone. A KFC restaurant in Beijing is scanning customer faces, then making menu suggestions based on gender and age estimates. One popular park in the capital has deployed it to fight toilet-paper theft in restrooms, using face-scanning dispensers that limit each person to one 2-foot length of paper every nine minutes.

Other existing uses include a running track, where cameras check that people aren't taking shortcuts, and churches, mosques and temples, where CCTV cameras are deployed in conjunction with facial recognition to keep tabs on exactly who is engaging in these activities, which are regarded with suspicion by the authorities. Future possibilities are also explored by the article. Inevitably, police use of facial recognition systems figures prominently here:

Still to come: a police car with a roof-mounted camera able to scan in all directions at once and identify wanted lawbreakers. Researchers at the University of Electronic Science and Technology of China in Sichuan province have developed a working prototype. "We’ve tested it at up to 120 kilometers per hour," said Yin Guangqiang, head of the university's security-technology lab.

If the prospect of being recognized by a police car hurtling past you at high speed isn't exciting enough, you can look forward to being spotted by a squadron of facial-recognition drones that a Chinese company is working on. The bad news is that this is still "a little ways into the future", but we can be pretty sure that once it is possible, China will be among the first to deploy it as part of its ever-more pervasive high-tech surveillance system, with facial recognition playing a central role.


Posted on Techdirt - 30 June 2017 @ 1:34pm

First And Only Snippet Tax Deal In Spain Is With Big Supporter Of Snippet Tax In Germany

from the maybe-just-a-coincidence dept

Two years ago, Techdirt wrote about an industry study of Spain's "Google tax", which requires a Web site to pay for sending traffic to publishers when it quotes snippets of their texts. Just as everyone who actually understands the Internet predicted, Spain's new law had a disastrous effect on the publishing industry there, especially on smaller companies. Despite that unequivocal evidence, the law is still in place, and it's a further sign of how pointless it is that only now has the Spanish Center for Reprographic Rights (Cedro) finally managed to sign up its first deal with a news aggregator, called Upday (original in Spanish). Cedro is claiming that this "pioneering" move possesses a "strategic importance" because it recognizes the rights of those whose publications appear elsewhere as snippets.

The fact that it has taken so long to find anyone willing to accept that point is bad enough, but it gets worse. Upday operates across Europe, and was launched in Spain at the beginning of March this year. It turns out to be a partnership between Axel Springer and Samsung. As Techdirt readers may recall, the giant publishing group Axel Springer is one of the biggest supporters of the Google tax in Germany. Initially, it tried to take a hard line against the US search company. But Axel Springer was soon forced to back down humiliatingly and offer Google a free license to post snippets from its publications. A two-week experiment without search engine leads caused Web traffic to Axel Springer's sites to plunge.

So, far from being a "pioneering" move that validates the whole snippet tax approach in Spain, Upday's deal with Cedro is simply a key German supporter of this daft idea trying to give the impression that the moribund Spanish Google tax is still twitching somewhat. It's pretty clear why Axel Springer and Cedro would be keen to do that now, after years of nothing happening in Spain. The European Union is currently revising the main EU Copyright Directive. Article 11 of the proposed text is an EU-wide version of the snippet tax, despite the fact that the idea has failed miserably everywhere that it has been tried. The agreement between Upday and Cedro will presumably be used as "evidence" that the Google tax is "working" in Spain. The fact that it is a "circular" deal between German and Spanish supporters of the idea proves the exact contrary.


Posted on Techdirt - 23 June 2017 @ 8:07am

Why Is US Government Giving A Pharma Giant Exclusive Rights To A Zika Vaccine Whose Development Was Paid For By The US Public?

from the please-tell-me-again-why-making-drugs-unaffordable-will-save-lives dept

Here on Techdirt we've written much about the way Western pharma companies fight for their "right" to charge unaffordable prices for medicines in emerging and developing economies. In particular, they routinely take governments and local generic suppliers to court in an attempt to shore up highly-profitable monopolies on life-saving drugs. But to be fair, it's not only poorer people who are dying as a result of Big Pharma's desire to maximize profits: Western drug companies are equally happy to charge even higher prices in richer countries -- notably in the US. That's old news. But there is a pharmaceutical saga unfolding that manages to combine all the worst aspects of this kind of behavior, and to throw in a few new ones.

It concerns something really exciting and important: a vaccine that shows great promise against the devastating Zika virus, which can cause microcephaly, blindness, deafness, and calcification of the brain in children whose mothers were infected during their pregnancy. If effective, such a vaccine could be a tremendous boon not just for developing countries, but for Western ones too, since the Zika virus has already begun to spread in the US and Europe. The vaccine was developed at the Walter Reed Army Institute of Research, with funding from the Department of the Army. Great news, you might think: the US public paid for it, so it's only right that it should have low-cost access to it. Moreover, as an act of compassion -- and to burnish its international image -- the US could allow other countries to produce it cheaply too. But an article in The Nation reports that the US Army has other ideas:

the Army is planning to grant exclusive rights to this potentially groundbreaking medicine -- along with as much as $173 million in funding from the Department of Health and Human Services -- to the French pharmaceutical corporation Sanofi Pasteur. Sanofi manufactures a number of vaccines, but it's also faced repeated allegations of overcharges and fraud. Should the vaccine prove effective, Sanofi would be free to charge whatever it wants for it in the United States. Ultimately, the vaccine could end up being unaffordable for those most vulnerable to Zika, and for cash-strapped states.

Knowledge Ecology International (KEI), led by Jamie Love, made a reasonable suggestion to ensure that those most in need would have access to the drug at a reasonable price. KEI asked that, if Sanofi does get an exclusive deal, it should be obliged to make the vaccine available at an affordable price. The Army said it lacked the ability to enforce price controls, but that it would ask those nice people at Sanofi to commit to affordable pricing on a voluntary basis. According to The Nation, those nice people at Sanofi refused. Speaking of nice people at Sanofi, the article notes the following:

Sanofi's record also includes a number of controversies related to its pricing practices, from a $190 million fine to settle charges that it defrauded Medicare and other government programs, to a $109 million fine to settle charges that it illegally provided product kickbacks to doctors. In 2014, a whistle-blower alleged the company engaged in another kickback scheme and the destruction of legal evidence. KEI maintains a comprehensive list of Sanofi's fraud fines, including the latest: a $19.9 million settlement, reached this April, for overcharging the Department of Veterans’ Affairs.

When there is an entire Web page dedicated to listing Sanofi's problems going back to 2009, you really have to wonder why the US Army is so keen to give the company a monopoly on this promising new treatment. The usual argument for the sky-high prices of drugs is that firms must be rewarded for taking on the financial risk of drug development, otherwise they won't proceed, and the world would be the poorer. Except, of course, in this case that risk was entirely borne by the US public, which paid for the early stage development of the vaccine with their taxes. So Sanofi risked nothing, but now looks likely to reap the benefits by being allowed to price the vaccine out of the reach of the people who most need it. You might think there ought to be a law against this kind of behavior. It turns out that there is:

KEI's Jamie Love pointed out that under the Bayh-Dole Act of 1980, it is already illegal to grant exclusive rights to a federally owned invention unless the license holder agrees to make it available at reasonable pricing. But that provision has rarely, if ever, been enforced.

Now would be a really great time to start enforcing that law.


Posted on Techdirt - 22 June 2017 @ 6:50pm

Facial Recognition Software Brings Personalized Ads To The Supermarket

from the I-saw-what-you-bought-there dept

Facial recognition software is getting to the point where there are some very interesting things that can be done with it in everyday life. That includes really bad ideas like enabling the police to run record checks on everyone who passes in front of their body-worn cameras. But it also means that businesses can start applying the technology in novel ways. Here's what is happening on a trial basis in some German supermarkets and post offices, as reported by Deutsche Welle:

There's a camera and a screen set up by the check-out. A visual sensor scans the faces of waiting customers who have looked directly at the camera and detects whether they're male or female and how old they are.

Marketing company Echion is running the cameras and screens. The brands that advertise with them have clearly delineated target groups. If the visual sensor detects that enough people who fall into a company's target demographic are looking at the screen, an ad by this company will start playing.
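Stripped of the computer-vision details, the trigger logic described above is simple: estimate gender and age for the faces currently looking at the screen, and start an advertiser's spot once enough of them fall into its target group. Here is a minimal sketch of that logic; the campaigns, demographic rules and threshold are all invented for illustration, since the article does not spell out Echion's actual criteria.

```python
# Sketch of the triggering logic described above: play an advertiser's spot
# once enough of the faces looking at the screen fall into its target
# demographic. Campaigns, rules and threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Viewer:
    gender: str   # "m" or "f", as estimated by the camera
    age: int      # estimated age

CAMPAIGNS = {
    "razor_ad":   lambda v: v.gender == "m" and 18 <= v.age <= 40,
    "perfume_ad": lambda v: v.gender == "f" and 25 <= v.age <= 55,
}
MIN_MATCHES = 3   # assumed threshold for "enough people" in the target group

def pick_ad(viewers):
    for ad, matches_target in CAMPAIGNS.items():
        if sum(matches_target(v) for v in viewers) >= MIN_MATCHES:
            return ad
    return None   # fall back to a default loop

queue = [Viewer("m", 25), Viewer("m", 33), Viewer("f", 60), Viewer("m", 19)]
print(pick_ad(queue))   # razor_ad
```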

Being shown ads that are likely to be more relevant to you is probably no bad thing. But once cameras are in place, it would be natural for shops to start using them for other more complex tasks, like spotting known shoplifters:

faces of individuals caught on camera are converted into a biometric template and cross-referenced with a database for a possible match with past shoplifters or known criminals. Some stores in the US give shoplifting suspects the option of allowing themselves to be photographed, rather than arrested. All this had been made possible by the arrival of networked, high-resolution security cameras and rapidly advancing analytical capabilities.

That's from a story in the Guardian last year, so it's likely that the technology has moved on considerably since then. It's easy to think of more troubling extensions to the idea of scanning shoppers: for example, linking up to other databases of troublemakers and ne'er-do-wells, or to selfies derived from social networks.

As well as obvious privacy issues, explored in the Deutsche Welle report, a more general concern is the normalization this latest application of facial scanning might produce. Once cameras coupled with facial recognition software are routinely installed in everyday settings like supermarkets -- with appropriate warnings -- perhaps we will begin to accept them as the norm, and barely notice their silent spread to other locations and situations.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

26 Comments | Leave a Comment..

Posted on Techdirt - 21 June 2017 @ 6:31pm

Cheese: The Final Frontier For The Completion Of The Canada-EU Trade Deal CETA

from the blessed-are-the-cheesemakers dept

Remember CETA, the "Comprehensive Economic and Trade Agreement" between the EU and Canada? After years of on-off moments, including one last burst of uncertainty in March of this year, it finally seemed that everything had been settled, and that the deal would soon come into force. But it turns out that there is another, hitherto-unsuspected problem -- cheese:

Canada's CBC reported on its website that plans to have CETA (the Comprehensive Economic Trade Agreement) in place on 1 July were "threatened by a new cheese dispute". It said Europeans were upset at how Canada would allocate import quotas for new EU products, including 18,000 additional tonnes of cheese that Canada has agreed to import tariff-free.

Euractiv has all the details of the problem, which turns out to be bickering over how EU cheese producers will share that new tariff-free allowance. That's just last-minute haggling, and presumably will be solved with some appropriate sticks and carrots on both sides of the Atlantic. But an earlier report on the same site indicates there are deeper issues with CETA that remain unresolved:

In France, 110 MPs have demanded the opinion of the Constitutional Council on the legality of CETA. A ruling is due this summer. And Belgium, whose calls for additional guarantees had led to a confrontation with Brussels, has promised to take its concerns to the Court of Justice of the European Union in the coming weeks.

Most recently, it is France's new President Emmanuel Macron who has put the issue back on the negotiating table, promising in the last days of his presidential campaign to set up an expert committee to examine the CETA agreement before ratification.

The last of these is particularly problematic. Macron has adopted a surprisingly muscular style in his first few days as French President, most famously in his handshake with Donald Trump, and won't want to be seen backing down from his promise to seek expert scrutiny of CETA before ratification. Looks like there's life in that cheesy CETA saga yet.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

13 Comments | Leave a Comment..

Posted on Techdirt - 16 June 2017 @ 8:35am

Multiple German Courts Rule Photos Of Public Domain Works Are Not In The Public Domain

from the and-no,-you-can't-take-your-own,-either dept

Back in November 2015, we wrote about a bad situation in Germany, where a museum in Mannheim was suing the Wikimedia Foundation over photos of public domain works of art, which were uploaded to Wikimedia Commons. Sadly, since then, things have not gone well for the public domain. No fewer than three German courts -- in Berlin, in Stuttgart, and now again in a higher Stuttgart court -- have ruled against the use of the photos. The latest judgment is available in full (pdf in German), and it contains some pretty worrying statements.

For example, the upper Stuttgart court confirms that the museum's photographs of the public domain works are not in the public domain, because they were produced by a photographer, and not some mechanical process like a photocopier. Under German law, if there is any kind of creativity involved, however minimal, then the photograph produced enjoys protection as a "Lichtbildwerk" -- literally, a "light image work" -- and is not in the public domain.

The court also ruled that even photos of public domain works taken by a Wikipedia supporter specifically for upload to Wikipedia could not be used freely there. Taking a photo in this way "injured" the museum's ownership of the objects in question, the judges said, even though the works themselves were in the public domain, as a report on the iRights site explained (original in German). In addition, the court said that the museum was within its rights to make it a condition of entry that no photos were taken.

These are clearly dreadful rulings for Wikipedians in Germany. The good news is that the Stuttgart court has allowed an appeal to the country's top court, the Bundesgerichtshof. If even those judges fail to see how crazy this situation is, and how harmful to the public domain, there is always the hope that the Court of Justice of the European Union, the highest court in the EU, might consider the case, but there's no guarantee of that.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

49 Comments | Leave a Comment..

Posted on Techdirt - 14 June 2017 @ 6:33pm

Islamic State Using Small Drones Routinely In Iraq For Scouting And Dropping Explosives

from the drone-swarms-coming-up-next dept

Here on Techdirt we like to remind people that drones are not just death-dealing machines in the sky, but can also be a force for good. However, like any other technology, drones can be -- and are -- used by the worst as well as the best. Inevitably, that includes terrorist groups like Islamic State (ISIS), as an interesting article from the Los Angeles Times reveals:

In the seven months of the Iraqi government's drive to recapture Mosul from the jihadists, small drones have become a signature tactic of the [ISIS] group: Their appearance on the horizon, loaded with a camera, signals that punishing mortar barrages will soon be on the way. Others guide car bombs to their target, or drop small explosives miles behind the front line.

Most of these drones come from the Chinese company DJI, generally regarded as the leading drone manufacturer in terms of market share. Clearly, the routine use of its products by ISIS is not the best publicity in the world:

Reports that Islamic State had used DJI products pushed the company in February to create a geofence, a software restriction that creates a no-fly zone, over large swaths of Iraq and Syria, specifically over Mosul.

But there are problems with geofencing. First, there is the issue of when a demand to geofence certain regions is legitimate, since answering that question requires a political judgment about who is really in power. Second, it's not that hard to get around geofencing, either by applying quick fixes or by simply switching to drones that run on open source code in which geofencing can be turned off.
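
For the curious, the geofence itself is conceptually trivial: a pre-flight check of the drone's position against a list of restricted zones. The toy sketch below uses an invented zone and radius, loosely centered on Mosul; a real implementation lives in the manufacturer's firmware:

```python
# Toy sketch of a pre-flight geofence check; the restricted zone and
# radius are invented for illustration, not DJI's actual data.
from math import radians, sin, cos, asin, sqrt

RESTRICTED_ZONES = [
    # (latitude, longitude, radius in km) -- hypothetical zone around Mosul
    (36.34, 43.13, 50.0),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def takeoff_allowed(lat, lon):
    """Refuse to arm the motors inside any restricted zone."""
    return all(haversine_km(lat, lon, zone_lat, zone_lon) > radius
               for zone_lat, zone_lon, radius in RESTRICTED_ZONES)
```

And that is precisely the weakness: the check runs on the drone itself, so whoever controls the firmware controls the fence.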

Given that geofencing may not work, countermeasures are generally necessary. Those range from rather crude solutions, like shooting drones out of the sky with firearms, to more sophisticated ones like the DroneGun, from the Australia-based DroneShield Ltd., a company that specializes in counter-drone technology:

[the DroneGun] jams the GPS signal and radio linkages between the drone and its operator. The device, which sends out a jamming cone over a mile in length, forces the drone to either land immediately or to return to its base so that it can be tracked.

DroneShield's CEO, Oleg Vornik, already has some thoughts on what terrorists will do next:

"we believe organizations like ISIS will begin deploying swarms of drones. If you saw the Super Bowl halftime, you would have seen dozens of drones with little lights on them moving in a choreographed fashion," Vornik said. "That technology can be used to load grenades onto a large number of drones."

In other words, as drones continue to develop new and potentially exciting capabilities, so terrorists will eagerly embrace them -- just like everyone else.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

36 Comments | Leave a Comment..

More posts from Glyn Moody >>