Glyn Moody’s Techdirt Profile


About Glyn Moody, Techdirt Insider

Posted on Techdirt - 30 November 2015 @ 11:13pm

Open Insulin Project Could Help Save Thousands Of Lives And Billions Of Dollars

from the public-health,-public-domain dept

Techdirt has written a few times about the pharmaceutical industry's use of "evergreening", whereby small, sometimes trivial, changes are made to drugs in order to extend their effective patent life. It turns out the technique is applied to one of the most widely used drugs of all, insulin:

There are currently about 387 million people worldwide living with diabetes. Meanwhile, as discussed by Jeremy A. Greene and Kevin R. Riggs in their March 2015 article in the New England Journal of Medicine, there is no generic insulin available on the market despite great demand in poorer communities and regions of the world. As a result, many go without insulin and suffer complications including blindness, cardiovascular disease, amputations, nerve and kidney damage, and even death. Pharmaceutical companies patent small modifications to previous insulins while withdrawing those previous versions from the market to keep prices up.
The obvious solution is to produce a generic version of insulin that can be sold cheaply enough that nobody dies or suffers complications simply because they cannot afford Big Pharma's hefty price tags. That's just what the Open Insulin project, with its crowdfunding page, aims to do:
A team of biohackers is developing the first open source protocol to produce insulin simply and economically. Our work may serve as a basis for generic production of this life-saving drug and provide a firmer foundation for continued research into improved versions of insulin.
As well as making insulin more readily available to those in the poorer communities, the Open Insulin project could save Western countries huge sums too. As an article in Popular Science explains:
Since there are no generic versions available in the United States, insulin is very expensive -- that cost was likely a large proportion of the $176 billion in medical expenditures incurred by diabetes patients in 2012 alone.
Any project that could help save thousands of lives and billions of dollars would be noteworthy. What makes Open Insulin even more remarkable is that it is operating on a shoestring -- the initial crowdfunding target was just $6,000, already surpassed -- and that it intends to put all its results in the open:
All protocols we develop and discoveries generated by our research will be freely available in the public domain. We will also be proactively investigating strategies to protect the open status of our work.
However, it's important to keep those exciting prospects in perspective. The Popular Science article includes a comment from the Kevin Riggs mentioned in the Open Insulin quotation above. He doesn't believe that Open Insulin on its own will be enough to bring a generic insulin drug to the market:
"I don't think the major hurdle is that the companies don't know how to make insulin, because that part is reasonably straightforward," he says. "The real hurdles are getting the drug approved by the FDA (and since insulin is a biologic drug, it requires a lot more original data than an application for a small-molecule generic would), and then upfront manufacturing costs (because making a biologic drug is different, so it requires different equipment)." He suspects that it will take "an altruistic entity with a lot of start-up money" to make generic insulin commercially available.
That may be so, but at least the Open Insulin project is doing something in an attempt to change the status quo that sees huge numbers of people suffering unnecessarily. In any case, Open Insulin is a wonderful demonstration of how far biohacking has advanced, allowing suitably skilled people to make potentially important contributions to global health. Let's hope it does eventually lead to a generic insulin that can be made available around the world very cheaply.

Follow me @glynmoody on Twitter and +glynmoody on Google+


Posted on Techdirt - 30 November 2015 @ 9:31am

German Museum Sues Wikimedia Foundation Over Photos Of Public Domain Works Of Art

from the once-public-domain,-always-public-domain dept

The mission of museums and art galleries is generally to spread knowledge and appreciation of beautiful and interesting objects. So it's rather sad when they start taking legal action against others that want to help them by disseminating images of public domain works of art to a wider audience. This obsession with claiming "ownership" of something as immaterial as the copyright in a photograph of a work of art made centuries ago led the UK National Portrait Gallery (NPG) to threaten Derrick Coetzee, a software developer, when he downloaded images from the NPG and added them to Wikimedia Commons, the media repository for Wikipedia, of which he was an administrator. That was back in 2009, and yet incredibly the same thing is still happening today, as this Wikimedia blog post explains:

On October 28, the Reiss Engelhorn Museum in Mannheim, Germany, served a lawsuit against the Wikimedia Foundation and later against Wikimedia Deutschland, the local German chapter of the global Wikimedia movement. The suit concerns copyright claims related to 17 images of the museum's public domain works of art, which have been uploaded to Wikimedia Commons. The Wikimedia Foundation and Wikimedia Deutschland are reviewing the suit, and will coordinate a reply by the current deadline in December.
The problem, as usual, is that the museum is claiming that the photographs are new creations, and therefore covered by copyright:
The Reiss Engelhorn Museum asserts that copyright applies to these particular images because the museum hired the photographer who took some of them and it took him time, skill, and effort to take the photos. The Reiss Engelhorn Museum further asserts that because of their copyrights, the images of the artwork cannot be shared with the world through Wikimedia Commons.
As Wikimedia points out:
Even if German copyright law is found to provide some rights over these images, we believe that using those rights to prevent sharing of public domain works runs counter to the mission of the Reiss Engelhorn Museum and the City of Mannheim and impoverishes the cultural heritage of people worldwide.
The disagreement over the use of the NPG's images back in 2009 gradually fizzled out. The Museums Journal reported in 2012 that:
The National Portrait Gallery (NPG) has made changes to its image licensing to allow free downloads for non-commercial and academic uses.

The change means that more than 53,000 low-resolution images are now available free of charge to non-commercial users through a standard Creative Commons licence.

And more than 87,000 high-resolution images are available for free for academic use through the gallery’s own licence. Users will be invited to give a donation in return for the service.
Meanwhile, high-resolution NPG images are still available on Wikimedia Commons, although Wikipedia notes that these are prudently hosted in the US, where their legal position as works in the public domain seems clearer. It's really time for other countries to catch up with the US and recognize that photos of public domain works of art are still in the public domain, and that sharing them with the world is something to be praised as helpful, not pursued as harmful.


Posted on Techdirt - 25 November 2015 @ 3:00am

Chinese Company Learns From The West: Builds Up Big Patent Portfolio, Uses It To Sue Apple In China

from the just-as-Techdirt-predicted dept

For many years now, Western governments have been complaining about China's supposed lack of respect for intellectual monopolies, and constantly pushing the country's politicians to tighten the legal framework protecting them. To anyone not blinded by an unquestioning belief in the virtues of copyright and patent maximalism, it was pretty clear where this strategy would end. Indeed, over five years ago, Mike warned where this was leading: towards China repeatedly punishing foreign companies to protect domestic Chinese firms -- in other words, leveraging patents as a tool for protectionism. A post on the IAM blog about legal action taken by the Chinese company BYD, one of Apple's suppliers, shows that Techdirt's predictions are well on the way to being realized:

Apple says BYD filed a pair of patent infringement suits in the Shenzhen Intermediate People’s Court alleging that the antennae in the iPhone 6 plus and various other Apple products infringe BYD’s intellectual property.
Five other defendants working with Apple were also sued -- four Chinese suppliers, and one Chinese distributor.
BYD asked the [Chinese] bench to require "all six defendants to both cease allegedly infringing conduct and destroy allegedly infringing products".
In effect, this is a patent attack on Apple's supply chain in China, and one that would be devastating for the US company if successful. The IAM post points out:
Of the seven final assembly facilities for iPhones, only one is outside of China (a Foxconn facility in São Paulo, Brazil). That means any company with valid Chinese patents that it thinks reads on Apple products potentially has a lot of leverage.
Two crucial elements make Apple especially vulnerable here: first, its assembly facilities are concentrated in China; and second, a Chinese company holds patents it believes it can assert against Apple in that country. A March 2014 press release from BYD boasted that it had already amassed more than 12,000 domestic patents and over 8,000 international ones; the figures today are doubtless much higher. Amongst those domestic patents there are probably many that could come in handy for future legal action against other Western companies that assemble their products in China.

Those in the West who pushed China to show more "respect" for patents must be feeling so proud of the progress that Chinese companies have made in this regard, and so pleased now to see Apple being sued in local courts using China's patent laws.


Posted on Techdirt - 18 November 2015 @ 11:05pm

Frequent Errors In Scientific Software May Undermine Many Published Results

from the it's-a-bug-not-a-feature dept

It's a commonplace that software permeates modern society. But it's less appreciated that it increasingly permeates many fields of science too. The move from traditional analog instruments to digital ones that run software brings with it a new kind of issue. Although analog instruments can be -- and usually are -- inaccurate to some degree, they don't have bugs in the way digital ones do. Bugs are much more complex and variable in their effects, and can be much harder to spot. A study in the journal F1000Research by David A. W. Soergel, published as open access using open peer review, tries to estimate just how much of an issue that might be. He points out that software bugs are really quite common, especially in hand-crafted scientific software:

It has been estimated that the industry average rate of programming errors is "about 15-50 errors per 1000 lines of delivered code". That estimate describes the work of professional software engineers -- not of the graduate students who write most scientific data analysis programs, usually without the benefit of training in software engineering and testing. The recent increase in attention to such training is a welcome and essential development. Nonetheless, even the most careful software engineering practices in industry rarely achieve an error rate better than 1 per 1000 lines. Since software programs commonly have many thousands of lines of code (Table 1), it follows that many defects remain in delivered code -- even after all testing and debugging is complete.
To take account of the fact that even when there are bugs in code, they may not affect the result meaningfully, and that there's also the chance that a scientist might spot them before they get published, Soergel uses the following formula to estimate the scale of the problem:
Number of errors per program execution =
total lines of code (LOC)
* proportion executed
* probability of error per line
* probability that the error meaningfully affects the result
* probability that an erroneous result appears plausible to the scientist.
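Soergel's multiplicative estimate is simple enough to turn into a quick calculation. Here is a minimal Python sketch; every input value below is an illustrative guess, not a figure taken from his paper:

```python
def errors_per_run(loc, frac_executed, p_error_per_line,
                   p_meaningful, p_plausible):
    """Soergel-style estimate: expected errors affecting one program run."""
    return loc * frac_executed * p_error_per_line * p_meaningful * p_plausible

# All inputs are illustrative guesses, not Soergel's published figures.
expected = errors_per_run(
    loc=100_000,             # total lines of code
    frac_executed=0.4,       # proportion of those lines actually executed
    p_error_per_line=0.001,  # ~1 error per 1000 lines (industry best case)
    p_meaningful=0.1,        # chance an error meaningfully changes the result
    p_plausible=0.5,         # chance the wrong result still looks plausible
)
print(f"expected errors affecting this run: {expected:.1f}")  # prints 2.0
```

With these particular guesses, roughly two errors are expected to change the output of a single run; with a much smaller, rigorously tested codebase the same formula yields a small fraction of an error per run. The estimate is extremely sensitive to the guessed factors, which is exactly the point Soergel goes on to make.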
He then considers some different cases. For what he calls a "typical medium-scale bioinformatics analysis":
we expect that two errors changed the output of this program run, so the probability of a wrong output is effectively 100%. All bets are off regarding scientific conclusions drawn from such an analysis.
Things are better for what he calls a "small focused analysis, rigorously executed": here the probability of a wrong output is 5%. Soergel freely admits:
The factors going into the above estimates are rank speculation, and the conclusion varies widely depending on the guessed values.
But he rightly goes on to point out:
Nonetheless it is sobering that some plausible values can produce high total error rates, and that even conservative values suggest that an appreciable proportion of results may be erroneous due to software defects -- above and beyond those that are erroneous for more widely appreciated reasons.
That's an important point, and is likely to become even more relevant as increasingly complex code starts to turn up in scientific apparatus, and researchers routinely write even more programs. At the very least, Soergel's results suggest that more research needs to be done to explore the issue of erroneous results caused by bugs in scientific software -- although it might be a good idea not to use computers for this particular work....


Posted on Techdirt - 18 November 2015 @ 3:37am

Elsevier Says Downloading And Content-Mining Licensed Copies Of Research Papers 'Could Be Considered' Stealing

from the gotta-protect-that-39%-profit-margin dept

Elsevier has pretty much established itself as the most hated company in the world of academic publishing, a fact demonstrated most recently when all the editors and editorial board resigned from one of its top journals to set up their own, open access rival. A blog post by the statistician Chris H.J. Hartgerink shows that Elsevier is still an innovator when it comes to making life hard for academics. Hartgerink's work at Tilburg University in the Netherlands concerns detecting potentially problematic research that might involve data fabrication -- obviously an important issue for the academic world. A key technique he is employing is content mining -- essentially bringing together large bodies of text and data in order to extract interesting facts from them:

I am trying to extract test results, figures, tables, and other information reported in papers throughout the majority of the psychology literature. As such, I need the research papers published in psychology that I can mine for these data. To this end, I started 'bulk' downloading research papers from, for instance, [Elsevier's] Sciencedirect. I was doing this for scholarly purposes and took into account potential server load by limiting the amount of papers I downloaded per minute to 9. I had no intention to redistribute the downloaded materials, had legal access to them because my university pays a subscription, and I only wanted to extract facts from these papers.
He spread out the downloads over ten days so as not to hammer Elsevier's servers -- which in any case are doubtless pretty beefy given the 39% profit margin the company enjoys:
I downloaded approximately 30GB of data from Sciencedirect in approximately 10 days. This boils down to a server load of 35KB/s, 0.0021GB/min, 0.125GB/h, 3GB/day.
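Those quoted rates follow directly from 30 GB spread over 10 days. A quick sanity check in Python (assuming the quote's decimal units, i.e. 1 GB = 10^6 KB):

```python
total_gb = 30.0
days = 10

gb_per_day = total_gb / days        # 3.0 GB/day
gb_per_hour = gb_per_day / 24       # 0.125 GB/hour
gb_per_min = gb_per_hour / 60       # ~0.0021 GB/minute
kb_per_sec = gb_per_min * 1e6 / 60  # ~35 KB/s in decimal kilobytes

print(f"{gb_per_day} GB/day, {gb_per_hour} GB/h, "
      f"{gb_per_min:.4f} GB/min, {kb_per_sec:.0f} KB/s")
# prints: 3.0 GB/day, 0.125 GB/h, 0.0021 GB/min, 35 KB/s
```

The figures match the ones Hartgerink reports, underlining just how light the load on Elsevier's servers actually was.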
Elsevier's response to this super-considerate researcher is a classic:
Approximately two weeks after I started downloading psychology research papers, Elsevier notified my university that this was a violation of the access contract, that this could be considered stealing of content, and that they wanted it to stop. My librarian explicitly instructed me to stop downloading (which I did immediately), otherwise Elsevier would cut all access to Sciencedirect for my university.
There are clear parallels with the situation that Aaron Swartz found himself in, but with a key difference. Elsevier is not only stopping Hartgerink from carrying out his research, but threatening to cut off all access to the company's journals and books for everyone working at Tilburg University if he tries to continue. Alicia Wise, Elsevier's Director of Access & Policy, added the following comment on Hartgerink's blog post:
We are happy for you to text mine content that we publish via the ScienceDirect API, but not via screen scraping.
When she was asked why it was necessary to use the API, rather than simply downloading articles, she replied:
The reason that we require miners to use the API is so that we can meet their needs AND ALSO the needs of our human users who can continue to read, search and download articles and not have their service interrupted in any way.
But that doesn't make any sense when Hartgerink had taken such pains to avoid any such adverse effects. Moreover, another commenter noted that Elsevier’s API often fails to work, rendering it useless for content mining. Even when it does work:
In many cases the API returns only metadata in the XML, compared to the fulltext PDF I can access on the website. Simply downloading the paper via the normal web service for readers is easy -- much easier than using the API.
What is really at stake here is control. Elsevier wants to be acknowledged as the undisputed gatekeeper for all possible uses of the research it publishes -- most of which was paid for by the public through taxes. And as far as the company is concerned, daring to use that knowledge in new ways without additional permission is simply "stealing."


Posted on Techdirt - 17 November 2015 @ 3:02am

TPP Says Food Health Policies Must Be 'Science-Based,' Except When That Would Harm Profits

from the heads-I-win,-tails-you-lose dept

The good news is that we finally have the complete text of the Trans-Pacific Partnership agreement. The bad news is that it runs to 6,194 pages, not including dozens of "related instruments" and "side chapters." There is no way that anybody could read through and fully understand the implications of all of that -- certainly not before it comes to a vote next year. But luckily, that's not necessary. Gone are the days when a single commentator would be expected to offer profound insights into a treaty's entire text. Instead, in our Internet-based world, it's very easy to do things in a highly-distributed fashion, parcelling out pieces of the task to many topic experts who carry out deep analysis in parallel.

One such source of expertise is the Institute for Agriculture and Trade Policy, which has recently produced an analysis of TPP's "Sanitary and Phytosanitary" (SPS) chapter dealing with key issues such as food safety, and animal and plant health in agricultural trade. It's well-worth reading for its detailed comments on this section, but there are two main points that it makes. First, it notes a trick that has been used in the SPS chapter:

Growth hormones, food and agricultural nanotechnology, endocrine disrupting chemicals, antimicrobial resistance to anti-biotics, plant synthetic biology and so many others. Nothing about them -- among other controversial food safety, and animal, plant and environmental health issues or technologies -- appears in the SPS chapter. Instead, the chapter describes administrative procedures and consultative arrangements for resolving SPS "issues" insofar as they might impede agricultural trade.
Here's what happened to those key areas:
The [TPP] negotiators decided to locate provisions on "Trade in Products of Modern Biotechnology" for agricultural trade (Article 2.29) in Chapter 2, "National Treatment and Market Access for Goods," apparently believing that "modern biotechnology" does not pose SPS issues about which there might be controversy.
That is, the TPP text tries to sidestep all the heated controversies over the possible safety issues of modern biotechnology by omitting them completely from the chapter dealing with this aspect. The other discovery made by the Institute for Agriculture and Trade Policy concerns a requirement to use "science-based" approaches when TPP countries establish their food safety rules:
It is crucial to understand how scientific evidence is subordinated and occulted as Confidential Business Information to realizing trade objectives through the regulatory process. Under the TPP rules and trade policy more generally, what trade and regulatory officials deem to be "appropriate" levels of protection are judged on whether SPS measures to provide that protection are potential or "disguised" trade barriers. Such judgments require a use and understanding of "science" that is filtered through confidentiality requirements, which are antithetical to the peer review that scientific consensus methodologically requires. TPP SPS Committee consultations about the science underlying SPS measures "shall be kept confidential unless the consulting Parties agree otherwise" (Article 7.17.6).
TPP does not require traditional rigorous science where results are published openly, subject to peer review, but permits the use of "confidential business information," where results are withheld and there is no peer review. How that will work in practice is shown by a recent decision by the US Environmental Protection Agency (EPA) that there was "no convincing evidence" that glyphosate, the most widely used herbicide in the US and the world, is an endocrine disruptor. As The Intercept discovered:
The EPA's exoneration -- which means that the agency will not require additional tests of the chemical's effects on the hormonal system -- is undercut by the fact that the decision was based almost entirely on pesticide industry studies. Only five independently funded studies were considered in the review of whether glyphosate interferes with the endocrine system. Twenty-seven out of 32 studies that looked at glyphosate's effect on hormones and were cited in the June review -- most of which are not publicly available and were obtained by The Intercept through a Freedom of Information Act request -- were either conducted or funded by industry. Most of the studies were sponsored by Monsanto or an industry group called the Joint Glyphosate Task Force. One study was by Syngenta, which sells its own glyphosate-containing herbicide, Touchdown.
TPP guarantees that companies can provide SPS Committee consultations with their own confidential research in a similar way, and that no outside scrutiny will be permitted unless those companies agree. As well as allowing secret "scientific" evidence to be used, the SPS chapter also gives TPP signatories an option to ignore scientific evidence completely on the grounds that it lacks "economic feasibility":
The "economic feasibility" of the science-based SPS measures to provide the appropriate level of protection is formulated in this provision: "Each Party shall . . . select a risk management option that is not more trade restrictive than necessary to achieve the sanitary or phytosanitary objective, taking into account technical and economic feasibility" (Article 7.6c). "Economic feasibility" provides TPP members with a crucial loophole against providing SPS measures that are science-based.
In other words, TPP requires decisions on food safety and animal welfare to be "science"-based, where "science" includes unpublished studies carried out by companies, except when the science shows unequivocally that more stringent measures should be taken to protect health. In that case, countries are allowed to put profits before people, and to ignore the facts completely.

This latest analysis from the Institute for Agriculture and Trade Policy is significant not just because it will help to inform the debate around TPP, and whether it should be ratified. It is also important because it reveals what will almost certainly be the approach taken in TAFTA/TTIP too. Since that, unlike TPP, is nowhere near finished, it is still possible to put pressure on the negotiators not to sell out on public and animal health as we now know they have done in TPP.


Posted on Techdirt - 16 November 2015 @ 8:34am

Scientist Bans Use Of His Software By 'Immigrant-Friendly' Countries, So Journal Retracts Paper About His Software

from the open-is,-as-open-does dept

Retractions of scientific papers are by their nature quite dramatic -- the decision to withdraw recognition in this very public way is never taken lightly, especially given all the work that goes into writing a paper. But the specialist site Retraction Watch, which we wrote about back in August, has a new retraction story that is rather out of the ordinary. It concerns a much-cited 2004 paper about a piece of scientific software called Treefinder. The program is used to create phylogenetic trees, which show the probable evolutionary relationships between species based on comparing their respective DNA sequences. Retraction Watch explains what happened:

Recently, German scientist Gangolf Jobb declared that starting on October 1st scientists working in countries that are, in his opinion, too welcoming to immigrants -- including Great Britain, France and Germany -- could no longer use his Treefinder software, which creates trees showing potential evolutionary relationships between species. He'd already banned its use by U.S. scientists in February, citing the country’s "imperialism." Last week, BMC Evolutionary Biology pulled the paper describing the software, noting it now "breaches the journal’s editorial policy on software availability."
Here's the official retraction note published by the journal in question:
The editors of BMC Evolutionary Biology retract this article due to the decision by the corresponding author, Gangolf Jobb, to change the license to the software described in the article. The software is no longer available to all scientists wishing to use it in certain territories. This breaches the journal’s editorial policy on software availability which has been in effect since the time of publication.
The editorial policy on software availability is as follows:
If published, software applications/tools must be freely available to any researcher wishing to use them for non-commercial purposes, without restrictions such as the need for a material transfer agreement.
The policy then goes on to make an important suggestion:
BMC Evolutionary Biology recommends, but does not require, that the source code of the software should be made available under a suitable open-source license that will entitle other researchers to further develop and extend the software if they wish to do so.
Releasing the code as open source would also have avoided the current awkward situation, in which the Treefinder program is no longer available to everyone and BMC Evolutionary Biology has retracted the original paper. Once code is published under a free software license, that license can't be rescinded for the released version, although the copyright holder could publish the same or modified source later under a non-free license. It's regrettable that Treefinder was not released under a free software license, but it's nonetheless good to see an open access journal sticking to its requirement for free availability of software, and retracting the offending paper.


Posted on Techdirt - 11 November 2015 @ 8:32am

Starting From Next Year, China Wants Music Services To Vet Every Song Before It Goes Online

from the silence-is-golden dept

Techdirt has reported on so many different aspects of China's online clampdown, that it's natural to wonder if there's anything left to censor. Surprisingly, the answer is "yes", according to this Tech in Asia post:

all Chinese companies operating any sort of internet music or streaming platform will be required to set up internal censorship departments. These departments will have to approve all songs before they're posted online, in strict accordance with the Ministry [of Culture]'s guidelines for permissible song content. Censors will also have to create and maintain a "warning" list and a blacklist for content creators and uploaders whose songs repeatedly fail to pass inspection.
As the article explains, online music companies are expected to bear all the costs of setting up censorship departments and training staff to vet all the songs, and will be punished if they fail to implement the new policy properly. At least some will have had practice, since a similar approach has been applied to online posts for some time. Tech in Asia has the following thoughts on how effective the censorship is likely to be:
The Ministry’s decade-long console ban was very poorly enforced, as have been most of its bans on video games. But online games have been easier for the Ministry to restrict because they typically require China-based servers, and the [Ministry of Culture] might similarly find that it has an easier time genuinely restricting online music than it has policing the offline music.
That seems likely. The real question raised by this latest move is: anything left to censor, or have you finished now, China?


Posted on Techdirt - 10 November 2015 @ 3:17am

UK Health Minister Filibusters Bill To Use Off-Patent Drugs To Provide Effective New Treatments At Low Cost

from the and-why-on-earth-would-he-do-that? dept

Here on Techdirt we often write about pharmaceutical companies doing what they can to string out their drug monopolies as long as possible, sometimes resorting to quite extraordinary approaches. But whatever tricks they use to extend their monopolies, one day those end, and at that point generic manufacturers can make the drug in question without needing a license. You'd think governments would be delighted by the downward pressure this exerts on prices, since it means that they can provide life-changing and life-saving treatments much more cheaply. But last week, we had the bizarre spectacle of a UK government health minister filibustering a Bill that would have encouraged doctors to prescribe more off-patent medicines. As the Independent newspaper reports:

A Conservative health minister has deliberately blocked a new law to provide cheap and effective drugs for the [UK's National Health Service] by championing medicines whose patents have expired.

Alistair Burt spoke for nearly half an hour to "filibuster" the proposed Off-Patent Drugs Bill, a plan that had cross-party support from backbenchers.
The Off-Patent Drugs Bill attempted to address a particular problem that off-patent drugs suffer from. Although anyone can manufacture and sell them for their original purpose, more innovative uses require a new license for that application. Since the drugs are off-patent, the profit margins are lower, and generic manufacturers are reluctant to pay for the re-licensing process. An article in the online law journal "Keep calm and talk law", reporting on an earlier unsuccessful attempt to have off-patent drugs used more widely, gave the following examples of how cheap drugs could have a major impact:
Simvastatin was originally licensed for treating individuals with high cholesterol and although recently shown to slow brain atrophy in later stages of multiple sclerosis by over 40 per cent, it has not been relicensed for that purpose. Further, lixisenatide and liraglutide, two common diabetes drugs, have been found to have a use in the prevention of Alzheimer's but are now also off patent.
The Bill in question would have required the UK government to carry out the re-licensing, thus allowing generic manufacturers to market the cheap off-patent drugs for new treatments. But the UK government minister rejected this approach, saying "there is another pathway", without spelling out what that might be. Perhaps he had in mind the possibility of prescribing off-patent drugs "off label", which in theory is an option open to UK doctors. But doing so means they would assume the legal responsibility for both negligent and non-negligent harm to the patient -- something that many are understandably reluctant to do. However, the "Keep calm and talk law" article offers a possible solution to that problem:
the introduction of a charitable organisation which would insure doctors against any negligence claims resulting from the prescription of specified off-patent drugs for unlicensed purposes. This could reassure doctors enough that they would more readily prescribe some useful off-patent drugs such as the examples given above. Although much research would still need to be done into whether a charity such as this could work in reality, in principle this is a solution which seems sensible.
It's possible that the UK government minister had precisely this approach in mind when he unceremoniously blocked a Bill that was trying to save both lives and money. But it's hard not to suspect that his "another pathway" might turn out to involve lots more lucrative pharma patents instead.

Follow me @glynmoody on Twitter and +glynmoody on Google+

17 Comments | Leave a Comment..

Posted on Techdirt - 9 November 2015 @ 4:18am

Pirate Party MEP Julia Reda: EU Preparing 'Frontal Attack On The Hyperlink'

from the propping-up-publishing-dinosaurs dept

Back in January of this year, we wrote about a remarkable report proposing a number of major changes to EU copyright law. Part of an extremely long-drawn-out process that aims to update the current 2001 copyright directive, the document was written by the sole Pirate Party MEP in the European Parliament, Julia Reda. In the short time she's been an MEP -- she was only elected in 2014 -- she's emerged as the European Parliament's leading expert on copyright, which means it's always worth taking her warnings in this area very seriously. Earlier this year, Techdirt noted that Reda was worried about moves to restrict outdoor photography in the EU. Now she's picked up something even more disturbing after studying a draft version of the European Commission's imminent communication on copyright reform, which was leaked to the IPKat blog. According to her interpretation of this document:

the Commission is considering putting the simple act of linking to content under copyright protection. This idea flies in the face of both existing interpretation and spirit of the law as well as common sense. Each weblink would become a legal landmine and would allow press publishers to hold every single actor on the Internet liable.
the Commission bemoans a lack of clarity about which actions on the Internet need permission and which ones do not: in legal terms, they put forward the question of when something is an 'act of communication to the public'.

This is a reference to a ruling of the European Court of Justice in the Svensson case. While the judges established that the simple act of linking to publicly available content is no copyright infringement, because it does not reach a new public, a few questions were left open by this ruling: for example, when exactly content can be seen as accessible by the public, and how links surpassing paywalls are to be treated.

What worries Reda is that the European Commission may try to introduce ancillary copyright -- aka a "Google tax" -- in the EU under the guise of "clearing up" the questions left unanswered by the Svensson case. As she notes, and Techdirt has tracked for some time, every attempt to bring in ancillary copyright in Europe has been an abysmal failure. Bringing in an EU-wide Google tax in a misguided attempt to prop up publishers that still haven't figured out how to work with, rather than against, the Internet, would be disastrous. Maybe Reda is reading more into the leaked document than is warranted, but it's certainly worth being alert to this possibility when the European Commission releases its official version of the document on 9 December, and making it quite clear before then that the idea is a complete non-starter.

Follow me @glynmoody on Twitter and +glynmoody on Google+

33 Comments | Leave a Comment..

Posted on Techdirt - 6 November 2015 @ 10:34am

Germany Wants To Define A Snippet As Seven Words Or Less; Doing So Is Likely To Breach Berne Convention

from the how-copyright-maximalism-defeats-itself dept

Techdirt has been following with a certain amusement the humiliating failure of German publishers to bring in a "snippet tax" that would force Google and other search engines to pay for displaying even short quotations from their publications. The most recent defeat for the copyright industry was the German competition authority announcing that it would not "punish" Google for refusing to take out a license for snippets because, well, Google had a perfect right not to do so. The Disruptive Competition (DisCo) Project Web site has an update on the continuing saga, and it's as crazy as the rest of the story:

The Copyright Arbitration Board of the German Patent and Trade Mark Office (DPMA) recently recommended that snippets, i.e. small text excerpts used by search engines and online aggregators below hyperlinks, can comprise exactly seven words. This suggestion is part of the DPMA's recommendation to privately settle a dispute between online services and press publishers over Germany's ancillary copyright, also termed the 'snippet levy'. Should a court confirm this recommendation, snippets which go beyond this limit of seven words would in theory have to be licensed from news publishers.
As the DisCo post goes on to explain, that weirdly precise limit is a result of a last-minute change to the German snippet law, which carved out "individual words and smallest text excerpts" from its scope. Of course, that invites the question: how big could that "smallest text excerpt" be? For reasons that are not clear, the Copyright Arbitration Board suggested that the answer was "seven words long". The DisCo post points out there would be an interesting and unexpected consequence of adopting that seven-word limit on snippets officially: it would put Germany in conflict with its obligations under the Berne Convention on copyright. In the 1967 revision to the Convention, a mandatory right to "short quotations" was changed to one allowing "quotations". Here's why that matters:
imagine a situation in which snippets, a modern form of quotations, are reduced to seven words. From a practical point of view, it is safe to say that they are useless for Internet users who would find it difficult to find the information they are actually looking for. From a legal point of view, most would probably agree that a seven word quotation is rather 'short' -- and exactly this conflicts with international copyright law which guarantees meaningful (and useful) quotations going beyond 'short' quotations.
In a delicious irony, then, the German publishers' insane pursuit of the completely unworkable "ancillary" copyright protection for snippets could result in the country breaching fundamental obligations under the world's main copyright convention.

Follow me @glynmoody on Twitter and +glynmoody on Google+

34 Comments | Leave a Comment..

Posted on Techdirt - 6 November 2015 @ 3:15am

Will Molecular Biology's Most Important Discovery In Years Be Ruined By Patents?

from the GNU-Emacs-for-DNA dept

Techdirt readers hardly need to be reminded that, far from promoting innovation, patents can shut it down, either directly, through legal action, or indirectly through the chill they cast on work in related areas. But not all patents are created equal. Some are so slight as to be irrelevant, while others have such a wide reach that they effectively control an entire domain. Patents on a new biological technique based on a mechanism found in nature, discussed in a long and fascinating piece in the Boston Review, definitely fall into the second category. Here's the article's explanation of the underlying mechanism, known as CRISPR-Cas:

Bacteria use CRISPR-Cas to attack the DNA of invading viruses. The workings of this natural defense mechanism were elucidated through basic research carried out mostly within universities. By hijacking and recombining its bacterial parts -- a flexible kind of engineering that is the hallmark of molecular biology -- researchers have shown how CRISPR-Cas can be used to edit the human genome, or any genome, in principle.
CRISPR-Cas can be thought of as the first really powerful and general-purpose genome editor -- a GNU Emacs for DNA. It is widely expected that it will have a massive impact on molecular biology, both for pure research and in industrial applications. Given those very high stakes, it will not come as a surprise to learn that there is already a fierce tussle over who owns a key patent in this field:
A patent battle is raging between the University of California, Berkeley and the Broad Institute of MIT and Harvard. MIT's Technology Review has called the legal dispute a "winner-take-all match" with "billion-dollar implications," as the contenders all have stakes in startup companies based on the technique. The Broad team was granted the first patent in 2014, but the Berkeley group filed paperwork contesting the decision.
As the Boston Review rightly points out, the Broad Institute patent is problematic for several reasons. It is very general, and lays claim to using CRISPR-Cas to edit all animal and plant DNA. The Broad Institute has granted an exclusive license for therapeutic applications, which means that the company concerned has a monopoly on what is expected to be one of the most important areas for CRISPR-Cas. Any other company wanting to use the technique, even for non-therapeutic work, must pay for a license. To top it all, it's generally accepted that CRISPR-Cas is the result of a global, collaborative effort:
Academics around the globe, from Japan to Lithuania to Spain and the United States, have contributed to our understanding of CRISPR-Cas. No group can claim sole credit for discovering the system or the know-how for using it to edit genomes.
And yet the winner of the current patent battle, whether the Broad Institute, or the University of California, is likely to end up with immense power over the use of CRISPR-Cas. The article notes:
Monopolizing a core technology developed collectively using public funding ought to require an extraordinary argument. Even if we limit ourselves to looking through the economic lens, this would require making the case that a monopoly on CRISPR-Cas therapeutics would be so wildly effective -- and wide enough in scope to tackle the huge range of diseases mentioned in the patent -- that it would far outweigh competitive efforts with tens, or hundreds of other companies. In the current debate, no such argument has been given.
As well as giving many other details about this important case and its historical background, the Boston Review article goes on to suggest an alternative approach to one based on intellectual monopolies, one that builds on the fact that CRISPR-Cas is a tool for editing the biological software at the heart of all life:
We can take a leaf from the software world's book and sketch a free biology (as in "free software") that respects these responsibilities. This will require new mechanisms for describing research ownership and sharing that are in the public interest and that support the university’s research branch.
Although that may sound Utopian, for biology at least, it's starting to happen:
Synthetic biology is already making steps in this direction, with projects such as BioBricks that provide a mechanism for scientists to contribute their work to a public registry and allow others to build on it.
Moreover, for those who think the idea of free biology will never really take off, it's worth remembering people said the same about free software, which now powers most of the digital world.

Follow me @glynmoody on Twitter and +glynmoody on Google+

30 Comments | Leave a Comment..

Posted on Techdirt - 3 November 2015 @ 11:33pm

Chevron's Star Witness In $9.5 Billion Corporate Sovereignty Case Admits He Lied

from the well,-that's-awkward dept

One of Techdirt's earliest posts on corporate sovereignty was back in October 2013, when we wrote about the incredible case of Chevron. It used the investor-state dispute settlement mechanism to suspend the enforcement of a historic $18 billion judgment against the oil corporation made by Ecuador's courts because of the company's responsibility for mass contamination of the Amazonian rain forest. Given the huge sums involved, it's no surprise that things didn't end there. As the site Common Dreams reports, in 2013:

Ecuador's National Court of Justice upheld the verdict but cut the initial mandated payment from $18 billion to $9.5 billion.

Chevron has repeatedly refused to pay the $9.5 billion ordered by Ecuadorian courts and even took the step of removing most of its assets from Ecuador in an apparent effort to avoid paying.
Chevron not only refused to pay, but asked a judge in New York to invalidate the claim. And that's precisely what happened in 2014, as Vice News explains:
California-based oil giant Chevron hailed a sweeping victory in a two-decade long legal battle in the Ecuadorian Amazon. A New York federal judge, Lewis Kaplan, ruled that a $9.5 billion Lago Agrio judgment leveled against the company by the small Andean country's highest court, was obtained by way of fraud and coercion.
Vice News notes that central to Chevron's case in New York was the testimony of Alberto Guerra, a former Ecuadorian judge:
In New York, Guerra testified that he had struck a deal between the plaintiffs [the Ecuadorian government] and the presiding judge [in Ecuador], Nicolas Zambrano: Guerra would ghostwrite the verdict, Zambrano would sign it, and the two would share an alleged $500,000 in kickbacks from the plaintiffs.
Pretty damning stuff, which seems to have played a major part in convincing the New York judge to dismiss the $9.5 billion award. But in a rather dramatic turn of events, the following just emerged:
Guerra has now admitted that there is no evidence to corroborate allegations of a bribe or a ghostwritten judgment, and that large parts of his sworn testimony, used by Kaplan in the RICO case to block enforcement of the ruling against Chevron, were exaggerated and, in other cases, simply not true.
In keeping with the rest of the case, Guerra's confession is not entirely straightforward, and it's not clear what really happened during the 2013 Ecuadorian court case -- it's worth reading the fascinating Vice News story to get the full details of the continuing confusion. It does appear that the advantage has passed back to the government of Ecuador in this high-stakes legal battle, but it's by no means over -- all thanks to corporate sovereignty's disturbing power to overrule otherwise "final" rulings from national courts.

Follow me @glynmoody on Twitter and +glynmoody on Google+

17 Comments | Leave a Comment..

Posted on Techdirt - 29 October 2015 @ 9:47am

Canadian Judge Says Asking For A Copy Of A Legally-Obtained But Paywalled Article Is Circumvention

from the and-will-cost-you-$10,000-in-damages dept

One of the worst ideas that the copyright maximalists have managed to foist on the world is that there should be anti-circumvention laws forbidding users from doing a range of entirely sensible things with their own possessions, simply "because copyright". Required by the WIPO Copyright Treaty, and implemented by the DMCA (pdf) in the US and the Copyright Directive in the EU, anti-circumvention laws have reduced people in the US to begging for permission to unlock their mobile phones, or to check whether software in their car is lying about emissions. In the EU, they are not even allowed to beg.

If anyone had any doubts about the inherent ridiculousness of anti-circumvention laws, they might like to consider an extraordinary decision by a judge in Canada, reported by Teresa Scassa on her blog. It concerns a certain Dan Pazsowski, who was quoted in an article published by a news service called Blacklock's Reporter. When Pazsowski heard about this, he naturally wanted to find out more:

Since his company did not have a subscription to the service, he contacted a colleague at another company that did have a subscription and asked if they could forward a copy to him. They did so. He then contacted Blacklock's to discuss the content of the story, about which he had some concerns. He was asked how he had obtained access to the story, and was later sent an invoice for the cost of two personal subscriptions (because he had shared the story with another employee of his organization).
His refusal to pay the $314 (Canadian -- about US$240) plus HST (Harmonized Sales Tax -- a value-added sales tax) led to a lawsuit alleging breach of copyright. Despite the fact that Pazsowski had simply asked a colleague for a copy, the judge in the case took a very dim view of the matter:
Judge Gilbert also found that the defendant had unlawfully circumvented technical protection measures in order to access the material in question, in contravention of controversial new provisions of the [Canadian] Copyright Act. It would seem that, in the eyes of the court, to ask someone for a copy of an article legally obtained by that person could amount to a circumvention of technical protection measures.
The judge returned to the issue of circumvention when it came to awarding damages (all figures in Canadian dollars):
the plaintiffs originally sought the price of two personal annual subscriptions as compensation for the access to the article by the defendant ([CA]$314 plus HST). The court ordered damages in the amount of $11,470 plus HST -- the cost of a corporate annual subscription. Judge Gilbert cited as justification for this amount the fact that the defendants "continued to stand steadfast to the notion that they had done nothing wrong while knowing that they had taken steps to bypass the paywall." (at para 64). In addition, he awarded $2000 in punitive damages.
So, for requesting a copy of an article that was legally obtained by a colleague from a paywalled source, Pazsowski found himself hit with around US$10,000-worth of damages. This completely disproportionate punishment for what is at most a minor case of copyright infringement is a perfect demonstration of where the anti-circumvention madness leads.
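As a rough sanity check on that "around US$10,000" figure, here is the arithmetic using the exchange rate implied earlier in the post (CA$314 being about US$240); the rate is an approximation derived from the article's own numbers, not an official conversion, and HST is excluded:

```python
# Exchange rate implied by the article: CA$314 is roughly US$240
rate = 240 / 314  # about 0.76 USD per CAD

# The award: CA$11,470 in damages plus CA$2,000 punitive damages
damages_cad = 11_470 + 2_000
damages_usd = damages_cad * rate
print(f"CA${damages_cad:,} is roughly US${damages_usd:,.0f}")
```

That works out to a little over US$10,000, consistent with the figure in the text.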

Follow me @glynmoody on Twitter and +glynmoody on Google+

63 Comments | Leave a Comment..

Posted on Techdirt - 28 October 2015 @ 11:43am

Copyright Fail: 'Pirating' Academic Papers Not Only Commonplace, But Now Seen As Mainstream

from the icanhaz-#icanhazpdf dept

Techdirt has been writing about open access for many years. The idea and practice are certainly spreading, but they're spreading more slowly than many in the academic world had hoped. That's particularly frustrating when you're a researcher who can't find a particular academic paper freely available as open access, and you really need it now. So it's no surprise that people resort to other methods, like asking around if anyone has a copy they could send. The Internet being the Internet, it's also no surprise that this ad-hoc practice has evolved into a formalized system, using Twitter and the hashtag #icanhazpdf to ask other researchers if they have a copy of the article in question. But what is surprising is that recently there have been two articles on mainstream sites that treat the approach as if it's really quite a reasonable thing to do. Here's Quartz:

Most academic journals charge expensive subscriptions and, for those without a login, fees of $30 or more per article. Now academics are using the hashtag #icanhazpdf to freely share copyrighted papers.

Scientists are tweeting a link of the paywalled article along with their email address in the hashtag -- a riff on the infamous meme of a fluffy cat’s "I Can Has Cheezburger?" line. Someone else who does have access to the article downloads a pdf of the paper and emails the file to the person requesting it. The initial tweet is then deleted as soon as the requester receives the file.
And here's BBC News:
In many countries, it's against the law to download copyrighted material without paying for it -- whether it's a music track, a movie, or an academic paper. Published research is protected by the same laws, and access is generally restricted to scientists -- or institutions -- who subscribe to journals.

But some scientists argue that their need to access the latest knowledge justifies flouting the law, and they're using a Twitter hashtag to help pirate scientific papers.
Both stories go on to give some background to the approach and its hashtag. But what's striking is that after mentioning that this kind of activity may be against the law, there's none of the traditional hand-wringing about "piracy", and how it will end Western civilization as we know it unless tough measures are brought in to stop it.

It's surely no accident that this novel relaxed attitude to sharing materials covered by copyright concerns academic papers. After all, such sharing lies at the heart of research, which derives much of its power from the fact that people can build on what has been found before, rather than being forced to re-discover old knowledge. The idea of locking away that knowledge behind paywalls, and making it hard for any researcher to access it, is so self-evidently absurd that even mainstream publications like Quartz or BBC News apparently have no difficulty accepting that viewpoint, implicitly through their coverage, if not explicitly. It's a further sign of copyright's dwindling relevance in a world whose central technology -- the Internet -- is built on sharing and openness.

Follow me @glynmoody on Twitter and +glynmoody on Google+

38 Comments | Leave a Comment..

Posted on Techdirt - 27 October 2015 @ 3:16am

Leaked TAFTA/TTIP Chapter Shows EU Breaking Its Promises On The Environment

from the toxic-trade-deal dept

As far as trade agreements are concerned, the recent focus here on Techdirt and elsewhere has been on TPP as it finally achieved some kind of agreement -- what kind, we still don't know, despite promises that the text would be released as soon as it was finished. But during this time, TPP's sibling, TAFTA/TTIP, has been grinding away slowly in the background. It's already well behind schedule -- there were rather ridiculous initial plans to get it finished by the end of last year -- and there's now evidence of growing panic among the negotiators that they won't even get it finished by the end of President Obama's second term, which would pose huge problems in terms of ratification.

One sign of that panic is that the original ambitions to include just about everything are being jettisoned, as it becomes clear that in some sectors -- cosmetics, for example -- the US and EU regulatory approaches are just too different to reconcile. Another indicator is an important leaked document obtained by the Guardian last week. It's the latest (29 September) draft proposal for the chapter on sustainable development. What emerges from every page of the document, embedded below, is that the European Commission is now so desperate for a deal -- any deal -- that it has gone back on just about every promise it made (pdf) to protect the environment and ensure that TTIP promoted sustainable development. Three environmental groups -- the Sierra Club, Friends of the Earth Europe and PowerShift -- have taken advantage of this leak to offer an analysis of the European Commission's real intent in the environmental field. They see four key problems:

The leaked text fails to provide any adequate defense for environment-related policies likely to be undermined by TTIP. For example, nothing in the text would prevent foreign corporations from launching challenges against climate or other environmental policies adopted on either side of the Atlantic in unaccountable trade tribunals.

The environmental provisions are vaguely worded, creating loopholes that would allow governments to continue environmentally harmful practices. The chapter lacks any obligation to ratify multilateral agreements that would bolster environmental protection and includes a set of vague goals with respect to biological diversity, illegal wildlife trade, and chemicals.

The leaked text includes several provisions that the European Commission may claim as "safeguards," such as a recognition of the "right of each Party to determine its sustainable development policies and priorities", but none would effectively shield environmental policies from being challenged by rules in TTIP.

There is no enforcement mechanism for any of the provisions mentioned in the text. Even if one were included, it would still be weaker than the enforcement mechanism provided for foreign investors either through the investor-state dispute settlement mechanism or the renamed investment court system.
The environmental groups have produced a detailed five-page document (pdf) that goes through each of these points in turn, and it's well-worth reading. But it's striking that the central problem is Techdirt's old friend, corporate sovereignty, aka investor-state dispute settlement (ISDS):
Nothing in the text would prevent foreign corporations, on either side of the Atlantic, from challenging climate or other environmental policies via an "investor-state dispute settlement" (ISDS) mechanism or via the European Commission's proposed "Investment Court System." Both enable foreign investors to challenge environmental policies before a tribunal that would sit outside any domestic legal system and be able to order governments to compensate companies for the alleged costs of an environmental policy. While the Commission claims that its new investment "reforms" would protect the right to regulate, States could still be "sued" if foreign investors considered that a policy change violated the broad, special rights that the Commission’s "reformed" investment proposal would give them.
In other words, at the heart of the European Commission's philosophy is the implicit acceptance that investors' rights take precedence over the public's rights -- in this case, those concerning the environment. Everything in the leaked sustainable chapter is couched in terms of aspirations -- the US and EU are encouraged to do the right thing as far as sustainable development is concerned, but there are few, if any, obligations or enforcement mechanisms. When it comes to protecting investors, on the other hand, everything is compulsory, backed up by supranational tribunals that can impose arbitrarily large fines, payable by the public. Although it is true that governments are given the "right" to legislate as they wish when it comes to the environment, investors are given the "right" to sue those governments black and blue if they attempt to do so.

Nor is this mere theory. Research carried out last year by Friends of the Earth Europe shows that of the 127 known ISDS cases that have been brought against 20 EU member states since 1994, fully 60% concern environment-related legislation. In other words, if the European Commission's proposals or something like them became part of the final TTIP agreement, it would almost guarantee a torrent of litigation aimed at blocking or neutering environmental legislation on both sides of the Atlantic.

This is an important leak because it reveals, once more, that a central problem of TAFTA/TTIP is the corporate sovereignty that is inherent in ISDS -- the fact that companies are allowed to place the preservation of their future profits above any other consideration, such as the environment, health and safety or social goals. The EU's sustainability chapter -- an area that is widely recognized as increasingly important in a world where lack of sustainability poses all kinds of problems -- is framed entirely in outdated, 20th-century terms: boosting trade and maximizing profits are the only metrics that matter. The European Commission's willing embrace of that approach confirms both its contempt for the 500 million Europeans it supposedly serves, and the fact that, far from protecting the environment, TAFTA/TTIP is shaping up to be a very toxic trade deal.

Follow me @glynmoody on Twitter and +glynmoody on Google+

Read More | 12 Comments | Leave a Comment..

Posted on Techdirt - 23 October 2015 @ 11:44am

Russian Experiment Tried -- And Failed -- To Block Citizens' Access To The Rest Of Internet

from the in-Russia,-the-Internet-disconnects-you dept

For some time, Techdirt has been reporting on Russia's efforts to control every aspect of online activity in the country. But it seems that the authorities there are still worried that its citizens will find ways around these measures. As a result, The Telegraph reports, the Russian government carried out a rather interesting experiment recently:

Russia's ministry of communications and Roskomnadzor, the national internet regulator, ordered communications hubs run by the main Russian internet providers to block traffic to foreign communications channels by using a traffic control system called DPI.

The objective was to see whether the Runet -- the informal name for the Russian internet -- could continue to function in isolation from the global internet.
The blocking part failed for an interesting reason:
Smaller providers account for over 50 per cent of the market in some Russian regions, [and] generally lack the DPI technology used by the larger companies to implement the blocking orders, and often use satellite connections that cannot be easily blocked.
It's not clear why DPI (Deep Packet Inspection) was needed if it was simply a matter of determining the destination of traffic. But in any case, if lack of DPI capabilities and satellite links are the problem, a fix would be to "encourage" smaller, regional ISPs to consolidate so that they could hook up with the main Internet backbones in Russia and acquire the necessary DPI kit.
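To make the distinction the author is drawing concrete: blocking by destination only requires reading packet headers, whereas DPI inspects the payload itself (for example, to extract a hostname from an HTTP request). A minimal, purely illustrative sketch follows; the address ranges, hostnames, and rules below are invented for the example, not anything the Russian regulators are known to use:

```python
import ipaddress

# Destination-based filtering: only the IP header is consulted.
BLOCKED_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # example range

def blocked_by_destination(dst_ip: str) -> bool:
    """Return True if the destination address falls in a blocked range."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in BLOCKED_PREFIXES)

# Deep packet inspection: the payload itself is examined,
# e.g. to read the hostname out of an HTTP request.
def blocked_by_dpi(payload: bytes, banned_hosts: set) -> bool:
    """Return True if any banned hostname appears in the payload."""
    text = payload.decode("utf-8", errors="ignore")
    return any(host in text for host in banned_hosts)

print(blocked_by_destination("203.0.113.7"))  # in the blocked range
print(blocked_by_dpi(b"GET / HTTP/1.1\r\nHost: example.org\r\n",
                     {"example.org"}))        # hostname found in payload
```

The first check needs no DPI hardware at all, which is what makes the reported reliance on DPI for the blocking experiment puzzling.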

Russia denies that anything took place, but according to The Telegraph story, a similar test took place last year:

security agencies including the FSB [Federal Security Service], the defence ministry, and the interior ministry collaborated with the national telephone operator to see if a national intranet made up of the domain names ending in .ru or .рф could continue to operate if cut off from other parts of the Internet.

That test was reportedly ordered personally by Vladimir Putin, the Russian president, to assess the Russian internet’s ability to continue operating if Western countries introduce sanctions cutting off the country from the internet, and resulted in a decision to build backup infrastructure to ensure the Runet's continued operation.
Despite the denials, that no-nonsense approach is consistent with the way Mr Putin tends to do things, so it seems likely that the tests did indeed take place. It also means that the attempts to create a system that allows Runet to be cut off from the rest of the Internet are likely to continue until they succeed.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 23 October 2015 @ 6:19am

China's Ministry Of Culture Joins Social Media, Immediately Inundated With 100,000 Hostile Comments

from the culture-clash dept

Here on Techdirt we've written numerous posts about China's progressive clampdown on social media, as it tries to control what is said, when, and by whom. That makes the following story in the Wall Street Journal unusual, since it tells of a move by China's Ministry of Culture to open things up by joining the popular social media platform Weibo. Here's the Ministry's first post there:

"Hello all netizens, the Ministry of Culture’s official Weibo account is now officially open! In the future, we will publish cultural policies and information here. We’re looking forward to everyone’s support and attention!"
It certainly got plenty of attention, but not much support, as the Wall Street Journal explains:
Three messages posted to the feed since Thursday afternoon had attracted over 100,000 comments a day later, most of them unfavorable or outright hostile.
There are a number of interesting points here. First, that the Ministry of Culture was so clueless about social media that it did not foresee this happening. Secondly, that the Chinese public are so desperate for some way of making their views known to the authorities that they seized on this new Weibo account, with dramatic results. Finally, as the Wall Street Journal article rightly notes, the comments betray considerable confusion about which Chinese ministry does what kind of censorship these days:
Many of those criticizing the culture ministry appeared to be under the mistaken impression that it was in charge of the widely reviled film and TV regulator. Later comments asked the ministry to post a message clarifying the different types of censorship undertaken by different government agencies, while others begged the ministry to convince the film and TV regulator to open its own Weibo account.
Given the Ministry of Culture's experience, opening itself up to massive criticism by joining social media is now probably the last thing the film and TV regulator plans to do.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 22 October 2015 @ 3:21am

Familial DNA Searches May Make You Think Twice About Signing Up With Private Genetic Services

from the running-in-family dept

Using DNA found at the scene of a crime to identify the guilty party is pretty routine these days, but, as Mike discussed many years ago, police have been getting more aggressive in going much further: carrying out "familial" DNA searches on privately-held genetic databases. That link is to a recent Wired article that concerns the case of Michael Usry, who became a suspect in a 1996 murder case because of his father's DNA:

Detectives had focused on Usry after running a familial DNA search, a technique that allows investigators to identify suspects who don’t have DNA in a law enforcement database but whose close relatives have had their genetic profiles cataloged. In Usry's case the crime scene DNA bore numerous similarities to that of Usry’s father, who years earlier had donated a DNA sample to a genealogy project through his Mormon church in Mississippi. That project’s database was later purchased by Ancestry, which made it publicly searchable -- a decision that didn’t take into account the possibility that cops might someday use it to hunt for genetic leads.
Using general similarities as the yardstick rather than more exact matches means that false positives are more likely -- as in this case, when Usry's own DNA proved he had nothing to do with the murder. Those similarities were only found because his father's DNA was in a privately-held genetic database that the police could access. That's unusual, but becoming more common as services like Ancestry and 23andMe gather many more DNA samples.
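The mechanics behind those false positives are straightforward: a child inherits one allele per STR locus from each parent, so a "partial match" threshold that flags relatives will also, by chance, flag some unrelated people. A minimal sketch of the idea (the loci names are real CODIS markers, but the allele values and the scoring rule are invented for illustration, not any lab's actual method):

```python
# Illustrative partial-match scoring: count loci where two profiles share
# at least one allele. Relatives share many loci; strangers share few.

def shared_alleles(profile_a, profile_b):
    """Count loci where the two profiles share at least one allele."""
    return sum(
        1 for locus in profile_a
        if set(profile_a[locus]) & set(profile_b.get(locus, ()))
    )

crime_scene = {"D8S1179": (12, 14), "D21S11": (28, 30), "TH01": (6, 9)}
father      = {"D8S1179": (14, 15), "D21S11": (30, 31), "TH01": (9, 9)}
stranger    = {"D8S1179": (10, 11), "D21S11": (27, 29), "TH01": (7, 8)}

print(shared_alleles(crime_scene, father))    # 3 of 3 loci overlap: flagged
print(shared_alleles(crime_scene, stranger))  # 0 of 3 loci overlap
```

Real familial searches use more loci and likelihood-ratio statistics rather than raw counts, but the underlying trade-off is the same: loosening the match criterion to catch relatives necessarily widens the net around innocent people.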

According to a recent report, Ancestry now has over 800,000 samples, while 23andMe has a million customers (Ancestry says a more up-to-date figure is 1.2 million members in its database). Those are significant holdings, and it's only natural that the police would try to use them to solve crimes; both companies confirm that they will turn over information from their databases to law enforcement agencies if served with a suitable court order. A more recent post notes that 23andMe has produced its first transparency report (direct link, but Techdirt readers outside the US may have to use the Google cache version, for reasons that are not clear). The report shows that a total of four requests were received from the US authorities, concerning five 23andMe customers, and indicates that the company was successful in denying those requests, without giving details.

Although understandable, this kind of access is problematic for people who sign up for these services, for reasons made clear by the Usry case. The DNA that goes into the database affects not only the donor but also many close relatives, whose genetic make-up is necessarily similar, should the police find a rough match to crime-scene material. As a result, it's entirely possible that completely innocent people might have to go through the traumatic experience of being a suspect, just as Usry did before more precise tests ruled him out.

That's unfortunate, because it adds a complicating factor to the decision about whether to provide DNA to interesting and innovative services like Ancestry and 23andMe: doing so means future generations might be put at risk of erroneous police interest. The only way to prove their innocence would be to hand over their DNA to the authorities for detailed testing, which then raises the question of what happens to it afterwards.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Posted on Techdirt - 19 October 2015 @ 11:44am

BBC Blocks VPN Access To Its On-Demand Service, Even From UK

from the bad-for-everyone dept

The BBC is a rather odd organization. Unlike commercial broadcasters, or those given money directly by national governments, it is mainly funded by a public licensing fee that must be paid by anyone in the UK who watches or records TV programs in real time, using:

TVs, computers, mobile phones, games consoles, digital boxes and DVD/VHS recorders.
To justify the £145 (about $220) annual fee, the BBC takes the line that many of the programs available through its iPlayer service are only available to UK viewers. Of course, that's easy to circumvent using a VPN that allows those outside the UK to access content as if they were in the country. The BBC has finally woken up to this fact, and drawn exactly the wrong conclusion about what it should do, as TorrentFreak (TF) reports:
Over the past several days TF has received several reports from VPN users who can no longer access iPlayer from UK-based VPN servers.

"BBC iPlayer TV programmes are available to play in the UK only," is the notice they receive instead.
Instead of gracefully accepting the reality that geoblocking makes no sense in a world where VPNs allow users to appear to be more or less wherever they wish, the BBC has decided to try to block such access, including VPNs used by UK license-payers:
The BBC informs TF that the VPN ban was implemented to keep iPlayer 'pirates' at bay. The company is doing its best to keep company and school VPNs [in the UK] open but advises regular users to disconnect their VPN service in advance if they want to access iPlayer.
In our post-Snowden world, where the use of a VPN is becoming ever-more prudent, the BBC has just provided a strong disincentive for doing so in the UK. That's really shabby treatment for BBC license-payers, who ought to be allowed to access content in a secure manner. It's also bad news for everyone online, since the more widely VPNs are deployed, the less using one marks you out for special attention by government intelligence agencies. What the BBC should have done here is see the desire of people outside the UK to view its programs as a great opportunity to meet an evident need -- and to generate extra income.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

