Glyn Moody’s Techdirt Profile


About Glyn Moody, Techdirt Insider

Posted on Techdirt - 20 February 2018 @ 8:40pm

France Says 'No' To Company Hack-Backs Following Online Attacks -- But Wants To Keep The Option Open For Itself

from the French-have-a-word-for-it dept

Ten years ago, Techdirt was warning about the hype surrounding the concept of "cyberattacks", and after that "cyberwar", both of which were routinely presented in apocalyptic terms. As we now know, the real online battles are being fought far more subtly, in the form of low-profile foreign organizations subverting nations in sophisticated ways. Unlike the predicted take-down of an entire electricity grid, these kinds of attacks by foreign states and their proxies have already happened, and with troubling effects.

Governments have a responsibility to consider all possible attacks that may be conducted via the Internet, which means that drawing up policy documents in the field is important. The French government has just published its "Revue stratégique de cyberdéfense (pdf)" -- that is, a Strategic Review of Cyberdefense. It was written by the General Secretariat for Defense and National Security, which operates under the authority of the French Prime Minister, and assists the head of government in designing and implementing security and defense policies. It's extremely thorough and well worth reading, but it's also rather long (and in French). Fortunately, Lukasz Olejnik has put together a post discussing some of the main highlights of the document, which is much shorter -- and in English. As he notes, in France, cyberdefense and cyberoffense are two separate domains, and the strategy document lays out six main approaches to the former: prevention, anticipation, protection, detection, attribution, and reaction (remediation). On the offense side:

France strongly opposes giving private companies the rights to retaliate following a cyberattack. In the French view, such actions would constitute a point of instability in cyberspace. Especially when considering retaliation against actors located in a different state. France wants to put forward the issue of hack-back on the international level.

Notable thing. The fact that the strategy mentions these concepts should probably be interpreted as an indirect response to the ideas discussed in the US, where certain proposals considered giving companies the powers to hack-back.

As far as offensive actions are concerned, the review may not want companies to unleash hack-backs after an online attack, but it does want to keep that option open for the French authorities:

Annex 7 considers retaliatory actions following a cyberattack. Although the text points out that such actions should be considered provided that all the other approaches (prevention, cooperation, negotiation) fail, it acknowledges that a response can be made using cyber or non-cyber means. The strategy also highlights that major cyberattack can be interpreted as an armed aggression, in line with the Article 51 of Charter of United Nations.

Olejnik points out the following interesting idea from the document:

France apparently suggested a desire to put the security liability in hands of product suppliers. In other words, making companies responsible for the security of products they put on the market -- as long as the products are commercially available. The strategy then mentions that one of the solutions could be to release source code and documentation after an end of support date. The strategy itself mentions taking this discussion to the international level.

France's Strategic Review offers a good starting point for thinking about these issues. It would be great if somebody could translate it into English for even wider appreciation.

Follow me @glynmoody on Twitter, and +glynmoody on Google+

6 Comments

Posted on Techdirt - 15 February 2018 @ 6:38pm

Mozilla's Open Letter To Expert Committee Drafting India's First Data Protection Law Slams Aadhaar Biometric Identity System

from the the-lizard-wrangler-speaks dept

Techdirt has been covering India's monster biometric database, Aadhaar, since 2015. Media in India, naturally, have been on the story longer, and continue to provide detailed coverage of its roll-out and application. But wider knowledge of the trailblazing identity project remains limited. One international organization that has been working to raise awareness is Mozilla, home of the Firefox browser and Thunderbird email client.

Last May, an opinion piece entitled "Aadhaar isn't progress -- it's dystopian and dangerous", by Mozilla Executive Chairwoman and Lizard Wrangler Mitchell Baker and Mozilla community member Ankit Gadgil, appeared in India's Business Standard newspaper. In July 2017, Mozilla released a statement on the Indian Supreme Court hearings on Aadhaar. A blog post in November pointed out that the Aadhaar system is increasingly being used by private companies for their services, something Techdirt covered earlier. Similarly, after it was revealed that anybody's Aadhaar details could be bought for around $8 each, Mozilla issued a statement saying "this latest, egregious breach should be a giant red flag to all companies as well as to the UIDAI [Unique Identification Authority of India] and the [Indian] Government."

Following the creation of a committee to draft India’s first comprehensive data protection law, Mozilla has now paid for an open letter to appear in The Hindustan Times. It was written by Baker, and co-signed by 1,447 Mozilla India community members. Although the letter welcomes the work being carried out by the committee of experts, it criticizes Aadhaar for its many failings, and points out some serious omissions in the committee's report on data protection:

The current proposal exempts biometric info from the definition of sensitive personal information that must be especially protected. This is backwards, biometric info is some of the most personal info, and can’t be "reset" like a password.

The design of Aadhaar fails to provide meaningful consent to users. This is seen, for example, by the ever increasing number of public and private services that are linked to Aadhaar without users being given a meaningful choice in the matter. This can and should be remedied by stronger consent, data minimization, collection limitation, and purpose limitation obligations.

Instead of crafting narrow exemptions for the legitimate needs of law enforcement, you propose to exempt entire agencies from accountability and legal restrictions on how user data may be accessed and processed.

Your report also casts doubt on whether individuals should be allowed a right to object over how their data is processed; this is a core pillar of data protection, without a right to object, consent is not meaningful and individual liberty is curtailed.

On a Web page called "Key challenges and the way forward", Mozilla calls on the Indian government to "pause further roll out of Aadhaar until the major problems with Aadhaar have been addressed." It also has a further suggestion:

The Indian government must release Aadhaar as true open source software rather than use language of open source, and encourage the use, development, and adoption of open source as a pillar of the Aadhaar system

Of course, you might expect an open source foundation like Mozilla to say that, but nonetheless it's good to see what is at heart a software organization engaging with global problems that affect huge numbers of people in this way. Others should do the same.

Follow me @glynmoody on Twitter, and +glynmoody on Google+

8 Comments

Posted on Techdirt - 14 February 2018 @ 7:50pm

What Are The Ethical Issues Of Google -- Or Anyone Else -- Conducting AI Research In China?

from the don't-be-evil,-but-AI-first? dept

AI is hot, and nowhere more so than in China:

The present global verve about artificial intelligence (AI) and machine learning technologies has resonated in China as much as anywhere on earth. With the State Council’s issuance of the "New Generation Artificial Intelligence Development Plan" on July 20 [2017], China's government set out an ambitious roadmap including targets through 2030. Meanwhile, in China's leading cities, flashy conferences on AI have become commonplace. It seems every mid-sized tech company wants to show off its self-driving car efforts, while numerous financial tech start-ups tout an AI-driven approach. Chatbot startups clog investors' date books, and Shanghai metro ads pitch AI-taught English language learning.

That's from a detailed analysis of China's new AI strategy document, produced by New America, which includes a full translation of the development plan. Part of AI's hotness is driven by all the usual Internet giants piling in with lots of money to attract the best researchers from around the world. One of the companies that is betting on AI in a big way is Google. Here's what Sundar Pichai wrote in his 2016 Founders' Letter:

Looking to the future, the next big step will be for the very concept of the "device" to fade away. Over time, the computer itself -- whatever its form factor -- will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world.

Given that emphasis, and the rise of China as a hotbed of AI activity, the announcement in December last year that Google was opening an AI lab in China made a lot of sense:

This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China's strong engineering teams.

So far, so obvious. But an interesting article on the Macro Polo site points out that there's a problem with AI research in China. It flows from the continuing roll-out of intrusive surveillance technologies there, as Techdirt has discussed in numerous posts. The issue is this:

Many, though not all, of these new surveillance technologies are powered by AI. Recent advances in AI have given computers superhuman pattern-recognition skills: the ability to spot correlations within oceans of digital data, and make predictions based on those correlations. It's a highly versatile skill that can be put to use diagnosing diseases, driving cars, predicting consumer behavior, or recognizing the face of a dissident captured by a city's omnipresent surveillance cameras. The Chinese government is going for all of the above, making AI core to its mission of upgrading the economy, broadening access to public goods, and maintaining political control.

As the Macro Polo article notes, Google is unlikely to allow any of its AI products or technologies to be sold directly to the authorities for surveillance purposes. But there are plenty of other ways in which advances in AI produced at Google's new lab could end up making life for Chinese dissidents, and for ordinary citizens in Xinjiang and Tibet, much, much worse. For example, the fierce competition for AI experts is likely to see Google's Beijing engineers headhunted by local Chinese companies, where knowledge can and will flow unimpeded to government departments. Although arguably Chinese researchers elsewhere -- in the US or Europe, for example -- might also return home, taking their expertise with them, there's no doubt that the barriers to doing so are higher in that case.

So does that mean that Google is wrong to open up a lab in Beijing, when it could simply have expanded its existing AI teams elsewhere? Is this another step toward re-entering China after it shut down operations there in 2010 over the authorities' insistence that it should censor its search results -- which, to its credit, Google refused to do? "AI first" is all very well, but where does "Don't be evil" fit into that?

Follow me @glynmoody on Twitter, and +glynmoody on Google+

7 Comments

Posted on Techdirt - 8 February 2018 @ 7:51pm

An English-Language, Algorithmically-Personalized News Aggregator, Based In China -- What Could Possibly Go Wrong?

from the no-social-graph-required dept

Techdirt has been exploring the important questions raised by so-called "fake news" for some time. A new player in the field of news aggregation brings with it some novel issues. It's called TopBuzz, and it comes from the Chinese company Toutiao, whose rapid rise is placing it alongside the country's more familiar "BAT" Internet giants -- Baidu, Alibaba and Tencent. It's currently expanding its portfolio in the West: recently it bought a popular social video app for about $800 million:

Toutiao aggregates news and videos from hundreds of media outlets and has become one of the world's largest news services in the span of five years. Its parent company [Bytedance] was valued at more than $20 billion, according to a person familiar with the matter, on par with Elon Musk's SpaceX. Started by Zhang Yiming, it's on track to pull in about $2.5 billion in revenue this year, largely from advertising.

An in-depth analysis of the company on Y Combinator's site explains what makes this aggregator so successful, and why it's unlike other social networks offering customized newsfeeds based on what your friends are reading:

Toutiao, one of the flagship products of Bytedance, may be the largest app you’ve never heard of -- it's like every news feed you read, YouTube, and TechMeme in one. Over 120M people in China use it each day. Yet what's most interesting about Toutiao isn't that people consume such varied content all in one place... it's how Toutiao serves it up. Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.
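Bytedance has never published its ranking system, but the core idea described above -- personalizing from reading behavior alone rather than from a social graph -- can be sketched with a toy content-based ranker. Everything here (the headlines, the bag-of-words profile, the cosine scoring) is invented for illustration and is not Toutiao's actual algorithm:

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two word-count vectors
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank(candidates, clicked):
    # Build a user profile purely from headlines the user has read --
    # no friends list, no purchase history -- then rank new items by similarity
    profile = Counter(w for title in clicked for w in title.lower().split())
    return sorted(candidates,
                  key=lambda t: cosine(Counter(t.lower().split()), profile),
                  reverse=True)

clicked = ["new ai chip launched", "ai beats humans at go"]
items = ["local weather update", "ai startup raises funds"]
print(rank(items, clicked)[0])  # -> ai startup raises funds
```

In the toy version the "profile" is just word counts from previously clicked headlines; a production system would use learned embeddings and click-through feedback, but the no-social-graph principle is the same.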

However, as people are coming to appreciate, over-dependence on algorithmic personalization can lead to a rapid proliferation of "fake news" stories. A post about TopBuzz on the Technode site suggests this could be a problem for the Chinese service:

What's been my experience? Well, simply put, it's been a consistent and reliable multi-course meal of just about every variety of fake news.

The post goes on to list some of the choice stories that TopBuzz's AI thought were worth serving up:

Roy Moore Sweeps Alabama Election to Win Senate Seat

Yoko Ono: "I Had An Affair With Hillary Clinton in the '70s"

John McCain's Legacy is DEMOLISHED Overnight As Alarming Scandals Leak

Julia Roberts Claims 'Michelle Obama Isn't Fit To Clean Melania's Toilet'

The post notes that Bytedance is aware of the problem of blatantly false stories in its feeds, and the company claims to be using both its artificial intelligence tools as well as user reports to weed them out. It says that "when the system identifies any fake content that has been posted on its platform, it will notify all who have read it that they had read something fake." But:

this is far from my experience with TopBuzz. Although I receive news that is verifiably fake on a near-daily basis, often in the form of push notifications, I have never once received a notification from the app informing me that Roy Moore is in fact not the new junior senator from Alabama, or that Hillary Clinton was actually not Yoko Ono's sidepiece when she was married to John Lennon.

The use of highly-automated systems, running on server farms in China, represents new challenges beyond those encountered so far with Facebook and similar social media, where context and curation are being used to an increasing degree to mitigate the potential harm of algorithmic newsfeeds. The fact that a service like TopBuzz is provided by systems outside the control of the US or other Western jurisdictions poses additional problems. As deep-pocketed Chinese Internet companies seek to expand outside their home markets, bringing with them their own approaches and legal frameworks, we can expect these kinds of issues to become increasingly thorny. We are also likely to see those same services begin to wrestle with some of the same problems currently being tackled in the West.

Follow me @glynmoody on Twitter, and +glynmoody on Google+

21 Comments

Posted on Techdirt - 7 February 2018 @ 10:39am

Single-Pixel Tracker Leads Paranoid Turkish Authorities To Wrongly Accuse Over 10,000 People Of Treason

from the tiny-web-beacons,-massive-consequences dept

We've written many articles about the thin-skinned Turkish president, Recep Tayyip Erdoğan, and his massive crackdown on opponents, real or imagined, following the failed coup attempt in 2016. Boing Boing points us to a disturbing report on the Canadian CBC News site revealing how thousands of innocent citizens have ended up in prison because they were falsely linked with the encrypted messaging app Bylock:

The Turkish government under President Recep Tayyip Erdogan links Bylock with treason, because of the app's alleged connection to followers of Fethullah Gülen, the man the Turkish government believes is behind the deadly 2016 coup attempt. Gülen denies the allegations.

Alleged Bylock users are a large part of the nearly 150,000 Turks detained, arrested or forced from their jobs under state of emergency decrees since the summer of 2016.

An estimated 30,000 are believed to be among the innocent swept up in this particular campaign, victims of the chaos, confusion and fear in Turkey.

It's bad enough that the Turkish authorities are equating the mere use of the Bylock app with treason. But it gets worse. It turns out that many of those arrested for that reason didn't even use Bylock, but were falsely accused:

it was due to a single line of code, which created a window "one pixel high, one pixel wide" -- essentially invisible to the human eye -- to the site. Hypothetically, people could be accused of accessing the site without having knowingly viewed it.

That line redirected people to the Bylock server using several other applications, including a Spotify-like music app called Freezy and apps to look up prayer times or find the direction of Mecca. Some people have been accused because someone they shared a wifi connection with was linked to Bylock.

According to the CBC News report, the single-pixel trackers that linked back to the Bylock server were used intentionally by the Bylock developers in order to muddy the waters and make it harder to identify real Bylock users. However, it's not clear how these Web "beacons" came to be associated with other apps. Whatever the mechanism used to accuse innocent people, the Turkish authorities have confirmed indirectly that the misleading calls to that server did indeed take place, albeit releasing that information in a way that badly violates the victims' privacy:

The Turkish government and the country's courts rarely admit they are wrong, but in December, they revealed the gravity of the mistake they'd made by publishing a list of 11,480 mobile phone numbers. Each number represented a person wrongly accused of terrorism in the Bylock affair.

As well as confirming that Turkey remains in the grip of institutionalized paranoia emanating from the country's president, this episode underlines just how serious the implications of single-pixel tracking can be. In an ideal world, such surreptitious tracking would not be taking place. As a second best, browsers would incorporate technology that warned users of such tricks and blocked their callbacks as a matter of course, but it's hard to see how this could be done in a way that isn't easily circumvented.
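As a rough illustration of how such a tracker might be spotted, here is a toy scanner for markup that declares a one-pixel-square element. The markup, the URL, and the detection rule are all invented for this sketch; real beacons can evade simple size checks (via CSS, scripts, or missing attributes), which is part of why robust browser-side blocking is hard:

```python
from html.parser import HTMLParser

class BeaconDetector(HTMLParser):
    """Collect img/iframe elements declared at 1x1 pixels --
    the 'invisible window' pattern described in the CBC report."""
    def __init__(self):
        super().__init__()
        self.beacons = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("img", "iframe"):
            return
        attr = dict(attrs)
        try:
            width = int(attr.get("width", "0"))
            height = int(attr.get("height", "0"))
        except (TypeError, ValueError):
            return
        if width == 1 and height == 1:
            self.beacons.append(attr.get("src", ""))

# Hypothetical markup: one ordinary image plus a 1x1 tracking window
page = ('<img src="photo.jpg" width="640" height="480">'
        '<iframe src="" width="1" height="1"></iframe>')
detector = BeaconDetector()
detector.feed(page)
print(detector.beacons)  # -> ['']
```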

Follow me @glynmoody on Twitter, and +glynmoody on Google+

26 Comments

Posted on Techdirt - 1 February 2018 @ 3:28am

China Exporting Its Surveillance Tech And Philosophy To Other Countries, Helped By Equipment Donations

from the laboratory-for-comprehensive-security-systems dept

It will probably come as zero surprise to Techdirt readers to learn the following:

China's state surveillance apparatus is trying out a new tool in one of its favorite test beds, the restive region of Xinjiang.

The Muslim-dominated villages on China's western frontier are testing facial-recognition systems that alert authorities when targeted people venture more than 300 meters (1,000 feet) beyond designated "safe areas," according to a person familiar with the project. The areas comprise individuals' homes and workplaces, said the person, who requested anonymity to speak to the media without authorization.
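The alerting rule described there -- flag a targeted person more than 300 meters outside designated safe areas -- amounts to a simple geofence test. A minimal sketch, with invented coordinates and assuming the system compares great-circle distance against a radius:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in meters
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def outside_safe_areas(pos, safe_centers, radius_m=300):
    # True when the position is beyond radius_m of every safe-area center
    return all(distance_m(*pos, *c) > radius_m for c in safe_centers)

home = (43.8256, 87.6168)      # invented coordinates
sighting = (43.8300, 87.6168)  # roughly 490 m north of 'home'
print(outside_safe_areas(sighting, [home]))  # -> True
```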

As that Bloomberg report rightly notes, this further extension of the Chinese state's surveillance system is taking place in Xinjiang, which acts as a kind of test-bed for moves of this kind, many of which are then rolled out to the rest of the country. That's clearly terrible news for people in China, but superficially doesn't directly concern the rest of the world. However, a story in the South China Morning Post reveals that China's surveillance tech and philosophy are now appearing in countries outside China:

Ecuador has introduced a security system using monitoring technology from China, including facial recognition, as it tries to bring down its crime rate and improve emergency management, according to state-run Xinhua news agency.

A network of cameras has been installed across the South American nation's 24 provinces -- keeping watch on its population of 16.4 million people -- using a system known as the ECU911 Integrated Security Service, Xinhua reported.

The article explains that experiments are being run to turn footage from Chinese surveillance cameras into data at the Laboratory for Comprehensive Security Systems, which is located in the ECU911 headquarters in Ecuador's capital. China had a hand in that, too:

State-owned China National Electronics Import and Export Corporation (CEIEC) was involved in setting up the laboratory. CEIEC is a subsidiary of China Electronics Corporation (CEC), one of the country's largest defence contractors.

CEC's reach extends far beyond China’s homeland security, and the system in Ecuador is not its first project in South America. In Brazil, CEC was involved in using Chinese technology to monitor environmental risks in the Amazon rainforest. But in Bolivia and Venezuela, as in Ecuador, its projects are to do with public security.

Soft power is a key focus for China at the moment, particularly as part of its One Belt, One Road mega infrastructure project. Another way China spreads its influence around the world is by donating surveillance equipment, as happened in Ecuador. It's a shrewd move. Local governments can say that it would be foolish to turn down generous gifts from such a powerful nation, and that once accepted, it would be a waste not to use the equipment. China can claim that it is "helping" other nations improve their internal security, while establishing a beachhead for Chinese companies that manufacture the surveillance equipment. The latter can then build on that to win further sales -- and to help spread Chinese-style surveillance yet further.

Follow me @glynmoody on Twitter, and +glynmoody on Google+

14 Comments

Posted on Techdirt - 30 January 2018 @ 7:31pm

EU's Highest Court Says Privacy Activist Can Litigate Against Facebook In Austria, But Not As Part Of A Class Action

from the new-EU-law-will-soon-make-that-possible-anyway dept

Last November we reported on the legal opinion of one of the Advocates General that advises the EU's top court, the Court of Justice of the European Union (CJEU). It concerned yet another case brought by the data protection activist and lawyer Max Schrems against Facebook, which he claims does not follow EU privacy laws properly. There were two issues: whether Schrems could litigate against Facebook in his home country, Austria, and whether he could join with 25,000 people to bring a class action against the company. The Advocate General said "yes" to the first, and "no" to the second, and in its definitive ruling, the CJEU has agreed with both of those views (pdf). Here's what Schrems has to say on the judgment (pdf):

The Court of Justice of the European Union (CJEU) confirms that Max Schrems can litigate in Vienna against Facebook for violation of EU privacy rules. Facebook's attempt to block the privacy lawsuit was not successful.

However, today the CJEU gave a very narrowly definition of the notion of a "consumer" which deprives many consumer from consumer protection and also makes an Austrian-style "class action" impossible.

The rest of his press release gives background details of the case. Schrems explains why being able to bring a class action in Austria is important:

If a multinational knows that they cannot win a case, they try to find reasons so that a case is not admissible, or they try to squeeze a plaintiff out of the case by inflating the costs. Facebook wanted to ensure that the case can only be heard in Dublin [where its EU headquarters are located], as Ireland does not have any class action and litigating even one model claim would cost millions of Euros in legal fees. In this case we'd have a valid claim, but it would be basically unenforceable.

EU consumer groups agree that an option to bring class actions is needed, as EurActiv reports:

The ECJ's finding has caused outrage among consumer groups that have campaigned for years for the [European] Commission to propose legislation allowing for EU-level class action lawsuits involving complainants from different member states. They argue that collective redress will make it easier and cheaper for consumer to sue.

The same article notes that Schrems and the consumer groups may soon get their wish: the European Commission is expected to unveil proposals for a new law that will allow collective redress to be sought across the EU. Even if that does happen, it's likely to take years to implement. Before then, Facebook has many other problems it needs to confront. First, there is Schrems' personal suit against the company, which can now proceed in the Austrian courts. As he points out:

Facing a lawsuit, which questions Facebook's business model, is a huge risk for the company. Any judgement in Austria is directly enforceable at Facebook's Irish headquarter and throughout Europe.

That is, if Schrems wins his case, other EU citizens will be able to use the judgment to sue Facebook more easily. And Facebook may have headed off the threat of a class action under existing law, but the EU's new General Data Protection Regulation (GDPR), which will be enforced from May of this year, explicitly allows non-profit organizations to sue on the behalf of individuals. Article 80 of the GDPR says:

The data subject shall have the right to mandate a not-for-profit body, organisation or association which has been properly constituted in accordance with the law of a Member State, has statutory objectives which are in the public interest, and is active in the field of the protection of data subjects' rights and freedoms with regard to the protection of their personal data to lodge the complaint on his or her behalf

With that in mind, Schrems is crowdfunding just such a public interest, not-for-profit body, None of Your Business (noyb), which Techdirt discussed in December. At the time of writing, noyb is within a few percent of achieving its goal of €250,000. Facebook is naturally well-aware of the GDPR's likely impact. The company's Chief Operating Officer Sheryl Sandberg said recently:

We're rolling out a new privacy center globally that will put the core privacy settings for Facebook in one place and make it much easier for people to manage their data

Satisfying the requirements of the GDPR is not the only looming problem for Facebook. In Germany, the company is facing what is potentially an even more serious challenge, as the FT reports (paywall):

Germany is threatening curbs on how Facebook amasses data from millions of users in what would be an unprecedented intervention in the social network's business model.

Andreas Mundt, head of Germany's main antitrust agency, the Federal Cartel Office, said Facebook could be banned from collecting and processing third-party user data as one possible outcome of an investigation that in December concluded the US technology group was abusing its dominant market position.

If Germany goes ahead with these plans, it will drastically reduce the scope for Facebook to make money by using consolidated data about its users to sell advertising space, and may well encourage other EU nations to follow suit.

Follow me @glynmoody on Twitter, and +glynmoody on Google+

4 Comments

Posted on Techdirt - 25 January 2018 @ 7:22pm

Genome Of A Man Born In 1784 Recreated From The DNA Of His Descendants

from the I-am-your-(great-great-great-grand)-father dept

The privacy implications of collecting DNA are wide-ranging, not least because they don't relate solely to the person from whom the sample is taken. Our genome is a direct product of our parents' genetic material, so the DNA strings of siblings from the same mother and father are closely related. Even that of more distant relations has many elements in common, since they derive from common ancestors. Thus a DNA sample contains information not just about the donor, but about many others on the relevant family tree as well. A new paper published in Nature Genetics (behind a paywall, unfortunately) shows how that fact enables the genomes of long-dead ancestors to be reconstructed, using just the DNA of their descendants.

As an article in Futurism explains, the unique circumstances of the individual chosen for the reconstruction, the Icelander Hans Jonatan, aided the research team as they sought to piece together his genome nearly two centuries after his death in 1827. The scientists mainly came from the Icelandic company deCODE Genetics, one of the pioneers in the world of genomics, and highly familiar with Iceland's unique genetic resources. The following factors were key:

For one, Jonatan was the first Icelandic inhabitant with African heritage. Iceland also boasts an extensive and highly detailed collection of genealogical records. The combination of Jonatan's unique heritage and the country's record-keeping for inhabitants' family trees made this remarkable recreation possible.

For cultural and historical reasons, Iceland has one of the most complete genealogical records of any nation. This allowed the research team to establish with high probability 788 of Jonatan's descendants. Samples were taken from 182 of those individuals and then genotyped -- a kind of DNA screening. The deCODE group picked out those genomes most likely to provide the longest DNA sequences that had been passed down through the generations from Jonatan's mother, by looking for fragments of African-pattern chromosomes amidst the otherwise European genetic material. The full genomes of 20 of those 182 were sequenced, and then the parts derived from Jonatan's African ancestry pieced together to recreate 38% of his mother's DNA. From this, the researchers were able to establish that Jonatan's mother was probably from the African region spanned by Benin, Nigeria and Cameroon.
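deCODE's statistical machinery is far more sophisticated, but the final bookkeeping step -- working out what fraction of an ancestral genome the recovered fragments cover -- reduces to an interval-union calculation. A toy sketch with made-up fragment coordinates (positions along a single chromosome):

```python
def reconstructed_fraction(fragments, chromosome_length):
    # Merge overlapping fragments, then measure total coverage
    merged = []
    for start, end in sorted(fragments):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    covered = sum(end - start for start, end in merged)
    return covered / chromosome_length

# Three inherited fragments, two of which overlap
fragments = [(0, 30), (20, 50), (70, 90)]
print(reconstructed_fraction(fragments, 100))  # -> 0.7
```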

This kind of large-scale reconstruction in the absence of physical samples has never been achieved before, and is certainly a major triumph of biological and computational technology. An important question is whether this is a one-off, made possible by the unique circumstances of Jonatan's life, or whether it could be applied more widely. According to the Futurism article:

Theoretically, a technique like this could help researchers create "virtual ancient DNA," which would allow scientists to recreate the DNA of historical figures. Agnar Helgason of deCODE stated that "Any historic figure born after 1500 who has known descendants could be reconstructed."

While it's exciting, there are still major hurdles to overcome in terms of the potential future applications. The quantity, scale, and detail of the DNA required from living descendants to recreate a person's genome make it impractical for use within most families. Additionally, with each new generation identifiable DNA fragments get smaller and more difficult to work with.

As DNA sequencing becomes cheaper and more accurate, it will be possible to carry out DNA profiling and collection faster and more economically. Similarly, as computational power increases, chromosome fragments can be analyzed and stitched together more easily. In due course, these kinds of genomic reconstructions will probably become more common. Already, deCODE's research confirms how DNA can establish connections not just between present-day members of a family, but also with those long dead. When unexpected patterns of maternity or paternity are revealed, who knows what social consequences they will bring for the descendants concerned.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 25 January 2018 @ 11:54am

TPP Is Back, Minus Copyright Provisions And Pharma Patent Extensions, In A Clear Snub To Trump And The US

from the canada,-leader-of-the-free-world? dept

As Techdirt noted back in November, the Trans Pacific Partnership (TPP) agreement was not killed by Donald Trump's decision to pull the US out of the deal. Instead, something rather interesting happened: one of the TPP's worst chapters, dealing with copyright, was "suspended" at the insistence of the Canadian government, which suddenly took on a leading role. At the time, it wasn't clear whether this was merely a temporary ploy, or was permanent. With news that the clumsily-named "Comprehensive and Progressive Agreement for Trans-Pacific Partnership" (CPTPP) has been "concluded", it now seems that the exclusion of both copyright and pharma patent extensions is confirmed. As Michael Geist writes:

the IP chapter largely reflected U.S. demands and with its exit from the TPP, an overhaul that more closely aligns the agreement to international standards was needed. Canada succeeded on that front with an agreement to suspend most of the controversial IP provisions including those involving copyright term, patent extension, biologics protection, and digital lock rules.

That's the good news. But there's still plenty of bad stuff in the CPTPP, a sample of which is listed here by The Atlantic:

A controversial arrangement whereby companies can sue countries over their domestic laws, known as the investor-state dispute settlement [ISDS -- corporate sovereignty] system, remains in a reduced fashion. Labor and environmental protections are largely unchanged [and unsatisfactory]. The EFF's [Jeremy] Malcolm pointed to e-commerce provisions that provide only weak privacy protections, among other issues, as still being problematic. But overall, the new deal is so similar to the original that Canadian labor unions are furious that their government is still advancing it, just as labor groups in the U.S. objected under Obama.

That anger means that even in the absence of the copyright and pharma patent extensions, there is still likely to be some resistance to the new deal, and not just in Canada. For example, economists estimate that the CPTPP will boost Australia's economy by only 0.04% per year -- a negligible amount that will be swamped by fluctuations in other factors. Some Australian businesses warn that the continuing existence of bilateral trade deals with eight of the CPTPP countries will lead to a complex "noodle bowl" of rules and regulations that could make it harder, not easier, to conduct business with them. In New Zealand, a long-standing critic of TPP, Professor Jane Kelsey, is particularly worried about a chapter on electronic commerce. And in Malaysia, a consumer group has urged the government there not to sign the deal, which it said would be "even worse" than TPP for the country.

Although we still don't have the final details of the deal, and the lingering presence of corporate sovereignty is regrettable, the CPTPP signals a hopeful shift away from the usual intellectual monopoly maximalism. The omission of copyright and patents from the new deal is a significant defeat for the US, which has been the main driving force behind their routine inclusion. And the fact that the CPTPP is going ahead at all without the US is a clear snub to Trump and his rejection of such multilateral trade negotiations.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 24 January 2018 @ 7:54pm

Danish Police Charge Over 1,000 People With Sharing Underage Couple's Sexting Video And Images

from the some-kind-of-progress dept

Techdirt posts about sexting have a depressingly similar story line: young people send explicit photos of themselves to their partners, and one or both of them end up charged with distributing or possessing child pornography. Even more ridiculously, the authorities typically justify branding young people who do this as sex offenders on the grounds that it "protects" the same individuals whose lives they are ruining. Judging by a story in The Local, reporting on a press release that first appeared on the MyNewsDesk site (original in Danish), the police in Denmark seem to be taking a more rational approach. Rather than charging the two young people involved for sexting, they are charging 1,004 people who shared the video and images afterwards, some of them several hundred times:

The video was primarily sent to and shared between young people, the police said in a major announcement on Monday morning.

Individuals under police suspicion in the case may have broken Danish child pornography laws, police wrote.

The material contains sexual images involving persons under the age of 15 years at the time of recording, the Danish National Police (Rigspolitiet) confirmed in a press statement.

The case came to light after Facebook received reports of sexual video material involving young people under 18 being shared on its Messenger platform last year, and alerted the US authorities as a result. They, in their turn, passed the information on to Europol, the European police agency, who forwarded it to the authorities in Denmark. The Local quotes a Danish police officer pointing out the long-term effects of being convicted of breaking the country's child pornography laws:

"If you receive a criminal conviction as a minor it can stay on your record for it least ten years. That means you cannot get a job in a daycare or as a football coach. If American authorities are informed, it can also cause difficulties with travelling to the USA. So this is serious and has serious consequences far into the future."

It could be argued that child pornography laws are not the right way to deal with this kind of sharing by third parties. And it is not clear how the explicit material came to be spread around so widely -- to what extent, for example, one or both of the people involved in the sexting started sharing it elsewhere themselves. But it is surely some kind of progress that the police are concentrating on that wider diffusion, which involved hundreds of people, rather than on the initial sexting by two young people, as so many previous cases have done.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 23 January 2018 @ 11:56am

China's Solution To The VPN Quandary: Only Authorized, And Presumably Backdoored, Crypto Links Allowed

from the will-Russia-follow-suit? dept

Two of the most important developments in China's clampdown on the digital world took place last year, when the country's Ministry of Industry and Information Technology declared that all VPN providers needed prior government approval to operate, and then apps stores were forced to remove the many VPNs on offer there. In some parts of China, VPNs were banned completely, but such a total shutdown is not really an option for cities with many businesses that require secure overseas communication channels. That put the Chinese authorities in something of a quandary: how could they reconcile their desire to prevent VPNs being used to circumvent online controls, while ensuring that the country's increasingly important corporate sector had access to the encryption tools it needed for operating globally? An article in the FT provides us with the answer (paywall). In recent months, international companies and organizations have found their VPNs blocked more frequently:

regulators have been pushing multinationals to buy and use state-approved VPNs. The state-approved versions can cost tens of thousands of dollars a month and expose users' communications to Beijing's scrutiny.

"China's intention is to control the flow of information entirely, making people use only government-approved VPNs by making it difficult, if not impossible, to use alternatives," said Lester Ross, partner at legal firm WilmerHale in Beijing.

The great thing about state-approved VPNs is that they can include backdoors for the government to use, and can be shut down quickly if really serious problems arise that require even more stringent controls.

Backdoored crypto is inherently vulnerable to attacks against those built-in weaknesses, but the Chinese authorities are doubtless willing to let companies run that risk for the sake of maintaining overall control. Since Russia's views on VPNs are closely aligned with those of China, it will be interesting to see if it decides to adopt Beijing's solution to the VPN dilemma to tidy up its own rather clumsy approach.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 22 January 2018 @ 7:26pm

Tunisia's Plans To Bring In Its Own National 'Aadhaar' Biometric ID System Halted -- For Now

from the peace-be-with-you dept

The last time that Techdirt wrote about Tunisia was back in 2011, when the Internet helped bring about a major regime change there. Although violent protests against the government have flared up recently, in general, the processes that are being applied to shift national policies in Tunisia are both peaceful and successful. Here, for example, is some good news from Access Now on the privacy front:

This week the people of Tunisia won a major victory for privacy: the dangerous biometric ID card proposal has officially been withdrawn from consideration in the Assembly of the Representatives of the People (ARP).

We worked hard with our partners at Al Bawsala to oppose passage of the bill, including encouraging members of the assembly to adopt a set of key amendments to ensure that if it did pass, the bill would protect citizens' data and their right to consult and rectify their own information. Over the past week, we spent hours talking to assembly members, highlighting the dangers of pushing the bill through without adding necessary and vital protections for Tunisians' privacy and data security.

It worked! The assembly members advanced the amendments that we proposed, and nearly all were adopted within the Consensus Commission. The Ministry of the Interior, which had pushed hard to pass the bill without these important safeguards, dropped the proposal entirely.

That's particularly welcome at a time when the problems of India's biometric ID card system "Aadhaar" are becoming all too clear. However, as Access Now rightly notes:

Even though we're overjoyed, we must remain vigilant. We could see this proposal revived. If that happens, we will continue working to ensure that any new legislation protects human rights.

Let's hope Tunisia's democratic legislative processes continue to function as effectively as they have in this particular case, and that the country does not end up with another Aadhaar.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 22 January 2018 @ 10:42am

Marriott Freezes Its Social Media Globally, And Makes Grovelling Apology To China, All For A Drop-Down Menu And Liking A Tweet

from the rectification-of-names dept

As Techdirt readers are well aware, China is rapidly growing more powerful, both economically and politically. Its economic rise has been clear enough for some time, not least in its technological prowess. Its political might, however, has only recently become more evident, as it begins to assert itself in various ways around the world. China's sense of its own power, and its increasing impatience with anyone that dares to go against it, is nicely illustrated in a recent incident. It concerns a drop-down menu and a like on a tweet, both belonging to the "global lodging company" Marriott International, which issued the following corporate statement a few days ago:

Marriott International respects and supports the sovereignty and territorial integrity of China. Unfortunately, twice this week, we had incidents that suggested the opposite: First, by incorrectly labelling certain regions within China, including Tibet, as countries in a drop-down menu on a survey we sent out to our loyalty members; and second, in the careless "like" by an associate of a tweet that incorrectly suggested our support of this position [that Tibet is a country in its own right]. Nothing could be further from the truth: we don't support anyone who subverts the sovereignty and territorial integrity of China and we do not intend in any way to encourage or incite any such people or groups. We recognize the severity of the situation and sincerely apologize.

In addition to this grovelling apology, Marriott publicly punished itself by shutting down large chunks of its digital activities, as China Daily, the English-language news organ of the Chinese government, reported:

After identifying its errors, the company has taken the survey offline, "unliked" the post, shut down its six websites and apps in Chinese, and put a freeze on its social media across the world. The CEO has volunteered to issue an apology.

It has also terminated the contract with the third-party vendor that built the survey, a Canadian company that Marriott has been working with for a long time, and with the US-based employee who "liked" the tweet.

It's not clear whether Marriott was ordered directly to take these actions, or decided to carry them out voluntarily. Either way, it's striking that Marriott is apologizing so abjectly for actions taken in the US and Canada -- not in China -- and even shut down its social media activity globally, for a while. That's a vivid demonstration of China's reach today: no matter where something happens, if the Chinese authorities don't like it, they now expect businesses that want to work in China to come up with a "rectification plan" for these slips, just as Marriott has done, according to China Daily. That's probably a reference to a concept in Confucian philosophy, the "rectification of names", which means making words correspond to reality -- in this case, the policies of the Chinese government. An article in Business Insider notes that other multinationals have received loud and clear the message China wishes to send by its humiliation of Marriott:

A number of international companies, including Zara, Marriott, Qantas, and Delta Air Lines, have apologized to China in the last week for listing Taiwan and Hong Kong as "countries" on their websites.

Zara, Marriott and Delta Air Lines all deleted references to these regions as countries and were publicly reprimanded by Chinese authorities, while Qantas discovered and fixed the same type of "error" during a routine review of its website.

Expect much more of this kind of thing, as a newly-confident, and increasingly arrogant, China starts to swing its weight around. It will doubtless seize on even the most trivial "hurt", real or perceived, as a pretext for humbling Western companies and thus, implicitly, their governments -- just as they did to China once upon a time.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 17 January 2018 @ 7:49pm

Using AI To Identify Car Models In 50 Million Google Street Views Reveals A Wide Range Of Demographic Information

from the you-are-what-you-drive dept

Google Street View is a great resource for taking a look at distant locations before travelling, or for visualizing a nearby address before driving there. But Street View images are much more than vivid versions of otherwise flat maps: they are slices of modern life, conveniently sorted by geolocation. That means they can provide all kinds of insights into how society operates, and what the differences are geographically. The tricky part is extracting that information. An article in the New York Times reports on how researchers at Stanford University have applied artificial intelligence (AI) techniques to 50 million Google Street View images taken in 200 US cities. Since analyzing images of people directly is hard and fraught with privacy concerns, the researchers concentrated on a proxy: cars. As an academic paper published by the Stanford team notes (pdf):

Ninety five percent of American households own automobiles, and as shown by prior work cars are a reflection of their owners' characteristics providing significant personal information.

First, the AI system had to be trained to find cars in the Google Street View images. That's easy for humans but hard for computers; the next stage of the work -- identifying exact car models -- is the reverse, something AI handles far better than most people. As another paper reporting on the research (pdf) explains:

the fine-grained object recognition task we perform here is one that few people could accomplish for even a handful of images. Differences between cars can be imperceptible to an untrained person; for instance, some car models can have subtle changes in tail lights (e.g., 2007 Honda Accord vs. 2008 Honda Accord) or grilles (e.g., 2001 Ford F-150 Supercrew LL vs. 2011 Ford F-150 Supercrew SVT). Nevertheless, our system is able to classify automobiles into one of 2,657 categories, taking 0.2 s per vehicle image to do so. While it classified the automobiles in 50 million images in 2 wk, a human expert, assuming 10 s per image, would take more than 15 y to perform the same task.
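The throughput claims quoted above are easy to sanity-check with quick arithmetic, taking the paper's per-image figures at face value (the parallelism estimate at the end is my own inference, not a number from the paper):

```python
# Sanity-check the quoted timing figures.
IMAGES = 50_000_000
AI_SECONDS_PER_IMAGE = 0.2     # quoted classification time
HUMAN_SECONDS_PER_IMAGE = 10   # quoted expert estimate

ai_days = IMAGES * AI_SECONDS_PER_IMAGE / 86_400
human_years = IMAGES * HUMAN_SECONDS_PER_IMAGE / (86_400 * 365)

print(f"single AI worker: {ai_days:.0f} days")      # ~116 days
print(f"human expert:     {human_years:.1f} years") # ~15.9 years

# A single 0.2 s/image worker would need ~116 days, so finishing in
# two weeks implies roughly 116 / 14 = 8 images processed in parallel.
parallelism = ai_days / 14
print(f"implied parallelism: ~{parallelism:.0f}x")
```

The human figure checks out exactly; the two-week machine figure only works if several classifiers run side by side, which is precisely the kind of trivial scaling that humans cannot match.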

The difference between the two weeks taken by the AI software, and the 15 years a human would need, means that it is possible to analyze much larger data collections than before, and to extract new kinds of information. This is done by using existing datasets, for example the American Community Survey, which is performed by the US Census Bureau each year, to train the AI system to spot correlations between cars and demographics. The New York Times article lists some of the results that emerge from mining and analyzing the Google Street View images, and adding in metadata from other sources:
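The correlation step can be illustrated with a toy example. The numbers and the single-feature linear model below are invented for illustration only -- the Stanford team used far richer features and models -- but the principle is the same: fit a predictor on areas where survey ground truth exists, then apply it to areas the survey never covered.

```python
# Toy version of the correlation step: fit a one-feature linear model
# predicting a demographic value (median income, in $1000s) from a
# car-derived feature (share of luxury sedans seen in a ZIP code).
# All numbers are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

luxury_share = [0.05, 0.10, 0.20, 0.30]   # per-ZIP feature from images
median_income = [40, 50, 70, 90]          # per-ZIP survey ground truth

slope, intercept = fit_line(luxury_share, median_income)

# Predict income for a ZIP code never covered by the survey.
predicted = slope * 0.15 + intercept
print(f"predicted median income: ${predicted:.0f}k")  # $60k
```

Once trained this way, the model needs only street images as input, which is what makes ZIP-code- and precinct-level predictions possible at national scale.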

The system was able to accurately predict income, race, education and voting patterns at the ZIP code and precinct level in cities across the country.

Analysis of car attributes (including miles-per-gallon ratings) found that the greenest city in America is Burlington, Vt., while Casper, Wyo., has the largest per-capita carbon footprint.

Chicago is the city with the highest level of income segregation, with large clusters of expensive and cheap cars in different neighborhoods; Jacksonville, Fla., is the least segregated by income.

New York is the city with the most expensive cars. El Paso has the highest percentage of Hummers. San Francisco has the highest percentage of foreign cars.

The researchers point out that the rise of self-driving cars with on-board cameras will produce even more street images that could be fed into AI systems for analysis. They also note that walking around a neighborhood with a camera -- for example, in a smartphone -- would allow image data to be gathered very simply and cheaply. And as AI systems become more powerful, it will be possible to extract even more demographic information from apparently innocuous street views. Although that may be good news for academic researchers, datamining offline activities clearly creates new privacy problems at a time when people are already worried about what can be gleaned from datamining their online activities.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 12 January 2018 @ 1:22pm

Chinese Internet Users Start To Rebel Against Lack Of Online Privacy

from the just-a-blip-or-the-start-of-something-bigger? dept

We recently reported how China continues to turn the online world into the ultimate surveillance system, which hardly comes as a surprise, since China has been relentlessly moving in this direction for years. What is rather more surprising is that Chinese citizens are beginning to push back, at least in certain areas. For example, The New York Times reports on an "outcry" provoked by a division of the Alibaba behemoth when it assumed that its users wouldn't worry too much if they were enrolled automatically in one of China's commercially-run tracking systems:

Ant Financial, an affiliate of the e-commerce giant Alibaba Group, apologized to users on Thursday after prompting an outcry by automatically enrolling in its social credit program those who wanted to see the breakdown [of their spending made via Ant Financial's online payment system]. The program, called Sesame Credit, tracks personal relationships and behavior patterns to help determine lending decisions.

When one of China's business leaders complained publicly about the lack of privacy in China, and how Tencent's hugely-popular WeChat program spied on users, the company's denials were met with another outcry:

Tencent said that the company did not store the chat history of users and that it would never use chat history for big data analytics. The comments were met with widespread disbelief: WeChat users have been arrested over what they've said on the app, conversations have turned up as evidence in court proceedings, and activists have reported being followed based on WeChat conversations.

Meanwhile, the third of China's Big Three Internet companies -- Baidu -- has been hit with legal action over privacy concerns, reported here by Caixin:

Baidu Inc., China’s largest search-engine operator, is being sued by a consumer-protection organization that claims it collected users' information without consent, in the latest privacy dispute involving the country's tech giants.

Two mobile apps operated by New York-listed Baidu, a search engine and a web browser, could access a user's calls, location data, messages and contacts without notifying the user, the Jiangsu Consumer Council, a government-backed consumer rights association, claimed in a statement on its website.

The Chinese government may not worry too much about these calls for more privacy provided they remain directed at companies, since they offer a useful way for citizens to express their concerns about surveillance without challenging the state. It looks happy to encourage users to demand more control over how online services use their personal data -- so long as the authorities can still access everything themselves.

As well as government acquiescence in these moves, there's another reason why Chinese companies may well start to take online privacy more seriously. An article in the South China Morning Post points out that if Chinese online giants want to move beyond their fast-saturating home market, and start operating in the US and EU, they will need to pay much more attention to privacy to satisfy local laws. As Techdirt reported, an important partnership between AT&T and Huawei, China's biggest hardware company, has just been blocked because of unproven accusations that data handled by Huawei's products might make its way back to the Chinese government.

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 11 January 2018 @ 7:35pm

Shareholder Groups Say Apple Should Do More To Address Gadget 'Addiction' Among Young People: Should It?

from the won't-somebody-think-of-the-children-even-more? dept

In an open letter to Apple, two of its major shareholders, Jana Partners and the California State Teachers' Retirement System, have raised concerns about research that suggests young people are becoming "addicted" to high-tech devices like the iPhone and iPad, and the software that runs on them. It asks the company to take a number of measures to tackle the problem, such as carrying out more research in the area, and providing more tools and education for parents to help them deal with the issue. The letter quotes studies by Professor Jean M. Twenge, a psychologist at San Diego State University, who is also working with the shareholders in an effort to persuade Apple to do more:

Professor Twenge's research shows that U.S. teenagers who spend 3 hours a day or more on electronic devices are 35% more likely, and those who spend 5 hours or more are 71% more likely, to have a risk factor for suicide than those who spend less than 1 hour.

Other quoted research found:

The average American teenager who uses a smart phone receives her first phone at age 10 and spends over 4.5 hours a day on it (excluding texting and talking). 78% of teens check their phones at least hourly and 50% report feeling "addicted" to their phones.

According to the letter, at least part of the solution needs to come from Apple:

we note that Apple's current limited set of parental controls in fact dictate a more binary, all or nothing approach, with parental options limited largely to shutting down or allowing full access to various tools and functions. While there are apps that offer more options, there are a dizzying array of them (which often leads people to make no choice at all), it is not clear what research has gone into developing them, few if any offer the full array of options that the research would suggest, and they are clearly no substitute for Apple putting these choices front and center for parents.

The Apple shareholders behind the letter admit that it is not entirely altruistic:

we believe that addressing this issue now will enhance long-term value for all shareholders, by creating more choices and options for your customers today and helping to protect the next generation of leaders, innovators, and customers tomorrow.

Building on this, they also shrewdly point out that Apple has little to fear from moves to give parents more control over their children's use of Apple products:

Doing so poses no threat to Apple, given that this is a software (not hardware) issue and that, unlike many other technology companies, Apple's business model is not predicated on excessive use of your products. In fact, we believe addressing this issue now by offering parents more tools and choices could enhance Apple's business and increase demand for its products.

That's in contrast to Facebook or Google, for example, both of which want people to use their respective products as much as possible so as to maximize the opportunities for advertising. Apple has already responded with a fairly generic reply, published on the iMore site:

we are constantly looking for ways to make our experiences better. We have new features and enhancements planned for the future, to add functionality and make these tools even more robust.

Unless that functionality goes well beyond the perfunctory, it is unlikely to satisfy the shareholder groups, who presumably want the "full array of options" they mention. The danger for Apple is that a limited response might lead to it being swept up in the growing backlash against Silicon Valley and its products, evident in a number of recent articles. One thing Apple could do is to make it easier for third parties to write apps that address the problem in a thoroughgoing way -- something its tightly-controlled ecosystem may make harder than for Android.

A broader issue is how serious the problem of gadget "addiction" in children really is -- and how it should be tackled. Clearly, parents play a key role here, but what about the hardware and software companies that profit from it? To what extent should they provide fine-grained parental controls -- should social media, for example, offer parents the capability to limit the number and timing of daily posts made by their children, and would that even help?

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 8 January 2018 @ 7:39pm

The Stasi's Tiny Torn-Up Analog Files Defeat Modern Digital Technology's Attempts To Re-Assemble East Germany's Surveillance Records

from the too-hard-for-today's-hardware dept

It is nearly 30 years since the wall separating East and West Berlin came down, and yet work is still going on to deal with the toxic political legacy of East Germany. As Techdirt readers are well aware, one of the defining characteristics of the regime in East Germany was the unprecedented -- for the time, at least -- level of surveillance inflicted on citizens by the Stasi (short for Staatssicherheitsdienst, or State Security Service). This led to the creation of huge archives holding dossiers about millions of people.

As it became clear that East Germany's government would fall, and that its long-suffering citizens would demand to know who had been spying on them over the years, Stasi officers began to destroy the most incriminating documents. But there were so many files -- a 2008 Wired article about them says they occupied 100 miles of shelving -- that the shredding machines they used started to burn out. Eventually, Stasi agents were reduced to tearing pages by hand -- some 45 million of them, ripping them into around 600 million scraps of paper.

After thousands of bags holding the torn sheets were recovered, a team working for the Stasi records agency, the body responsible for handling the mountain of paper left behind by the secret police, began assembling the pages manually. It was hoped that the re-assembled documents would shed further light on the Stasi and its deeper secrets. But it was calculated that it would take 700 years to deal with all the scraps of paper by hand. A computerized approach was devised by the Fraunhofer Institute, best-known for devising the MP3 format, and implemented following a pilot project. After some initial successes, the program has run into problems, as the Guardian reports:

A so-called ePuzzler, working with an algorithm developed by the Fraunhofer Institute and costing about €8m of [German] federal funds, has managed to digitally reassemble about 91,000 pages since 2013. However, it has recently run into trouble.

For the last two years, the Stasi records agency has been waiting for engineers to develop more advanced hardware that can scan in smaller snippets, some of which are only the size of a fingernail.

The ePuzzler works by matching up types of paper stock, typewriter fonts, or the outline of the torn-up page. It has struggled with hand-written files that were folded before being torn, leaving several snippets with near-identical outlines.
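The outline-matching idea -- and why near-identical tears defeat it -- can be sketched in a few lines. The edge-profile representation and scoring below are my own simplification for illustration, not Fraunhofer's actual algorithm:

```python
# Toy edge matching: describe each fragment's torn edge as a list of
# depth measurements. Two fragments fit together if one's right edge
# profile (approximately) matches the other's left edge profile.

def mismatch(right_edge, left_edge):
    """Sum of squared differences between complementary edge profiles."""
    return sum((r - l) ** 2 for r, l in zip(right_edge, left_edge))

def best_match(fragment, candidates):
    """Return the candidate whose left edge best fits fragment's right."""
    return min(candidates,
               key=lambda c: mismatch(fragment["right"], c["left"]))

piece = {"id": "p1", "right": [2, 5, 3, 7, 4]}
candidates = [
    {"id": "p2", "left": [2, 5, 3, 7, 4]},   # true neighbour
    {"id": "p3", "left": [1, 6, 2, 7, 5]},   # clearly different tear
    {"id": "p4", "left": [2, 5, 3, 7, 5]},   # near-identical tear
]

match = best_match(piece, candidates)
print(match["id"])  # the true neighbour p2 scores a perfect 0...

# ...but p4 scores almost as well (mismatch 1 vs 0). Pages folded and
# torn together produce many such near-identical outlines, which is
# why purely geometric matching breaks down on those files.
```

In the easy cases the right answer wins by a wide margin; for folded, hand-torn pages the score gap between the true neighbour and false candidates collapses, which is exactly the struggle the Guardian describes.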

While the hardware engineers try to come up with a suitable scanner that can handle these tiny fragments, a small team continues to match up the more crudely ripped pages manually. Inevitably, some people will be thinking: "If only the Stasi had used blockchain, all these problems could have been avoided..."

Follow me @glynmoody on Twitter, and +glynmoody on Google+


Posted on Techdirt - 8 January 2018 @ 12:03pm

Want Anybody's Personal Details From Aadhaar, India's Billion-Person Identity Database? Yours For $8

from the Aadhaar-admin-accounts-also-available-on-request dept

We've been writing about the world's largest biometric database, India's Aadhaar, since July 2015. Over 1.1 billion people have now been enrolled, and assigned an Aadhaar number and card, which represents 99.9% of India's adult population. There are currently around 40 million authentications every day, a number that will rise as Aadhaar becomes inescapable for every aspect of daily life in India, assuming it survives legal challenges. That scale necessarily entails a huge infrastructure to handle enrollment and authentication. So it will come as no surprise to Techdirt readers that you can obtain unauthorized access to the Aadhaar system very easily, and for very little cost. As the Indian newspaper The Tribune revealed:

It took just Rs 500 [about $8], paid through Paytm [an Indian online payment system], and 10 minutes in which an "agent" of the group running the racket created a "gateway" for this correspondent and gave a login ID and password. Lo and behold, you could enter any Aadhaar number in the portal, and instantly get all particulars that an individual may have submitted to the UIDAI (Unique Identification Authority of India), including name, address, postal code (PIN), photo, phone number and email.

What is more, The Tribune team paid another Rs 300 [$4.75], for which the agent provided "software" that could facilitate the printing of the Aadhaar card after entering the Aadhaar number of any individual.

Given the repeated assurances by the UIDAI that the Aadhaar database was completely secure, this is big news, and led to some breathless damage limitation by the Indian authorities on Twitter. The UIDAI explained that: "Some persons have misused demographic search facility, given to designated officials to help residents who have lost Aadhaar/Enrollment slip to retrieve their details"; and: "There has not been any data breach of biometric database which remains fully safe & secure with highest encryption at UIDAI and mere display of demographic info cannot be misused without biometric". Although it may be true that this is not a biometric data breach, it nonetheless reveals a serious vulnerability in the system's design, and on a vast scale. According to the original article in The Tribune, more than 100,000 "village-level enterprise operators", hired to help with Aadhaar enrollment, have been offering this kind of unauthorized access to the database. In fact, the problem seems to be even more serious than simply providing login credentials to thousands of people. Here's what another Indian site discovered:

Following up on an investigation by The Tribune, The Quint found that completely random people like you and me, with no official credentials, can access and become admins of the official Aadhaar database (with names, mobile numbers, addresses of every Indian linked to the UIDAI scheme). But that's not even the worst part. Once you are an admin, you can make ANYONE YOU CHOOSE an admin of the portal. You could be an Indian, you could be a foreign national, none of it matters -- the Aadhaar database won't ask.

Even if biometric data is not involved, it's hard to see how UIDAI could claim that these aren't breaches of the database, or deny that the entire Aadhaar system is seriously compromised. It's almost inevitable that the security of an important database system will be defeated eventually in some way, since the rewards are by definition so high. The fundamental problem with Aadhaar is its underlying intent -- to create a single, giant database with key personal information about a billion people that can be accessed very frequently and very widely. That's never going to be safe, as the inevitable future breaches will confirm.


Posted on Techdirt - 4 January 2018 @ 10:45am

China Plans To Turn Country's Most Popular App, WeChat, Into An Official ID System

from the won't-that-be-convenient? dept

In one respect at least, China's embrace of digital technology is far deeper and arguably more advanced than that of the West. Mobile phones are not only ubiquitous, but they are routinely used for just about every kind of daily transaction, especially for those involving digital payments. At the heart of that ecosystem sits Tencent's WeChat program, which has around a billion users in China. It has evolved from a simple chat application to a complete platform running hugely popular apps that are now an essential part of everyday life for most Chinese citizens. The centrality of WeChat makes the following move, reported here by the South China Morning Post, entirely logical:

The government of Guangzhou, capital of the southern coastal province of Guangdong, started on Monday a pilot programme that creates a virtual ID card, which serves the same purpose as the traditional state-issued ID cards, through the WeChat accounts of registered users in the city's Nansha district, according to a report by state news agency Xinhua.

It said that trial will soon cover the entire province and further expand across the country from January next year.

The Wall Street Journal has some details of how people register:

Under the pilot program, funded by the National Development and Reform Commission, people create a basic identity card by scanning an image of their face into a WeChat mini program, reading aloud four numbers that pop up on the screen and entering their identification number as well as other information.
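The challenge-response step in that registration flow, reading back four random digits, can be sketched in a few lines. This is a hedged illustration of the general liveness-check pattern the report describes, not WeChat's actual API; every function name here is invented, and a real system would also match the face scan against the ID photo on record.

```python
# Minimal sketch of a spoken-digit liveness check (illustrative only,
# not WeChat's implementation): the server issues four random digits,
# the user reads them aloud, and the transcription is checked against
# the challenge.
import secrets

def issue_challenge(n_digits=4):
    """Generate the digits the user must read aloud."""
    return "".join(str(secrets.randbelow(10)) for _ in range(n_digits))

def verify_liveness(challenge, transcribed_speech):
    """Accept only if the transcribed speech matches the challenge,
    which makes a replayed or pre-recorded video much harder to use."""
    return transcribed_speech.strip() == challenge

challenge = issue_challenge()
print(len(challenge))  # prints 4
print(verify_liveness(challenge, " " + challenge + " "))  # prints True
```

The point of the randomly generated digits is that a static photo or an old recording cannot answer a challenge chosen at registration time.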

It obviously makes a lot of sense to use the WeChat platform to provide a virtual identity card. It's convenient for users who already turn to WeChat apps to handle most aspects of their lives. It means they don't need to carry around a physical ID card, but can let the software handle the necessary authentication when needed. That's also good news for businesses that want to confirm a person's identity.

But it's also an extremely powerful way for the Chinese government to implement its real-name policy for online activities, something that it has so far failed to push through. It will mean that the daily posts and transactions carried out using a mobile will not only be available to the Chinese authorities, but will be unambiguously linked to an individual once such digital IDs become obligatory for WeChat users, as they surely will. That, in its turn, will be very handy for implementing the proposed "citizen score" framework. Once this has been rolled out nationwide, it will form one of the most effective means of control available to the Chinese government, especially if combined with a similarly comprehensive plan to collect everyone's DNA.


Posted on Free Speech - 3 January 2018 @ 7:50pm

Revealed: Vietnam's 10,000-Strong Internet Monitoring Force, Tasked With Stamping Out 'Wrongful Views'

from the whatever-happened-to-occupying-the-moral-high-ground? dept

Over the years, Techdirt has published quite a few stories about Vietnam's moves to stifle dissent online. On Christmas Day, Colonel General Nguyen Trong Nghia, deputy chairman of the General Political Department of the People's Army of Vietnam, revealed that the country had secretly created a massive Internet monitoring unit called "Force 47":

Nghia said the special force tasked with combating wrongful information and anti-state propaganda is called the Force 47, named after Directive No. 47 that governs its foundation.

The team currently has more than 10,000 members, who are "the core fighters" in cyberspace.

The three-star general underlined that members of this team are "red and competent," implying that they have both technology expertise and good political ideals in addition to personality.

As Tuoi Tre News reports, Force 47 is tasked with fighting "wrongful views". Bloomberg points out some recent moves by the Vietnamese authorities to police the online world:

Facebook this year removed 159 accounts at Vietnam's behest, while YouTube took down 4,500 videos, or 90 percent of what the government requested, according to VietnamNet news, which cited Minister of Information and Communications Truong Minh Tuan last week. The National Assembly is debating a cybersecurity bill that would require technology companies to store certain data on servers in the country.

The Wall Street Journal notes that heavy sentences have been imposed on people for using the Internet to spread some of those "wrongful views":

In recent months, the country has increased the penalties for anyone using Facebook as a platform to attack the government. In November, a young blogger was given a seven-year prison sentence for "spreading propaganda against the state," while a well-known environmentalist, Nguyen Ngoc Nhu Quynh, was handed a 10-year sentence on the same charges in June.

Vietnam is hardly alone in wanting to censor online content on a massive scale. As well as the obvious example of China, Germany, too, now requires Internet companies to delete "hate speech". In addition, the UK is threatening to impose tax penalties on companies that don't take down "extremist" material. In order to meet these global demands for rapid and even pre-emptive removal of material, the leading online companies are taking on thousands of people as in-house censors. Both Google and Facebook have promised to increase their "safety" teams to 20,000 people. Against that background, it's hard for the West to condemn Vietnam's latest moves without appearing hypocritical.

