Glyn Moody’s Techdirt Profile


About Glyn Moody (Techdirt Insider)

Posted on Techdirt - 20 September 2019 @ 3:39pm

Another Nail In the Coffin Of Corporate Sovereignty, As Massive Asian Trade Deal RCEP Nears Completion Without It

from the ISDS,-what-is-it-good-for? dept

Remember RCEP? The Regional Comprehensive Economic Partnership is a massive trade deal being negotiated by sixteen Asia-Pacific nations -- the ten ASEAN countries plus partners including China and India. Although still little-known, it has been grinding away in the background, and is drawing closer to a final agreement. Almost exactly a year ago Techdirt noted that there were some interesting rumors that corporate sovereignty -- officially known as investor-state dispute settlement (ISDS) -- might be dropped from the deal. A story in The Malaysian Reserve confirms that is the case:

After missing several deadlines, member countries of the proposed Regional Comprehensive Economic Partnership (RCEP) have agreed to exclude the investor-state dispute settlement (ISDS) mechanism, a move that might expedite conclusion of the talks by the end of the year.

[Malaysia's] Ministry of International Trade and Industry (MITI) Minister Datuk Darell Leiking … said all RCEP member states -- 10 Asean countries plus six free trade agreement (FTA) partners namely Australia, China, India, Japan, New Zealand and South Korea -- have decided to drop the ISDS, but the item could be brought up again within two years of the agreement's ratification.

So corporate sovereignty is definitely out of the initial agreement, but could, theoretically, be brought back within two years of the agreement's ratification if every participating nation agrees. Despite that slight loophole, this is a significant blow against the entire concept of ISDS. It's part of a larger trend to drop corporate sovereignty that has been evident for some time now. That still leaves plenty of toxic ISDS clauses in older investment treaties and trade deals, but the tide is definitely turning.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 18 September 2019 @ 3:35pm

Australian Aboriginal Flag Mess Is Getting Worse -- All Thanks To Copyright

from the don't-give-them-ideas dept

One of the longer-running sagas here on Techdirt concerns the disgraceful situation regarding the flag of Australia's Aboriginal peoples. Mike first wrote about this in 2010, and again in June of this year. The problem is that what is now widely regarded as the flag of Australia's First Nations was designed fairly recently by a private individual, rather than by a group representing those peoples or an official Australian government body. The designer, Harold Thomas, signed a licensing deal with a clothing company, Wam Clothing, which imposes hefty fees for the use of the design, even on non-profit health organizations giving away items that bear the flag:

In August, Wam Clothing charged the Indigenous Wellbeing Centre in Bundaberg AU$2,200 [about $1500] to use the flag on T-shirts it had given to patients who came into the clinic for a preventive health check.

According to an article in the Guardian, the licensing agreement between Wam Clothing and Harold Thomas specified that the design may be used by Aboriginal people for non-profit purposes. However:

Wam Clothing has said the terms of any licence agreements are confidential and legally privileged and only for the benefit of the parties to that agreement. They said the documents seen by Guardian Australia may have been fraudulently created.

Wam Clothing claims that it is the exclusive worldwide licensee for the use of the Aboriginal flag not just on clothing, but also on digital media. To prove the point:

In mid-August, the company issued a "cease and desist" notice to the creator of a Facebook discussion page called "New Aboriginal flag or flags discussion" because its "use of the digital image of the Aboriginal flag on social media platforms are [sic] being used in a negative light".

Copyright and secret deals have made the situation so ridiculous that in June 2019 the Australian Senate passed a motion calling on the national government to do all it could to "ensure that all First Nations peoples and communities can use the flag whenever they want without cost or the need for consent". More recently, the Australian MP Linda Burney called for the government there to sort things out:

"This situation is untenable," Burney said. "It’s unthinkable that the use of the Aboriginal flag is now governed by a secret agreement at the discretion of a for-profit company.

"It is a discredit to the flag's history and the strength it represents."

This is not just about copyright, or the rights of Australia's Aboriginal peoples. For Burney, the issue is personal:

"Like so many proud Aboriginal people, I've got a tattoo of the flag. What are they going to do? Try and cut it out of me?"

Probably best not to give them ideas, Linda…


Posted on Techdirt - 13 September 2019 @ 3:37pm

Denmark Releases 32 Prisoners Convicted Because Of Flawed Mobile Phone Tracking Data

from the but-how-many-more-to-come? dept

A few weeks ago, Techdirt wrote about Denmark reviewing 10,000 court verdicts because of errors in mobile phone tracking data that was offered as evidence in those cases. At that time, it wasn't clear how many of those verdicts were affected by the unreliable data. However, the Guardian reports that 32 people have already been freed. Given the large number of cases involved, it seems unlikely that many have been reviewed in such a short space of time. If so, it is possible that quite a few more verdicts will be overturned, and more people released. Companies providing mobile phone services in Denmark are naturally keen to distance themselves from this mess. Jakob Willer, speaking on behalf of the country's telecoms industry association, said it was not their job to provide evidence:

"We should remember: data is created to help deliver telecom services, not to control citizens or for surveillance," Willer said. He conceded it could be valuable to police, but insisted its primary purpose was to facilitate communication between users.

That's an important point. If the authorities wish to use this kind of data, they need to take into account that it was never designed to track people, and therefore has limitations as evidence. Fortunately, Denmark's embarrassing discovery that an unknown number of the more than 10,000 verdicts under review may rest on unreliable evidence has been something of a wake-up call for the country's lawyers. Karoline Normann, the head of the Danish law society's criminal law committee, told Agence France-Presse:

"This situation has changed our mindset about cellphone data. We are probably going to question it as we normally question a witness or other types of evidence, where we consider circumstances like who produced the evidence, and why and how."

It's troubling that it didn't occur to the legal profession to do that before. Just because information comes from high-tech sources doesn't mean it is infallible or that it can't be challenged.


Posted on Techdirt - 4 September 2019 @ 3:42am

Getting Upload Filters Wasn't Enough: EU Copyright Industry Starts Stealth Campaign To Demonize Internet Companies Even More

from the won't-somebody-think-of-the-children? dept

The EU Copyright Directive was supposed to bring copyright into the digital age. Instead, it turned into an attack on the Internet ecosystem by companies that once dominated analog media, and which are still struggling to accept the arrival of online services with a global reach. For example, the upload filters that are unavoidable under Article 13/17 of the Directive are really directed against Google, which ironically won't be much inconvenienced by them. Ordinary people, by contrast, may find their perfectly legal uploads forbidden without explanation. You might think the EU copyright companies would be pretty satisfied now they have this powerful new right to block uploaded materials using automated filters as their proxy, without needing a judge's approval. Not a bit of it. The German Web site Netzpolitik has obtained a leaked document revealing a coordinated campaign by copyright companies to hammer home the message that Internet companies are today's baddies:

politicians at national and European level, as well as officials and judges who have to make decisions and judgments against the five digital monopolists Google, Facebook, Amazon, Apple and Microsoft, need to be supported indirectly. The successful enforcement of our rights as broadcasters and press publishers depends on precisely these authorities and court decisions (Federal Ministry of Justice and Consumer Protection, EU Competition Commission and [Court of Justice of the European Union], Regional and Higher Regional Courts). The importance of continued information to the wider public has been demonstrated by the adoption of the EU Copyright Directive.

Here are the specific aims and how the copyright companies hope to achieve them:

Objective: To influence the formation of public opinion on dealing with digital monopolists and the resulting indirect training of officials, politicians, judges and decision-makers to make judgments and decisions that ensure that the digital monopolists once more comply with the law. That is: antitrust law, data protection law, laws protecting children and adolescents, tax law, equality laws and the protection of intellectual property.

Selected path: The concerns of originators and their copyright holders, composers, music and press publishers, authors as well as broadcasters and their individual authors, are mentioned, but not highlighted. This problem is presented as one of many, perhaps even larger ones. Only in this way we avoid the comment of critics that we are only concerned with the economic interests of our media companies, rights owners and authors.

So the EU copyright companies want to pretend the campaign is about tackling society's big challenges -- and not about boosting their own profits. After all, that lie worked well enough when used during the vicious lobbying in support of the worst aspects of the Copyright Directive, so why not try it again, but on a larger scale? The new campaign has its own Web site, "Fair Net", which turns the "OK Google" command into a "Not OK" theme. For example:

Not OK, that profits are more important than freedom of speech and the press.

That's rather rich coming from the companies that helped ram through Article 13/17 and Article 11 in the EU Copyright Directive, which do precisely that -- putting copyright companies' profits ahead of freedom of speech and the press. The site's final argument is a classic:

Not OK, that the Internet giants are not doing enough to protect the little ones.

Resorting to this tired old kind of emotional blackmail pretty much sums up this shoddy campaign. It shows that there is no dirty trick that the copyright industry won't stoop to. And it confirms that no matter how many special privileges they are given to ride roughshod over the rights of citizens in an attempt to prop up their outdated business models, they always want more.


Posted on Techdirt - 30 August 2019 @ 12:08pm

You Know That Mobile Phone Tracking Data You Used As Evidence In Over 10,000 Court Cases? Turns Out Some Of It Was Wrong, But We're Not Sure Which Yet

from the sorry-about-that dept

As many have pointed out, our mobile phones are the perfect surveillance device. Most people carry them around -- voluntarily -- while they are awake. Put this together with the fact that mobile phones have to connect to a nearby transmitter in order to work, and you end up with a pretty good idea of where the person using the device is throughout the day. No surprise, then, that police and prosecutors around the world turn routinely to phone tracking data when they are investigating cases. But as the New York Times reports, there can be serious problems with simply assuming the results are reliable. The Danish authorities have to review over 10,000 court verdicts because of errors in mobile phone tracking data that was offered as evidence in those cases. In addition, Denmark's director of public prosecutions has ordered a two-month halt in the use of this location data in criminal cases while experts try to sort out the problems:

The first error was found in an I.T. system that converts phone companies' raw data into evidence that the police and prosecutors can use to place a person at the scene of a crime. During the conversions, the system omitted some data, creating a less-detailed image of a cellphone’s whereabouts. The error was fixed in March after the national police discovered it.

In a second problem, some cellphone tracking data linked phones to the wrong cellphone towers, potentially connecting innocent people to crime scenes, said Jan Reckendorff, the director of public prosecutions.

It's not clear yet how serious these blunders will turn out to be -- it might only be a few, relatively minor cases. Or it might involve a large number of more serious crimes. Either way, it's a salutary reminder that however useful a technology might appear for the purposes of solving crimes -- and however straightforward its application seems -- things can and will go wrong. There's another approach that some people tend to view as infallible: the use of DNA sequencing techniques to identify suspects from material left at the scene of the crime. DNA is undoubtedly a powerful way of pulling information from tiny amounts of material, but there are a number of ways in which it can mislead badly. The same applies to mobile phone location data, as the Danish experience usefully underlines.


Posted on Techdirt - 27 August 2019 @ 7:30pm

Do Citizens Have A Right To See The Algorithms Used By Publicly-Funded Software?

from the public-money-means-public-code dept

In 2009, the Spanish government brought in a law requiring electricity bill subsidies for some five million poor households in the country. The so-called Bono Social de Electricidad, or BOSCO, was not popular with energy companies, which fought against it in the courts. Following a 2016 ruling, the Spanish authorities introduced new, more restrictive regulations for BOSCO, and potential beneficiaries had to re-register by 31 December 2018. In the end, around 1.5 million households were approved, almost a million fewer than the 2.4 million who had benefited from the previous scheme, and a long way from the estimated 4.5 million who fulfilled the criteria to receive the subsidy.

The process of applying for the subsidy was complicated, so a non-profit organization monitoring public authorities, Civio, worked with the National Commission on Markets and Competition to produce an easy-to-use Web page that allowed people to check their eligibility for BOSCO. Because of discrepancies between what the Civio service predicted, and what the Spanish government actually decided, Civio asked to see the source code for the algorithm that was being used to determine eligibility. The idea was to find out how the official algorithm worked so that the Web site could be tweaked to give the same results. As Civio wrote in a blog post, that didn't go so well:

Unfortunately both the government and the Council of Transparency and Good Governance denied Civio access to the code by arguing that sharing it would incur in a copyright violation. However, according to the Spanish Transparency Law and the regulation of intellectual property, work carried out in public administrations is not subjected to copyright.

Civio was not the only one to have problems finding out why details of the algorithm could not be released. The non-profit research and advocacy organization AlgorithmWatch also asked several times exactly whose copyright would be violated if the source code of the governmental BOSCO software were shared, but without success. The fact that code which is apparently publicly funded, and thus not subject to copyright, is nonetheless being withheld on copyright grounds is one unsatisfactory feature of the BOSCO saga. Another is that secret government algorithms are being used to make important decisions about citizens. As Civio says:

"Being ruled through secret source code or algorithms should never be allowed in a democratic country under the rule of law," highlights Javier de la Cueva, Civio's lawyer and trustee, in the lawsuit. "The current interpretation of the law by the [Council of Transparency and Good Governance] will allow public administration to develop algorithms hidden from public scrutiny," he warns. For this reason, we appealed the refusal of the Transparency Council before court.

There is another issue beyond the lack of transparency of governmental algorithms that impact people's lives. Supporters of open access rightly point out that it is only fair for the public to have free access to academic research they have paid for. Similarly, it seems only fair for the public to have free access to the source code of software written and used by the government, since they have paid for that too. Or as a site created by the Free Software Foundation Europe on precisely this issue puts it: "If it is public money, it should be public code as well".


Posted on Techdirt - 22 August 2019 @ 8:00pm

What3words Is A Clever Way Of Communicating Position Very Simply, But Do We Really Want To Create A Monopoly For Location Look-ups?

from the word-in-your-ear dept

The BBC News site has one of those heart-warming stories that crop up periodically, about how clever new technology averted a potentially dangerous situation. In this case, it describes how a group of people lost in a forest in England were located by rescue services. The happy ending was thanks to the What3words (W3W) app, which they managed to download following a suggestion from the police when they phoned for help. W3W's creators have divided the world up into 57 trillion virtual squares, each measuring 3m by 3m (10ft by 10ft), and then assigned each of those squares a unique "address" formed by three randomly-assigned words, such as "mile.crazy.shade". The idea is that it's easier to communicate the three words the app generates for your position than to read out your exact GPS latitude and longitude as a string of numbers. It's certainly a clever approach, but there are a number of problems, many of which were discussed in a fascinating post by Terence Eden earlier this year. The most serious one is that the system is not open:

The algorithm used to generate the words is proprietary. You are not allowed to see it. You cannot find out your location without asking W3W for permission.

If you want permission, you have to agree to some pretty long terms and conditions. And understand their privacy policy. Oh, and an API agreement. And then make sure you don't infringe their patents.

You cannot store locations. You have to let them analyse the locations you look up. Want to use more than 10,000 addresses? Contact them for prices!

It is the antithesis of open.
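To make the mechanics concrete, here is a toy sketch of how a grid-to-words scheme of this general kind might work. W3W's real algorithm, word lists, and cell numbering are proprietary and unpublished, so everything below -- the word list, the cell-size arithmetic, the mixed-radix encoding -- is invented purely for illustration:

```python
# Toy grid-to-words encoder, illustrating the general idea behind
# schemes like What3words. The real algorithm is proprietary, so the
# word list, cell indexing, and encoding here are all invented.

WORDS = ["mile", "crazy", "shade", "river", "stone", "cloud", "amber", "pine"]

# Roughly 3 m expressed in degrees of latitude; a real system would
# also have to correct for longitude lines converging at the poles.
CELL_DEG = 3 / 111_320

def to_cell(lat: float, lon: float) -> int:
    """Map a lat/lon pair to a single integer index of a ~3m grid cell."""
    row = int((lat + 90) / CELL_DEG)
    col = int((lon + 180) / CELL_DEG)
    cols_per_row = int(360 / CELL_DEG)
    return row * cols_per_row + col

def cell_to_words(cell: int) -> str:
    """Write the cell index as three 'digits' in base len(WORDS).

    With only 8 words this toy list collides constantly; covering
    57 trillion squares needs a vocabulary of roughly 40,000 words,
    since 40,000 ** 3 is about 6.4e13.
    """
    n = len(WORDS)
    return ".".join(WORDS[(cell // n ** i) % n] for i in (2, 1, 0))
```

Two of Eden's criticisms fall straight out of a design like this: adjacent cells get unrelated word triples (the index changes by one, but the "digits" look nothing alike), and interoperable addresses require the exact same word list and cell numbering, which is precisely what W3W keeps proprietary.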

Another issue is the fact that the physical locations of addresses are changing in some parts of the world:

Perhaps you think this is an edge case? It isn't. Australia is drifting so fast that GPS can't keep up.

How does W3W deal with this? Their grid is static, so any tectonic activity means your W3W changes.

Each language has its own list of words, and there's no simple way to convert between them for a given location. Moreover, there is no continuity in the naming between adjacent squares, so you can't work out what nearby W3W addresses are. Fortunately, there are some open alternatives to W3W, many of them listed on a page put together by the well-known OpenStreetMap (OSM) group. OSM also points out the main danger if W3W is widely used -- Mongolia has already adopted it as an official addressing system for the country:

What3words is fairly simple from a software point of view, and is really more about attempting establish a standard for location look-ups. It will only succeed through the network effect of persuading many people to adopt and share locations. If it does succeed, then it also succeeds in "locking in" users into the system which they have exclusive monopoly over.

Given that problem, it seems questionable that, according to the BBC story, the UK police are urging "everyone to download a smartphone app they say has already saved several lives". Since when has it been the police's job to do the marketing for companies? Moreover, in many emergencies W3W may not be needed. Eden mentions a scenario described in a W3W press release:

1. Person dials the emergency services
2. Person doesn't know their location
3. Emergency services sends the person a link
4. Person clicks on link, opens web page
5. Web page geolocates user and displays their W3W location
6. Person reads out their W3W phrase to the emergency services

Here's the thing... If the person's phone has a data connection -- the web page can just send the geolocation directly back to the emergency services! No need to get a human to read it out, then another human to listen and type it in to a different system.

There is literally no need for W3W in this scenario. If you have a data connection, you can send your precise location without an intermediary.

That seems to have been the case for the people who were lost in the forest: since they were able to download the W3W app, as suggested by the police, a Web page could have sent their geolocation to the emergency services directly. Maybe that boring technical detail is something the BBC should have mentioned in its story, along with all the heart-warming stuff.


Posted on Techdirt - 20 August 2019 @ 6:09am

It's On: Details Emerge Of Polish Government's Formal Request For Top EU Court To Throw Out Upload Filters

from the worth-a-try dept

Earlier this year, Techdirt wrote about an intriguing tweet from the account of the Chancellery of the Prime Minister of Poland, which announced: "Tomorrow #Poland brings action against copyright directive to CJEU". The hashtags for the tweet made clear what Poland was worried about: "#Article13 #Article17". However, at that time, no details were forthcoming about this potentially important legal move. Disappointingly, nothing more was heard about this unexpected development -- until now. A notice in the Official Journal of the European Union includes the following: "Case C-401/19: Action brought on 24 May 2019 -- Republic of Poland v European Parliament and Council of the European Union". The corresponding entry indicates that the Polish government believes that the upload filters required by Article 13/17 represent an "infringement of the right to freedom of expression and information" guaranteed by Article 11 of the Charter of Fundamental Rights of the European Union:

The Republic of Poland claims specifically that the imposition on online content-sharing service providers of the obligation to make best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information (point (b) of Article 17(4) of [EU Copyright] Directive 2019/790) and the imposition on online content-sharing service providers of the obligation to make best efforts to prevent the future uploads of protected works or other subject-matter for which the rightsholders have lodged a sufficiently substantiated notice (point (c), in fine, of Article 17(4) of Directive 2019/790) make it necessary for the service providers -- in order to avoid liability -- to carry out prior automatic verification (filtering) of content uploaded online by users, and therefore make it necessary to introduce preventive control mechanisms. Such mechanisms undermine the essence of the right to freedom of expression and information and do not comply with the requirement that limitations imposed on that right be proportional and necessary.

Nothing new there, of course -- it's what Techdirt and many others pointed out repeatedly before the Directive was passed. But what is significant is that this time it is the Polish government that is making this statement, and in a complaint to the EU's highest court, the Court of Justice of the European Union (CJEU). As a previous post explained, some are of the view that the key importance of Poland's legal action is that it requires the CJEU to consider the questions raised. That will necessarily include whether upload filters are "proportional and necessary" as a response to the uploading of unauthorized copies by members of the public.

As to the remedies, the Polish government ideally wants points (b) and (c) of the following section of Article 13/17 cancelled:

If no authorisation is granted, online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter, unless the service providers demonstrate that they have:

(a) made best efforts to obtain an authorisation, and

(b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information; and in any event

(c) acted expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from their websites, the notified works or other subject matter, and made best efforts to prevent their future uploads in accordance with point (b).

If, however, the CJEU decides it is not possible to excise just those parts, the Polish government has a fallback position: it asks for Article 13/17 to be annulled in its entirety. It's too early to say whether Poland's request stands any chance of being granted. But it would certainly be rather fun watching the copyright industry go into meltdown if it saw all its lobbying for upload filters undone at a stroke.


Posted on Techdirt - 15 August 2019 @ 7:04pm

Kenyan Government Risks Squandering The Long-Term Potential Of Mobile Transactions In The Hope Of A Little Extra Tax Revenue

from the laffer-curve-is-no-joke dept

Back in October last year, Techdirt wrote about some unfortunate developments taking place in the African digital world. Governments across the continent are bringing in levies and taxes on Internet use, making it more expensive and thus harder for ordinary people to access the Internet at a time when the digital ecosystem in Africa is starting to take off in a big way. In February of this year, we reported on some evidence that the social media tax in Uganda was indeed causing fewer people there to use the Internet, and the total value of mobile transactions to drop. Quartz Africa has a post about a new report from Brookings on the steep rise in taxes on mobiles and data in Kenya, and the harms it is likely to cause. Here's how things have gone from bad to worse:

In June 2009, the Kenyan government, recognizing the importance of enhancing access to mobile telephony, exempted mobile handsets from the VAT. This move increased the affordability of the handsets and made possible the more than 200 percent increase in handset purchases and a 50 percent to 70 percent increase in penetration rates (Strusani and Solomon, 2011). In turn, the use of mobile phones and related services such as mobile money deepened, and Kenya's total mobile subscribers almost doubled from 17.4 million in June 2009 to 29.8 million by March 2013. Then, the VAT Act 2013 paved the way for the taxation of previously exempted goods such as mobile phones, computer hardware, and software. Now, mobile phone users in Kenya had to pay a 16 percent VAT [Value-Added Tax] on the purchase of a mobile handset in addition to the 10 percent excise tax on airtime, which had been introduced earlier in financial year (FY) 2002/03. Further, in FY 2013/14, the government introduced an excise tax on retail financial transactions at a rate of 10 percent. The Finance Act 2018 then increased the excise tax on money transfer services by banks from 10 percent to 20 percent, on telephone services (airtime) from 10 percent to 15 percent, and introduced a 15 percent excise tax on internet data services and fixed line telephone services.

The report notes the many benefits of promoting mobile payments -- things like serving as an economic driver, and encouraging savings and credit. Particularly important for developing countries is how mobile-based services increase financial inclusion, providing access to banking for even the poorest sectors of society, which can help to reduce overall levels of poverty.

The authors of the study point out that tax revenues tend to follow a Laffer curve: as rates increase, revenue from mobiles and data use may at some point begin to decline, making such moves self-defeating. Moreover, if people start to turn back to cash to avoid the increased costs of mobile payments, the benefits of digital transactions are lost, including the ability for governments to track and tax transactions more easily, leading to further revenue losses. The report concludes:

The tax policy and design of taxes on retail electronic transactions as well as bank transactions has the potential to reverse the gains that technology has pushed Kenya to the frontier of electronic payments and financial inclusion and back to cash preference and financial exclusion for low-income earners.
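The Laffer-curve mechanism behind that warning can be sketched with a toy model. The numbers below are invented purely to show the shape of the argument: revenue is the rate times the taxed activity, and if the activity shrinks as the rate rises, revenue climbs, peaks, then falls:

```python
# Toy Laffer curve: tax revenue = rate * taxed activity, where the
# activity itself shrinks as the rate rises. The linear demand curve
# below is invented; only the resulting shape matters.

def transactions(rate: float) -> float:
    """Assumed: mobile-money volume falls linearly as the tax rate rises."""
    return max(0.0, 100.0 * (1.0 - rate))

def revenue(rate: float) -> float:
    return rate * transactions(rate)

# Revenue rises, peaks, then falls: past the peak, a higher rate
# raises less money -- the self-defeating case the report warns about.
rates = [r / 100 for r in range(0, 101, 5)]
peak_rate = max(rates, key=revenue)
```

Whether Kenya's rates are already past the peak is an empirical question; the report's worry is that each successive increase pushes them closer to it.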

The same applies to other African nations that think taxing mobile services is an easy way to raise a little extra revenue. As this new report emphasizes, they may find that they inflict considerable harm on their digital economies for very little financial benefit.


Posted on Techdirt - 6 August 2019 @ 3:05pm

Have You Heard? If You Spread 'Hurtful' Rumors In China, You'll Be Thrown Off The Internet For Years

from the blatantly-attacking-revolutionary-martyrs dept

The Chinese authorities really don't like rumors being spread. Back in 2012, Techdirt reported on a "five strikes and you're out" plan for throwing rumormongers off social media for 48 hours. That obviously didn't work too well, since in 2013 a tougher line was introduced: three years in prison if you get 500 retweets of a "hurtful" rumor. But even that doesn't seem to have achieved its aim, judging by this post on Caixin Live about yet another law aimed at stamping out rumors:

A draft regulation released for public comment on July 22 by the Cyberspace Administration of China proposes restricting the internet access of users and providers of online information services that "fabricate, publish, or spread information that violates public morality, business ethics, or good faith" or deliberately provide technological assistance to those who do so.

Blacklisted individuals would be forbidden from using the Web or online services for three years. They would also be restricted from working in the Internet industry for that period. Depending on "whether the individual rectifies their behavior and prevents their disinformation from spreading further", that term could be reduced, or extended by up to three more years.

This isn't the only recent initiative to stamp out those hurtful messages. Last year, a platform called "Piyao" -- which means "to refute a rumor" -- was launched. It is a Web site and mobile app designed to spot "untrue rumors" with the help of AI and members of the public, who can report any bad stuff they've come across. According to Reuters, a promotional video released at the launch of the site warned:

Rumours violate individual rights; rumours create social panic; rumours cause fluctuations in the stock markets; rumours impact normal business operations; rumours blatantly attack revolutionary martyrs.

Terrible things, these rumors. Pity they seem a perennial part of the online world -- however much the Chinese authorities might try to eradicate them.


Posted on Techdirt - 1 August 2019 @ 3:35am

What Happens When The US Government Tries To Take On The Open Source Community?

from the maybe-we-are-about-to-find-out dept

Last year, Microsoft bought the popular code repository GitHub. As Techdirt wrote at the time, many people were concerned by this takeover of a key open source resource by a corporate giant that has frequently proved unfriendly to free software. In the event, nothing worrying has happened -- until this:

GitHub this week told Anatoliy Kashkin, a 21-year-old Russian citizen who lives in Crimea, that it had "restricted" his GitHub account "due to US trade controls".

As the ZDNet article explains, a user in Iran encountered the same problems. Naturally, many people saw this as precisely the kind of danger they were worried about when Microsoft bought GitHub. The division's CEO, Nat Friedman, used Twitter to explain what exactly was happening, and why:

To comply with US sanctions, we unfortunately had to implement new restrictions on private repos and paid accounts in Iran, Syria, and Crimea.

Public repos remain available to developers everywhere -- open source repos are NOT affected.

He went on to note:

The restrictions are based on place of residence and location, not on nationality or heritage. If someone was flagged in error, they can fill out a form to get the restrictions lifted on their account within hours.

Users with restricted private repos can also choose to make them public. Our understanding of the law does not give us the option to give anyone advance notice of restrictions.

We're not doing this because we want to; we're doing it because we have to. GitHub will continue to advocate vigorously with governments around the world for policies that protect software developers and the global open source community.

The most important aspect of this latest move by GitHub is that open source projects are unaffected, and that even those who are hit by the bans can get around them by moving from private to public repositories. Friedman rightly points out that as a company based in the US, GitHub doesn't have much scope for ignoring US laws.

However, this incident does raise some important questions. For example, what happens if the US government decides that it wants to prevent programmers in certain countries from accessing open source repositories on GitHub as well? That would go against a fundamental aspect of free software, which is that it can be used by anyone, for anything -- including for bad stuff.

This question has already come up before, when President Trump issued the executive order "Securing the Information and Communications Technology and Services Supply Chain", a thinly-disguised attack on the Chinese telecoms giant Huawei. As a result of the order, Google blocked Huawei's access to updates of Android. Some Chinese users were worried they were about to lose access to GitHub, which is just as crucial for software development in China as elsewhere. GitHub said that wasn't the case, but it's not hard to imagine the Trump administration putting pressure on GitHub's owner, Microsoft, to toe the line at some point in the future.

More generally, the worry has to be that the US government will attempt to dictate to all global free software projects who may and may not use their code. That's something that the well-known open source and open hardware hacker Bunnie Huang has written about at length, in a blog post entitled "Open Source Could Be a Casualty of the Trade War". It's well worth reading and pondering, because the relatively minor recent problems with GitHub could turn out to be a prelude to a far more serious clash of cultures.


Posted on Techdirt - 26 July 2019 @ 3:22am

Why A 'Clever Hack' Against Nazis Shows How Upload Filters Have Made Copyright Law Even More Broken

from the cloak-of-invisibility dept

As Techdirt has pointed out many times, one of the biggest problems with the EU Copyright Directive's upload filters is that they will necessarily be automated, which means they will inevitably be flawed. After all, it can take the EU's top judges weeks to decide complex questions about whether something is copyright infringement or not. And yet Article 13/17 expects software to do the same in microseconds. This kind of collateral damage from clueless algorithms is already happening, albeit on a small scale. Boing Boing has an interesting new twist on this problem. Cory Doctorow writes about an idea that RJ Jones mentioned on Twitter:

My friend gave me a tip! If you need to drown out fascists, bring a speaker & play copyrighted music at their rallies cause it will be easy to report their videos & get them taken down for copyright.

Once the EU's upload filters are in place, it won't even be necessary to report the videos: they will almost certainly be blocked automatically by algorithms that don't know about fair use and the like. But Doctorow points out a big problem with this idea:

The thing is, as much as it's a cute way to sabotage Nazis' attempts to spread their messages, there is nothing about this that prevents it from being used against anyone. Are you a cop who's removed his bodycam before wading into a protest with your nightstick? Just play some loud copyrighted music from your cruiser and you'll make all the videos of the beatings you dole out un-postable.

As this underlines, using copyright material in the background creates a kind of cloak of invisibility for the foreground actors -- both good and bad -- that makes certain videos impossible to post to the Internet if upload filters are in place. This is not what copyright is supposed to do. It shows how far copyright has been perverted from its original purpose -- "the Encouragement of Learning", as the 1710 Statute of Anne puts it. The problem arises from the use of dumb algorithms that don't understand the context of the copyright material they are filtering. It confirms once more what an incredibly stupid idea it was for EU lawmakers to allow Article 13/17 to pass.


Posted on Techdirt - 19 July 2019 @ 1:33pm

Why Carl Malamud's Latest Brilliant Project, To Mine The World's Research Papers, Is Based In India

from the sci-hub-to-the-rescue-again dept

Carl Malamud is one of Techdirt's heroes. We've been writing about his campaign to liberate US government documents and information for over ten years now. The journal Nature has a report on a new project of his, which is in quite a different field: academic knowledge. The idea will be familiar to readers of this site: to carry out text and data mining (TDM) on millions of academic articles, in order to discover new knowledge. It's a proven technique with huge potential to produce important discoveries. That raises the obvious question: if large-scale TDM of academic papers is so powerful, why hasn't it been done before? The answer, as is so often the case, is that copyright gets in the way. Academic publishers use it to control and impede how researchers can help humanity:

[Malamud's] unprecedented project is generating much excitement because it could, for the first time, open up vast swathes of the paywalled literature for easy computerized analysis. Dozens of research groups already mine papers to build databases of genes and chemicals, map associations between proteins and diseases, and generate useful scientific hypotheses. But publishers control -- and often limit -- the speed and scope of such projects, which typically confine themselves to abstracts, not full text.

Malamud's project gets around the limitations imposed by copyright and publishers thanks to two unique features. First, Malamud "had come into possession (he won't say how) of eight hard drives containing millions of journal articles from Sci-Hub". Drawing on Sci-Hub's huge holdings means his project doesn't need to go begging to publishers in order to obtain full texts to be mined. Secondly, Malamud is basing his project in India:

Over the past year, Malamud has -- without asking publishers -- teamed up with Indian researchers to build a gigantic store of text and images extracted from 73 million journal articles dating from 1847 up to the present day. The cache, which is still being created, will be kept on a 576-terabyte storage facility at Jawaharlal Nehru University (JNU) in New Delhi.

India was chosen because of an important court battle that concluded two years ago. As Techdirt reported then, it is legal in India to make photocopies of copyright material in an educational context. Malamud's contention is that this allows him to mine academic material in India without the permission of publishers. But he also believes that his TDM project would be legal in the US:

The data mining, he says, is non-consumptive: a technical term meaning that researchers don't read or display large portions of the works they are analysing. "You cannot punch in a DOI [article identifier] and pull out the article," he says. Malamud argues that it is legally permissible to do such mining on copyrighted content in countries such as the United States. In 2015, for instance, a US court cleared Google Books of copyright infringement charges after it did something similar to the JNU depot: scanning thousands of copyrighted books without buying the rights to do so, and displaying snippets from these books as part of its search service, but not allowing them to be downloaded or read in their entirety by a human.

The fact that TDM is "non-consumptive" means that the unhelpful attitude of academic publishers is even more unjustified than usual. They lose nothing from the analytical process, which merely extracts knowledge. But from a sense of entitlement, publishers still demand to be paid for unrestricted computer access to texts that academic institutions have already licensed anyway. That selfish and obstructive attitude to TDM may be about to backfire spectacularly. The Nature article notes:

No one will be allowed to read or download work from the repository, because that would breach publishers' copyright. Instead, Malamud envisages, researchers could crawl over its text and data with computer software, scanning through the world's scientific literature to pull out insights without actually reading the text.

The thing is, if anyone were by any chance interested in reading the full text, there's an obvious place to turn to. After all, the mining is carried out using papers held by Sci-Hub, so…


Posted on Techdirt - 18 July 2019 @ 7:57pm

Why The Appearance Of A One Terabyte microSD Card Means The War On Unauthorized Music Downloads Is (Almost) Over

from the war-on-video-downloads-over-after-that dept

Moore's Law is well known. But many people think it's about how chip processing power keeps increasing. It's actually about the number of components that can be fitted onto an integrated circuit, which has historically doubled roughly every two years. As such, it applies just as much to memory storage products as to processor chips. It's why you can now buy a one terabyte microSD card for $449.99. Never mind the price: although it's steep, it will inevitably tumble in the next few years, just as happened with lower-capacity microSD cards. What's much more important is what you can store with one terabyte on a tiny, tiny card. Mashable has done the calculations:

About 1,000,000 e-books (at an average size of 1MB per e-book)

About 200,000 photos (12-megapixel iPhone XS Live Photos at an average size of 5MB) or 500,000 photos (12-megapixel iPhone XS photos at an average size of 2MB)

About 250,000 iTunes songs (at an average size of 4MB for an average 4-minute tune)

About 222 Full HD movies from iTunes (at an average of 4.5GB per movie)

Perhaps the most interesting one there is the music. Spotify says it has over 50 million tracks on its service. That means a 256 terabyte microSD card could probably hold every track on Spotify, and thus most of the recorded music that is generally available in digital form. Even with today's one terabyte card, you can probably store the complete catalog of songs in a particular style or genre, which is what many people will be most interested in.
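A quick back-of-the-envelope check bears those figures out, using the average file sizes quoted above (and decimal units, as storage vendors do: 1 TB = 10^12 bytes):

```python
# Sanity-check of the storage figures above, using the average
# file sizes quoted in the article (decimal units: 1 TB = 10^12 bytes).
MB = 10**6
GB = 10**9
TB = 10**12

card = 1 * TB  # today's one terabyte microSD card

print(card // (1 * MB))       # e-books at ~1 MB each: 1,000,000
print(card // (4 * MB))       # songs at ~4 MB each: 250,000
print(card // int(4.5 * GB))  # Full HD movies at ~4.5 GB each: 222

# Spotify's catalog: over 50 million tracks at ~4 MB per track.
catalog = 50_000_000 * 4 * MB
print(catalog / TB)           # 200.0 -- i.e. about 200 TB
```

At roughly 4 MB per track, Spotify's 50-million-track catalog comes to about 200 terabytes, comfortably inside a 256 terabyte card.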

In any case, assuming Moore's Law continues to hold, it will soon be possible to buy a 256 terabyte microSD card. Yes, it will be pricey to begin with, but progressively cheaper. At that point, moves to stop unauthorized sharing of music online will be even more pointless than they are now. People won't need to download lots of stuff from dodgy sites any more; they'll just find a friend who has a 256 terabyte microSD card loaded up with all recorded music, and make a copy. After that, they just need to update the parts that interest them -- or find someone with a more recent complete collection.

The same will happen to videos, although that's a little way off, since something like a 256 petabyte microSD card will be needed to hold every film that has been digitized. But it too will come, just as the milestone one terabyte capacity has finally arrived, however improbable that might have seemed a few years ago.


Posted on Techdirt - 17 July 2019 @ 3:32am

EU Looking To Regulate Everything Online, And To Make Sites Proactively Remove Material

from the if-you-thought-Copyright-Directive-was-bad,-just-wait-for-Digital-Services-Act dept

One of the reasons that Techdirt and many others fought so hard against the worst ideas of the EU Copyright Directive is that it was clearly the thin end of the wedge. If things like upload filters and the imposition of intermediary liability become widely implemented as the result of legal requirements in the field of copyright, it would only be a matter of time before they were extended to other domains. Netzpolitik has obtained a seven-page European Commission paper sketching ideas for a new EU Digital Services Act (pdf) that suggests doing exactly that. The Act's reach is extremely wide:

The scope would cover all digital services, and in particular online platforms. This means the clarification would address all services across the internet stack from mere conduits such as ISPs to cloud hosting services; while a special emphasis in the assessment would be dedicated to updated rules for online platforms such as social media, search engines, or collaborative economy services, as well as for online advertising services.

A core aim is to replace the e-Commerce Directive, passed in 2000. This is presented as "outdated", but the suggestions in the paper are clearly a continuation of attacks on the fundamental principles underlying the open Internet that began with the Copyright Directive.

One of the problems for the EU when pushing through the upload filters of Article 13/17 in the Copyright Directive is that Article 15 of the e-Commerce Directive explicitly states that there is "No general obligation to monitor". Constant surveillance is the only way that upload filters can work -- if you don't monitor all the time, you can't be sure you block everything that the law requires. Furthermore, Article 14 of the e-Commerce Directive emphasizes that "the service provider is not liable for the information stored at the request of a recipient of the service". That's subject to certain conditions, such as being required to remove material that infringes on copyright, but only after being informed of its presence on their servers. The new Digital Services Act wants to force Internet companies to move beyond reactive behavior:

a binding "Good Samaritan provision" would encourage and incentivise proactive measures, by clarifying the lack of liability as a result of such measures

The paper goes on to repeat the EU's earlier attempts to pretend that upload filters are not a glaring example of general monitoring -- something that EU courts may well be asked to rule on. The leaked document says:

While the prohibition of general monitoring obligations should be maintained as another foundational cornerstone of Internet regulation, specific provisions governing algorithms for automated filtering technologies -- where these are used -- should be considered, to provide the necessary transparency and accountability of automated content moderation systems.

That's a classic: affirming that general monitoring is prohibited, while bringing in rules for proactive automated filtering technologies -- aka general monitoring. It would tilt the playing field even more in favor of big, mostly US companies, and would guarantee that the EU never produces its own digital giants like Google or Facebook. The other main proposal of the paper is to bring in mandatory pan-European rules for tackling online hate speech and disinformation, drawing on ideas in national laws:

Uniform rules for the removal of illegal content such as illegal hate speech would be made binding across the EU, building on the Recommendation on illegal content and case-law, and include a robust set of fundamental rights safeguards. Such notice-and-action rules could be tailored to the types of services, e.g. whether the service is a social network, a mere conduit, or a collaborative economy service, and where necessary to the types of content in question, while maintaining the maximum simplicity of rules.

Simplicity? Hardly. This all sounds like a recipe for a completely unworkable set of complex requirements that once again will favor big companies with deep pockets and big legal departments. The authors of the leaked note have managed to come up with an option for making these plans even worse: creating a "central regulator" for the whole EU to enforce the locked-down, permissioned Internet they want to create. Although this is only an internal paper, not a formal proposal from the EU, it shows the kind of really bad ideas that are already floating around the European Commission, and being seriously considered there. If you thought the EU Copyright Directive was bad, just wait until you see the new EU Digital Services Act.


Posted on Techdirt - 16 July 2019 @ 3:23am

Russian Spy Discovers The Hard Way How Much His Smartphone's Metadata Reveals About His Activities

from the imagine-what-it's-like-for-the-rest-of-us dept

Smartphones are not just amazing pieces of technology that pack a range of advanced capabilities into a pocket-sized device. They are also the best tracking device invented so far. They reveal where we are, and what we are doing, every minute we have them with us. And the most amazing aspect is that we carry them not because we are forced to do so by authoritarian governments, but willingly.

A permanent state of surveillance is something most people just accept as the price of using mobile phones. But for one class of users, the built-in tracking capabilities of smartphones are far worse than just annoying. For spies -- especially more senior ones -- the information revealed by their mobile phones is not just embarrassing but poses a serious threat to their future operational usefulness.

That's evident from a new investigation carried out by the Bellingcat team in partnership with various media organizations. Techdirt was one of the first to write about Bellingcat's use of "open source information" -- material that is publicly available -- to piece together the facts about what are typically dramatic events. The latest report from the group is slightly different, in that it draws on mobile phone data leaked by a whistleblower in Russia. According to Bellingcat's research, the account seems to be that of the mid-ranking Russian military intelligence (GRU) officer Denis Sergeev:

Newly obtained telephone metadata logs from a telephone number registered in the name of the (cover) persona "Sergey Fedotov" has allowed us to analyze Denis Sergeev's telephone usage -- including calls and data connections -- in the period of May 2017 -- May 2019. The data -- and especially the cell-ID metadata that we have been able to convert to geo-locations -- allowed us to recreate Sergeev's movements. These movements were both in Russia and abroad, as well as his pattern of communications during his overseas operations. Bellingcat obtained the telephone metadata records from a whistleblower working at a Russian mobile operator, who was convinced s/he was not breaching any data privacy laws due to the fact that the person to whom this phone number was registered ("Sergey Fedotov") does not in fact exist.

It's a nice irony that the use of a cover name meant that Russia's data privacy laws were not broken by leaking the telephone metadata. There are two Bellingcat posts. The first uses the records to track Sergeev's movements around central London. Nothing special in that, you might say. Except that Anatoliy Chepiga and Alexander Mishkin, the two Russians suspected by the UK police of attempting to poison a former Russian spy who had been a double agent for the UK, Sergei Skripal (and his daughter), just happened to be in London at exactly the same time:

according to the timeline of Chepiga and Mishkin's movements, as presented by British police, they arrived from their hotel to Waterloo station at approximately 11:45 on that day. Their train to Salisbury, however, would have left at 12:50. Waterloo station is approximately 10 minutes walk from the Embankment. Thus, had a meeting in person been necessary between Sergeev and the Chepiga/ Mishkin team -- whether to pass on final instructions or a physical object -- the area between the Embankment and the Waterloo would have been a convenient place, and the one-hour time gap between their arrival to the station and their departure would have likely sufficed.

The rest of the first Bellingcat post provides further fascinating details about Sergeev's movements in London, and telephone calls with a mysterious "Amir from Moscow", probably a senior intelligence officer who was his handler back home. The second post tracks Sergeev as he visited Switzerland multiple times between 2014 and 2018. As Bellingcat explains, it is not clear what he was doing there, but there are a number of tantalizing hints.

For example, Sergeev's mobile telephone connected to the cell antenna inside the Maison du Sport, where the Lausanne office of the World Anti-Doping Agency (WADA) is located. That's interesting given Russia's problems with doping in international sport. Sergeev's metadata also indicates that at one point he was physically close to the former US Ambassador to Switzerland, Suzan LeVine, but it's not clear why. Here's one suggestion from Bellingcat:

Was he keeping an eye on Suzan LeVine and her husband while another team tried to introduce a virus or hack into a laptop computer left at the Palace Beau-Rivage where the couple had left their luggage? No longer in office, the diplomat was not entitled to any special security, so perhaps this was seen as a low-hanging opportunity by a GRU team that was already in town. Targeting foreign former government officials -- who may or may not come back into positions of political relevance under a future administration -- appears to be compatible with the long-term strategy of an intelligence service.

Then there is the intriguing fact that the alleged assassins Chepiga and Mishkin were also present in Geneva during one of Sergeev's visits. Although there is no evidence that they met, it would have been remarkable had they not, since they were in the same city, and often travelled together. Finally, it seems that Sergei Skripal was also in Switzerland during one of Sergeev's trips -- another interesting "coincidence."

Both Bellingcat posts are worth reading for the fascinating insights they give into Russian spycraft. The fact that so much can be deduced about someone who has decades of experience of not leaving a trail is a useful reminder of how much more could be gleaned from the smartphone metadata of ordinary citizens, who aren't even trying to hide anything.


Posted on Free Speech - 10 July 2019 @ 3:27am

Politicians Queue Up To Make France's Proposed Law Against 'Hateful Content' Far, Far Worse

from the gag-that-ag-gag dept

The intent behind "ag-gag" laws is pretty evident. The aim is to prevent the general public from learning about unsatisfactory or downright cruel conditions in which animals are kept by some farmers. Techdirt has been reporting on them for a number of years. Fortunately, US courts are increasingly throwing them out as unconstitutional. So far, ag-gag laws seem to be a US specialty, but that may be about to change. A new law under discussion in France would force online companies to remove "hateful content" from their networks within 24 hours. The journalist Marc Rees spotted a proposed amendment to the law that would define the following content as "hateful" (via Google Translate):

stigmatizing agricultural activities, breeding or sale of products from agriculture and livestock breeding and inciting acts of intrusion or violence vis-à-vis professionals of agriculture, livestock breeding and the processing and sale of products from these sectors

As an article in Numerama (original in French) points out:

All these criminal acts are already punishable under existing law. For example, death threats are covered by the penal code, with penalties of several years in prison and fines of up to tens of thousands of euros. The same goes for night-time intrusions, which can be treated as a violation of the home

The only thing that this proposed amendment adds to the current law in France is the requirement for online services to take down posts that "stigmatize" farmers in some vague way -- an ag-gag law, in other words.

Trying to turn proposed legislation against "hateful content" into an ag-gag law is just one example of how a bad idea is being made into a worse one. Other amendments have been put forward that would force online companies to remove within 24 hours "hateful" posts about physical appearance, disabilities, political opinions, mother tongue, cultural practices, and many other areas where feelings often run high (original in French, behind paywall). One amendment even wants an open list of "hateful" things that have to be taken down within 24 hours, so that new categories can be added in the future without needing to amend the legislation. It will be fun watching French politicians fight among themselves over which of the many proposed additions should make the cut. The danger is clearly that whatever the outcome, the harm to freedom of speech in France -- and maybe beyond -- will be even worse than critics of the new law feared.


Posted on Techdirt - 3 July 2019 @ 3:27am

Yes, The EU Copyright Directive Does Have A Few Good Ideas -- But They Need To Be Implemented Properly

from the here's-the-next-battle dept

Techdirt's reporting on the EU's disastrous Copyright Directive concentrated on its three worst aspects: Article 13 (upload filters -- now renumbered as Article 17), Article 11 (ancillary copyright for press publications, now Article 15) and Article 3 (text and data mining). But there are some other sections, less well known, which could actually help to improve copyright law in the EU. One of them is Article 14, which concerns "Works of visual art in the public domain":

Member States shall provide that, when the term of protection of a work of visual art has expired, any material resulting from an act of reproduction of that work is not subject to copyright or related rights, unless the material resulting from that act of reproduction is original in the sense that it is the author's own intellectual creation.

The problem being addressed here is one Techdirt has written about before. For example, back in 2015, a German museum sued the Wikimedia Foundation for displaying 17 images of the museum's public domain works of art. Even though the works of art were unequivocally in the public domain, the museum claimed that the photographs were new creations, and therefore covered by copyright. Article 14 aims to clarify that once an object is in the public domain any reproduction of that object is also in the public domain. It's hardly a huge concession, but it's better than nothing.

Since it is a directive, the EU's new copyright law has to be implemented in national legislation by each of the EU's member states individually. The overall intent of Article 14 may be clear enough, but the exact details of its transposition into national law matter, as a post on the Communia site emphasizes. For example, it notes that Article 14 is a minimum harmonization measure, and EU governments are therefore able to make changes to their laws that go further than what is required by the Copyright Directive, in order to simplify its application:

While the scope of Article 14 is limited to "works of visual art" and to such works for which "the term of protection has expired", Member States would be well-advised to exclude all non-original reproductions of copyrighted works from eligibility for any form of related rights protection. This would provide increased legal clarity as it would remove the need to find an unambiguous definition of the "visual arts." Also, it would avoid the need to introduce complicated rules regarding the application in time, such as what happens to related rights that were created before the term of protection of the underlying work expired.

Similarly, Communia says that national lawmakers must ensure that three-dimensional reproductions of three-dimensional works -- for example 3D models of sculptures created via 3D scans or similar technologies -- are excluded from related rights protections. It also wants the new laws in every EU country to "enshrine the fundamental principle that mere copies of existing artworks should not be subject to copyright or related rights."

Beyond the specificities of Article 14, there is an important general point here. The text of the EU's copyright directive may contain some really bad things, but they, too, must be implemented through national legislation. That means there is still an opportunity to soften the worst aspects of things like Article 13/17 and Article 11/15 by tweaking the exact wording of the national laws to minimize their harm. It's an avenue in addition to legal challenges that can be brought to strike down bad ideas like upload filters. Even though the EU Copyright Directive text has now been agreed, the fight over its implementation is only just beginning.


Posted on Techdirt - 2 July 2019 @ 3:24am

Boris Johnson, UK's Answer To Trump, Offers A Masterclass In How To Use The Dead Cat Strategy Combined With A Google Bomb

from the sheer-genius-or-dumb-luck? dept

Boris Johnson -- full name Alexander Boris de Pfeffel Johnson -- was born in New York to English parents, studied at Eton and Oxford, became Mayor of London, and now stands a good chance of becoming the UK's next prime minister. That's not because of any outstanding ability, but largely because he belongs to the country's ruling class and assumes the position is his by right, as do many of his supporters. However, this smooth if completely unearned rise to the top of the UK's political system was threatened recently by an unexpected event. Police were called in the early hours to the London home of Johnson and his partner, Carrie Symonds, after neighbors heard "a loud altercation involving screaming, shouting and banging":

The argument could be heard outside the property where the potential future prime minister is living with Symonds, a former Conservative party head of press.

A neighbour told the Guardian they heard a woman screaming followed by "slamming and banging". At one point Symonds could be heard telling Johnson to "get off me" and "get out of my flat".

Despite repeated questions from interviewers, Johnson refused to comment on the incident, which naturally provoked yet more interest. Johnson's chances of becoming prime minister seemed to be dropping by the hour. And then came an interview with talkRADIO, in which Johnson was asked: "What do you do to relax?" He replied:

I like to paint. Or I make things. I have a thing where I make models of buses. What I make is, I get old, I don't know, wooden crates, and I paint them. It's a box that's been used to contain two wine bottles, right, and it will have a dividing thing. And I turn it into a bus.

So I put passengers -- I paint the passengers enjoying themselves on a wonderful bus -- low carbon, of the kind that we brought to the streets of London, reducing CO2, reducing nitrous oxide, reducing pollution.

As the Guardian reported, this surreal answer blew people's minds, and a variety of explanations were offered for the bizarre response. But Adam Bienkov, UK Political Editor of Business Insider, had the best one. He reminded people of something that Johnson had written in 2013:

Let us suppose you are losing an argument. The facts are overwhelmingly against you, and the more people focus on the reality the worse it is for you and your case. Your best bet in these circumstances is to perform a manoeuvre that a great campaigner describes as "throwing a dead cat on the table, mate".

That is because there is one thing that is absolutely certain about throwing a dead cat on the dining room table -- and I don't mean that people will be outraged, alarmed, disgusted. That is true, but irrelevant. The key point, says my Australian friend, is that everyone will shout "Jeez, mate, there's a dead cat on the table!"; in other words they will be talking about the dead cat, the thing you want them to talk about, and they will not be talking about the issue that has been causing you so much grief.

Throwing a dead cat on the dining room table -- talking about making models of buses -- worked for Johnson. Everyone in the UK press and beyond started talking about the model buses, and the story about the police being called to Johnson's home was forgotten. That's impressive enough, but it's possible that strange moment in the interview may have achieved even more.

One of Boris Johnson's claims to fame/infamy arose during the deeply-divisive 2016 Brexit referendum on whether the UK should leave the EU. Johnson supported Brexit, and he was photographed in front of the campaign's big red bus that bore the slogan: "We send the EU £350m a week: let’s fund our NHS [National Health Service] instead". It was a bogus statement: the true amount sent to the EU is closer to £160 million a week. Johnson's willingness to endorse that misleading figure is another threat to his claim to be a fit person to become the UK's new prime minister.

A day after the dead cat was thrown on the table, Twitter user @MrKennyCampbell realized that Johnson's incoherent rambling about model buses was also a Google bomb. Previously, searches for "boris bus" on Google threw up that lie about how much the UK sent to the EU, and Johnson's tacit agreement with it. Now the same search shows stories about Johnson's passion for making model buses. References to the big red Brexit bus and its slogan have been pushed off the top Google hits, effectively consigning that embarrassing story about Johnson to relative digital oblivion.

This is such a brilliant example of political search engine optimization that it's hard to believe someone as buffoonish as Johnson would be capable of pulling it off intentionally. Nonetheless, whether it was fiendishly clever planning, or an unbelievably lucky improvisation, there's no denying the episode stands as an object lesson in how to combine the dead cat strategy with a Google bomb to great effect.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 24 June 2019 @ 3:40pm

If China Is A Glimpse Of Our Future Surveillance Nightmare, Maybe Hong Kong Shows How To Fight It

from the minimize-your-digital-footprint dept

Techdirt has been covering the roll-out of the extraordinarily comprehensive digital surveillance systems in China for many years. It's hardly news that the Chinese authorities continue to deploy the latest technologies in order to bolster their control. Many of the same approaches to surveillance are being tried in the special administrative region of Hong Kong. A British colony for 156 years, it was handed back to China in 1997 on the understanding that there would be "one country, two systems": Hong Kong would be part of China, but it would retain its very different economic and administrative systems for at least 50 years.

Well, that was the theory. In practice, Xi Jinping is clearly unwilling to wait that long, and has been asserting more and more control over Hong Kong and its people. In 2014, this provoked the youth-led "Umbrella Movement", which sought to fight interference by the Chinese authorities in Hong Kong's political system. More recently, there have been even bigger protests over a planned law that would allow extradition from Hong Kong to China. This time, though, there has been an important development. The protesters know they are increasingly under surveillance online and in the street -- and are actively taking counter-measures:

Protesters used only secure digital messaging apps such as Telegram and otherwise went completely analogue in their movements: buying single-ride subway tickets instead of prepaid stored-value cards, forgoing credit cards and mobile payments in favor of cash and taking no selfies or photos of the chaos.

They wore face masks to obscure themselves from CCTV, fearing facial-recognition software, and bought fresh pay-as-you-go SIM cards.

As The Washington Post report explains, in addition to minimizing their digital footprints, the protesters also adopted a decentralized approach to organization. The hope is that without clear leaders, it will be harder to shut down the protests by carrying out just a few targeted arrests. The protests are continuing, so it's too early to say how well these measures have worked. Moreover, the level of surveillance in Hong Kong has not yet matched what is happening in Tibet or the huge Western region of China inhabited by the Uyghurs. Nonetheless, the conscious attempts to blunt the force of privacy-hostile digital technologies form an important testing ground for approaches that others may soon need to adopt as China-style total surveillance spreads around the world.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

