Glyn Moody’s Techdirt Profile

Posted on Techdirt - 16 September 2020 @ 3:27am

Copyright Companies Want Memes That Are Legal In The EU Blocked Because They Now Admit Upload Filters Are 'Practically Unworkable'

from the too-much-is-never-enough dept

The passage of the EU Copyright Directive last year represented one of the most disgraceful examples of successful lobbying and lying by the publishing, music, and film industries. In order to convince MEPs to vote for the highly controversial legislation, copyright companies and their political allies insisted repeatedly that the upload filters needed to implement Article 17 (originally Article 13) were optional, and that user rights would of course be respected online. But as Techdirt and many others warned at the time, this was untrue, as even the law's supporters admitted once it had been passed. Now that the EU member states are starting to implement the Directive, it is clear that there is no alternative to upload filters, and that freedom of speech will therefore be massively harmed by the new law. France has even gone so far as to ignore the requirement for the few user protections that the Copyright Directive graciously provides.

The EU Copyright Directive represents an almost total victory for copyright maximalists, and a huge defeat for ordinary users of the Internet in the EU. But if there is one thing that we can be sure of, it's that the copyright industries are never satisfied. Despite the massive gains already enshrined in the Directive, a group of industry organizations from the world of publishing, music, cinema and broadcasting have written to the EU Commissioner responsible for the Internal Market, Thierry Breton, expressing their "serious concerns regarding the European Commission's consultation on its proposed guidance on the application of Article 17 of the Directive on Copyright in the Digital Single Market ("the Directive")." The industry groups are worried that implementation of the EU Copyright Directive will provide them with too little protection (pdf):

We are very concerned that, in its Consultation Paper, the Commission is going against its original objective of providing a high level of protection for rightsholders and creators and to create a level playing field in the online Digital Single Market. It interprets essential aspects of Article 17 of the Directive in a manner that is incompatible with the wording and the objective of the Article, thus jeopardising the balance of interests achieved by the EU legislature in Article 17.

In an Annex to the letter, the copyright industries raise four "concerns" with the proposed guidance on the implementation of Article 17. The former MEP Julia Reda, who valiantly led the resistance against the worst aspects of the Copyright Directive during its passage through the EU's legislative system, has answered in detail all of the points in a thread on Twitter. It's extremely clearly explained, and I urge you to read it to appreciate the full horror of what the copyright companies are claiming and demanding. But there is one "concern" of the copyright maximalists that is so outrageous that it deserves to be singled out here. Reda writes:

#Article17 clearly says that legal content must not be blocked. #Uploadfilters can't guarantee that, so rightholders claim that this is fulfilled as long as users have the right to complain about wrongful blocking *after* it has already happened.

This completely goes against what users fought for in the negotiations and what #Article17 says, that it "shall in no way affect legitimate uses". Of course, if all legal parodies, quotes etc. get automatically blocked by #uploadfilters, legitimate uses are affected pretty badly.

The copyright companies and their political friends tricked the European Parliament into voting through Article 17 by claiming repeatedly that it did not require upload filters, which were rightly regarded as unacceptable. Now, the companies are happy to admit that the law's requirement to assess whether uploads are infringing before they are posted -- which can only be done using algorithms to filter out infringing material -- is "practically unworkable". Instead, they want blocking to be the default when there is any doubt, forcing users to go through a process of complaining afterwards if they wish their uploads to appear. Since most people will not know how to do this, or won't have the time or energy to do so, this will inevitably lead to vast amounts of legal material being blocked by filters.

As Reda rightly summarizes:

The entertainment industry is joining forces to push for the worst possible implementation of #Article17, which would not only require out-of-control #uploadfilters without any safeguards, but also violate fundamental rights AND the very text of Article 17 itself.

The EU Copyright Directive's Article 17 already promises to be disastrous for user creativity and freedom of speech in the EU; unfortunately, the proposed EU guidance has some additional aspects that are problematic for end users (pdf), as a group of civil society organizations point out in their own letter to the EU Commissioner. What the industry's demands show once again is that no matter how strong copyright is made, no matter how wide its reach, and no matter how disproportionate the enforcement powers are, publishing, music, film and broadcasting companies always want more. Their motto is clearly: "too much is never enough".

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 3 September 2020 @ 3:05am

Academic Study Says Open Source Has Peaked: But Why?

from the share,-and-share-alike dept

Open source runs the world. That's true for supercomputers, where Linux powers all of the top 500 machines in the world; for smartphones, where Android has a global market share of around 75%; and for everything in between, as Wired points out:

When you stream the latest Netflix show, you fire up servers on Amazon Web Services, most of which run on Linux. When an F-16 fighter takes off, three Kubernetes clusters run to keep the jet's software running. When you visit a website, any website, chances are it's run on Node.js. These foundational technologies -- Linux, Kubernetes, Node.js -- and many others that silently permeate our lives have one thing in common: open source.

Ubiquity can engender complacency: because open source is indispensable for modern digital life, the assumption is that it will always be there, always supported, always developed. That makes new research examining the longer-term trends in open source development welcome. It builds on work carried out by three earlier studies, in 2003, 2007 and 2007, but uses a much larger data set:

This study replicates the measurements of project-specific quantities suggested by the three prior studies (lines of code, lifecycle state), but also reproduce the measurements by new measurands (contributors, commits) on an enlarged and updated data set of 180 million commits contributed to more than 224,000 open source projects in the last 25 years. In this way, we evaluate existing growth models and help to mature the knowledge of open source by addressing both internal and external validity.

The new work uses data from Open Hub, which enables the researchers to collect commit information across different source code hosts like GitHub, GitLab, Bitbucket, and SourceForge. Some impressive figures emerge. For example, at the end of 2018, open source projects contained 17,586,490,655 lines of code, made up of 14,588,351,457 lines of source code and 2,998,139,198 lines of comments. In the last 25 years, 224,342 open source projects received 180,937,525 commits in total. Not bad for what began as a ragtag bunch of coders sharing stuff for fun. But there are also some more troubling results. The researchers found that most open source projects are inactive, and that most inactive projects never receive a contribution again.
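The paper describes its own pipeline in detail, but as a rough sketch of what this kind of data collection looks like, here is a minimal example of querying the Open Hub API for one project's statistics. The endpoint shape and XML field names are assumptions based on the old Ohloh API rather than the study's actual code, and OPENHUB_API_KEY is a placeholder:

```python
# A minimal sketch of pulling per-project statistics from the Open Hub
# (formerly Ohloh) XML API. The endpoint shape and field names here are
# assumptions for illustration; check the current API documentation.
import os
import urllib.request
import xml.etree.ElementTree as ET

API_KEY = os.environ["OPENHUB_API_KEY"]  # placeholder: your own key

def project_stats(project_slug: str) -> dict:
    url = f"https://www.openhub.net/projects/{project_slug}.xml?api_key={API_KEY}"
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    analysis = root.find(".//analysis")
    if analysis is None:
        raise ValueError(f"no analysis found for {project_slug}")
    # Field names below are illustrative; the real schema may differ.
    return {
        "total_code_lines": analysis.findtext("total_code_lines"),
        "commit_count": analysis.findtext("total_commit_count"),
        "updated_at": analysis.findtext("updated_at"),
    }

print(project_stats("linux"))
```

Repeat that over a couple of hundred thousand projects and you have the raw material for the kind of growth curves the study examines.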

Looking at the longer-term trends, an initial, transient exponential growth was found until 2009 for commits and contributors, until 2011 for the number of available projects, and until 2013 for available lines of code. Thereafter, all those metrics reached a plateau, or declined. In one sense, that's hardly a surprise. In the real world, exponential growth has to stop at some point. The real question is whether open source has peaked purely because it has reached its natural limits, or whether there are other problems that could have been avoided.

For example, a widespread concern in the open source community is that companies may have deployed free code in their products with great enthusiasm, but they have worried less about giving back and supporting all the people who write it. Such an approach may work in the short term, but ultimately destroys the software commons they depend on. That's just as foolish as over-exploiting the environmental commons with no thought for long-term sustainability. As the Wired article mentioned above points out, it's not just bad for companies and the digital ecosystem, it's bad for the US too. In the context of the current trade war with China, "the core values of open source -- transparency, openness, and collaboration -- play to America's strengths". The new research might be an indication that the open source community, which has selflessly given so much for decades, is showing signs of altruism fatigue. Now would be a good time for companies to start giving back by supporting open source projects to a much greater degree than they have so far.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 26 August 2020 @ 3:48am

Virtual Reconstruction Of Ancient Temple Destroyed By ISIS Is Another Reason To Put Your Holiday Photos Into The Public Domain

from the fighting-terrorism-by-sharing dept

The Syrian civil war has led to great human suffering, with hundreds of thousands killed, and millions displaced. Another victim has been the region's rich archaeological heritage. Many of the most important sites have been seriously and intentionally damaged by the Islamic State of Iraq and Syria (ISIS). For example, the Temple of Bel, regarded as among the best preserved at the ancient city of Palmyra, was almost completely razed to the ground. In the past, more than 150,000 tourists visited the site each year. Like most tourists, many of them took photos of the Temple of Bel. The UC San Diego Library's Digital Media Lab had the idea of taking some of those photos, with their many different viewpoints, and combining them using AI techniques into a detailed 3D reconstruction of the temple:

The digital photographs used to create the virtual rendering of the Temple of Bel were sourced from open access repositories such as the #NEWPALMYRA project, the Roman Society, Oxford University and many individual tourists, then populated into Pointcloud, which allows users to interactively explore the once massive temple compound. Additionally, artificial intelligence applications were used to isolate the temple's important features from other elements that may have appeared in the images, such as tourists, weather conditions and foliage.

The New Palmyra site asks members of the public to upload their holiday photos of ancient Palmyra. The photos are sorted according to the monument: for example, the Temple of Bel collection currently has just over 1,000 images taken before the temple's destruction. Combining these with other images in academic and research institutions has allowed a detailed point cloud representation of the temple to be created. The model can be tilted, rotated and zoomed from within a browser. Using AI to put together images is hardly cutting-edge these days. In many ways, the key idea is the following note on the New Palmyra home page:

Unless otherwise specified, by uploading your photos or models to #NEWPALMYRA, they will be publicly available under a CC0 license.

Putting the images into the public domain (CC0) makes combining them straightforward: there is no need to track attribution for every photo, let alone negotiate licenses for each one individually. As the newly-resurrected Temple of Bel shows, once we ignore the copyright industry's obsession with people "owning" the things they create, and simply give them to the world for anyone to enjoy and build on, we all gain.
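The New Palmyra model itself is explored through a browser viewer, but the same kind of data can be inspected locally with open source tools. Here is a minimal sketch using the Open3D library; the file name is a placeholder for any point cloud export, not an actual New Palmyra download:

```python
# A minimal sketch of loading and viewing a photogrammetry point cloud
# locally with the open source Open3D library. The file name is a
# placeholder; the New Palmyra model is served via a browser viewer.
import open3d as o3d

pcd = o3d.io.read_point_cloud("temple_of_bel.ply")  # placeholder path
print(pcd)  # summary: number of points
o3d.visualization.draw_geometries([pcd])  # interactive tilt/rotate/zoom
```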

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Tech & COVID - 17 August 2020 @ 7:54pm

England's Exam Fiasco Shows How Not To Apply Algorithms To Complex Problems With Massive Social Impact

from the let-that-be-a-lesson-to-you-all dept

The disruption caused by COVID-19 has touched most aspects of daily life. Education is obviously no exception, as the heated debates about whether students should return to school demonstrate. But another tricky issue is how school exams should be conducted. Back in May, Techdirt wrote about one approach: online testing, which brings its own challenges. Where online testing is not an option, other ways of evaluating students at key points in their educational career need to be found. In the UK, the key test is the GCE Advanced level, or A-level for short, taken in the year when students turn 18. Its grades are crucially important because they form the basis on which most university places are awarded in the UK.

Since it was not possible to hold the exams as usual, and online testing was not an option either, the body responsible for regulating exams in England, Ofqual, turned to technology. It came up with an algorithm that could be used to predict a student's grades. The results of this high-tech approach have just been announced in England (other parts of the UK run their exams independently). It has not gone well. Large numbers of students have had their expected grades, as predicted by their teachers, downgraded, sometimes substantially. An analysis from one of the main UK educational associations has found that the downgrading is systematic: "the grades awarded to students this year were lower in all 41 subjects than they were for the average of the previous three years."

Even worse, the downgrading turns out to have affected students in poorly performing schools, typically in socially deprived areas, the most, while schools that have historically done well, often in affluent areas, or privately funded, saw their students' grades improve over teachers' predictions. In other words, the algorithm perpetuates inequality, making it harder for brilliant students in poor schools or from deprived backgrounds to go to top universities. A detailed mathematical analysis by Tom SF Haines explains how this fiasco came about:

Let's start with the model used by Ofqual to predict grades (p85 onwards of their 319 page report). Each school submits a list of their students from worst student to best student (it included teacher suggested grades, but they threw those away for larger cohorts). Ofqual then takes the distribution of grades from the previous year, applies a little magic to update them for 2020, and just assigns the students to the grades in rank order. If Ofqual predicts that 40% of the school is getting an A [the top grade] then that's exactly what happens, irrespective of what the teachers thought they were going to get. If Ofqual predicts that 3 students are going to get a U [the bottom grade] then you better hope you're not one of the three lowest rated students.
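To make concrete just how mechanical this is, here is a minimal sketch of the rank-order assignment Haines describes. The class ranking and grade distribution are invented, and the "little magic" adjustment step is omitted:

```python
# A minimal sketch of rank-order grade assignment: students are slotted
# into a predicted grade distribution purely by their position in the
# school's ranking. All numbers here are invented for illustration.
def assign_grades(ranked_students, grade_distribution):
    """ranked_students is ordered worst to best; grade_distribution is a
    list of (grade, fraction) pairs, also ordered worst to best."""
    n = len(ranked_students)
    grades, cumulative, cursor = {}, 0.0, 0
    for grade, fraction in grade_distribution:
        cumulative += fraction
        upper = round(cumulative * n)
        for student in ranked_students[cursor:upper]:
            grades[student] = grade
        cursor = upper
    for student in ranked_students[cursor:]:  # rounding leftovers
        grades[student] = grade_distribution[-1][0]
    return grades

ranking = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10"]
distribution = [("U", 0.2), ("E", 0.2), ("C", 0.3), ("B", 0.2), ("A", 0.1)]
print(assign_grades(ranking, distribution))
# Exactly two students get a U and exactly one gets an A, whatever
# their teachers predicted.
```

Note that the teachers' view of individual students never enters the calculation: the distribution fixes the quota for each grade, and rank order does the rest.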

As this makes clear, the inflexibility of the approach guarantees that there will be many cases of injustice, where bright and hard-working students will be given poor grades simply because they were lower down in the class ranking, or because the school did badly the previous year. Twitter and UK newspapers are currently full of stories of young people whose hopes have been dashed by this effect, as they have now lost the places they had been offered at university, because of these poorer-than-expected grades. The problem is so serious, and the anger expressed by parents of all political affiliations so palpable, that the UK government has been forced to scrap Ofqual's algorithmic approach completely, and will now use the teachers' predicted grades in England. Exactly the same happened in Scotland, which also applied a flawed algorithm, and caused similarly huge anguish to thousands of students, before dropping the idea.

The idea of writing algorithms to solve this complex problem is not necessarily wrong. Other solutions -- like using grades predicted by teachers -- have their own issues, including bias and grade inflation. The problems in England arose because people did not think through the real-life consequences for individual students of the algorithm's abstract rules -- even though they were warned of the model's flaws. Haines offers some useful, practical advice on how it should have been done:

The problem is with management: they should have asked for help. Faced with a problem this complex and this important they needed to bring in external checkers. They needed to publish the approach months ago, so it could be widely read and mistakes found. While the fact they published the algorithm at all is to be commended (if possibly a legal requirement due to the GDPR right to an explanation), they didn't go anywhere near far enough. Publishing their implementations of the models used would have allowed even greater scrutiny, including bug hunting.

As Haines points out, last year the UK's Alan Turing Institute published an excellent guide to implementing and using AI ethically and safely (pdf). At its heart lie the FAST Track Principles: fairness, accountability, sustainability and transparency. The fact that Ofqual evidently didn't think to apply them to its exam algorithm means it only gets a U grade for its work on this problem. Must try harder.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 11 August 2020 @ 12:07pm

Scientists Forced To Change Names Of Human Genes Because Of Microsoft's Failure To Patch Excel

from the code-is-law dept

Six years ago, Techdirt wrote about a curious issue with Microsoft's Excel. A default date conversion feature was altering the names of genes, because they looked like dates. For example, the tumor suppressor gene DEC1 (Deleted in Esophageal Cancer 1) was being converted to "1-DEC". Hardly a widespread problem, you might think. Not so: research in 2016 found that nearly 20% of 3,500 papers taken from leading genomic journals contained gene lists that had been corrupted by Excel's re-interpretation of names as dates. Although there don't seem to be any instances where this led to serious errors, there is a natural concern that it could distort research results. The good news is this problem has now been fixed. The rather surprising news is that it wasn't Microsoft that fixed it, even though Excel was at fault. As an article in The Verge reports:

Help has arrived, though, in the form of the scientific body in charge of standardizing the names of genes, the HUGO Gene Nomenclature Committee, or HGNC. This week, the HGNC published new guidelines for gene naming, including for "symbols that affect data handling and retrieval." From now on, they say, human genes and the proteins they expressed will be named with one eye on Excel's auto-formatting. That means the symbol MARCH1 has now become MARCHF1, while SEPT1 has become SEPTIN1, and so on. A record of old symbols and names will be stored by HGNC to avoid confusion in the future.

So far, 27 genes have been re-named in this way. Modifying gene names is not in itself unheard of. The Verge article notes that, in the past, names that made sense to experts but might alarm or offend lay people have also been changed from time to time:

"We always have to imagine a clinician having to explain to a parent that their child has a mutation in a particular gene,” says [Elspeth Bruford, the coordinator of HGNC]. "For example, HECA [a cancer-related human gene] used to have the gene name 'headcase homolog (Drosophila),' named after the equivalent gene in fruit fly, but we changed it to 'hdc homolog, cell cycle regulator' to avoid potential offense."

It is nice to know that we won't need to worry about serious problems flowing from Excel's habit of automatically re-naming cell entries. But it's rather troubling that Microsoft doesn't seem to have thought the problem worthy of its attention or a fix, despite it being known for at least six years. It shows once again how people are being forced to adapt to the software they use, rather than the other way around. Or, as Lawrence Lessig famously wrote: "code is law".
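In the meantime, researchers handling gene lists can at least defend themselves in software. Here is a minimal sketch that flags symbols liable to be reinterpreted as dates; the list of month-like prefixes is hand-picked for illustration rather than taken from HGNC's records:

```python
# A minimal sketch of flagging gene symbols that spreadsheet date
# auto-conversion is liable to mangle (e.g. DEC1 -> "1-Dec"). The set of
# month-like prefixes is hand-picked for illustration, not from HGNC.
import re

MONTH_PREFIXES = {"JAN", "FEB", "MAR", "MARCH", "APR", "MAY", "JUN",
                  "JUL", "AUG", "SEP", "SEPT", "OCT", "NOV", "DEC"}

def looks_like_date(symbol: str) -> bool:
    m = re.fullmatch(r"([A-Z]+)(\d{1,2})", symbol.upper())
    return bool(m) and m.group(1) in MONTH_PREFIXES

for gene in ["MARCH1", "SEPT1", "DEC1", "TP53", "MARCHF1", "SEPTIN1"]:
    status = "at risk of date conversion" if looks_like_date(gene) else "safe"
    print(f"{gene}: {status}")
```

The more robust fix when importing data is simply to force the column to be read as text rather than letting the spreadsheet guess its type.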

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 6 August 2020 @ 3:24am

In 10 Years Of Existence, The Long-Running French Farce Known As Hadopi Has Imposed Just €87,000 In Fines, But Cost Taxpayers €82 Million

from the shut-it-down dept

The French anti-piracy framework known as Hadopi began as tragedy and soon turned into farce. It was tragic that so much energy was wasted on putting together a system that was designed to throw ordinary users off the Internet -- the infamous "three strikes and you're out" approach -- rather than encouraging better legal offerings. Four years after the Hadopi system was created in 2009, it descended into farce when the French government struck down the signature three strikes punishment because it had failed to bring the promised benefits to the copyright world. Indeed, Hadopi had failed to do anything much: its first and only suspension was suspended, and a detailed study of the three strikes approach showed it was a failure from just about every viewpoint. Nonetheless, Hadopi has staggered on, sending out its largely ignored warnings to people for allegedly downloading unauthorized copies of material, and imposing a few fines on those unlucky enough to get caught repeatedly.

As TorrentFreak reports, Hadopi has published its annual report, which contains some fascinating details of what exactly it has achieved during the ten years of its existence. In 2019, the copyright industry referred 9 million cases to Hadopi for further investigation, down from 14 million the year before. However, referral does not mean a warning was necessarily sent. In fact, since 2010, Hadopi has only sent out 12.7 million warnings in total, which means that most people accused of piracy don't even see a warning.

Those figures are a little abstract; what's important is how effective Hadopi has been, and whether the entire project has been worth all the time and money it has consumed. Figures put together by Next INpact, quoted by TorrentFreak, indicate that during the decade of its existence, Hadopi has imposed the grand sum of €87,000 in fines, but cost French taxpayers nearly a thousand times more -- €82 million. Against that background of staggering inefficiency and inefficacy, the following words in the introduction to Hadopi's annual report (pdf), written by the organization's president, Denis Rapone, ring rather hollow:

Hadopi remains, ten years later and despite the pitfalls in its path in the past, the major player in the protection of copyright, so that creation can flourish unhindered.

Creation could have flourished rather more had those €82 million been spent supporting struggling artists directly, rather than wasting them on the bureaucrats running this pointless joke of an organization. Time to bring the curtain down on the Hadopi farce for good.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 29 July 2020 @ 3:32am

EU Plans To Use Supercomputers To Break Encryption, But Also Wants Platforms To 'Create Opportunities' To Snoop On End-To-End Communications

from the there-are-better-ways dept

They say that only two things are certain in life: death and taxes. But here on Techdirt, we have a third certainty: that governments around the world will always seek ways of gaining access to encrypted communications, because they claim that things are "going dark" for them. In the US and elsewhere, the most requested way of doing that is by inserting backdoors into encryption systems. As everyone except certain government officials knows, that's a really bad idea. So it's interesting to read a detailed and fascinating report by Matthias Monroy on how the EU has been approaching this problem without asking for backdoors -- so far. The European Commission has been just as vocal as the authorities in other parts of the world in calling for law enforcement to have access to encrypted communications for the purpose of combating crime. But EU countries such as Germany, Finland and Croatia have said they are against prohibiting, limiting or weakening encrypted connections. Because of the way the EU works, that means the region as a whole needs to adopt other methods of gaining access. Monroy explains that the EU is pinning its hopes on its regional police organization:

At EU level, Europol is responsible for reading encrypted communications and storage media. The police agency has set up a "decryption platform" for that. According to Europol's annual report for 2018, a "decryption expert" works there, from whom the competent authorities of the Member States can obtain assistance. The unit is based at the European Centre for Cybercrime (EC3) at Europol in The Hague and received five million euros two years ago for the procurement of appropriate tools.

The Europol group uses the open source password recovery software Hashcat in order to guess passwords used for content and storage media. According to Monroy, the "decryption platform" has managed to obtain passwords in 32 out of the 91 cases where the authorities needed access to an encrypted device or file. A success rate of around 35% is not too shabby, depending on how strong the passwords were. But the EU wants to do better, and has decided one way to do that is to throw even more number-crunching power at the problem: in the future, supercomputers will be used. Europol is organizing training courses to help investigators gain access to encrypted materials using Hashcat. Another "decryption expert group" has been given the job of coming up with new technical and legal options. Unfortunately, the approaches under consideration are little more than plans to bully Internet companies into doing the dirty work:

Internet service providers such as Google, Facebook and Microsoft are to create opportunities to read end-to-end encrypted communications. If criminal content is found, it should be reported to the relevant law enforcement authorities. To this end, the Commission has initiated an "expert process" with the companies in the framework of the EU Internet Forum, which is to make proposals in a study.

This process could later result in a regulation or directive that would force companies to cooperate.

There's no way to "create opportunities" to read end-to-end encrypted communications without weakening the latter. If threats from the EU and elsewhere force major Internet services to take this step, people will just start using open source solutions that are not controlled by any company. As Techdirt has noted, there are far better ways to gain access to encrypted communications -- ones that don't involve undermining them.
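For reference, Hashcat, the password recovery tool mentioned above, is an open source command-line program. A minimal sketch of wrapping a dictionary attack from a script might look like the following; the hash mode and file names are placeholders, not anything Europol actually uses:

```python
# A minimal sketch of driving a Hashcat dictionary attack ("-a 0") from
# Python. The hash mode (-m 0 is MD5) and file names are placeholders.
import subprocess

def run_dictionary_attack(hash_file: str, wordlist: str, hash_mode: int = 0):
    cmd = [
        "hashcat",
        "-m", str(hash_mode),  # hash type, e.g. 0 = MD5
        "-a", "0",             # attack mode 0 = straight dictionary
        hash_file,
        wordlist,
    ]
    return subprocess.run(cmd, capture_output=True, text=True)

result = run_dictionary_attack("hashes.txt", "wordlist.txt")
print(result.stdout)
```

Throwing supercomputers at the problem simply multiplies how many such guesses can be tried per second; it does nothing against genuinely strong passwords.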

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 23 July 2020 @ 12:05pm

Japan's Top Court Says 45 Million Twitter Users Must Check That Anything They Retweet Is Not A Copyright Infringement

from the yeah,-that's-feasible dept

Earlier this year, Techdirt reported on an extremely serious development in the world of Japanese copyright, with a new law that will make copyright infringement a criminal offense. Now the country's Supreme Court has issued a ruling that will make using Twitter in Japan more of a risk, legally speaking. The case concerns a photo of a flower, originally posted on a web site in 2009, with the photographer's name and copyright notice. As often happens, the photo was then tweeted without the photographer's consent, and was further retweeted. The problem is that Twitter uses "smart auto-cropping" of images, with the aim of focusing on "salient" regions, and thus increasing the likelihood of someone looking at and engaging with the tweet. Twitter's auto-cropped version of the photo did not include the photographer's name or copyright notice.

As TorrentFreak explains, the photographer was not happy with these tweets and the trimmed versions of his image, even though the original photo showed up if viewers of the retweets clicked on the cut-down photo. He took legal action, and the Tokyo District Court found that the original posting of the flower had indeed infringed the photographer's copyright, but dismissed the photographer's demand for the identities of the people who re-tweeted the image. The photographer then took his case to the High Court division dealing with copyright matters in Japan. It agreed there had been a breach of copyright, and found also that the people posting the cropped image on Twitter had violated the photographer's moral rights because his name had been removed. As a result, the Japanese High Court ordered Twitter to hand over the email addresses of all those who had posted the image.

Twitter appealed to Japan's Supreme Court, arguing that the cropping of the images was automated, and therefore not under the control of users. According to TorrentFreak, the company warned that a judgment blaming Twitter's users could have a chilling effect on the platform in Japan. Nonetheless:

In a decision handed down yesterday, the Supreme Court ordered Twitter to hand over the email addresses of the three retweeters after finding that the photographer's rights were indeed infringed when Twitter's cropping tool removed his identifying information.

Four out of five judges on the bench sided with the photographer, with Justice Hayashi dissenting. He argued that ruling in favor of the plaintiff would put Twitter users in the position of having to verify that every piece of content was non-infringing before retweeting. The other judges said that despite these problems, the law must be upheld as it is for content published on other platforms.

It's not clear what the photographer intends to do with the email addresses, but the larger problem is that the ruling makes retweeting images on Twitter much more of a legal risk for the service's 45 million users in Japan. Taken together with the earlier criminalization of copyright infringement, this latest move is likely to discourage people in Japan from precisely the kind of creativity the Internet has helped to unleash. Japan will be culturally poorer as a result -- just as the EU will be, thanks to the unworkable upload filters that are about to be introduced. And all because copyright fanatics seem to think their concerns must take precedence over everything else.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 15 July 2020 @ 7:26pm

Fan Uses AI Software To Lipread What Actors Really Said In TV Series Before Chinese Authorities Censored Them

from the I-saw-what-you-said-there dept

It's hardly news to Techdirt readers that China carries out censorship on a massive scale. What may be more surprising is that its censorship extends to even the most innocuous aspects of life. The ChinAI Newsletter, which provides translations by Jeff Ding of interesting texts from the world of Chinese AI, flags up one such case. It concerns a Chinese online TV series called "The Bad Kids". Here's how the site Sixth Tone describes it:

Since its first episodes were released on China's Netflix-like video platform iQiyi in mid-June, "The Bad Kids" has earned sweeping praise for its plot, cinematography, casting, dialogue, pacing, and soundtrack. It's also generated wide-ranging online discussion on human nature due to the psychology and complex motivations of its characters.

However, as the Sixth Tone article points out, the authorities required "a lot of changes" for the series to be approved. One fan of "The Bad Kids", Eury Chen, wanted to find out what exactly had been changed, and why that might be. In a blog post translated by ChinAI, Chen explained how he went about this:

Two days ago, I watched the TV series "The Bad Kids" in one go, and the plot was quite exciting. The disadvantage is that in order for the series to pass the review (of the National Radio and Television Administration), the edited sequences for episodes 11 and 12 were disrupted, even to the point that lines were modified, so that there are several places in the film where the actor's mouth movements and lines are not matched, which makes the plot confusing to people. Therefore, I tried to restore the modified lines through artificial intelligence technology, thereby restoring some of the original plot, which contained a darker truth.

The AI technology involved using Google's Facemesh package, which can track key "landmarks" on faces in images and videos. By analyzing the lip movements, it is possible to predict the sounds of a Chinese syllable. However, there is a particular problem that makes it hard to lipread Chinese using AI. There are many homophones in Chinese (similar sounds, different meanings). In order to get around this problem, Chen explored the possible sequences of Chinese characters to find the ones that best match the plot at that point. As his blog post (and the ChinAI translation) explains, this allowed him to work out why certain lines were blocked by the Chinese authorities -- turns out it was for totally petty reasons.
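Chen's full pipeline is described in his post, but the landmark-tracking stage can be sketched with MediaPipe, the current packaging of Google's Facemesh model. In this minimal example the video file name is a placeholder, the lip landmark indices follow MediaPipe's 468-point mesh, and the syllable- and plot-matching stages are omitted entirely:

```python
# A minimal sketch of the first stage of a lipreading pipeline: tracking
# lip landmarks frame by frame with MediaPipe Face Mesh. Indices 13/14
# are the inner lips and 61/291 the mouth corners in MediaPipe's
# 468-point mesh; the syllable prediction stage is omitted.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture("episode_12.mp4")  # placeholder file name

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    lm = results.multi_face_landmarks[0].landmark
    mouth_open = abs(lm[13].y - lm[14].y)    # vertical lip gap
    mouth_width = abs(lm[61].x - lm[291].x)  # corner-to-corner distance
    print(f"open={mouth_open:.4f} width={mouth_width:.4f}")

cap.release()
```

Sequences of these measurements, rather than raw pixels, are what a model can then map to candidate syllables.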

Perhaps more interesting than the details of this particular case is the fact that it was possible to use AI to carry out most of the lipreading, leaving human knowledge to choose among the list of possible Chinese phrases. Most languages don't require that extra stage, since they rarely have as many homophones as Chinese does. Indeed, for English phrases, researchers already claimed in 2016 that their AI-based LipNet achieved "95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders".

It's clear that we are fast approaching a situation where AI is able to lipread a video in any language. That is obviously a boon for the deaf or hard of hearing, but there's a serious downside. It means that soon all those millions of high-quality CCTV systems around the world will not only be able to use facial recognition software to work out who we are, but also run AI modules to lipread what we are saying.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 15 July 2020 @ 3:32am

Top EU Court Says Online Platforms Must Only Provide Postal Addresses Of People Who Upload Unauthorized Copies Of Copyright Material

from the cue-industry-demands-for-new-local-laws dept

The Court of Justice of the European Union (CJEU) is the EU's top court. As such, it regularly hands down judgments that cause seismic shifts in the legal landscape of the region. Sometimes, though, it makes decisions that can seem a little out of touch. Here, for example, is the CJEU judgment on a case that involved unauthorized uploads of videos to YouTube (pdf):

the Court ruled that, where a film is uploaded onto an online video platform without the copyright holder's consent, Directive 2004/48 [the EU's 2004 directive on the enforcement of intellectual property rights] does not oblige the judicial authorities to order the operator of the video platform to provide the email address, IP address or telephone number of the user who uploaded the film concerned. The directive, which provides for disclosure of the 'addresses' of persons who have infringed an intellectual property right, covers only the postal address.

What jumps out here is that platforms like YouTube only need to hand out the postal address of someone who uploaded unauthorized material. That seems rather 1990s, but the reasoning is quite straightforward and not at all backward-looking. When the EU copyright law was drawn up in 2003, the CJEU explains, there was no suggestion that "addresses" actually meant an email address, IP address or telephone number. Nor was it the case that EU politicians had never heard of these new-fangled things, and left them out through ignorance. People knew about IP addresses, yet chose not to mention them in the EU law, so the court ruled that "address" should only have its usual meaning, that is, postal address.

That's welcome news for people who upload material to online platforms, since they may not provide their postal addresses -- or if they do, they might provide incorrect ones. In these cases, without IP addresses or emails it will be hard for the copyright industry to do much against those making unauthorized uploads in the EU. However, that's not the end of the story. The full CJEU judgment concludes by noting:

although it follows from the foregoing considerations that the Member States are not obliged, under Article 8(2)(a) of Directive 2004/48, to provide for the possibility for the competent judicial authorities to order disclosure of the email address, telephone number or IP address of the persons referred to in that provision in proceedings concerning an infringement of an intellectual property right, the fact remains that the Member States have such an option. As is clear from the wording of Article 8(3)(a) of that directive, the EU legislature expressly provided for the possibility for the Member States to grant holders of intellectual property rights the right to receive fuller information, provided, however, that a fair balance is struck between the various fundamental rights involved and compliance with the other general principles of EU law, such as the principle of proportionality

In other words, the governments of the EU's Member States have the option of introducing local legislation that would give copyright companies the right to receive things like email and IP addresses. There is no appeal against the CJEU's decision, which will now be used by the German courts to make a final judgment on the original case that was referred to the EU's highest court for guidance. As a result, we can doubtless expect frenzied lobbying from the copyright world demanding that all the EU's national governments bring in legislation granting extra rights in order to prevent the end of civilization as we know it, etc. etc.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 7 July 2020 @ 7:46pm

Sci-Hub Downloads Boost Article Citations -- And Help Academic Publishers

from the time-to-drop-the-legal-threats dept

Techdirt readers know that Sci-Hub is a site offering free online access to a large proportion of all the scientific research papers that have been published -- at the time of writing, it claims to hold 82,605,245 of them. It's an incredible resource, used by millions around the world. Those include students whose institutions can't afford the often pricey journal subscriptions, but also many academics in well-funded universities, who do have institutional access to the papers. The latter group often prefer Sci-Hub because it provides what traditional academic publishers don't: rapid, frictionless access to the world's knowledge. Given that Sci-Hub does the job far better than most publishers, it's no wonder that the copyright industry wants to shut down the service, for example by getting related domains blocked, or encouraging the FBI to investigate Sci-Hub's founder, Alexandra Elbakyan, for alleged links to Russian intelligence.

These legal battles are likely to continue for some time -- the copyright industry rarely gives up, even when its actions are ineffective or counterproductive. Academics don't care: ultimately what they want is for people to read -- and, crucially, to cite -- their work. So irrespective of the legal situation, an interesting question is: what effect do Sci-Hub downloads have on article citations? That's precisely what a new preprint, published on arXiv, seeks to answer. Here's the abstract:

Citations are often used as a metric of the impact of scientific publications. Here, we examine how the number of downloads from Sci-hub as well as various characteristics of publications and their authors predicts future citations. Using data from 12 leading journals in economics, consumer research, neuroscience, and multidisciplinary research, we found that articles downloaded from Sci-hub were cited 1.72 times more than papers not downloaded from Sci-hub and that the number of downloads from Sci-hub was a robust predictor of future citations.

The paper explains which journals were selected, and the various analytical approaches that were applied in order to obtain this result. In all, the researchers compared 4,646 articles that were downloaded from Sci-Hub to 4,015 from the same titles that were not downloaded. Assuming that those are representative, and that the statistical calculations are correct, the end result is important. It suggests that articles downloaded from Sci-Hub are cited nearly twice as often as those that aren't -- a big boost that will doubtless be of great interest to academics, whose careers are greatly affected by how widely they are cited. It seems to confirm that Sci-Hub does indeed help spread knowledge, not just in terms of the free downloads it offers, but also by virtue of leading to more citations for downloaded papers, and thus a wider audience for them.

The new paper notes a rather paradoxical implication of the result. Alongside Sci-Hub, which is happy to operate outside copyright law, there are alternatives like open access journals, and preprints, which are fully within it. However, as a result of Sci-Hub's ability to boost citations:

[it] may help preserve the current publishing system because the lack of access to publications, which preprints and open access journals are trying to solve, may no longer be felt so strongly to find required increasing support.

In other words, according to this latest analysis, it turns out that the copyright industry is attacking a site whose success might be seen as a reason for not changing the current academic publishing system.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 24 June 2020 @ 8:03pm

Top German Court Rules Facebook's Collection And Use Of Data From Third-Party Sources Requires 'Voluntary' Consent

from the oh-no,-not-more-pop-ups dept

Back at the end of 2017, Germany's competition authority, the Bundeskartellamt, made a preliminary assessment that Facebook's data collection is "abusive". At issue was a key component of Facebook's business model: amassing huge quantities of personal data about people, not just from their use of Facebook, WhatsApp and Instagram, but also from other sites. If a third-party website has embedded Facebook code for things such as the 'like' button or a 'Facebook login' option, or uses analytical services such as 'Facebook Analytics', data will be transmitted to Facebook via APIs when a user calls up that third party's website for the first time. The user is not given any choice in this, and it was this aspect that the Bundeskartellamt saw as "abusive".

After the preliminary assessment, in February 2019 the German competition authority went on to forbid Facebook from gathering information in this way without voluntary permission from users:

(i) Facebook-owned services like WhatsApp and Instagram can continue to collect data. However, assigning the data to Facebook user accounts will only be possible subject to the users' voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data.

(ii) Collecting data from third party websites and assigning them to a Facebook user account will also only be possible if users give their voluntary consent.

If consent is not given for data from Facebook-owned services and third party websites, Facebook will have to substantially restrict its collection and combining of data. Facebook is to develop proposals for solutions to this effect.

Naturally, Facebook appealed against this decision, and the Düsseldorf Higher Regional Court found in its favor. However, as the New York Times reports, the Federal Court of Justice, Germany's highest court for civil and criminal matters, has just reversed that:

On Tuesday, the federal court said regulators were right in concluding that Facebook was abusing its dominant position in the market.

"There are neither serious doubts about Facebook's dominant position on the German social network market nor the fact that Facebook is abusing this dominant position," the court said. "As the market-dominating network operator, Facebook bears a special responsibility for maintaining still-existing competition in the social networking market."

Needless to say, Facebook vowed to fight on -- and to ignore the defeat for the moment. The case goes back to the lower court to rule again on the matter, but after the Federal Court of Justice's guidance, the outcome is unlikely to be in Facebook's favor this time. There is also the possibility that the case could be referred to the EU's top court, the Court of Justice of the European Union, to give its opinion on the matter.

Assuming that doesn't happen, the ruling could have a big impact not only on Facebook, but on all the other Internet giants that gather personal details from third-party sites without asking their visitors for explicit, voluntary permission. Although the ruling only applies to Germany, the country is the EU's biggest market, and likely to influence what happens elsewhere in the region, and maybe beyond. One bad outcome might be even more pop-ups asking you to give permission to have your data gathered, and be tracked as you move around the Internet.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 18 June 2020 @ 3:10am

Privacy Concerns Lead To Deletion Of All Data Collected By Norway's Contact Tracing App

from the not-enough-infections-is-a-nice-problem-to-have dept

In the early days of the coronavirus outbreak -- a few months ago, in other words -- there was a flurry of activity around contact tracing apps. Desperate to be seen to be doing something -- anything -- governments around the world rushed to announce their own digital solutions for tracing people who have been in the proximity of infected individuals. There are now over 40 in various stages of development. After the initial excitement, it's striking how quiet things have gone on the contact tracing front, as projects struggle to turn politicians' promises into useful programs. Some of the apps are beginning to emerge now, and we're likely to hear more about them over the next few weeks and months. For example, there's been an interesting development in Norway, one of the first to release its smartphone app, Smittestopp ("infection stop"), back in April. As the Guardian reports:

On Friday, the [Norwegian] data agency Datatilsynet issued a warning that it would stop the Norwegian Institute of Public Health from handling data collected via Smittestopp.

Datatilsynet said the restricted spread of coronavirus in Norway, as well as the app's limited effectiveness due to the small number of people using it, meant the invasion of privacy resulting from its use was disproportionate.

There are two important points there. One is about the balance between tackling COVID-19, and protecting privacy. In this case, the Norwegian data protection authority (NDPA) believes that the benefit is so small that the harm to privacy is unjustified. The other is that when the infection rate is low, as is the case in Norway, which has reported fewer than 250 deaths from coronavirus so far, people may not see much point in using it. Professor Camilla Stoltenberg, Director-General at the Norwegian Institute of Public Health, is unhappy with Datatilsynet's move:

We do not agree with the NDPA's evaluation, but now we will delete all data and put work on hold following the notification. This will weaken an important part of our preparedness for increased transmission because we are losing time in developing and testing the app. Meanwhile, we have a reduced ability to combat ongoing transmission. The pandemic is not over. There is no immunity in the population, no vaccine, and no effective treatment. Without the Smittestopp app, we will be less equipped to prevent new local or national outbreaks

It's worth noting that Stoltenberg admits that "the work involved in getting the app to work optimally has taken longer than planned, partly because there are few people who are infected". As the number of COVID-19 cases continues to fall in some countries, those developing contact tracing apps there may encounter similar problems.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 15 June 2020 @ 7:33pm

Australia Triumphs Definitively In Long-Running Battle With Big Tobacco Over Plain Packs For Cigarettes

from the no-sacred-right-to-use-trademarks dept

Techdirt has written a lot about corporate sovereignty -- also known as "investor-state dispute settlement" (ISDS) -- which allows companies to haul countries before special tribunals for alleged loss of profits caused by new laws or regulations. One industry's use of ISDS that Techdirt has been following particularly closely is tobacco. As a typically brilliant John Oliver segment explained back in 2015, Big Tobacco companies have used corporate sovereignty clauses in international trade and investment deals to sue countries for daring to try to regulate cigarettes, advertising or packaging. Thankfully, that didn't turn out so well. Philip Morris tried to use ISDS to roll back plain-pack laws, but cases against Australia and Uruguay were both thrown out. The tide against the use of corporate sovereignty by tobacco companies to undo health protection laws has turned so much that special carve-outs have been added to trade deals to prevent this kind of corporate bullying.

But the tobacco industry had one last trick up its sleeve. John Oliver noted five years ago that Big Tobacco persuaded three countries -- Honduras, Dominican Republic and Ukraine -- to file complaints with the World Trade Organization (WTO) against Australia, claiming the plain-packaging law violates trade agreements. As an article in the Financial Review explains, they were later joined by Indonesia and Cuba. A dispute panel backed Australia in June 2018, but Honduras and the Dominican Republic appealed against that decision. Now the WTO's Appellate Body has made its final ruling:

The Appellate Body confirmed the previous WTO ruling, which said that when Australia prevented tobacco producers from differentiating themselves from their rivals via brand marketing, this wasn't necessarily a restriction on trade.

It also rejected the argument that raising the purchasing age or increasing tobacco taxes were less trade-restrictive options that Canberra could have pursued instead of the plain packaging rules.

And it said that the international intellectual property regime didn't give tobacco companies a right to use a trademark; it merely stopped competitors from using it. So there was no obligation on Australia to allow a company to use its trademark, and the plain packaging regime hadn't "unjustifiably" encumbered companies' trademark usage.

That last point is particularly interesting. As far back as 2011 the tobacco companies tried to argue that "plain packaging has a smothering effect on companies' logos and trademarks." The WTO has just stamped on the idea that companies have some kind of sacred right to use their trademarks, which could have wider implications.

As for the main attempt to get rid of plain packs in Australia, that has now failed definitively -- there is no way to appeal against the WTO Appellate Body's ruling. That means that many more countries around the world are likely to bring in plain-pack laws -- a real victory for Australia's tenacious pursuit of this important health measure.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 4 June 2020 @ 7:54pm

After Taming Open Access, Academic Publishing Giants Now Seek To Assimilate The World Of Preprints

from the no-place-to-run dept

As Techdirt has reported, the open access movement seeks to obtain free access to research, particularly when it is funded by taxpayers' money. Naturally, traditional academic publishers enjoying profit margins of 30 to 40% are fighting to hold on to their control. Initially, they tried to stop open access gaining a foothold among researchers; now they have moved on to the more subtle strategy of adopting it and assimilating it -- rather as Microsoft has done with open source. Some advocates of open access are disappointed that open access has not led to any significant savings in the overall cost of publishing research. That, in its turn, has led many to urge the increased use of preprints as a way of saving money, liberating knowledge, and speeding up its dissemination. One reason for this is a realization that published versions in costly academic titles add almost nothing to the freely-available preprints they are based on.

An excellent new survey of the field, "Preprints in the Spotlight", rightly notes that preprints have attained a new prominence recently thanks to COVID-19. The urgent global need for information about this novel disease has meant that traditional publishing timescales of months or more are simply too slow. Preprints allow important data and analysis to be released worldwide almost as soon as they are available. The result has been a flood of preprints dealing with coronavirus: two leading preprint servers, medRxiv and bioRxiv, have published over 4,500 preprints on COVID-19 at the time of writing.

The publishing giant Elsevier was one of the first to notice the growing popularity of preprints. Back in 2016, Elsevier acquired the leading preprint server for the social sciences, SSRN. Today, Elsevier is no longer alone in seeing preprints as a key sector. A post on The Scholarly Kitchen blog describes how all the major publishers are active in preprints:

Today, we observe that beyond preprint communities that are typically organized around a field or set of fields, in recent years all the major publishers have made their own investments in preprint platforms. Publishers are integrating preprint deposit into their manuscript submission workflows, and adopting a common strategy designed to take back control of preprints.

That emphasis on "taking back control" is key. Preprints have become an alternative not just to academic publishing as practised by giant companies like Elsevier, but also to open access publishing, which is now not so different from the traditional kind. Companies clearly want to nip that development in the bud. Here's how publishers are likely to develop their preprint divisions:

they are bringing preprints inside their publishing workflows. This will afford them an opportunity to emphasize the importance of the version of record and its integrity. And, it will allow them to maximize their control over the research workflow as a whole, including datasets, protocols, and other artifacts of the research and publishing process. If successful, over time publishers will see fewer of the preprints of their eventual publications living "in the wild" and more of them on services and in workflows that they control.

That is, as well as taming the unruly world of preprints by bringing them in-house, publishers can also use them to bolster their mainstream businesses, and further their plans to offer academics a complete, "one-stop" service that includes preprints, journals, data management and more. Turning independent preprint servers into just another cog in the mighty publishing machine would be a further loss of control and autonomy for the academic community as a whole. It should be resisted by researchers, the institutions where they work, and by the bodies that fund them.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 1 June 2020 @ 10:36pm

Corporate Sovereignty Lawyers Prepare To Sue Governments For Bringing In Measures To Tackle COVID-19 And Save Lives

from the priorities,-priorities dept

Regular readers of Techdirt will be all too familiar with the problem of corporate sovereignty -- the ability of companies to sue entire countries for alleged loss of profits caused by government action. Also known as investor-state dispute settlement (ISDS), the mechanism has been falling out of favor, with indications that some countries are starting to drop it from trade and investment treaties, for various reasons. But a worrying report from Corporate Europe Observatory suggests that we are about to witness a new wave of corporate sovereignty litigation. Hard though it may be to believe, these cases will claim that governments around the world should reimburse companies for the loss of profits caused by tackling COVID-19:

In the midst of a crisis like no other, the legal industry is preparing the ground for costly ISDS suits against government actions that address the health and economic impacts of the coronavirus pandemic. In written alerts and webinars law firms point their multinational clients to investment agreements' vast protections for foreign investors as a tool to "seek relief and/or compensation for any losses resulting from State measures"

No claims have been filed yet, but experts are so worried about this threat that they have called for an immediate moratorium:

on all arbitration claims by private corporations against governments using international investment treaties, and a PERMANENT RESTRICTION on all arbitration claims related to government measures targeting health, economic, and social dimensions of the pandemic and its effects.

Law firms specializing in corporate sovereignty are already well advanced in their preparations for demanding money from governments because of the "damage" the pandemic response has inflicted on corporate profits. Corporate Europe Observatory links to numerous reports and client alerts from these ISDS firms, which spell out the grounds on which big claims might be filed. These include:

ISDS claims against government action to provide clean water for hand-washing

Challenging relief for overburdened public health systems

Lawsuits against action for affordable drugs, tests and vaccines

Investor attacks on government restrictions for virus-spreading business activities

ISDS suits against rent reductions and suspended energy bills for those in need

Disputes over debt relief for households and businesses

Legal action against financial crisis measures

Tax justice on trial

Suing governments for not preventing social unrest

The idea that governments around the world, struggling to contain the pandemic and save thousands of lives, might also have to fight such ISDS claims before arbitration tribunals, and even pay out billions in awards just when funds are needed for rebuilding lives and businesses, is bad enough. But the fact that law firms evidently have no qualms about recommending the use of corporate sovereignty in these difficult circumstances is a hint of even worse to come.

If these kinds of ISDS actions succeed, and governments are ordered to make huge payments to companies because of national pandemic responses, it is highly likely that similar cases would be brought over action to tackle climate change. That in itself might discourage some countries from adopting urgently needed measures. And for those that do act, there is the prospect of big awards against them at just the time when maximum resources will be needed to deal with the environmental, social and economic effects of a climate catastrophe.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 18 May 2020 @ 7:35pm

It's Impossible To Opt Out Of Android's Ad Tracking; Max Schrems Aims To Change That

from the another-reason-to-use-apple? dept

Most of the world has been under some form of lockdown for weeks, but that clearly hasn't stopped the indefatigable Austrian privacy expert Max Schrems from working on his next legal action under the EU's GDPR. Last year, he lodged a complaint with the French Data Protection Authority (CNIL) over what he called the "fake consent" that people must give to "cookie banners" in order to access sites. Now he has set his sights on Google's Android Advertising ID, which is present on every Android phone. It builds on research carried out by the Norwegian Consumer Council, published in the report "Out of control".

Today noyb.eu filed a formal GDPR complaint against Google for tracking users through an "Android Advertising ID" without a valid legal basis. The data collected with this unique tracking ID is passed on to countless third parties in the advertising ecosystem. The user has no real control over it: Google does not allow to delete an ID, just to create a new one.

The Android Advertising ID (AAID) is central to Google's advertising system. It allows advertisers to track users as they move around the Internet, and to build profiles of their interests. Google claims that this "gives users better controls", which is true if people want to receive highly-targeted advertising. But for those who wish to opt out of this constant tracking, there is a problem. Google allows you to reset your AAID, but not to do without one entirely: the best you can manage is to swap one unique identifier for another. And as Schrems' detailed legal complaint to the Austrian Data Protection Authority (pdf) points out, there are multiple ways to link old AAIDs with new ones:

Studies and official investigations have proved that the AAID is stored, shared and, where needed, linked with old values via countless other identifiers such as IP addresses, IMEI codes and GPS coordinates, social media handles, email addresses or phone number, de facto allowing a persistent tracking of Android users.
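The one-sided nature of that "control" is visible at the level of code. Here is a minimal Kotlin sketch -- assuming an Android project with the Google Play Services ads-identifier library available -- of how any app can read the AAID; note that the API offers no way to delete the identifier, and even resetting it is something the user must do by hand in the Settings app:

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

// Minimal sketch: reading the Android Advertising ID (AAID).
// Must run off the main thread; getAdvertisingIdInfo() does blocking I/O.
fun readAaid(context: Context): String {
    val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
    // isLimitAdTrackingEnabled is only a request to advertisers:
    // the unique identifier itself is returned either way.
    return "AAID=${info.id}, limitAdTracking=${info.isLimitAdTrackingEnabled}"
}
```

Read access for every app, deletion for no one: that asymmetry is the nub of the complaint.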

Schrems' organization None of Your Business (noyb.eu) claims that's unacceptable under the GDPR:

EU Law requires user choice. Under GDPR, the strict European privacy law, users must consent to being tracked. Google does not collect valid "opt-in" consent before generating the tracking ID, but seems to generate these IDs without user consent.

Google's position is weakened by the fact that Apple gives users of its smartphones the ability to opt out of targeted ads; for those using iOS 10 or later, the advertising identifier is replaced with an untrackable string of zeros:

If you choose to enable Limit Ad Tracking, Apple's advertising platform will opt your Apple ID out of receiving ads targeted to your interests, regardless of what device you are using. Apps or advertisers that do not use Apple’s advertising platform but do use Apple's Advertising Identifier are required to check the Limit Ad Tracking setting and are not permitted by Apple's guidelines to serve you targeted ads if you have Limit Ad Tracking enabled. When Limit Ad Tracking is enabled on iOS 10 or later, the Advertising Identifier is replaced with a non-unique value of all zeros to prevent the serving of targeted ads. It is automatically reset to a new random identifier if you disable Limit Ad Tracking.

The formal legal complaint was filed on behalf of an Austrian citizen, requesting that the AAID should be deleted permanently. If the action succeeds, that would allow anyone in the EU -- and probably elsewhere -- to do the same. In addition, the complaint points out that under the GDPR, the maximum possible fine, based on 4% of Google's worldwide revenue, would be about €5.94 billion. There's no chance such an unprecedented sum would be imposed, but the fact that every Android user in the EU is forced to use Google's AAID could lead to a fairly hefty fine if Schrems succeeds with his latest legal defense of privacy.
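For anyone who wants to check the complaint's arithmetic, here is a quick sketch; the turnover figure is an assumption on my part, reverse-engineered from the complaint's own numbers rather than anything stated by Google:

```kotlin
// Back-of-envelope check of the complaint's maximum-fine figure.
// GDPR Article 83(5) caps fines at 4% of total worldwide annual turnover.
fun main() {
    val turnoverEur = 148.5e9           // assumed Alphabet turnover, in euros
    val maxFineEur = turnoverEur * 0.04
    println("Maximum fine: €%.2f billion".format(maxFineEur / 1e9))
    // prints: Maximum fine: €5.94 billion
}
```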

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 14 May 2020 @ 3:23am

Everyone Agrees That Contact Tracing Apps Are Key To Bringing COVID-19 Under Control; Iceland Has Tried Them, And Isn't So Sure

from the solution-or-solutionism? dept

Given the massive impact that the coronavirus is having on lives and economies around the world, it's no wonder that governments are desperately searching for ways to bring the disease under control. One popular option is to use Bluetooth-based contact tracing apps on smartphones to work out who might be at risk because they have been near someone who is already infected. Dozens of countries are taking this route. Such is the evident utility of this approach that even rivals like Apple and Google are willing to work together on a contact tracing app framework to help the battle against the disease. Although it's great to see all this public-spirited activity in the tech world, there's a slight problem with this approach: nobody knows whether it will actually help.

That makes Iceland's early experience with contact tracing apps invaluable. An article in the MIT Technology Review notes that Iceland released its Rakning C-19 app in early April, and persuaded 38% of the country's population of 364,000 to download it. Here's what this nation found in its pioneering use of a tracing app:

despite this early deployment and widespread use, one senior figure in the country's covid-19 response says the real impact of Rakning C-19 has been small, compared with manual tracing techniques like phone calls.

"The technology is more or less … I wouldn’t say useless," says Gestur Pàlmason, a detective inspector with the Icelandic Police Service who is overseeing contact tracing efforts. "But it's the integration of the two that gives you results. I would say it [Rakning] has proven useful in a few cases, but it wasn’t a game changer for us."

It's only one data point, of course, but it's an important one. Iceland was not only early in tackling the coronavirus, it did so with great success. And yet it seems that the contact tracing app played a relatively small part in that. Manual tracing techniques, by contrast, were absolutely key.

That's not to say other countries won't have more success with their apps. It's interesting to note, for example, that Iceland's Rakning C-19 tracks users' GPS data in order to establish where they have been, and whom they met. It's generally agreed that GPS information is too coarse for this, and that a Bluetooth approach, sketched below, should in theory provide better insights. It will be interesting to hear how apps based on Bluetooth interactions work in practice. Maybe they will provide the hoped-for means to bring the COVID-19 virus under control. Let's hope so, and that the eager embrace by governments of contact tracing apps is not just another example of "solutionism" -- the idea that any problem can be solved simply by throwing technology at it.
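For those wondering what the Bluetooth approach actually involves, here is a heavily simplified Kotlin sketch -- not the code of any real app, and omitting the rotating anonymous identifiers and signal calibration that production frameworks use. The key idea is that received signal strength (RSSI) serves as a rough proxy for physical proximity, with no need to record location at all:

```kotlin
import android.bluetooth.BluetoothAdapter
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanResult

// Logs nearby devices as possible "contacts": (address, signal strength, time).
// Requires the BLUETOOTH and ACCESS_FINE_LOCATION permissions.
class ProximityLogger {
    data class Contact(val address: String, val rssi: Int, val timeMs: Long)
    val contacts = mutableListOf<Contact>()

    private val callback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            // RSSI closer to zero means a stronger signal; above about -70 dBm
            // the other device is, very roughly, within a few metres.
            if (result.rssi > -70) {
                contacts.add(Contact(result.device.address, result.rssi,
                        System.currentTimeMillis()))
            }
        }
    }

    fun start() = BluetoothAdapter.getDefaultAdapter()
            ?.bluetoothLeScanner?.startScan(callback)

    fun stop() = BluetoothAdapter.getDefaultAdapter()
            ?.bluetoothLeScanner?.stopScan(callback)
}
```

Nothing in that code knows or cares where the two phones are, which is precisely the privacy advantage Bluetooth proximity has over Iceland's GPS-based design.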

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 7 May 2020 @ 7:31pm

As More Students Sit Online Exams Under Lockdown Conditions, Remote Proctoring Services Carry Out Intrusive Surveillance

from the you're-doing-it-wrong dept

The coronavirus pandemic and the lockdowns that have accompanied it in most countries have forced major changes in the way people live, work and study. Online learning is now routine for many, and is largely unproblematic, not least because it has been used for many years. Online testing is trickier, however, since many teachers are concerned that students might use their isolated situation to cheat during exams. One person's problem is another person's opportunity, and a number of proctoring services claim to stop, or at least minimize, cheating during online tests. One thing they have in common is that they tend to be intrusive, and show little respect for the privacy of the people they monitor.

As an article in The Verge explains, some employ humans to watch over students using Zoom video calls. That's reasonably close to a traditional setup, where a teacher or proctor watches students in an exam hall. But there are also webcam-based automated approaches, as explored by Vox:

For instance, Examity also uses AI to verify students' identities, analyze their keystrokes, and, of course, ensure they're not cheating. Proctorio uses artificial intelligence to conduct gaze detection, which tracks whether a student is looking away from their screens.
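To get a feel for how blunt this kind of automated monitoring can be, consider a toy version. The Kotlin sketch below, using OpenCV's Java bindings, is emphatically not Proctorio's actual algorithm, just a crude stand-in: it flags any webcam frame in which no frontal face can be detected:

```kotlin
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.MatOfRect
import org.opencv.imgproc.Imgproc
import org.opencv.objdetect.CascadeClassifier
import org.opencv.videoio.VideoCapture

fun main() {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME)   // load OpenCV's native code
    // Haar cascade for frontal faces, shipped with OpenCV.
    val faceDetector = CascadeClassifier("haarcascade_frontalface_default.xml")
    val camera = VideoCapture(0)                   // default webcam
    val frame = Mat()
    val gray = Mat()
    val faces = MatOfRect()
    while (camera.read(frame)) {
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY)
        faceDetector.detectMultiScale(gray, faces)
        if (faces.empty()) {
            println("ALERT: no frontal face detected -- is the student looking away?")
        }
    }
}
```

A detector this crude cannot tell glancing at hidden notes from stretching, sneezing or simply sitting at an angle, which hints at why students find these systems so troubling.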

It's not just in the US that these extreme surveillance methods are being adopted. In France, the University of Rennes 1 is using a system called Managexam, which adds a few extra features: the ability to detect "inappropriate" Internet searches by the student, the use of a second screen, or the presence of another person in the room (original in French). The Vox article notes that even when these systems are deployed, students still try to cheat using new tricks, and the anti-cheating services try to stop them doing so:

it's easy to find online tips and tricks for duping remote proctoring services. Some suggest hiding notes underneath the view of the camera or setting up a secret laptop. It's also easy for these remote proctoring services to find out about these cheating methods, so they're constantly coming up with countermeasures. On its website, Proctorio even has a job listing for a "professional cheater" to test its system. The contract position pays between $10,000 and $20,000 a year.

As the arms race between students and proctoring services escalates, it's surely time to ask whether the problem isn't people cheating, but the use of old-style, analog testing formats in a world that has been forced by the coronavirus pandemic to move to a completely digital approach. Rather than spending so much time, effort and money on trying to stop students from cheating, maybe we need to come up with new ways of measuring what they have learnt and understood -- ones where cheating is not so much impossible as meaningless. Obvious options include "open book" exams, where students can use whatever resources they like, or even abolishing formal exams completely in favor of continuous assessment. Since the lockdown has forced educational establishments to re-invent teaching, isn't it time they re-invented exams too?

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 30 April 2020 @ 3:34am

EU Joins In The Bullying Of South Africa For Daring To Adopt US-Style Fair Use Principles

from the copyright-maximalists-made-me-do-it dept

As part of its copyright reform, South Africa plans to bring in a fair use right. Despite the fact that its proposal is closely modeled on fair use in American law, the copyright industry has persuaded the US government to threaten to kill an important free trade deal with South Africa if the latter dares to follow America's example. If you thought only US copyright companies were capable of this stunningly selfish behavior, think again. It seems that the European copyright industry has been having words with the EU, which has now sent a politely threatening letter to the South African government about its copyright reform (pdf). After the usual fake compliments, it gets down to business in the following passage:

we once again regret the foreseen introduction in the South African copyright regime of provisions relating to fair use in combination with an extensive list of broadly defined and non-compensated exceptions. This is bound to result in a significant degree of legal uncertainty with negative effects on the South African creative community at large as well as on foreign investments, including the European ones.

Invoking "uncertainty" is a standard ploy, already used back in 2011 when the UK was considering bringing in fair use. It is manifestly ridiculous, since the US provides a shining example of how fair use does not engender any terrible uncertainty. America also offers a rich set of legal and commercial experiences others can draw on when they implement a fair use right. Here, "uncertainty" is just a coded way of threatening to withdraw investment in South Africa. It's an empty threat, though, since US history shows that fair use encourages innovation, notably in the digital sector, for which investors have a huge appetite. The EU letter goes on to tip its hand about who is behind this whining:

The European right holders continue expressing their concerns to us in this regard as they have done during the consultation period. All creative sectors in the EU, film industry, music and publishing industry have pointed to the possibility of revisiting their investment plans in South Africa due to these concerns. Other sectors, such as those which are high-technology based, could also suffer as a result of legal uncertainty created by the new regime.

That last sentence is revealing. If the digital sector had actually expressed its fears about "uncertainty", you can bet that the EU would have mentioned it as a serious issue. Since it is framed as "could also suffer as a result", we know that this is just the EU's hypothetical. It is an attempt to get around the awkward fact that high-tech companies love fair use in general, since it gives them far more scope to try out exciting new ideas. It's sad to see the EU slavishly doing the bidding of copyright's digital dinosaurs, and joining with the US in the unedifying spectacle of bullying a small nation trying to modernize its laws.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.



