One of the wonders of the internet was that it was supposed to be a distributed computer system, meaning that it would be harder to take down and harder to censor. But, over time, things keep getting more and more centralized. And that's especially true in the mobile ecosystem, and doubly so for the Apple iOS mobile ecosystem (at least on Android it's much easier to sideload apps). The latest demonstration of this is that Apple agreed to remove apps from the NY Times from its iOS app store in China, complying with demands from the Chinese government:
Apple removed both the English-language and Chinese-language apps from the app store in China on Dec. 23. Apps from other international publications, including The Financial Times and The Wall Street Journal, were still available in the app store.
“We have been informed that the app is in violation of local regulations,” Fred Sainz, an Apple spokesman, said of the Times apps. “As a result, the app must be taken down off the China App Store. When this situation changes, the App Store will once again offer the New York Times app for download in China.”
The article about this -- in the NY Times, naturally -- says that the paper has asked Apple to reconsider. No one is clear on exactly why this is happening, but the (reasonable) assumption is that it has to do with the new regulations China put in place over the summer, which demand that all internet news providers be approved by the Chinese government -- which China is spinning as part of its effort to crack down on "fake news."
Of course, this really just highlights two separate, but equally worrisome trends: (1) the increasing centralization of connected ecosystems, which creates a single chokepoint to target with censorship demands; and (2) the ability to use hyped-up claims about "fake news" to censor legitimate and critical investigative reporting. Neither of these is good to see, and both need to be counteracted.
Back in November, we wrote about Russia's surprising move to enforce an older data localization law that requires all Internet companies to store the personal data of Russian citizens on Russian soil. At the time, that seemed to be just another example of Vladimir Putin's desire to keep a close eye on everything that was happening in Russia. But a comment from his Internet adviser, German Klimenko, hints that there could be another motive: to make it easier for Russia to cut itself off from the global Internet during a crisis, as The Washington Post reports:
Klimenko pointed out that Western powers had cut Crimea off from Google and Microsoft services after the peninsula was annexed from Ukraine by Russia (the companies were complying with U.S. sanctions on Crimea imposed after Russia's takeover). He suggested that showed why it was necessary for the Russian Internet to work on its own.
"There is a high probability of 'tectonic shifts' in our relations with the West," said Klimenko. "Therefore, our task is to adjust the Russian segment of the Internet to protect themselves from such scenarios." He added that "critical infrastructure" should be on Russian territory, "so no one could turn it off."
Klimenko's comments were made before the US announced its response to claims of Russian interference in the presidential election process. His analysis of "tectonic shifts" in US-Russia relations now looks rather prescient, although US threats to hack back made it a relatively easy prediction. And even though his call for Russia to ensure its critical infrastructure cannot be "turned off" by anyone -- in particular by the US -- may be grandstanding to a certain extent, it is not infeasible.
The Chinese have consciously made their own segment of the Internet quite independent, with strict controls on how data enters or leaves the country. Techdirt reported earlier that Russia was increasingly looking to China for both inspiration and technological assistance; maybe Klimenko's comments are another sign of an alignment between the two countries in the digital realm.
It's probably time for Facebook to give up trying to be the morality police, because it isn't working. While nobody expects the social media giant to be perfect at policing its site for images and posts deemed "offensive", it's shown itself time and time again to be utterly incapable of getting this right at even the most basic level. After all, when the censors are removing iconic historical photos, tirades against prejudice, forms of pure parody, and images of a nude bronze statue in the name of some kind of corporate puritanism, it should be clear that something is amiss.
Yet the armies of the absurd march on, it seems. Facebook managed to kick off the new year by demanding that an Italian art historian remove an image of a penis from her Facebook page. Not just any penis, mind you. It was a picture of a godly penis. Specifically, this godly penis.
That, should you not be an Italian art historian yourself, is a picture of a statue of the god Neptune. In the statue, which adorns the public streets of Bologna, Neptune is depicted with his heavenly member hanging out, because gods have no time for clothes, of course. Yet this carved piece of art somehow triggered a Facebook notice to the photographer, Elisa Barbari.
According to the Telegraph, Barbari got the following notification from Facebook. “The use of the image was not approved because it violates Facebook’s guidelines on advertising. It presents an image with content that is explicitly sexual and which shows to an excessive degree the body, concentrating unnecessarily on body parts. The use of images or video of nude bodies or plunging necklines is not allowed, even if the use is for artistic or educational reasons.”
Even were I to be on board with a Facebook policy banning nudity and, sigh, "plunging necklines" even in the interest of education or art -- which I most certainly am not on board with -- the claim that the image is explicitly sexual and focused on "body parts" is laughably insane. There's nothing sexual about the depiction of Neptune at all, unless we are to believe that all nudity is sexual, which simply isn't true. Also, the depiction focuses not on one body part, but on the entire statue. Nothing about this makes sense.
And that's likely because Facebook is relying on some kind of algorithm to automatically generate these notices. Confusingly, the site's own community standards page makes an exception for art, despite the notice Barbari received claiming otherwise.
Strangely, an exception is made for art. “We also allow photographs of paintings, sculptures, and other art that depicts nude figures.”
Except when it doesn't, that is. Look, again, nobody is expecting Facebook to be perfect at this. But the site has a responsibility, if it is going to play censor at all, to at least be good enough at it not to censor statues of art in the name of prohibiting too much skin.
Over and over again, we've talked about the ridiculousness of the moral panic around so-called "fake news" -- a broad and somewhat meaningless term now used to describe just about anything from actual made-up stories, to news articles that have a small factual error, to those with a "spin" that someone disagrees with. And, as we warned, the panic over "fake news" is leading to widespread calls for censorship. A few weeks ago, we wrote about how German officials were supporting a plan to criminalize "fake news" and now Italy wants to join in on the fun. In an interview, the country's antitrust chief, Giovanni Pitruzzella, argued that it's really time to crack down on the internet, with the government wielding censorship power over whatever it calls "fake news."
“Post-truth in politics is one of the drivers of populism and it is one of the threats to our democracies,” Pitruzzella said. “We have reached a fork in the road: we have to choose whether to leave the internet like it is, the wild west, or whether it needs rules that appreciate the way communication has changed. I think we need to set those rules and this is the role of the public sector.”
Pitruzzella argued tackling fake news should not be left up to social media companies, but instead be tackled by the state through independent authorities with the power to remove fake news and impose fines, coordinated by Brussels, similar to the way the EU regulates competition.
Any time you hear of a plan for the government to be able to remove news stories or impose fines for reporting, you should get very, very worried. That is a recipe for censorship. Yes, blatantly made-up stories are a problem -- but not one that should be dealt with by expanding the tools of censorship in a way that will be abused. We need to teach better media literacy and get more people to understand how to read critically and to do research. Putting tools to censor and fine journalists in the hands of government will inevitably lead to that power being abused. Someone will report on something that makes a politician look bad, and suddenly it will be declared "fake news." We're seeing that happen already -- even without the threat of fines and censorship.
This focus on "fake news" is becoming increasingly dangerous and many of the people screaming loudest about it -- including lots of journalists -- don't seem to realize where it will end. You can worry about truly made-up stories all you want, but if you think the solution to it is to increase the powers to censor and stifle and chill expression, you're not going to be happy with how it boomerangs back on legitimate expression.
Retraction Watch was informed by its hosting service that it had received a DMCA notice targeting the post. The tactic used here is one we've seen before: copy-pasting and backdating of posts to make it appear as though the targeted site is the one engaging in copyright infringement.
On Wednesday, our host, Bluehost, forwarded us another false copyright claim — aka a Digital Millennium Copyright Act (DMCA) takedown notice — by someone calling himself “Jiya Khan” and claiming to be based in Delhi, India. (Well, specifically, in “Rohini,sector-12,” which would mean that he or she is based at one of two petrol stations.)
What actually happened, in an eerie echo of the 2013 case, is that Khan copied and pasted our December 9 post onto his or her site, then backdated it to December 5 to make it look older than ours, so that he or she could make a false copyright claim. (That, among other things, is a bit of a problem for Khan; the Federal Register notice that the post is about — and to which it manages not to link — wasn’t published until December 9.)
The bogus backdated Blogspot blog contains several other copy-pasted posts, suggesting "Jiya Khan" is just a fake name fronting for a sketchy reputation management service. Presumably, bogus DMCA notices have been issued to target the mixture of critical articles and negative reviews splashed across the blog's pages. It's not exactly a surefire way to rid the net of criticism, but it's cheap and easy and works just often enough that it's worth trying. We saw this with disgraced real estate lawyer Sean Gjerde, and gripe sites have seen it happen with just about everyone else.
Retraction Watch is challenging the DMCA takedown notice. Presumably, the post will be live again in the near future. Then again, "Jiya Khan" may continue to insist he created Retraction Watch's post, which means Bluehost won't be able to do much more than keep the post down until all permutations of the DMCA process have been played out.
But that's how easy it is to make fully factual criticism disappear, even if only temporarily. And whoever's mismanaging Kaushik Deb's questionable reputation knows this. Even if there's provable perjury in the takedown request, who's going to actually be able to track down the real person behind the "Jiya Khan" facade, much less manage to hold them accountable for their abuse of the system?
Things just got a little more weird in the case of the 13-year-old British girl who had some art she created taken down from Redbubble's site because the artwork included the phrase "winter is coming." The girl's father responded to the takedown, questioning why HBO, owner of the trademark rights to the phrase in conjunction with the Game of Thrones franchise, would do this to an autistic teenager who wasn't even selling the art, only sharing it. As noted in our original post, the letter Redbubble sent the girl is a mess, lacking any firm reference to trademark or copyright and replacing it only with "IP/Publicity Rights." In addition, the whole letter is written like a forwarded DMCA notice, including offering the ability to counternotice through Redbubble's DMCA counternotice email address... but this would be a trademark issue, to which the DMCA doesn't apply. Lots of people, including the girl's father, assumed this takedown had been carried out at the request of HBO.
It looks like that wasn't the case. What the girl's father missed -- along with many of those reporting on the issue, including myself -- is that Redbubble's letter states that they did this because of HBO's history of issuing DMCA notices, not because it had actually done so in this case.
"We have removed the following content from Redbubble in response to past complaints from Home Box Office, Inc., the claimed owner or licensee of related intellectual property and in accordance with Redbubble's IP/Publicity Rights Policy," it said.
So this wasn't "notice and takedown", it was "notice and staydown", where Redbubble policed HBO intellectual property on its site based on previous complaints. That's not required by law, of course, but it certainly is what many in Hollywood want to see as the standard. And this is a perfect case for why it's a terrible, terrible idea. Legitimate, non-infringing uses get caught up in the blanket takedowns issued by service providers that don't really have a clue as to what they're doing.
And in light of what actually happened, Redbubble's letter, and the way it appears to be disguised as a DMCA notice, is at best horrifically sloppy and at worst an attempt to shift the blame for its voluntary and proactive takedown of a teenage girl's artwork. For its part, HBO doesn't seem to be happy about this.
"We love when fans are creative in their support of our programmes," the network said in a statement (via Entertainment Weekly). "These works live online in many incarnations, and in the past we have celebrated them by drawing attention to them.
"Many for-profit websites that sell products, such as Redbubble, take steps to avoid infringements as part of their standard operating procedure. To suggest a particular individual was targeted, or that HBO threatened legal action against her, is simply untrue."
Now, HBO's history of how it treats fans of Game of Thrones isn't universally positive, and it's worth noting again that the stated reason Redbubble did all of this is because of the network's heavy-handed history when it comes to matters just like this...but I have to think HBO is also pissed off at the way Redbubble's communication with the girl's father allowed him -- and the media -- to conclude this was all HBO's fault for issuing a DMCA takedown.
Regardless, good to know that Redbubble wants to be in the business of proactively policing its site in such a way that the non-infringing artwork of a teenager gets taken down. The site is a notice and staydown site, on the record.
Last month, the UK moved forward with the latest version of its ridiculous "Digital Economy Bill" which will put in place mandatory porn filtering at the ISP level -- requiring service providers to block access to sites that don't do an age verification check. But it was at least somewhat vague as to which "ISPs" this covered. The bill has moved from the House of Commons over to the House of Lords, and apparently we now have at least something of an answer -- and it's that social media sites like Twitter and Facebook will be covered by this regulation.
In other words, those sites may be required to block accounts and block access to certain porn sites. That's ridiculous. This came out during the reading of the bill in the House of Lords where a question was raised about the responsibility of platforms under the bill:
Finally, I have a question for the Minister. I would like him to comment on what the expectations are for social media sites like Twitter, which can themselves host user-generated pornographic content. The expectations on commercial pornography websites are set out pretty clearly in Clause 15, but will the Minister please clarify how the Bill as drafted will impact on social media sites? Clause 22 starts to cover this with its reference to “ancillary service providers”, but in Clause 22(6) the reference is restricted to business activities so provided. Evidence from the Government to the Communications Select Committee on 29 October was as follows:
“Twitter is a user-generated uploading-content site. If there is pornography on Twitter, it will be considered covered under ancillary services”.
How does that apply to material on Twitter that is not uploaded in the course of business activities? I ask the Minister to clarify this point when he responds.
Later, Baroness Benjamin claims that it's important that they make sure that social media is included in the bill "for the children" (of course):
In seeking to protect children from stumbling upon pornography, it is particularly important that social media is covered by the Bill. That is one of the primary ways in which children are exposed to pornography. There has been some debate about the scope of Clause 15 and the ancillary service providers, but it seems clear to me that social media should be covered by this. I was particularly delighted that the noble Baroness, Lady Shields, confirmed to the Lords Communications Committee on 29 November that:
“The Bill covers ancillary services. There was a question about Twitter. Twitter is a user-generated uploading-content site. If there is pornography on Twitter, it will be considered covered under ancillary services”.
Can the Minister confirm that this will be the case and also the case for all other social media, including Facebook, Tumblr and Instagram?
The debate over regulating Twitter got pretty silly pretty fast. At least one person noted that the UK was at risk of looking like idiots. This is from "The Earl of Erroll" (gotta love the House of Lords), who then admits he doesn't even know what's possible, but is absolutely positive that age checks on any e-commerce site are no big deal.
It is probably unrealistic to block the whole of Twitter—it would make us look like idiots. On the other hand, there are other things we can do. This brings me to the point that other noble Lords made about ancillary service complaints. If we start to make the payment service providers comply and help, they will make it less easy for those sites to make money. They will not be able to do certain things. I do not know what enforcement is possible. All these sites have to sign up to terms and conditions. Big retail websites such as Amazon sell films that would certainly come under this category. They should put an age check in front of the webpage. It is not difficult to do; they could easily comply.
Finally, Lord Ashton of Hyde, admits that, yes, of course the bill will apply to social media and all those other sites, because why the fuck not?
The right reverend Prelate, the noble Baronesses, Lady Kidron and Lady Benjamin, and the noble Earl, Lord Erroll, asked a valid question about social media and Twitter. The Government believe that services, including Twitter, can be classified by regulators as ancillary service providers where they are enabling or facilitating the making available of pornographic or prohibited material. This means that they could be notified of commercial pornographers to whom they provide a service but this will not apply to material provided on a non-commercial basis.
In that same answer, he pulls an infamous "free speech is important, but..." line that is what you expect from someone about to censor speech:
It is a complicated area. Free speech is vital but we must protect children from harm online as well as offline. We must do more to ensure that children cannot easily access sexual content which will distress them or harm their development, as has been mentioned.
And, thus, you go from a system officially designed to make it hard to reach porn on the internet "for the children" to a bill that allows for the UK government to force social media companies to block or kill certain accounts. That seems like a pretty big deal.
Ever since people started flipping out about the so-called "fake news" problem on Facebook, we've been warning that this is going to lead to calls for outright censorship of certain ideas. In the US, we've already seen Rep. Marsha Blackburn argue that companies have some sort of obligation to make "fake" news disappear.
But even more concerning is that we're seeing authoritarian governments jumping onto the bandwagon, knowing that they can now use the banner of "fake news" to censor all sorts of content they don't like. A few weeks ago, China announced that it was ramping up internet surveillance efforts to "combat fake news," and now Iran appears to be doing the same thing. The Iranian government has decided that in order to crack down on "fake news," it's going to expand regulations covering public news channels on the instant messaging app Telegram.
As reported by Tasnim News, a hardline news agency affiliated with the Revolutionary Guards, Iranian ICT Minister Mahmoud Vaezi made the announcement at a press conference during the National Conference on Public Service. Vaezi cited the dangers “unofficial news channels” pose in Iran's rural and less developed regions, where fake news and misinformation have gained sizeable audiences.
Considering that people have to trust the news released on these channels, it was decided that channels with more than a certain number of members will require a license.
Minister Vaezi's announcement builds on discussions now underway inside Iran's Supreme Council of Cyberspace, the country's chief authority on Internet policy. According to Vaezi, the ICT Ministry will manage the new licenses alongside the Ministry of Culture and Islamic Guidance — the state agency responsible for regulating Iran's media. Vaezi says the government will form a committee in the coming months, and news channels with more than 5,000 subscribers will be required to win the committee's approval to continue operating legally.
So, for all of you complaining about fake news -- a broad term with no real meaning, and which allows people to claim that anything they dislike, or anything with a small error in it counts as "fake news" -- beware that you're basically handing an easy tool of censorship to governments like China and Iran that have long histories of stifling any kind of dissent. "Fake news" isn't necessarily a good thing, but freaking out about it is playing into the hands of censors worldwide.
It's well-known at this point that HBO guards its intellectual property in the Game of Thrones franchise more jealously than a direwolf with a freshly harvested bone. To that end, the company often treats some of its biggest fans with disdain, such as when it killed off viewing parties that would otherwise generate more interest in the show, or the times it abused the DMCA process as a way to keep spoilers from the show from permeating. These actions are indeed annoying, but they lack a certain something in the pure evil department.
"My daughter, who happens to be autistic, was doing an art challenge called Huevember which consisted of doing a piece of art based on a different colour as you worked your way round a colour wheel," Jonathan Wilcox, of Edwinstowe in the UK, told The Register on Thursday.
"She was uploading her pictures to a variety of sites and sharing them on Facebook. For this particular piece, she decided to title it 'Winter is Coming.' I do not believe she uploaded the picture to RedBubble to make any particular financial gain, she just thought it a sensible place to put it."
So a child makes some art and puts it on the internet, because that's what you do these days. It should be noted that the artwork was not being sold on the site, only displayed. HBO's lawyers come across it and take it down, with nary a conversation. And, lest you think that the artwork itself had something to do with the show, thus ameliorating HBO's actions, here is the artwork in question.
As someone who watches the show regularly, the image doesn't appear to me to be in any way connected to the show. Nor, likely, is the text itself. It's far more likely that a child that created some art at a certain time of year came up with the phrase independently. But, because that phrase is trademarked by HBO, the takedown was issued.
The takedown notice forwarded by Redbubble to Wilcox doesn't specifically cite trademark as the law being applied, but it's the only one that makes sense. That means that the test in question is whether or not anyone is going to confuse this artwork as being created by or endorsed by HBO. And if you believe the answer to that question is "yes," then I'm surprised you're able to put your pants on in the morning. The whole thing seems to be confusing, because even though the DMCA doesn't apply to trademark law, Redbubble is clearly treating it as a DMCA takedown -- where it just replaced the normal "copyright" terms with "IP/Publicity Rights" -- and even uses its DMCA email address for any "counternotice." And the "counternotice" process is identical to a DMCA counternotice process, which requires the family to accept jurisdiction in California (remember, they're in the UK) if they counter the claim.
This is ridiculous on many levels, but once again highlights how the power of copyright to be a tool for censorship grows and expands and swallows other legal doctrines in the same neighborhood.
You can sense Wilcox's frustration in his comments.
"My first reaction to the letter was 'FFS.' HBO should get a life or stick something where the sun doesn't shine," Wilcox said.
"On further investigation, it appears HBO are doing this all over the place regarding this phrase. It seems to have upset a lot of people on Etsy and elsewhere who have had the same or similar letter."
This is the problem when large entities and their legal departments use the DMCA (or a quasi-DMCA-like) process like a shotgun, spraying censorious buckshot at many targets, only some of which might be truly infringing. This lack of legal nuance manages to catch innocent content producers in the crossfire -- in this case an autistic teenager who painted a picture. One wonders how the more virtuous heroes from the show would react.
Tennessee Rep. Marsha Blackburn doesn't have a very good history demonstrating any knowledge of how the internet works. She's generally in favor of two very stupid policies related to the internet. First, getting rid of net neutrality. Second, forcing tech companies to censor the internet to stop "piracy." The fact that her rationales for these two things are completely in conflict with each other doesn't seem to enter her thought process. That is, she claims that there shouldn't be any net neutrality because it's important to keep the internet free from all regulations. Here's Blackburn explaining this point in a nice, quick and utterly idiotic whiteboard video:
If you can't see that, she starts out by talking up how wonderful the internet is just as it is today, and notes that it's necessary for creating jobs. Then she says this:
Some people fear that without government intervention, that entrepreneurs and innovators are going to hijack the internet that you enjoy. The World Wide Web! This hasn't happened. And there has never been a time when a consumer needed a federal bureaucrat to intervene.
Then she talks about passing her legislation to block the FCC "from ever regulating the internet" because "we want to keep it open free and prosperous."
Of course, she's quite willing to sing a different tune when it comes to her other pet projects. She was a major backer of SOPA, of course, which was a bill to regulate the internet and open it up to Chinese-style site-blocking. A few months ago, she also made the nutty claim that the script kiddie botnet hack that took down large parts of the internet would have been stopped if only SOPA had been passed, which made no sense at all.
If you can't see that, it's part of a clip of Blackburn on CNN talking about "fake news," where she says:
If anyone is putting fake news out there, the ISPs have the obligation to, in some way, get that off the web. And maybe it's time for these information systems to look to have some type of news editor doing some vetting on that. Whether it's the Russians, the Chinese, the Iranians or whomever. You do not want that out there because it's... because it's fake news! It is not something that is going to be correct. It's going to end up being refuted. But it takes time, effort and energy to do that, and trying to sway or misinform is completely inappropriate, and in my opinion unethical.
So she isn't directly calling for legislation, but any time you have a sitting legislator (not to mention a Trump transition team member...) talking about how internet companies need to censor the internet to do away with "fake news" your ears should perk up. First off, note that she says that refuting fake stories takes "time, effort and energy" but doesn't note that finding "some type of news editor" who can review the news postings of billions of internet users is, um, physically impossible. Does she really not understand the scale of what she's talking about?
Second, I get the feeling that Marsha Blackburn's definition of "fake news" differs from many other people's -- which is why we've noted that the whole "fake news freakout" is so misguided. The term can mean just about anything -- and all too frequently means "news I disagree with." I'm going to imagine that Rep. Blackburn doesn't much like this article, for instance. Does she believe that her friends, the internet service providers, have "an obligation" to get this article "off the web"?
Because that's a pretty serious issue: you have a sitting legislator effectively calling for internet censorship based on a vague standard of news being "fake." Somewhat ironically, Blackburn has been one of the most vocal opponents of the bogeyman of the Fairness Doctrine -- which was an attempt to beat back biased news in the past by requiring "equal time" to opposing views. But Blackburn is constantly freaking out about a non-existent "fairness doctrine" for the internet that she insists was part of the FCC's net neutrality rules (it wasn't, and never was suggested). But her suggestion for having internet companies censor "fake news" seems even worse than a fairness doctrine. Rather than encouraging more speech, Blackburn is flat out supporting having internet companies censor content they deem to be "fake." That's bad.