Techdirt has been following Canada's moves to stop scientists from speaking out about areas where the facts of the situation don't sit well with the Canadian government's dogma-based policies. Sadly, it looks like the UK is taking the same route. The issue is a new code for the country's civil servants, which will also apply to thousands of publicly-funded scientists. As the Guardian reports:
Under the new code, scientists and engineers employed at government expense must get ministerial approval before they can talk to the media about any of their research, whether it involves GM crops, flu vaccines, the impact of pesticides on bees, or the famously obscure Higgs boson.
The fear -- quite naturally -- is that ministers could take days before replying to requests, by which time news outlets will probably have lost interest. As a result of this change, science organizations have sent a letter to the UK government, expressing their "deep concern" about the code. A well-known British neurobiologist, Sir Colin Blakemore, told the Guardian:
"The real losers here are the public and the government. The public lose access to what they consider to be an important source of scientific evidence, and the government loses the trust of the public," Blakemore said.
Not only that, by following Canada's example, the British government also makes it more likely that other countries will do the same, which will weaken science's ability to participate in policy discussions around the world -- just when we need to hear its voice most.
If you pay attention to Github (and you should), you know that late last week the site started experiencing some problems staying online, thanks to a massive and frequently changing DDoS attack. Over the past few days a lot more details have come out, making it pretty clear that the attack is coming via China with what is likely direct support from the Chinese government. While it's messing with all of Github, it's sending traffic to two specific Github pages: https://github.com/greatfire and https://github.com/cn-nytimes. Those both provide tools to help people in China access Greatfire and the NY Times. Notably, Greatfire itself notes that prior to the DDoS on Github, its own site was hit with a very similar DDoS attack.
If you want the technical details, Netresec explains how the DDoS works, noting that it's a "man-on-the-side" attack, injecting certain packets alongside code loaded by Chinese search engine Baidu (including both its ad platform and analytics platform), but is unlikely to be coming directly from Baidu itself.
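The core of a "man-on-the-side" attack is that the injector doesn't block or modify the real response -- it races it, and the client accepts whichever answer arrives first. A minimal conceptual sketch (not attack code; all names and timings are hypothetical, for illustration only):

```python
# Conceptual sketch of a man-on-the-side race: an on-path observer sees a
# request for a legitimate script and fires back a forged response. The
# client keeps whichever response arrives first and discards the late
# duplicate -- so an injector closer to the client reliably wins.

LEGIT_SCRIPT = "/* legitimate analytics payload */"
INJECTED_SCRIPT = "/* forged payload that keeps re-requesting the target pages */"

def client_receives(responses):
    """Return the body of the first response to arrive; later ones lose."""
    winner = min(responses, key=lambda r: r["arrival_ms"])
    return winner["body"]

responses = [
    {"body": LEGIT_SCRIPT, "arrival_ms": 120},    # real server, farther away
    {"body": INJECTED_SCRIPT, "arrival_ms": 40},  # on-path injector, wins the race
]
assert client_receives(responses) == INJECTED_SCRIPT
```

Because the legitimate response still arrives (just too late to matter), this kind of injection is much harder to detect from the server's side than outright blocking.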
But the much more interesting part is why China is using a DDoS attack, rather than its standard approach of just blocking access in China, as it has historically done. The key is that, two years ago, China tried to block Github entirely... and Chinese programmers flipped out, pointing out that they couldn't do their jobs without Github. The Chinese censors were forced to back down, leading to a sort of loophole in the Great Firewall. That leads to the next question of why China doesn't just block access to the URLs of the two repositories it doesn't like? And the answer there: HTTPS. Because all Github traffic is encrypted via HTTPS, China can't just block access to those URLs, because it doesn't know specifically what's being accessed.
And thus, we get the decision to turn its firewall around, launching a rather obvious DDoS attack on the two sites it doesn't like, with the rather clear message being sent to Github: if you stop hosting these projects, the DDoS will stop. Of course, so far Github is taking a stand and refusing to take down those projects (which is great and exactly what it should be doing).
However, this does suggest an interesting escalation in the ongoing attempts to fragment the internet. You see various countries demanding (or forcing) that certain websites be blocked. But those solutions are only ever temporary. Because the overall internet is too important to block, and because some sites (like Github) are necessary, there are always holes in the system. Add in a useful dose of encryption (yay!) and the ability to control everything that's read in one particular country becomes increasingly difficult. You might hope the response would be to give up attempts to censor, but China isn't likely to give up just like that. So, instead, it's basically trying to censor the global internet, launching a high-powered attack on the site that is the problem, while basically saying "get rid of these projects and we'll stop the attack."
It seems likely that this sort of escalation is only going to continue -- but in some ways it's actually a good sign. It shows that there are real cracks in China's attempts to censor the internet. We're basically realizing the limits of the Great Firewall of China, and useful services like Github have allowed a way to tunnel through. China is responding by trying to make life difficult for Github, but as long as Github and others can figure out ways to resist, censorship attempts like the Great Firewall will increasingly be useless.
In the early days of the internet, people talked about how it was resistant to censorship. Over the past decade or so, China has challenged that idea, showing that it could basically wall off large parts of the internet, and actually keep things semi-functional. Yes, there were always cracks in the wall, but for the most part, China showed that you could censor large parts of the internet. This latest move suggests that we may be moving back towards a world where the internet really is resistant to censorship -- and China is freaking out about it and responding by trying to increase the censorship globally. It's a battle that is going to be important to follow if you believe in supporting free expression online.
As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I've already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion that we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner who heads up "Safety, Privacy & Support" at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before just jumping into the comments with your assumptions about what was said, because for the most part it's probably not what you think:
Internet platforms and free expression have a strongly symbiotic relationship -- many platforms have helped expand and enable free expression around the globe in many ways. And, at the same time, that expression has fed back into those online platforms making them more valuable and contributing to the innovation that those platforms have enabled. And while it's easy to talk about government attacks on freedom of expression and why that's problematic, things get really tricky and really nuanced when it comes to technology platforms and how they should handle things. At one point in the conversation, Dave Willner made a point that I think is really important to acknowledge:
I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter: "the free speech wing of the free speech party." Twitter moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful" but two things: One, you've allowed that "harm" is a legitimate reason to moderate speech and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, it is the sort of reflexive "no restrictions based on the content of speech" sort of defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn't mean that all moderation is good.
This is an extremely important, but nuanced point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, but plenty of people do. The argument, then, is that we'd be able to have a much more useful conversation if people admit:
Don't say "this isn't a free speech issue"; rather, "this is a free speech issue, and I'm OK with this amount of censorship, for this reason." Then we can talk.
Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior that they don't like to automatically call for blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it's wrong to think that we should just immediately look to find ways to shut people up:
Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It ends up being a seemingly innocuous, or rather "non-real" manifestation, when in fact it's linked to real world stalking or other kinds of abuse, even amounting to physical assault, death threats, so and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you lead to a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and also its editors just happen to skew overwhelmingly male. When you have great equality on online platforms, you have better social value for the entire world.
That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.
And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.
She then noted that this was a major concern because there's a big push among many people who aren't arguing for better free speech protections:
That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."
From there the discussion went in a number of different important directions, looking at other alternatives and ways to deal with bad behavior online that get beyond just "take it down, take it down," and also discussed the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- one that people admitted was unlikely to spread to other countries, and whose underlying concepts are actually under attack in many places.
That's why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles -- highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:
All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.
With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.
In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms contain some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not in denying that there are issues to be dealt with. And not just saying "anything goes and shut up if you don't like it," but that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.
The court said such a law hit at the root of liberty and freedom of expression, the two cardinal pillars of democracy. The court said the section has to be erased from the law books as it has gone much beyond the reasonable restrictions put by the Constitution on freedom of speech. The Supreme Court said section 66A was vaguely worded and allowed its misuse by police.
But the judges did not eliminate another controversial power granted by the IT Act:
The court, however, upheld the validity of section 69B and the 2011 guidelines for the implementation of the I-T Act that allowed the government to block websites if their content had the potential to create communal disturbance, social disorder or affect India's relationship with other countries.
Those are pretty vague criteria, and it's easy to see them being abused, just as Section 66A was. Nonetheless, this is an important ruling (pdf), not least for the Indian Supreme Court's robust defense of free speech. Let's hope future Indian laws attempting to control online activities take note of its wisdom.
We had been noting, in the wake of the Charlie Hebdo attacks in France, how the country that then held a giant "free speech" rally appeared to be, instead, focusing on cracking down on free speech at every opportunity. And target number one: the internet. Earlier this week, the Interior Minister of France -- with no court review or adversarial process -- ordered five websites to not only be blocked in France, but that anyone who visits any of the sites get redirected to a scary looking government website, saying:
You are being redirected to this official website since your computer was about to connect with a page that provokes terrorist acts or condones terrorism publicly.
It appears that the French government has a very low opinion of the intelligence of the French public -- believing that merely reading something online will suddenly make them rush to join ISIS.
"I do not want to see sites that could lead people to take up arms on the Internet," Interior Minister Bernard Cazeneuve said.
"I make a distinction between freedom of expression and the spread of messages that serve to glorify terrorism. These hate messages are a crime."
Except... it already appears that France is really just censoring websites with messages it doesn't like. In that first batch was a site called "islamic-news.info." The owner of that site not only notes that he was never first contacted to "remove" whatever material was deemed terrorist supporting (as required by the law), but that nothing in what he had posted was supporting terrorism. He has written a public statement posted on the French news site Numerama, in which he makes it clear that he's a one-man operation, and that he's been doing everything based on a 50 euro/month hosting plan, and that he doesn't support ISIS or Al Qaeda at all. His site is opinionated, but mostly just against current Syrian leader Bashar al-Assad. In fact, he notes that he specifically avoided topics that might be misinterpreted to suggest that he supported terrorists. He did not share ISIS propaganda or similar content. He even points out how he denounced a Syrian fighter who argued for attacks on Europe, saying that such things would reflect poorly on Muslims in Europe.
But, with no judicial review, no due process at all, the French government declared the site to be a terrorist supporter and now it's gone.
All that talk about France and free speech quickly fades into nothing. As Glenn Greenwald, at the Intercept, points out in response to all of this, blatant government censorship is far more damaging than terrorist attacks (while also noting that governments around the globe are moving in similar directions):
In sum, far more damage has been inflicted historically by efforts to censor and criminalize political ideas than by the kind of “terrorism” these governments are invoking to justify these censorship powers.
And whatever else may be true, few things are more inimical to, or threatening of, Internet freedom than allowing functionaries inside governments to unilaterally block websites from functioning on the ground that the ideas those sites advocate are objectionable or “dangerous.” That’s every bit as true when the censors are in Paris, London, Ottawa, and Washington as when they are in Tehran, Moscow or Beijing.
France's "motto" is supposedly Liberté, égalité, fraternité. I have difficulty seeing how blatantly censoring websites you disagree with, without any sort of due process, fits with any of those three ideals.
The DMCA takedown notice allows rights holders to perform targeted removals of infringing… I can't even finish that sentence with a straight face. IN THEORY, it can. In reality, it often resembles targeting mosquitoes with a shotgun. Collateral damage is assumed.
Case in point: Internet Brands recently issued two takedown requests to protect some of its cruelty-free, farmed content originating at LawFirms.com. It's this phrase -- taken verbatim from LawFirms' "Penalties for Tax Evasion" -- that has triggered the takedown notices from Internet Brands.
Tax evasion refers to attempts by individuals, corporations or trusts to avoid paying the total amount of taxes owed through illegal means, known as tax evasion fraud.
The second (at least according to Google's non-numeric sorting) is a repeat of the first, except for the addition of a Techdirt post. At first glance, the targeting of this article by Tim Geigner -- "Dear Famous People: Stop Attempting Online Reputation Scrubbing; I Don't Want To Write Streisand Stories Anymore" -- would appear to be exactly the sort of behavior Dark Helmet was decrying. But it isn't.
The phrase triggering the Internet Brands takedown can be found in a very late arrival to the comment thread, more than one-and-a-half years after the original post went live. It opens up with this:
This is a very interesting. I read the whole article at New York Magazine. So someone is accused of tax evasion and then charges are dropped and then tries to clean up his reputation.... nothing wrong with that.
Then, for no apparent reason, the commenter drops in the LawFirms.com paragraph highlighted above.
Now, here's the problem. If blogs and other sites are reposting others' content without permission, that's one thing. But targeting whole posts for delisting just because a commenter copy-pasted some content is abusive. It could very possibly take out someone else's created content -- covered under their copyright. Using a DMCA notice in this fashion can allow unscrupulous rights holders to bypass Section 230 protections -- effectively holding site owners "responsible" for comments and other third-party posts by removing the site's original content from Google's listings.
From the looks of it, Internet Brands did nothing more than perform a Google search for this phrase and issue takedown notices for every direct quote that originated from somewhere other than its sites. It didn't bother vetting the search results for third-party postings, fair use or anything else that might have made its takedown requests more targeted. Internet Brands doesn't issue many takedowns, so it's not as though its IP enforcement squad had its hands full. In fact, there's every reason to believe actual humans are involved in this process, rather than just algorithms -- all the more reason to handle it more carefully. Here's a little bit of snark it inserted into a 2014 DMCA takedown notice.
The interview and photos are published on our website and permission hasn't been granted for anyone else to republish them. Not only is the content stolen it out ranks our website in a Google search for the keyword "th taylor". So much for Google being able to identify the source of original content!
If a company has the time to leave personal notes for Google (which doesn't have the time to read them), then it has time to ensure its requests aren't targeting the creative works of others just to protect its own. The DMCA notice is not some sort of IP-measuring contest with Google holding the ruler. If Internet Brands thinks it is -- or just hasn't bothered to vet its takedown requests before sending -- it's usually going to be the one coming up short. If Google doesn't ignore the request, those on the receiving end of a bogus takedown will make a lot of noise. Either way, it's accomplished nothing.
You may wonder what kind of person would deny that HIV leads to AIDS or death, but when you come across someone like prominent "AIDS denialist" Clark Baker, you may be inclined to believe that this is but a small part of his misanthropy. Baker has -- previous to his IP-abusing lawsuit against an HIV-positive blogger -- been arrested for assaulting a jaywalker and provides legal counsel to people accused of endangering sexual partners by concealing their HIV/AIDS status.
Baker sued a critic of his (the above-mentioned blogger) but had no legal basis for doing so. This didn't stop him from pursuing the lawsuit, because intellectual property laws always seem willing to lend a hand when there's some censoring to be done. Baker originally cast the blogger's postings as defamatory, but apparently wasn't too confident in letting that accusation stand on its own. So, he added something about trademark law -- supposed Lanham Act violations tied to the use of his entity's (HIV Innocence Project) name in the defendant's URL (hivinnocenceprojectruth.com).
Although the appellate decision itself is not very illuminating, the fact that the court indicated that its reasoning was the same as that of the trial judge helps illuminate the ruling as well as reinforcing the important precedent that it set — that when a defamation plaintiff throws in a trademark claim to justify suing in federal court, as well as hoping to make the whole proceedings more intimidating to the defendant, it really is possible to get the case thrown out at the pleading stage.
Good news for bloggers, as Public Citizen's Paul Alan Levy points out. Getting a case tossed at the pleading stage saves a whole lot of money, and bloggers who utilize trademarks in their writing (especially when writing critical pieces) need a safety valve to protect them from trademark bullies. This case does offer some hope in that respect, although there are still many gray areas of IP law left untouched in the decision -- including crucial defenses like protected speech and fair use.
The main reason why Levy took this case (apart from answering the Popehat signal) is to help establish precedent in terms of legal fee awards in clear cases of trademark bullying. The Fifth's two-page affirmation of the lower court's decision leaves this particular aspect unaddressed.
But I have been concerned about the fact that some circuits, the Fifth Circuit among them, set a standard for awarding attorney fees for winning trademark defendants -- demanding a showing of bad faith by clear and convincing evidence. There is wide-ranging disagreement among the various circuits about just what standard should govern trademark attorney fees awards. Our long-standing argument has been that, when a trademark theory is used to try to shut down an expressive use, groundlessness of the claim should alone be sufficient to treat the claim as "exceptional."
What has been applied to patent cases by the Supreme Court should also be applied to trademark cases, Levy argues in a brief to the circuit court. In Octane Fitness v. Icon Health and Fitness, it ruled that the groundless nature of the lawsuit itself can be enough to award attorneys' fees -- overturning a lower court's finding that this alone wasn't sufficient to raise the case to the "exceptional" level.
In our brief to the Fifth Circuit on the fees issue, we argue that because the Lanham Act and Patent Act both allow awards of fees in “exceptional” cases, and because a variety of other aids to statutory construction suggest that the standards for fees under the Lanham Act should follow patent-law precedent, the court ought to take Octane Fitness as its new governing standard for trademark cases. We also argue that the fact that the complaint could have been dismissed on several different grounds supports a conclusion that the lawsuit was sufficiently lacking in merit to warrant an award of attorney fees without considering the evidence that plaintiffs were using the litigation to pursue the improper purpose of intimidating a critic and using discovery to pursue some pretty wild conspiracy theories.
Better deterrents are needed to ward off future censorious lawsuits. Anti-SLAPP laws are a start, but they've only been adopted in a handful of states. Precedent from a higher court would at least establish a new baseline in the court's district, sending the message that baseless lawsuits -- if not tossed nearly immediately -- will open plaintiffs up to further damages in the form of legal fees.
It's almost as if the UK is trying to be a shining example of the "slippery slope" we often refer to when talking about the dangers of filtering the Internet. Either that, or they're secretly creating absurdist art. Whether it's the government's porn filter architect getting arrested for child porn, the UK's filters blocking useful and entirely legal websites, or the desire to expand Internet filters to include ambiguously defined "extremist content," the UK has finally achieved high comedy with its stumbling, bumbling foray into trying to rid the Internet of its naughty bits.
With the country's Pirate Bay filters going so well (as in, not really well at all), the UK is engaged in a heated game of whac-a-mole to stop users from accessing the Pirate Bay specifically and BitTorrent websites in general. Despite years of effort and expenditures, it remains relatively simple for most UK residents to dodge these bans, quite often by either changing simple DNS settings or by using a proxy server. The Pirate Bay has made it easier by often switching IP addresses, and when that doesn't work, users can still access the website via dedicated proxy sites. UK ISPs were already being forced to filter these proxy sites.
"Among the blocked sites are piratebayproxy.co.uk, piratebayproxylist.com and ukbay.org. Both sites are currently inaccessible on Virgin Media and TalkTalk, and other providers are expected to follow suit...TF spoke with Dan, the operator of UKBay.org, who’s baffled by the newly implemented blockade. He moved his site to a new domain to make the site accessible again, for the time being at least.
"The new blocks are unbelievable and totally unreasonable. To block a site that simply links to another site just shows the level of censorship we are allowing ISP’s to get away with," Dan says. "UKBay is not even a PirateBay proxy. It simply provides links to proxies. If they continue blocking sites, that link to sites, that link to sites.. there’l be nothing left,” he adds."
The filters include websites like piratebayproxy.co.uk, which features BitTorrent related news but also happens to list available proxies in a sidebar. What's next? A filter on the websites that list the websites that list the websites that offer proxy access to BitTorrent websites? Maybe for good measure UK ISPs should start filtering forums where you can discuss even so much as thinking about piracy just to be safe? It makes one wonder: when does a slippery slope stop being a slippery slope -- and just become an outright waterfall?
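The "changing simple DNS settings" dodge mentioned above is worth a moment's illustration, since it shows how shallow these blocks really are. A toy model (the domain and addresses below are hypothetical, for illustration only):

```python
# Toy model of DNS-level blocking: the ISP's resolver lies about a
# blocked name, but any third-party resolver the user switches to
# still returns the real record -- one settings change and the
# "filter" is gone.

REAL_RECORDS = {"piratebayproxy.example": "203.0.113.7"}  # hypothetical IP

def isp_resolver(name, blocklist):
    # The ISP sinkholes blocked names instead of answering honestly.
    if name in blocklist:
        return "0.0.0.0"
    return REAL_RECORDS.get(name)

def third_party_resolver(name):
    # An unfiltered resolver, outside the ISP's control.
    return REAL_RECORDS.get(name)

blocklist = {"piratebayproxy.example"}
assert isp_resolver("piratebayproxy.example", blocklist) == "0.0.0.0"
assert third_party_resolver("piratebayproxy.example") == "203.0.113.7"
```

Which is why each new layer of filtering just pushes users one resolver, proxy, or domain change further along -- without ever actually stopping them.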
There has been an increasing push by the legacy entertainment industry to get "full site blocking," in which companies can declare sites they don't like as "rogue" and order ISPs to block all access to them. This was the whole point of SOPA. And while that law failed in the US, the entertainment industry is still interested in figuring out other paths to making it happen. Courts in many other countries have been much more receptive to this form of censorship -- and have regularly ordered ISPs to block sites. This is true in Sweden as well, but it appears that one ISP, Bredbandsbolaget, is going to fight back for as long as it can, according to Torrentfreak:
“It is an important principle that Internet providers of Internet infrastructure shall not be held responsible for the content that is transported over the Internet. In the same way that the Post should not meddle in what people write in the letter or where people send letters,” Commercial Director Mats Lundquist says.
“We stick to our starting point that our customers have the right to freely communicate and share information over the internet.”
Of course, this means that they'll be going to court later this year. Torrentfreak notes that the MPAA is pulling the strings behind this, of course:
Internal movie industry documents obtained by TorrentFreak reveal that IFPI and the Swedish film producers have signed a binding agreement which compels them to conduct and finance the case. However, the MPAA is exerting its influence while providing its own evidence and know-how behind the scenes.
Also of interest is that IFPI took a decision to sue Bredbandsbolaget and not Teliasonera (described by the MPAA as “the largest and also very actively ‘copy-left’ Swedish ISP”). The reason for that was that IFPI’s counsel represents Teliasonera in other matters which would have raised a conflict of interest.
Meanwhile, we're still left wondering how any of this encourages people to actually spend more money to support content creators.
[Windermere Cay's] Social Media Addendum, published here, is a triple-whammy. First, it explicitly bans all "negative commentary and reviews on Yelp! [sic], Apartment Ratings, Facebook, or any other website or Internet-based publication or blog." It also says any "breach" of the Social Media Addendum will result in a $10,000 fine, to be paid within ten business days. Finally, it assigns the renters' copyrights to the owner—not just the copyright on the negative review, but "any and all written or photographic works regarding the Owner, the Unit, the property, or the apartments." Snap a few shots of friends who come over for a dinner party? The photos are owned by your landlord.
The Florida apartment complex claims the stupid clause is needed to prevent "unjust and defamatory reviews." It makes this claim -- not in a statement given to Ars Technica (which was tipped off by a resident) -- but in the introductory paragraph of the Addendum. From there it gets worse. Doing any of the following triggers a $10,000 fine, with $5,000 added on for each additional "infraction."
This means that Applicant shall not post negative commentary or reviews on Yelp!, Apartment Ratings, Facebook, or any other website or Internet-based publication or blog. Applicant agrees that Owner shall make the determination of whether such commentary is harmful in Owner's sole discretion, and Applicant agrees to abide by Owner' determination as to whether such commentary is harmful.
Then come the copyright demands.
Additionally, each Applicant hereby assigns and transfers to Owner any and all rights, including all rights of copyright as set forth in the United States Copyright Act, in any and all written or photographic works regarding the Owner, the Unit, the property, or the apartments. This means that if an Applicant creates an online posting on a website regarding the Owner, the Unit, the property, or the apartments, the Owner will have the right to notify the website to take down any such online posting pursuant to the Digital Millennium Copyright Act.
Of course, when confronted by Ars about the Addendum, the property managers claimed this was all someone else's fault.
Asked about the Social Media Addendum by Ars, Windermere Cay's property manager sent this response via e-mail: "This addendum was put in place by a previous general partner for the community following a series of false reviews. The current general partner and property management do not support the continued use of this addendum and have voided it for all residents."
I would imagine the support was removed and the addendum voided shortly after Ars publicized it, and not a moment before. According to Ars, the resident who contacted the site was asked to sign this suddenly-unsupported addendum only "days before." But Windermere Cay's management now very likely regrets ever including it in the first place. Like so many others before it, Windermere Cay is learning that attempting to preemptively shut down criticism with bogus clauses and high fees almost always results in more criticism. Its Yelp page is swiftly filling up with negative reviews and -- like every other emotionally-charged incident on the internet -- has already achieved Godwin.
Obviously, there are better ways to handle allegedly defamatory reviews. A $10,000 fine and a preemptive usurpation of tenants' copyright isn't one of them.
[And neither is this bizarre Craigslist ad from another, unrelated rental property -- which makes vague claims about "defamation" while shouting "LAWSUIT LAWSUIT LAWSUIT" across the ether.]
As multiple entities have learned over the years, you can't stop criticism on the internet. You can only hope to contain it. Legal threats and punitive fines tend to blow the walls right off the containment scheme. What should be handled with exceptional customer service and the rare lawsuit (for truly defamatory statements) is instead turned over to hamfisted legalese and intimidating dollar amounts -- both of which make things worse for the entities they're ostensibly in place to protect.