The U.S. Supreme Court recently ruled in favor of a Cuban state-owned company and refused to intervene in a dispute over the “Cohiba” trademark. This is the most recent development in the long-standing rivalry between General Cigar Co Inc., an American (and Scandinavian) company, and Cubatabaco, a Cuban company.

How fun! We finally open up the borders for some business with Cuba, and one of the Castro companies decides it's trademark time! Keep in mind, of course, that the state that owns Cubatabaco is a communist nation -- but not so communist that it will refuse to use our capitalist tools to make that money. This dispute actually goes back nearly two decades: Cubatabaco originally filed a trademark claim in 1997, which was eventually tossed in 2005 by the Second Circuit, which found that any transfer of property, including a trademark, to a Cuban company would violate the embargo.
Among other sweeping new requirements to enhance digital privacy, the bill notably imposes a warrant requirement before police can access nearly any type of digital data produced by or contained within a device or service.

Despite similar bills being killed by governor vetoes in 2012 and 2013, California legislators are still looking to reform the state's privacy laws. For one thing, this new bill would bring the state's Electronic Communications Privacy Act into compliance with the Supreme Court's recent Riley v. California decision (which requires a warrant for cell phone searches incident to arrest), as Cyrus Farivar points out.
In other words, that would include any use of a stingray, also known as a cell-site simulator, which can not only be used to determine a phone’s location, but can also intercept calls and text messages. While locating a target phone, stingrays also sweep up information about nearby phones—not just the target phone.
[Marty] Vranicar [California District Attorneys Association] told the committee that the bill would "undermine efforts to find child exploitation," specifically child pornography.

Vranicar failed to explain how an officer conducting an ongoing investigation would be unable to obtain a warrant for P2P user data… unless, of course, the "investigation" was nothing more than unfocused trolling or a sting running dangerously low on probable cause. Nothing in the bill forbids officers from using other methods -- Fourth Amendment-respecting methods -- to pursue those suspected of child exploitation. What it does do is make it more difficult to run stings and honeypots, both of which are already on shaky legal ground.
"SB 178 threatens law enforcement’s ability to conduct undercover child porn investigations, the so-called peer-to-peer investigations," he said. "Officers, after creating online profiles—these e-mails provide metadata that is the key to providing information. This would effectively end online undercover investigations in California."
1546.2 (a) Except as otherwise provided in this section, any government entity that executes a warrant or wiretap order or issues an emergency request pursuant to Section 1546.1 shall contemporaneously serve upon, or deliver by registered or first-class mail, electronic mail, or other means reasonably calculated to be effective, the identified targets of the warrant, order, or emergency request, a notice that informs the recipient that information about the recipient has been compelled or requested, and states with reasonable specificity the nature of the government investigation under which the information is sought. The notice shall include a copy of the warrant or order, or a written statement setting forth facts giving rise to the emergency.

This isn't blanket coverage or without exceptions. Officers can still offer sworn affidavits in support of sealing to the court, which may then seal warrants on a rolling 90-day basis at its discretion.
(b) If there is no identified target of a warrant, wiretap order, or emergency request at the time of its issuance, the government entity shall take reasonable steps to provide the notice, within three days of the execution of the warrant, to all individuals about whom information was disclosed or obtained.
Dangerously Underpowered NSA Begging Legislators For Permission To Go To Cyberwar ((Mis)Uses of Technology)
NSA director Mike Rogers testified in front of a Senate committee this week, lamenting that the poor ol’ NSA just doesn’t have the “cyber-offensive” capabilities (read: the ability to hack people) it needs to adequately defend the US. How cyber-attacking countries will help cyber-defense is anybody’s guess, but the idea that the NSA is somehow hamstrung is absurd.

Yes, we (or rather, our representatives) are expected to believe the NSA is just barely getting by when it comes to cyber-capabilities. Somehow, backdoors in phone SIM cards, backdoors in networking hardware, backdoors in hard drives, compromised encryption standards, collection points on internet backbones, the cooperation of national security agencies around the world, stealth deployment of malicious spyware, the phone records of pretty much every American, access to major tech company data centers, an arsenal of purchased software and hardware exploits, various odds and ends yet to be disclosed and the full support of the last two administrations just isn't enough. Now, it wants the blessing of lawmakers to do even more than it already does. Which is quite a bit, actually.
The NSA runs sophisticated hacking operations all over the world. A Washington Post report showed that the NSA carried out 231 “offensive” operations in 2011 - and that number has surely grown since then. That report also revealed that the NSA runs a $652m project that has infected tens of thousands of computers with malware.

That was four years ago -- a lifetime when it comes to an agency with the capabilities the NSA possesses. Anyone who claims the current numbers are lower is probably lobbying for increased power. And they don't actually believe it; they'd just act like they do.
The bill will do little to stop cyberattacks, but it will do a lot to give the NSA even more power to collect Americans’ communications from tech companies without any legal process whatsoever. The bill’s text was finally released a couple days ago, and, as EFF points out, tucked in the bill were the powers to do the exact type of “offensive” attacks for which Rogers is pining.

In the meantime, Section 215 languishes slightly, as Trevor Timm points out. But that's the least of the NSA's worries. It has tech companies openly opposing its "collect everything" approach. Apple and Google are both being villainized by security and law enforcement agencies for their encryption-by-default plans. More and more broad requests for user data are being challenged, and (eventually) some of the administration's minor surveillance tweaks will be implemented.
I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter: "the free speech wing of the free speech party." Twitter moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful" but two things: One, you've allowed that "harm" is a legitimate reason to moderate speech and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, the sort of reflexive "no restrictions based on the content of speech" defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn't mean that all moderation is good.

This is an extremely important, but nuanced point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, but plenty of people do. The argument, then, is that we'd be able to have a much more useful conversation if people admit:
Don't say "this isn't a free speech issue", rather "this is a free speech issue, and I’m OK with this amount of censorship, for this reason.” Then we can talk."

Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior that they don't like: to automatically call for blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it's wrong to think that we should just immediately look to find ways to shut people up:
Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It ends up being a seemingly innocuous, or rather "non-real" manifestation, when in fact it's linked to real world stalking or other kinds of abuse, even amounting to physical assault, death threats, so and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you lead to a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and also its editors just happen to skew overwhelmingly male. When you have greater equality on online platforms, you have better social value for the entire world.

She then noted that this was a major concern because there's a big push among many people who aren't arguing for better free speech protections:
That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.
And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.
That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."

From there the discussion went in a number of different important directions, looking at other alternatives and ways to deal with bad behavior online that get beyond just "take it down, take it down," and also discussed the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- and one that people admitted was unlikely to spread to other countries, and the concepts behind which are actually under attack in many places.
All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.

In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms contain some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not denying that there are issues to be dealt with, and not just saying "anything goes and shut up if you don't like it," but recognizing that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.
With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.