Techdirt Lite.

Let's Do Business: How Lifting The Embargo Has Opened The Door For Cuban Trademark Suits (Trademark)

by Timothy Geigner

from the hola! dept on Thursday, March 26th, 2015 @ 9:07PM
I'll miss the Cuban embargo. The easing of relations that comes with ending it will likely mean the end of the 1950s-style spy games and crazy plots -- like the CIA plot designed to make a leader's beard fall out. Instead, we've finally decided that the United States is open for Cuban business. And you know what that means: trademark lawsuits!
The U.S. Supreme Court recently ruled in favor of a Cuban state-owned company and refused to intervene in a dispute over the “Cohiba” trademark. This is the most recent development in the long-standing rivalry between General Cigar Co Inc., an American (and Scandinavian) company, and Cubatabaco, a Cuban company.
How fun! We finally open up the borders for some business with Cuba and one of the Castro companies decides it's trademark time! Keep in mind, of course, that the state that owns Cubatabaco is a communist nation, but not so communist that it'll refuse to use our capitalist tools to make that money. This dispute actually goes back nearly two decades, with Cubatabaco originally filing a trademark claim in 1997, which was eventually tossed in 2005 by the Second Circuit, which found that any transfer of property, including a trademark, to a Cuban company would violate the embargo.

But now that the embargo is gone, Cubatabaco has refiled, with a lower court ruling that the Cuban company could challenge General Cigar's mark with the USPTO even before the embargo was lifted -- a ruling the Supreme Court has refused to send back for review. So there appears to be nothing standing in the way of a trademark challenge.

All that said, it's difficult to see how valid such a challenge actually is, given several factors. First, the two companies don't yet compete in the same markets, due to the legacy of the embargo. Second, the word "cohiba" might not deserve a trademark held by anyone, given that it is simply a foreign word meaning "tobacco" in Taino, a language of the Caribbean. That would be like getting a trademark on your beer brand, Cerveza.

However this turns out, welcome officially to business in the States, Cuba! Now that the embargo doesn't keep property from transferring your way, it's all trademarks, patents and copyrights from here on out!
5 Comments

DailyDirt: Flying With The Greatest Of Ease (Too Much Free Time)

by Michael Ho

from the urls-we-dig-up dept on Thursday, March 26th, 2015 @ 5:00PM
Airplanes have been commonplace for quite some time now, and we've grown accustomed to what an airplane should look like. Ask any kid to draw a plane, and you'll probably get familiar results. However, this doesn't mean we've reached the end of novel plane designs. Plenty of unconventional planes are being designed and tested, and here are just a few. If you'd like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
1 Comment

California Legislators Pushing Warrant Requirement For All Access To Electronic Information, Including That Obtained By Stingrays (Privacy)

by Tim Cushing

from the strong-nod-towards-long-ignored-rights dept on Thursday, March 26th, 2015 @ 3:47PM
Good news from California: a bill requiring warrants for Stingray device usage (among other things) has passed out of a Senate committee and is headed for an assembly vote.
Among other sweeping new requirements to enhance digital privacy, the bill notably imposes a warrant requirement before police can access nearly any type of digital data produced by or contained within a device or service.

In other words, that would include any use of a stingray, also known as a cell-site simulator, which can not only be used to determine a phone’s location, but can also intercept calls and text messages. During the act of locating a phone, stingrays also sweep up information about nearby phones—not just the target phone.
Despite similar bills being killed by gubernatorial vetoes in 2012 and 2013, California legislators are still looking to reform the state's privacy laws. For one thing, this new bill would put the state's Electronic Communications Privacy Act in compliance with the Supreme Court's recent Riley v. California decision (which requires a warrant for cell phone searches incident to arrest), as Cyrus Farivar points out.

The committee passed it with a 6-1 vote, suggesting there's broader support for privacy and Fourth Amendment protections now than there was in the pre-Snowden days. Of course, the usual opposition was on hand to portray those pushing for a warrant requirement as being in favor of sexually abusing children.
[Marty] Vranicar [California District Attorneys Association] told the committee that the bill would "undermine efforts to find child exploitation," specifically child pornography.

"SB 178 threatens law enforcement’s ability to conduct undercover child porn investigation. the so-called peer-to-peer investigations," he said. "Officers, after creating online profiles—these e-mails provide metadata that is the key to providing information. This would effectively end online undercover investigations in California."
Vranicar failed to explain how an officer conducting an ongoing investigation would be unable to obtain a warrant for P2P user data… unless, of course, the "investigation" was nothing more than unfocused trolling or a sting running dangerously low on probable cause. Nothing in the bill forbids officers from using other methods -- Fourth Amendment-respecting methods -- to pursue those suspected of child exploitation. What it does do is make it more difficult to run stings and honeypots, both of which are already on shaky ground in terms of legality.

Additionally, the bill demands extensive reporting requirements pertaining to government requests for data, and makes an effort to strip away the secrecy surrounding search warrants.
1546.2 (a) Except as otherwise provided in this section, any government entity that executes a warrant or wiretap order or issues an emergency request pursuant to Section 1546.1 shall contemporaneously serve upon, or deliver by registered or first-class mail, electronic mail, or other means reasonably calculated to be effective, the identified targets of the warrant, order, or emergency request, a notice that informs the recipient that information about the recipient has been compelled or requested, and states with reasonable specificity the nature of the government investigation under which the information is sought. The notice shall include a copy of the warrant or order, or a written statement setting forth facts giving rise to the emergency.

(b) If there is no identified target of a warrant, wiretap order, or emergency request at the time of its issuance, the government entity shall take reasonable steps to provide the notice, within three days of the execution of the warrant, to all individuals about whom information was disclosed or obtained.
This isn't blanket coverage or without exceptions. Officers can still offer the court sworn affidavits in support of sealing, and the court may then seal warrants on a rolling 90-day basis at its discretion.

Law enforcement will continue to fight this bill, but its opposition seemingly had no effect on the Public Safety Committee. This bill brings the government into a much tighter alignment with the wording and the intent of the Fourth Amendment. The arguments against it demonstrate that the law enforcement community continues to prize efficient policing over the public's (supposedly) guaranteed rights.

7 Comments

Dangerously Underpowered NSA Begging Legislators For Permission To Go To Cyberwar ((Mis)Uses of Technology)

by Tim Cushing

from the poor,-neglected-NSA dept on Thursday, March 26th, 2015 @ 2:36PM
Cyber-this and cyber-that. That's all the government wants to talk about. The NSA, which has always yearned for a larger slice of the cybersecurity pie, is pushing legislators to grant it permission to go all-out on the offensive to protect foreign-owned movie studios -- er, the USofA -- from hackers.
NSA director Mike Rogers testified in front of a Senate committee this week, lamenting that the poor ol’ NSA just doesn’t have the “cyber-offensive” capabilities (read: the ability to hack people) it needs to adequately defend the US. How cyber-attacking countries will help cyber-defense is anybody’s guess, but the idea that the NSA is somehow hamstrung is absurd.
Yes, we (or rather, our representatives) are expected to believe the NSA is just barely getting by when it comes to cyber-capabilities. Somehow, backdoors in phone SIM cards, backdoors in networking hardware, backdoors in hard drives, compromised encryption standards, collection points on internet backbones, the cooperation of national security agencies around the world, stealth deployment of malicious spyware, the phone records of pretty much every American, access to major tech company data centers, an arsenal of purchased software and hardware exploits, various odds and ends yet to be disclosed and the full support of the last two administrations just isn't enough. Now, it wants the blessing of lawmakers to do even more than it already does. Which is quite a bit, actually.
The NSA runs sophisticated hacking operations all over the world. A Washington Post report showed that the NSA carried out 231 “offensive” operations in 2011 - and that number has surely grown since then. That report also revealed that the NSA runs a $652m project that has infected tens of thousands of computers with malware.
That was four years ago -- a lifetime when it comes to an agency with the capabilities the NSA possesses. Anyone who claims the current numbers are lower is probably lobbying for increased power. And they don't believe it. They just act like they do.

Unfortunately, legislators may be in a receptive mood. CISA -- CISPA rebranded -- is back on the table. The recent Sony hack, which caused millions of dollars of embarrassment, has gotten more than a few of them fired up about the oft-deployed term "cybersecurity." Most of those backing this legislation don't seem to have the slightest idea (or just don't care) how much collateral damage it will cause or the extent to which they're looking to expand government power.

The NSA knows, and it wants this bill to sail through unburdened by anything more than its requests for permission to fire.
The bill will do little to stop cyberattacks, but it will do a lot to give the NSA even more power to collect Americans’ communications from tech companies without any legal process whatsoever. The bill’s text was finally released a couple days ago, and, as EFF points out, tucked in the bill were the powers to do the exact type of “offensive” attacks for which Rogers is pining.
In the meantime, Section 215 languishes slightly, as Trevor Timm points out. But that's the least of the NSA's worries. It has tech companies openly opposing its "collect everything" approach. Apple and Google are both being villainized by security and law enforcement agencies for their encryption-by-default plans. More and more broad requests for user data are being challenged, and (eventually) some of the administration's minor surveillance tweaks will be implemented.

Section 215 may die. (Or it may keep on living even in death, thanks to some ambiguous language in the PATRIOT Act.) But I would imagine the bulk phone metadata program is no longer a priority for the NSA. It has too many other programs that harvest more and face fewer challenges. The NSA wants to be a major cyberwar player, which is something that will only increase its questionable tactics and domestic surveillance efforts. If it gets its way via CISA, it will be able to make broader and deeper demands for information from tech companies. Under the guise of "information sharing," the NSA will collect more and share less. And what it does share will be buried under redactions, gag orders and chants of "national security." Its partnerships with tech companies will bear a greater resemblance to parasitic relationships than to anything approaching equitable ones, especially when these companies will have this "sharing" foisted upon them by dangerously terrible legislation.

But until it reaches that point, the NSA will keep claiming it's under-equipped to handle the modern world. And it will continue to make the very dubious claim that the best defense is an unrestrained offense.
17 Comments

Free Speech, Censorship, Moderation And Community: The Copia Discussion (Free Speech)

by Mike Masnick

from the not-an-easy-issue dept on Thursday, March 26th, 2015 @ 1:39PM
As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I've already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion that we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner, who heads up "Safety, Privacy & Support" at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before just jumping into the comments with your assumptions about what was said, because for the most part it's probably not what you think:
Internet platforms and free expression have a strongly symbiotic relationship -- many platforms have helped expand and enable free expression around the globe in many ways. And, at the same time, that expression has fed back into those online platforms making them more valuable and contributing to the innovation that those platforms have enabled. And while it's easy to talk about government attacks on freedom of expression and why that's problematic, things get really tricky and really nuanced when it comes to technology platforms and how they should handle things. At one point in the conversation, Dave Willner made a point that I think is really important to acknowledge:
I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter: "the free speech wing of the free speech party." Twitter moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful" but two things: One, you've allowed that "harm" is a legitimate reason to moderate speech and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, it is the sort of reflexive "no restrictions based on the content of speech" sort of defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn't mean that all moderation is good.
This is an extremely important, but nuanced point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, but plenty of people do. The argument then, is that we'd be able to have a much more useful conversation if people admit:
Don't say "this isn't a free speech issue", rather "this is a free speech issue, and I’m OK with this amount of censorship, for this reason.” Then we can talk."
Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior that they don't like to automatically call for blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it's wrong to think that we should just immediately look to find ways to shut people up:
Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It ends up being a seemingly innocuous, or rather "non-real" manifestation, when in fact it's linked to real world stalking or other kinds of abuse, even amounting to physical assault, death threats, so and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you lead to a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and also its editors just happen to skew overwhelmingly male. When you have great equality on online platforms, you have better social value for the entire world.

That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.

And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.
She then noted that this was a major concern because there's a big push among many people who aren't arguing for better free speech protections:
That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."
From there the discussion went in a number of different important directions, looking at other alternatives and ways to deal with bad behavior online that get beyond just "take it down, take it down," and also discussing the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- one that people admitted is unlikely to spread to other countries, and whose underlying concepts are actually under attack in many places.

That's why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles -- highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:
All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.

With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.
In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms contain some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not denying that there are issues to be dealt with, and not just saying "anything goes and shut up if you don't like it," but recognizing that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.
13 Comments
