Back in the mid-90s there was a series of lawsuits over "deep linking" practices, in which people who didn't quite understand how the web worked sued other sites for linking to them without permission. We still see this happen occasionally, such as with the Associated Press's ridiculous assertion that other sites shouldn't link to its articles using a headline and a snippet. However, it appears that some smaller news organizations are just as clueless about the internet. Reader Ben writes in to point out that GateHouse Media, a publisher of some local free news publications in Massachusetts, is suing the NY Times for linking to them. The full complaint shows a near complete misunderstanding of how the internet works. You can read it here.
Basically, the big complaint is that Boston.com (which is owned by the NY Times) has a local section where it links to GateHouse publications. It does so in ways that are clearly fair use: it includes the headline and the very first sentence of each GateHouse article, with a link to the full version. This drives traffic to GateHouse's publications and clearly takes nothing away from GateHouse. But GateHouse claims this is copyright infringement. Furthermore, GateHouse claims trademark infringement because Boston.com accurately shows where the content originally comes from and tells you what site the link goes to. In other words, it's helping to promote GateHouse's properties; GateHouse, instead, calls this blatant trademark infringement. Even more ridiculously, GateHouse claims that this effort by Boston.com, which helps get it more attention and drives more traffic to its properties, is somehow unfair competition. I only wish we had competition like that.
Perhaps most interesting of all, GateHouse is charging the NY Times with breach of contract, because (of all things) GateHouse uses a Creative Commons license on its content -- though it uses the Attribution, Non-Commercial, No Derivatives license -- and it claims that Boston.com's use is commercial, and thus a contractual violation. This highlights the problem Creative Commons has with its non-commercial licenses. It's pretty clear the intent of such licenses is to prevent a company from reselling the works. But when it's being used to directly drive more traffic to the original site, it's difficult to see how any sane person would see that as a violation of the intent.
Either way, the end result of all of this is that other websites have already come to the conclusion that it's just not worth linking to GateHouse sites at all. Consider it a stupid lawyer tax. Suing people for sending you traffic may be the most braindead business strategy around these days.
Lawrence D'Oliveiro calls our attention to the news that high-end women's shoe company Jimmy Choo is forcing a small New Zealand gifts website to give up its name using trademark law. The name of the site is Kookychoo.com, though it may be gone soon, as it sounds like the owner of the site is going to give in rather than fight. This is, of course, ridiculous. It's highly unlikely that anyone, let alone our favorite moron in a hurry, would confuse Jimmy Choo shoes with a website selling random gifts like teddy bears, bean bags and candles, among other things. Originally, Choo's lawyers just wanted to prevent the owner from trademarking Kookychoo, but they are now demanding she stop using the name entirely. Since it would cost at least $50,000 to go to court should Jimmy Choo sue, the woman is likely to just give up the name. This is corporate bullying at its finest -- abusing trademark law for no reason other than to shut down a small website that isn't competing with the company at all.
A judge has ruled that online video hosting site Veoh is not guilty of copyright infringement for videos uploaded by its users. The judge made the proper ruling here, noting that the DMCA's safe harbors protect Veoh. The lawsuit was brought by adult video entertainment firm Io, which was upset that Veoh's users kept uploading clips from its films. As the judge properly noted, Veoh follows all the rules necessary under the DMCA to avoid liability (this doesn't mean that the individuals doing the uploading aren't liable, however).
While this may seem like a small case, it is quite similar to Viacom's infamous lawsuit against YouTube/Google. Considering that YouTube follows the DMCA's rules in a similar manner to Veoh, this ruling suggests that YouTube is also protected by the DMCA safe harbors, just as many had stated from the beginning. The key issue raised by Io (and also raised by Viacom) is that these sites lose their DMCA safe harbors because they take action on the content, often transcoding it from its original format into Flash. However, the judge in the Veoh case trashed that argument pretty easily:
Here, Veoh has simply established a system whereby software automatically processes user-submitted content and recasts it in a format that is readily accessible to its users. Veoh preselects the software parameters for the process from a range of default values set by the third party software... But Veoh does not itself actively participate or supervise the uploading of files. Nor does it preview or select the files before the upload is completed. Instead, video files are uploaded through an automated process which is initiated entirely at the volition of Veoh's users
The folks over at Google are, understandably, pretty happy about this ruling, which confirms their position that YouTube is protected: "It is great to see the Court confirm that the DMCA protects services like YouTube that follow the law and respect copyrights."
I wrote last fall that the New York Times was finally starting to get the web, and I think the Washington Post is in the same category of taking the web a lot more seriously than it did a few years ago. But although the biggest newspapers are now taking the challenge seriously, they still have work to do. Case in point: the Washington, DC, area had a big storm a while back, and Scott Karp went to the Washington Post website expecting (reasonably enough) to find information about it. Unfortunately, despite being a DC-based publication, the Post's home page had very little information about the storm. Indeed, the home page wouldn't have mentioned it at all if there didn't happen to be a story on the most-read articles list. Unfortunately, that was a formulaic story from the print edition that was fine for a non-Washingtonian who doesn't know anything about the storm, but not terribly useful to a Washingtonian who can see the storm happening outside his window. What locals need is detailed, real-time information. After seeing nothing relevant on the WaPo's website, Karp went over to Google, typed in "power outages in northern virginia," and the first hit was a page from Dominion Electric showing power outages around its service region. Karp went back to the Post's website, and after more searching finally found a blog focusing on DC-area weather -- precisely the sort of thing that the Post ought to be displaying more prominently during major weather events.
I think there are a couple of lessons to be learned from this. First, as Mike has said before, good content is often less important than useful services like organizing and filtering information. The Post had the content Karp wanted -- an up-to-date blog and links to useful resources -- but because its website was poorly organized, he wasn't able to find it easily. Some newspapers claim that Google lives parasitically off of other content producers, but I think this is a good illustration of why that's not true; there was plenty of content out there, but without Google, Karp might not have been able to find it. The other problem is that for all of the Post's progress it still seems to regard itself largely as a newspaper that happens to publish its articles on the web, rather than a general media company that happens to publish a paper edition. Sometimes a traditional newspaper article is the best way to cover a story, but often (as in this case) it's not. The Post, like a lot of newspaper outlets, still seems to put too much emphasis on its print content, even in circumstances where a shorter, timelier, and more densely linked story would be more useful to readers.
Every once in a while, when discussing the DMCA's "safe harbors," someone shows up in the comments to insist that the safe harbors were never intended to apply to websites, but merely to ISPs. Tim Lee does a nice bit of work absolutely destroying that assertion, by pointing out how it doesn't make sense given the language of the law, which is clearly designed to apply to websites as well as network providers (otherwise, as he notes, why would the law ever suggest content would have to be "removed" rather than just "blocked"?).
But, more importantly, the focus should be on the overall intent of the law beyond just the specific scenarios on the minds of those who wrote it. Even if it's true that those who crafted the language weren't "thinking" about websites when they wrote it, the intent of the safe harbor is clear, and it should apply to websites as well as network providers. Why? Because the whole point of safe harbors was to make sure liability was properly applied to those who actually infringed, rather than to an easy-to-target company. That it was the network providers who raised this concern in the first place doesn't mean that the same thinking wouldn't apply to websites as well. And, on top of that, while the safe harbors of the CDA (for things like defamation) haven't been harmonized with the DMCA's safe harbors, the purposes are nearly identical, and the courts have granted extremely wide coverage to the CDA safe harbors, so there's no reason to think that they wouldn't apply the same broad interpretation to the DMCA as well.
from the perhaps-because-there's-no-good-reason dept
On Monday, Tim wrote about the pointless controversy around the site RateMyCop.com, which would allow people to rate police officers they had dealings with. Considering how many similar sites there are for teachers, doctors, restaurants, etc. -- combined with the dangers that come with police abusing the power they are given -- a site to rate police officers seems quite reasonable. But, of course, many police officers didn't see it that way. However, what no one expected is that the site's registrar and host would step into the fight and take the site completely offline with no warning to its owner. Yesterday, GoDaddy pulled the entire site offline, and replaced it with a page telling the owner to call GoDaddy (even though they had his phone number). People at GoDaddy gave conflicting reports as to why the site was taken offline, first claiming it was taken offline for "suspicious activity" and later that he had surpassed a 3 terabyte bandwidth limit, which the owner of the site disputes, saying there weren't nearly enough page views for that to happen. Either way, he's now ditched GoDaddy and found a host that won't pull the site offline with no warning and no recourse.
The Olympic Committee is rather infamous for its ability to convince governments to pass special intellectual property laws just to protect the Olympics. However, it appears as though the folks involved with the Olympics don't take others' IP rights so seriously. Slashdot points out that it appears the Beijing Olympics website has copied a flash game designed by someone else. So, apparently, not only does the Olympics want extra special rights concerning its own efforts, it wants to ignore the already existing copyrights of others. While I find it silly to even try to protect copyright on a simple game like this, it does say something that an organization like the Olympics is so keen on over-protecting its own rights while ignoring those of others.
Over the years, we've seen a ton of jurisdictional questions raised by the internet. After all, since the internet is available just about anywhere, and content on it may break laws in some countries but not others, how do you handle the jurisdiction question? Some courts have determined that it doesn't matter -- and they'll claim jurisdiction over whatever they want. Others suggest that evidence needs to be shown that the content is directed at, and was seen by, many people within the jurisdiction. Others have held that it needs to be created by a local resident or hosted on a local server. However, with all that said, it's not clear what jurisdiction the US government is claiming over a bunch of websites created by a British travel agent. The websites all advertise trips to, or information about, Cuba. The websites were designed for European travelers to plan trips to the island nation. Now, it's well known that US citizens are not allowed to travel to Cuba, but that's not true of people from other countries. So, this guy clearly was not breaking any laws.
No matter, though. Since he had registered the domains for his various websites through eNom, an American company, the US Treasury Department had the company pull down his sites and refuse to release them to another registrar. There's no doubt that if the sites were targeting Americans or were run by an American travel agency, you could understand these actions. But to take down UK-based websites aimed at European travelers, offering them perfectly legitimate trips to Cuba, seems to go beyond any reasonable jurisdictional claim.
While there was some decent news suggesting the economy might not be falling into a recession, there are still plenty of knowledgeable folks who think some sort of recession is likely. Last week, in New York, plenty of folks I spoke to seemed to believe we were already in one. Of course, to actually call a recession, the general consensus is that there would need to be two consecutive quarters of negative economic growth. So how would you measure that growth? Well, apparently the White House would prefer to make it as difficult as possible. Reader Jon writes in to note the rather inconvenient timing of the Administration suddenly deciding to shut down its own website that aggregated economic indicators. The site, EconomicIndicators.gov, had even won awards from Forbes as a great resource. The timing of the closure certainly raises some questions. It's not that difficult to manage a website (though, I recognize, in the government, all costs are multiplied by some insanity multiplier). If it's really so expensive to manage, why not throw it open and make it into a wiki? Hell, perhaps Jimbo Wales or somebody can build a WikinomicIndicators site instead.
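The "two consecutive quarters of negative growth" rule of thumb mentioned above is simple enough to express in a few lines of code. As a rough sketch (the quarterly growth figures below are made up purely for illustration, and this is the informal rule, not the NBER's official recession-dating method):

```python
def in_recession(quarterly_growth):
    """Return True if any two consecutive quarters show negative growth.

    quarterly_growth: a list of quarter-over-quarter GDP growth rates (%).
    This implements the informal two-consecutive-quarters rule of thumb.
    """
    return any(
        a < 0 and b < 0
        for a, b in zip(quarterly_growth, quarterly_growth[1:])
    )

# Hypothetical quarterly growth figures, for illustration only:
print(in_recession([0.6, 0.2, -0.3, 0.4]))   # one negative quarter -> False
print(in_recession([0.1, -0.2, -0.5, 0.3]))  # two in a row -> True
```

Of course, the hard part isn't the arithmetic -- it's getting the underlying indicator data, which is exactly what a site like EconomicIndicators.gov aggregated.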
A little over a week after Kaspersky's anti-virus software declared Windows Explorer a virus, it appears that McAfee has made its own mistake, as an anti-virus update from the company started warning people to stay away from a bunch of popular sites, including ESPN, Friendster and Ars Technica. McAfee later admitted that it was a mistake on its end, but it seems that we're seeing these kinds of false positives on a fairly frequent basis these days. It's yet another sign that things need to change in how security software works -- but instead of real advances, it still seems like firms are bogged down with things like pointless patent battles.