by Mike Masnick
Mon, Jul 15th 2013 10:43am
Google, Microsoft And Other Ad Networks Agree To 'Best Practices' To Stop Ads From Appearing On 'Pirate' Sites
from the the-details-matter dept
I have some concerns about this, as I'll discuss below, but on the whole it appears that there's actually some good to come out of this. First off, it's worth noting that all of these companies already have terms of service that bar the use of their ads on sites primarily engaged in infringement. While various tech industry haters still tend to believe otherwise, the tech industry has been pretty good at keeping its ads away from such sites for years. The ads that do end up on those sites tend to come from tiny third-party ad networks that no one has heard of. In fact, some of the "evidence" against Megaupload was that, from very early on, Google kicked it out of its ad program.
Another sign that this agreement probably isn't that bad: the MPAA has already put out a statement about how they hate it, saying that it's not enough. Chris Dodd specifically argues that nothing is going to be enough until everyone else does the copyright holders' job for them, and proactively polices the internet. The fact that no one but the copyright holder can know for certain if something is infringing is not even allowed to enter the discussion in the corrupt minds of the MPAA.
In this case, it appears that this new agreement involves something of a more formalized notice and (possible) takedown system. Copyright holders can submit a complaint to each ad network (individually, not to some central authority), and then the ad network gets to decide how it handles the notice -- but, under the best practices, they will strive to keep their ads from appearing on such sites. Since this is just a voluntary agreement, unlike, say, the DMCA, there's no automatic liability shifting in refusing to pull the ads -- and the agreement makes it clear that the best practices themselves do not establish liability, nor do they create a duty to proactively monitor (though, I could see how copyright holders might later try to raise that issue).
The good thing about this program is that it appears those who worked on it clearly recognize that certain copyright holders may be a little over eager in claiming certain sites are "pirate" sites when they might not be. So the program is designed to be more transparent and to include the clear ability for a site to appeal such a decision and get the ad networks to reconsider. In some ways, this is a step forward from the way it was before, in which Google or others might just kick you out of the program with almost no communication and absolutely no right of appeal. In fact, Google is somewhat infamous for its big white monolithic response to kicking people out of its ad network: basically just telling them "you've violated our terms" with no explanation, no way to find out more, and no way to appeal. Adding an actual appeals process is a step up.
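The agreement itself doesn't prescribe any particular implementation, but as a purely hypothetical sketch, the notice-and-appeal flow described above (each network reviews complaints on its own, may or may not pull ads, and the accused site can appeal and be reinstated) could be modeled as a small state machine. All names and transitions here are my own invention for illustration:

```python
from enum import Enum, auto

class NoticeState(Enum):
    RECEIVED = auto()      # complaint submitted to one ad network
    ADS_PULLED = auto()    # network decided to pull its ads
    REJECTED = auto()      # network declined to act
    UNDER_APPEAL = auto()  # the accused site asked for reconsideration
    REINSTATED = auto()    # appeal succeeded; ads restored

# Allowed transitions in this hypothetical workflow. Note there is no
# transition that creates liability or a proactive-monitoring duty --
# each step is a discretionary decision by the network.
TRANSITIONS = {
    NoticeState.RECEIVED: {NoticeState.ADS_PULLED, NoticeState.REJECTED},
    NoticeState.ADS_PULLED: {NoticeState.UNDER_APPEAL},
    NoticeState.UNDER_APPEAL: {NoticeState.REINSTATED, NoticeState.ADS_PULLED},
    NoticeState.REJECTED: set(),
    NoticeState.REINSTATED: set(),
}

def advance(state: NoticeState, new_state: NoticeState) -> NoticeState:
    """Move a complaint to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

The point of the terminal `REJECTED` and `REINSTATED` states is that, unlike the DMCA, nothing forces the network's hand: a refusal to act simply ends the process.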
That said, there are still two key concerns here. The first is that even with an appeals process and various safeguards, it's quite likely that legitimate sites that have significant non-infringing purposes will still get caught up in this. We've seen too many false takedowns, false attacks and the like for that not to happen. And even with an appeals process, losing your entire ad network for a period of time can completely sink a small business (and, any site making money on these kinds of ad networks is, by definition, a small business -- because none of these ad networks pay out very much to individual sites).
The second concern is a bigger one: if you look at the history of some of the most important innovations that have helped the content industry grow, they almost always started out as what those content industries deemed "principally dedicated to infringing activity." In their early days, radio, cable TV, VCRs, DVRs, mp3 players, YouTube, etc. were all attacked as hotbeds of infringement. Yet, as they grew in popularity, business models developed that helped the content industry tremendously. As I've pointed out in the past, it was only four years after Jack Valenti declared that the VCR was the "Boston Strangler" of the movie business that the home video business surpassed the box office in revenue for Hollywood. Yet, if we allow a system where copyright holders are able to simply starve these new businesses before they've had a chance to develop and mature, I worry that we miss the next VCR, the next DVR, the next mp3 player, the next YouTube -- and whatever tool comes next that allows content creators to do an even better job connecting with fans, creating new works, distributing those works, promoting them and eventually monetizing them.
It's easy to simply try to label all new upstarts as "evil" and kill them off, but history has shown that's generally not a very good idea. The reason those upstarts are successful is not that they enable infringement, but rather that they enable something new and useful that people want and like. The real opportunity is in figuring out ways for content creators to use that to their advantage -- and I fear that programs like this make it easier to simply snuff them out too early.
That said, if there needs to be such a program, this one appears to be the least destructive approach. It doesn't create liability or a proactive duty to police the internet. It allows the networks to make the final call on what to do with complaints. It gives the accused sites the ability to appeal whatever decisions are made. Either way, I would imagine that the MPAA and the RIAA already have their incredibly long lists of sites ready and are submitting them everywhere they can... and within a few weeks we'll watch them issue statements about how the new program isn't working and how more needs to be done.
by Mike Masnick
Sat, Jun 8th 2013 12:28pm
from the details-details-details dept
Basically, it appears those companies all agreed to make it easier for the NSA to access data that was required to be handed over under an approved FISA Court warrant, and they appear to do this by setting up their own servers where they put that information (and just that information). From the NY Times report:
But instead of adding a back door to their servers, the companies were essentially asked to erect a locked mailbox and give the government the key, people briefed on the negotiations said. Facebook, for instance, built such a system for requesting and sharing the information, they said.

The data shared in these ways, the people said, is shared after company lawyers have reviewed the FISA request according to company practice. It is not sent automatically or in bulk, and the government does not have full access to company servers. Instead, they said, it is a more secure and efficient way to hand over the data.

This is significantly less worrisome than the original Washington Post report, which suggested full real-time access to all servers. That's not quite what has happened, according to this report. This involves cases where the companies really do need to hand over this information. We can disagree with whether or not the FISA Court should issue these warrants, but at some point there may be information that the companies do need to hand over to the government. As for the Guardian, they published the following slide:
The real question should be about what information the FISA Court is approving warrants over:
FISA orders can range from inquiries about specific people to a broad sweep for intelligence, like logs of certain search terms, lawyers who work with the orders said. There were 1,856 such requests last year, an increase of 6 percent from the year before.

In one recent instance, the National Security Agency sent an agent to a tech company’s headquarters to monitor a suspect in a cyberattack, a lawyer representing the company said. The agent installed government-developed software on the company’s server and remained at the site for several weeks to download data to an agency laptop.

In other instances, the lawyer said, the agency seeks real-time transmission of data, which companies send digitally.

Note just how broad some of those searches may be. Staying around for weeks to download logs? We're not talking about narrowly focused searches here.
Of course, what's now also come out is that, despite Google and Microsoft releasing transparency reports about government requests for data, they don't include FISA requests because of the gag orders on them. It's only recently that both Google and Microsoft were able to include "range" numbers for how many national security letter requests they get. One hopes they're pushing to be transparent on FISA requests as well.
The article makes it clear that Twitter was alone among the companies in refusing to join this program. That does not mean that Twitter does not hand over data to the government when receiving a legitimate FISA order. I'm sure it does. But it does mean that they have not set up a special system to make it easy for the government to just log in and get the data requested. Some people have suggested that the government has little need for Twitter to join the program since nearly all Twitter information is public, but that's not true. There is still plenty of important information that might be hidden, including IP addresses, email addresses, location information and direct messages that the NSA would likely want. Besides, YouTube is a part of the program, and most of its data is similarly "public."
This is not, by the way, the first time that we've seen Twitter stand up and fight for a user's rights against a government request for data. Over two years ago, we pointed out that Twitter, alone among tech companies, fought back when a court ordered it to hand over user info. Twitter sought, and eventually got, permission to tell the user, and allow that user to try to fight back. It later came out that, as part of that same investigation, the government also had requested information from Google and Sonic.net, with Sonic.net fighting back and losing. It never became clear whether Google fought back.
Separately, however, Chris Soghoian has noted that an "unnamed company" fought back and lost against a FISA court order... and that, according to the PowerPoint presentation, Google "joined" PRISM just a few months later. It is possible that Google fought joining the program, and then only did so after losing in court. That said, Google's most recent denial insists that "the government does not have access to Google servers—not directly, or via a back door, or a so-called drop box." Perhaps they don't consider a special server set up for lawfully required information a "drop box," but others certainly might.
In the end, it appears that the initial Washington Post report was overblown in that it suggested direct access to all servers, rather than specific servers, set up to provide information that was required. That said, it is still true that the FISA Court appears to issue a fair number of secret orders for information from a variety of technology companies, some of them quite broad, and that many of the biggest tech companies have set up systems to make it easier to give the NSA/FBI and others access to that info -- though, they are often required by law to provide that information. The real outrage remains that all of this is happening in complete secrecy, where there is little real oversight to stop this from being abused. As we noted just a few weeks ago, the FISA Court has become a rubber stamp, rejecting no requests at all in the past two years.
Given the revelations of the past week, the public (and our representatives) need to demand much more transparency and oversight concerning these surveillance programs.
by Mike Masnick
Fri, Jun 7th 2013 8:35am
Tech Companies Deny Letting NSA Have Realtime Access To Their Servers, But Choose Their Words Carefully
from the worth-watching dept
Note the fine distinction. Giving the NSA a clone of their data wouldn't be giving them "access to our servers." It would be giving copies to the NSA... and then the NSA could "access" its own servers. And you were wondering why the NSA needed so much space in Utah. If they're basically running a replica of every major big tech company datacenter, it suddenly makes a bit more sense. Of course, at this point there's no evidence that this is necessarily the case -- and some are insisting that the denials are legit, and that the Washington Post's story is not entirely accurate. But... the wording here is extra careful, and the government's report really does seem to indicate that these companies are deeply involved.
Comparing denials from tech companies, a clear pattern emerges: Apple denied ever hearing of the program and notes they “do not provide any government agency with direct access to our servers and any agency requesting customer data must get a court order”; Facebook claimed they “do not provide any government organisation with direct access to Facebook servers”; Google said it “does not have a ‘back door’ for the government to access private user data”; and Yahoo said they “do not provide the government with direct access to our servers, systems, or network.” Most also note that they only release user information as the law compels them to.
But the PRISM program’s reported access to data and the now repeatedly confirmed widespread access to phone records and other types of digital data appears to be almost exactly what the 2008 Protect America Act (PAA) allows Foreign Intelligence Surveillance Act (FISA) courts to compel tech companies to do — as many warned around the time of its passage. If tech companies are not providing direct access to their servers but are cooperating with the PRISM program, that leaves at least one other option: Companies are providing intelligence agencies with copies of their data.
By the way, if you'd like to dig in on annotating the various tech companies' denials, someone put them all up at RapGenius, the site for annotating text (not just rap songs).
by Mike Masnick
Thu, Jun 6th 2013 3:35pm
Oh, And One More Thing: NSA Directly Accessing Information From Google, Facebook, Skype, Apple And More
from the not-a-good-week-for-the-nsa dept
The technology companies, which participate knowingly in PRISM operations, include most of the dominant global players of Silicon Valley. They are listed on a roster that bears their logos in order of entry into the program: “Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, Apple.” PalTalk, although much smaller, has hosted significant traffic during the Arab Spring and in the ongoing Syrian civil war.

This program, like the constant surveillance of phone records, began in 2007, though other programs predated it. They claim that they're not collecting all data, but it's not clear that makes a real difference:
Dropbox, the cloud storage and synchronization service, is described as “coming soon.”

The PRISM program is not a dragnet, exactly. From inside a company’s data stream the NSA is capable of pulling out anything it likes, but under current rules the agency does not try to collect it all.

Analysts who use the system from a Web portal at Fort Meade key in “selectors,” or search terms, that are designed to produce at least 51 percent confidence in a target’s “foreignness.” That is not a very stringent test. Training materials obtained by the Post instruct new analysts to submit accidentally collected U.S. content for a quarterly report, “but it’s nothing to worry about.”

Even when the system works just as advertised, with no American singled out for targeting, the NSA routinely collects a great deal of American content.

I expect we'll be seeing more such revelations before long.
by Mike Masnick
Tue, Dec 18th 2012 12:12am
from the easily-dismissed dept
Thankfully, the district court smacked the case down pretty hard, and did so with prejudice, denying him the ability to refile an amended complaint. However, Tasini wasn't ready to give up, and appealed the original ruling. The appeals court has now taken its turn in smacking down the lawsuit, noting that Tasini's argument is simply ridiculous, as you can see in the full filing (also embedded below):
The problem with plaintiffs' argument is that it has no basis in their Amended Complaint. Nowhere in the Amended Complaint do plaintiffs allege that The Huffington Post represented that their work was purely for public service or that The Huffington Post would not subsequently be sold to another company. To the contrary, plaintiffs were perfectly aware that The Huffington Post was a for-profit enterprise, which derived revenues from their submissions through advertising. Perhaps most importantly, at all times prior to the merger when they submitted their work to The Huffington Post, plaintiffs understood that they would receive compensation only in the form of exposure and promotion. Indeed, these arrangements have never changed.

In other words, Tasini's inability to accept the deal he made, and the fact that he apparently got jealous of Huffington's ability to sell the site, is not a legal issue at all. The court also re-affirms that the dismissal with prejudice was entirely proper. Maybe, instead of spending all this time on lawsuits, Tasini would be better served trying to build his own site. Of course, as was ironically noted after he filed his lawsuit, Tasini actually did that once and didn't pay the bloggers who blogged for him...
Though it is no doubt a great disappointment to find that The Huffington Post did not live up to the ideals plaintiffs ascribed to it, plaintiffs have made no factual allegations that, if taken as true, would permit the inference that The Huffington Post deceived the plaintiffs or otherwise received a benefit at the expense of the plaintiffs such that equity and good conscience require restitution.
by Mike Masnick
Tue, Oct 30th 2012 10:48am
from the that-doesn't-actually-help-advertisers dept
We've been somewhat excited that we're rapidly approaching one million total comments on Techdirt. We thought it was quite a nice milestone. But we feel a bit small to learn that the Huffington Post already has over 70 million comments just this year alone. Over at Poynter, Jeff Sonderman has a fascinating interview with the site's director of community, Justin Isaf, about how they manage all those comments. Apparently they have a staff of 30 full time comment moderators, helped along by some artificial intelligence (named Julia) from a company they bought just for this technology.
Now, obviously, sites have lots of different philosophies on moderating comments. Our own is pretty open. We have a spam filter that tries to cut out obvious spam (of which we get about 1,000 per day, last I checked) and other than that comments are basically unmoderated. We do have a system that allows the community to vote on funny and insightful comments (which we then round up in a weekly "best of" post). We also, just recently, introduced our first word/last word feature, which lets the community promote certain comments. Finally the community can also "report" comments they find problematic, which then minimizes those comments, though they remain available for anyone to see with one click. We've found that this system of trusting the community works pretty damn well overall.
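As a rough illustration of the community-trust approach described above (the report threshold, names, and structure here are all invented for the sketch, not Techdirt's actual implementation), the core mechanics are simple: votes surface good comments, and enough reports collapse a comment without ever deleting it:

```python
from dataclasses import dataclass

REPORT_THRESHOLD = 3  # hypothetical cutoff; the real value is unknown

@dataclass
class Comment:
    body: str
    funny_votes: int = 0
    insightful_votes: int = 0
    reports: int = 0

    @property
    def minimized(self) -> bool:
        # Reported comments are collapsed, never deleted: a reader
        # can still expand a minimized comment with one click.
        return self.reports >= REPORT_THRESHOLD

def weekly_best(comments: list[Comment], top_n: int = 3) -> list[str]:
    """Round up the highest-voted comments for a weekly 'best of' post."""
    ranked = sorted(
        comments,
        key=lambda c: c.funny_votes + c.insightful_votes,
        reverse=True,
    )
    return [c.body for c in ranked[:top_n]]
```

The design choice worth noting is that moderation here is a display decision (collapse vs. show), not a removal decision, which is what distinguishes it from the advertiser-driven moderation discussed next.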
HuffPo, on the other hand, between the technology and the moderators, seems more focused on nudging the conversation themselves. I can understand and respect that choice, but there was one detail that struck me as a bit questionable:
I’m a big fan of having machines help us with the lower level tasks, freeing up time, resources and brain power for more interesting and complex tasks. Julia [the artificial intelligence system that HuffPo owns] takes that a few steps further and helps us with a lot of other aspects of HuffPost in addition to helping weed out abusive members, including identifying intelligent conversations for promotion, and content that is a mismatch for our advertisers. She has allowed us to do a lot more with a lot less.

(Note: see update at the top). I recognize that these are all advertising businesses, but I'm a bit surprised to see HuffPo so blatantly admit that they moderate comments if they're "a mismatch for our advertisers." I've seen plenty of sites say they'll moderate inappropriate commentary, but leave reasonable commentary alone even if it's critical. But HuffPo is basically saying that if advertisers aren't likely to like the comments, they may moderate them. It's their system, and they can do what they want with that, but personally, that makes me feel uncomfortable. We've always tried to promote the fact that our own community is very opinionated (and not shy about it) when we've spoken to advertisers, and we use that as a way of explaining why things they do should be authentic and real, rather than forced and phony. And, because of that, we'd like to think that we're able to drive more interesting engagement. If you leave open the possibility of moderating comments that advertisers won't like, that seems to only encourage bogus and annoying advertising, since marketers may never learn that people don't actually like that kind of thing.
In the end, HuffPo's position is obviously self-serving, even as they pretend that it's best for advertisers. What they may end up doing is hiding the fact that the advertisements are bad, rather than improving the quality of the advertising. Now, obviously, I'm sure AOL does quite fine with HuffPo's ad selling (and they're a hell of a lot bigger than us), but it still struck me as interesting to see the company so blatantly admit how it reacts to content their advertisers might think is "a mismatch."
by Mike Masnick
Fri, Aug 3rd 2012 3:31pm
from the shocking,-i-know dept
While others aren't going that far, there's more and more evidence that betting on apps was, in fact, the exact mistake that we predicted. Mathew Ingram summarizes how both The Huffington Post and Murdoch's The Daily have failed with their fee-based iPad app strategy. He makes the same basic point that a winner of our "most insightful comment" (by Robert Weller) made recently: that people get their news from lots of sources, so paying for a bunch of apps just doesn't make sense. In fact, it takes away from the value. As Ingram notes:
Whether media companies like it or not (and they mostly don’t), much of the news and other content we consume now comes via links shared through Twitter and Facebook and other networks, or through old-fashioned aggregators — such as Yahoo News or Google News — and newer ones like Flipboard and Zite and Prismatic that are tailored to mobile devices and a socially-driven news experience. Compared to that kind of model, a dedicated app from a magazine or a newspaper looks much less interesting, since by design it contains content from only a single outlet, and it usually doesn’t contain helpful things like links.

What he's basically saying is that the publishers focusing on apps are trying to create artificial scarcity by building digital silos. But that actually takes value away from those publications. People interact with the news in all sorts of ways that go way beyond "reading." But individual apps often make that more difficult. It involves extra effort (and cost) while providing less benefit. All because publishers are looking for something (anything!) that resembles some fencing so they can build a gate and go back to pretending they're in the gatekeeper business.
Hopefully publishers will finally stop looking to recreate the past by building artificial walls, and start looking at ways to make money that embrace the internet and what it enables.
by Mike Masnick
Mon, Jun 4th 2012 6:45am
AOL Threatens Blogger With Copyright Infringement Charge... For Doing The Exact Same Thing AOL Has Done On A Large Scale
from the shameful dept
Enter Maryland Juice, a local Maryland blog that recently ran a post about some happenings in Montgomery County, which included relatively large excerpts from an article at Patch, another property owned by AOL. It also included an image from the article. The Maryland Juice post included a significant amount of commentary about the article and, in particular, the photo, which was used to illustrate the point that the crowd shown was not a representative sample of county residents at the local meeting. And, yet... AOL lawyers sent a cease and desist letter:
As owner of the Content, AOL has the obligation to prevent the improper use of its proprietary material. Before pursuing any additional avenues to remove the Infringing Content, we are demanding that MarylandJuice.com take immediate steps to remove Patch’s image and either 1) display no more than a 1-2 sentence snippet of this Content, with credit explicitly given as well as a link back to the full article available at http://wheatonmd.patch.com/articles/proposed-rule-change-for-accessory-apartments-meets-opposition ; or 2) remove and disable access to all Infringing Content, and agree to never repost or use the Infringing Content or any other AOL Content, absent compliance with the third-party use guidelines identified above.

David, the Maryland Juice blogger, explains how excerpting, discussing and linking is all part of being neighborly online, and tells AOL to shove off, claiming fair use. Of course, you know who should know an awful lot about this kind of thing? Yeah, AOL and HuffPo. You see, a few years ago, when HuffPo tried to do its own "hyper local site," it was accused of doing more or less the exact same thing (but with less commentary, and more copying):
And seeding HuffPo Chicago is a scheme whereby the publication takes some — in many cases all — of the content from another site, with a link back to the original.

The result is quick and easy traffic for the new Chicago edition, since the publication ends up catching some Google searches for keywords contained in the (Chicago-related) articles it takes. HuffPo already has good Google PageRank, so its own version of the content floats to the top of the results, even though it was not the original source.

HuffPo's justification, at least when the publication was pulling this crap with us, taking the entirety of our RSS feeds, was that the reprinted posts were good promotion, since they included (a totally buried) backlink to the original content on our site.

But, apparently, when someone does it to AOL, it's no longer okay? Now that's hypocritical.