by Mike Masnick
Mon, Apr 23rd 2012 10:01pm
Australian Government Plans To Continue Holding Secret Anti-Piracy 'Stakeholder' Meetings With Industry; No Consumer Advocates Allowed
from the ridiculous dept
The Federal Government would “closely examine” the High Court’s judgement in the long-running copyright infringement case won by ISP iiNet over film and TV studios this morning, Federal Attorney-General Nicola Roxon said this afternoon, as she noted that closed-door talks held by her department with industry on the matter would continue.

Thankfully, iiNet's CEO seems to realize that, with this ruling in hand, he doesn't need to give in to industry blackmail. While noting that the meetings had "been going around in circles" in the wake of the High Court ruling, iiNet CEO Michael Malone announced at a press conference that "My preference would be to walk away now." If only it were that easy.
by Mike Masnick
Fri, Apr 20th 2012 7:02am
from the secondary-liability dept
iiNet fought back, and fought back hard -- and won at every single level in the court system, including today's High Court ruling that effectively ends the case. Oh yeah, the High Court also says that Hollywood has to pay iiNet's legal expenses -- approximately $9 million.
From the beginning, contrary to the MPAA's assumption, iiNet fought back hard. Beyond the obvious -- pointing out that, as a service provider, it was not responsible for its users' actions -- iiNet also protested that the notices that AFACT, the MPAA's anti-piracy front group, was sending were deficient:
They send us a list of IP addresses and say 'this IP address was involved in a breach on this date'. We look at that and say 'well, what do you want us to do with this? We can't release the person's details to you on the basis of an allegation and we can't go and kick the customer off on the basis of an allegation from someone else'. So we say 'you are alleging the person has broken the law; we're passing it to the police. Let them deal with it'.

The original district court ruling was fantastic, and did a great job of illustrating why it makes no sense to blame third-party service providers for infringement -- because infringement is not an absolute, but requires a court to decide what really is infringement. As the original ruling stated:
Regardless of the actual quality of the evidence gathering of DtecNet, copyright infringement is not a straight 'yes' or 'no' question. The Court has had to examine a very significant quantity of technical and legal detail over dozens of pages in this judgment in order to determine whether iiNet users, and how often iiNet users, infringe copyright by use of the BitTorrent system. The respondent had no such guidance before these proceedings came to be heard. The respondent apparently did not properly understand how the evidence of infringements underlying the AFACT Notices was gathered. The respondent was understandably reluctant to allege copyright infringement and terminate based on that allegation. However, the reasonableness of terminating subscribers on the basis of non-payment of fees does not dictate that warning and termination on the basis of AFACT Notices was equally reasonable. Unlike an allegation of copyright infringement, the respondent did not need a third party to provide evidence that its subscribers had not paid their fees before taking action to terminate an account for such reason.

In other words, just because someone accuses someone else of infringement, it's ridiculous for the ISP to automatically assume infringement has taken place. That turns the basic concepts of due process on their head. AFACT/MPAA appealed and lost again, with the court once again pointing out that general knowledge that someone on your site infringes is not nearly enough to terminate or suspend users.
This latest (and final) ruling basically takes the same stance. The full ruling is a bit dry, but makes some salient points. It notes, for example, that as a mere ISP, iiNet has absolutely nothing to do with BitTorrent and can't control the fact that some of its subscribers used BitTorrent. It also notes that iiNet was not hosting any of the material, nor doing anything with the infringing material. On top of that, it notes the pointlessness of AFACT/MPAA insisting that iiNet has to kick people off the internet:
Termination of an iiNet account with a customer who has infringed will assuredly prevent the continuation of a specific act of communicating a film online using a particular .torrent file on a particular computer. Regrettably, however, on receiving a threat of such termination, it is possible for a customer to engage another ISP for access to the internet on that computer or access the internet on another computer using a different ISP. Whilst any new infringement would be just as serious as the specific primary infringements about which the appellants complain, this circumstance shows the limitations on iiNet's power to command a response from its customers, or to prevent continuing infringements by them.

And, once again, the court finds that mere notice of infringement certainly is not proof of infringement, and requiring iiNet to investigate further is too big a burden:
Updating the investigative exercise in the AFACT notices would require iiNet to understand and apply DtecNet's methodology – which, among other things, involved a permission to DtecNet from AFACT to use the BitTorrent system to download the appellants' films. Before the filing of experts' reports in the proceedings, the information in the AFACT notices did not approximate the evidence which would be expected to be filed in civil proceedings in which interlocutory relief was sought by a copyright owner in respect of an allegation of copyright infringement. Also, any wrongful termination of a customer's account could expose iiNet to risk of liability. These considerations highlight the danger to an ISP, which is neither a copyright owner nor a licensee, which terminates (or threatens to terminate) a customer's internet service in the absence of any industry protocol binding on all ISPs, or any, even interim, curial assessment of relevant matters.

All in all, this is a good ruling concerning copyright and secondary liability -- and a bunch of money down the drain for the MPAA, which could have spent this time helping its studios to innovate, but has instead focused on this quixotic legal strategy.
iiNet's inactivity after receipt of the AFACT notices was described by the appellants as demonstrating a sufficient degree of indifference to their rights to give rise to authorisation. However, the evidence showed that the inactivity was not the indifference of a company unconcerned with infringements of the appellants' rights. Rather, the true inference to be drawn is that iiNet was unwilling to act because of its assessment of the risks of taking steps based only on the information in the AFACT notices. Moreover, iiNet's customers could not possibly infer from iiNet's inactivity (if they knew about it), and the subsequent media releases (if they saw them), that iiNet was in a position to grant those customers rights to make the appellants' films available online.
Of course, it doesn't sound like this ruling will have the MPAA come to its senses either. The AFACT front group is already claiming that the ruling means Australia must change its laws to turn ISPs into copyright cops:
The Australian Federation Against Copyright Theft (AFACT) is ramping up the pressure on the government to act. It said today's judgment exposed the failure of copyright law to keep pace with the online environment and the need for the government to act.

"It would seem apparent that the current Australian Copyright Act isn't capable of protecting content once it hits the internet and peer-to-peer networks...," AFACT managing director Neil Gane said.

No, Neil, it's not Australian law that's the problem. It's reality, and the fact that the movie studios refuse to bother to understand how the internet works and how they can adapt. No law will fix this. It will only make things worse. And Gane and the MPAA should be careful, lest they think they can try to pass another SOPA down under. I get the feeling that won't go over well.
by Leigh Beadon
Fri, Apr 13th 2012 5:05pm
from the it's-a-start-I-guess dept
The House Intelligence Committee has published a new draft of CISPA (pdf and embedded below), which includes the two amendments that were already approved, plus several other additions and changes. In some areas, there is genuine progress—in others, things actually seem to have gotten worse. Unfortunately, some of the biggest problems with the bill remain, and some of the new language seems to have little effect at all. Some changes I will discuss in future posts, but there are two that I wanted to look at right away:
A Narrower Definition Of Cybersecurity
This is the one clearly positive change in the bill. Previously, the definition of cybersecurity and cyber threat information was:
(A) efforts to degrade, disrupt, or destroy such system or network; or
(B) theft or misappropriation of private or government information, intellectual property, or personally identifiable information.
While the first part remains unchanged, the second part is now much narrower:
(B) efforts to gain unauthorized access to a system or network, including efforts to gain such unauthorized access to steal or misappropriate private or government information
Where the original language could be construed to include all sorts of activity that goes beyond what most people could consider "cybersecurity", the new definition makes it clear that we are talking about unauthorized network access. Most notably, it removes the reference to "intellectual property", which makes sense: the authors have always insisted that they were talking about the misappropriation of secret R&D by foreign entities, which is sufficiently covered by language referring to privacy and unauthorized access. Including "intellectual property" opened it up to all sorts of additional interpretations that went beyond this stated intent.
Now, there's still reason to be a little concerned here, because the attempts to charge people for "unauthorized access" under the CFAA have been ridiculous in the past. If this language in CISPA were construed to include things like violating terms of service (as some have claimed of the CFAA language) then it would be very dangerous. However, with last week's Ninth Circuit ruling which narrowly construed unauthorized access, legal thinking on this matter seems to be heading in the right direction. There's still some gray area, and I think there's still room for a much better definition of cybersecurity in CISPA (I know they want to future-proof it, but it doesn't have to be that short and vague) but this is still a significant improvement over the previous draft.
Extremely Limited Liability For Companies
The new draft of CISPA includes a whole new section carving out the requirements for a company to be held liable if they share information improperly. Basically, a company that shares data with the government receives immunity from all existing privacy laws unless you can show that their actions caused you injury and constituted "willful misconduct"—which is very specifically defined in CISPA as an action taken:
(I) intentionally to achieve a wrongful purpose;
(II) knowingly without legal or factual justification; and
(III) in disregard of a known or obvious risk that is so great as to make it highly probable that the harm of the act or omission will outweigh the benefit.
Yes: and. A company's actions need to satisfy all three of those conditions. I'm not even sure how that's possible. They have to be trying to harm you, knowingly breaking the law and, in a bizarre third clause, they also have to know there is a risk that the harm to you will outweigh the benefits to them. How you are supposed to weigh the harm to individuals whose private data is handed to the government, versus the benefits to cybersecurity services who improve their networks with data, is beyond me. But no matter how you slice it, this is an insanely onerous definition of willful misconduct that makes it essentially impossible to ever sue a company for wrongly sharing data under CISPA.
Overall, despite the progress made on the definition of cybersecurity, CISPA is still a highly problematic bill which still doesn't properly safeguard people's privacy. One of the biggest problems—the fact that the government can use, retain and affirmatively search the information they gather for vaguely defined "national security" purposes—is untouched in the new draft. There are some attempts to alter the rules on how federal agencies can share information between themselves, but many of those changes seem essentially meaningless. It's good to see some reaction from Congress, but if CISPA is to be fixed (a prospect I'm still dubious about) there is still a long way to go.
by Glyn Moody
Thu, Apr 12th 2012 8:01pm
from the in-russia,-isp-spy-on-you! dept
Something that's proving popular with politicians running out of ideas for tackling unauthorized sharing of copyrighted materials online is to make ISPs and Web sites responsible for the actions of their users -- even though nobody would think of doing the same for telephone companies. SOPA was one of the best-known examples of this approach, and now it looks like Russia wants to join the club:
The cyber crime department of Russia’s Interior Ministry says it intends to get tough on the country’s ISPs when their customers share copyrighted or otherwise illegal material. Authorities say they are currently carrying out nationwide checks on ISPs' local networks and could bring prosecutions as early as next month.
The proposed legislation is a little unusual in that it seems to concern the exchange of unauthorized copies of copyright material across ISPs' local networks:
These networks, present within the ISPs’ own infrastructure, provide users’ access to a wealth of legal content and services such as Internet Relay Chat, but inevitably unauthorized content is available too.
As would have happened with SOPA, the inevitable consequence of passing this kind of law will be round-the-clock surveillance of Internet users by their ISPs -- not because the law requires it, but because the ISPs would be crazy not to given the financial risks they would run otherwise. The other knock-on effect, of course, is that people will just start swapping 2Tbyte portable hard discs full of unauthorized material by hand, bypassing the networks completely.
by Mike Masnick
Fri, Apr 6th 2012 11:28am
from the risky dept
I recognize how tempting it is to go after the tools providers over spam. But just as we don't blame Twitter or Craigslist for how users use (or abuse) their systems, those companies shouldn't blame tools providers for the actions of their users either. At the very least, I could see it coming back (in a big, bad way) to haunt Twitter, by giving opponents in lawsuits the ability to point to Twitter's own claims against these tool providers to suggest that it, too, should be liable for the actions of its users. From the details (embedded below), it appears that Twitter is arguing that all users breached the terms of service -- and it carefully notes that each of the software providers has registered accounts -- meaning they agreed to the terms at some point. I understand why it's being argued this way, but I'm not sure it makes sense. The terms apply to that account, not everything that someone with an account does outside of the account. Twitter also claims that the spamware providers are involved in tortious interference with a contract as well as fraud and "unlawful, unfair and fraudulent business practices" under California law.
To its credit, Twitter does not go as far as Craigslist did in its anti-spam lawsuits -- which actually tried to use copyright and trademark law, as well as claiming that violations of terms of service are a violation of the Computer Fraud and Abuse Act. Thankfully, Twitter avoids going down those paths where those very specific laws might come back to haunt it -- and sticking with slightly more defensible claims.
While I'm incredibly sympathetic towards Twitter's position here, and the goal of stomping out spammers, I still find it troubling in a few ways. Twitter can and should (absolutely) look at ways to kill spammer accounts and to block spamming tools through technological means. It's when things go legal that it could get tricky. While my heart wants Twitter to win, I still fear that the argument that a service provider itself is guilty because its tools are used for spamming floats a little too close to arguments about whether or not Twitter is responsible for how its users use Twitter.
by Mike Masnick
Thu, Apr 5th 2012 9:09am
Breaking: Appeals Court Sends Viacom-YouTube Case Back To District Court, Future Of Safe Harbors Still Uncertain
from the some-good,-some-bad dept
The key question in the lawsuit revolved around the so-called "red flag" knowledge question -- and whether or not that meant specific knowledge of items that were infringing (as YouTube and the lower court believed) or just general knowledge of infringement on the site (as Viacom argued). Here, the appeals court got it right, saying that specific knowledge is necessary.
Although the parties marshal a battery of other arguments on appeal, it is the text of the statute that compels our conclusion. In particular, we are persuaded that the basic operation of § 512(c) requires knowledge or awareness of specific infringing activity. Under § 512(c)(1)(A), knowledge or awareness alone does not disqualify the service provider; rather, the provider that gains knowledge or awareness of infringing activity retains safe-harbor protection if it “acts expeditiously to remove, or disable access to, the material.” 17 U.S.C. § 512(c)(1)(A)(iii). Thus, the nature of the removal obligation itself contemplates knowledge or awareness of specific infringing material, because expeditious removal is possible only if the service provider knows with particularity which items to remove. Indeed, to require expeditious removal in the absence of specific knowledge or awareness would be to mandate an amorphous obligation to “take commercially reasonable steps” in response to a generalized awareness of infringement. Viacom Br. 33. Such a view cannot be reconciled with the language of the statute, which requires “expeditious[ ]” action to remove or disable “the material” at issue. 17 U.S.C. § 512(c)(1)(A)(iii) (emphasis added).

The court rightfully rejects the idea that the "red flag" knowledge part of the DMCA means that just knowing that there's some infringement -- without knowing specifics -- means you lose the safe harbors. Since this is the key question in the lawsuit, it's great that the appeals court got this right. This was also the point that the maximalists insisted that no appeals court would uphold, and, clearly, they were wrong about that.
The court responds to the claim that if red flag knowledge does not apply to "general" knowledge of infringement, then it's superfluous, by noting that's not true:
The difference between actual and red flag knowledge is thus not between specific and generalized knowledge, but instead between a subjective and an objective standard. In other words, the actual knowledge provision turns on whether the provider actually or “subjectively” knew of specific infringement, while the red flag provision turns on whether the provider was subjectively aware of facts that would have made the specific infringement “objectively” obvious to a reasonable person. The red flag provision, because it incorporates an objective standard, is not swallowed up by the actual knowledge provision under our construction of the § 512(c) safe harbor. Both provisions do independent work, and both apply only to specific instances of infringement.

In other words, it's possible to show that there are red flags, but they have to be red flags for infringement of specific items, not knowledge that there is infringement in general. That's a good ruling and it makes sense. Accepting Viacom's interpretation would have effectively killed large parts of the DMCA. YouTube's interpretation (now supported by both the district and the appeals court) keeps the DMCA's safe harbors in existence.
That said, the court then suggests that the district court may have erred in granting the summary judgment on that point. Here, the court is talking specifically about YouTube's actions, and saying that Viacom at least raised enough issues that it is possible to argue that YouTube did, in fact, have knowledge of specific infringement. In other words, the court agrees on the big picture interpretation of the law, but disagrees on the specific application by the district court. It doesn't mean that the court thinks that YouTube violated the DMCA -- just that Viacom at least raised enough issues that it should be handled by a jury in a trial, rather than decided at the summary judgment stage. So the case will now go back to the district court to be heard over that issue.
Even here, the court notes that while Viacom pointed to some email evidence that YouTube execs may have known of (and ignored) some specific instances of infringement, it's unclear whether those specific instances involve videos that are part of this lawsuit -- and that's necessary if YouTube is to lose its safe harbor protections.
A second issue involves the question of whether or not YouTube exhibited "willful blindness" to infringement on the site. Here, the ruling is a bit troublesome. It notes that the DMCA does not refer to willful blindness (and that the DMCA does note that there is no duty to monitor). But... it then still suggests that there can be a willful blindness question under the DMCA if there is specific knowledge of infringement. So, again, going back to the main issue in this case, if Viacom can show specific knowledge, it might also be able to get YouTube for being "willfully blind." But, it's no sure thing that Viacom can actually show specific knowledge of clips that are a part of this lawsuit.
The third issue is the question of what "the right and ability to control" infringing activity means. Both YouTube and Viacom interpret that phrase differently... and here, the court rejects them both. The district court accepted YouTube's interpretation, saying (reasonably, in my opinion) that a service provider must know of the particular case before it is required to "control" it. That is, how can the "right and ability to control" apply to a situation where there is no specific issue at hand? What is the service provider expected to control if it doesn't know what it's controlling? Viacom, instead, argued that the issue around the "right and ability to control" created a magical "vicarious liability" for service providers if their services were used to infringe. Both courts reject that argument as making little sense and (importantly) going against the Congressional record (which specifically left out vicarious liability, which had been found in an earlier DMCA draft). Here, the appeals court tries to thread the needle with a somewhat confused ruling that doesn't quite agree with either side. It's not a vicarious liability standard, but it doesn't quite require specific knowledge. Instead, the court literally says "something more" is required -- and asks the district court to consider what that "something more" might be.
Finally, there's an issue of what "by reason of" storage means. The DMCA's safe harbors give protection for infringement that happens "by reason of the storage at the direction of a user of material that resides on a system or network controlled or operated by or for the service provider." YouTube (and the district court) pointed out that YouTube fits under this definition. Viacom tried to argue that YouTube does not qualify because it does much more than storage -- such as converting (transcoding) videos, offering playback of videos and offering "related videos." Viacom tried (at both levels) to argue that those functions go beyond mere storage, and do not qualify for safe harbor protections. Thankfully, the appeals court here agrees with the lower court and says those are protected. It notes that it's clear that Congress intended "service provider" to mean much more than just a storage provider. I should note that one of our frequent critics in the comments has been insistent that the DMCA was designed only to apply to pure storage providers -- but now we've got yet another detailed court ruling pointing out that this is 100% false.
However, the court does send one "feature" back to the lower court for review. It questions whether or not the syndication of videos to third party sites then falls outside the safe harbor provisions concerning "by reason of storage." The court isn't sure that this is outside the safe harbors, but at least asks the lower court to explore the issue.
In the end, this is a mostly good ruling. It gets the biggest question of law right, even if it's not sure about YouTube's specific actions. On some of the other points, it's a little fuzzy in its thinking, but this is still mostly a victory for YouTube at this stage (though, who knows how the lower court and a jury will rule on some of the specifics). It could have been a more complete victory, but this is hardly the complete rejection of the district court ruling that some maximalists insisted was going to be delivered.
by Mike Masnick
Fri, Jan 27th 2012 6:31pm
from the hey,-wait-a-second... dept
The specific lawsuit involved a Bengals cheerleader/school teacher, who wasn't happy with the pictures of her posted to the website... along with the comments made about her (such as suggesting she had slept with the entire football team). As we noted at the time, if this content is user generated -- it's a clear situation where the case should be dismissed over Section 230's safe harbors (which put the liability on the actual content creator, rather than the middlemen third parties). In this case, the actions that might reach the level of defamation clearly came from the user, not the site owner. Previous rulings in other districts have even made it clear that sites that merely pass along content created by someone else -- even if it involves a moderator "choosing" what gets displayed -- do not lose the basic protections. So this case should have been a slam dunk.
Instead... it appears that the judge has gone in the other direction, creating really convoluted arguments to claim that Section 230 does not apply. As Eric Goldman explains, there are serious problems with this ruling:
The court's discussion is short, yet it's surprisingly scattered. Pages 8-10 run through a gamut of gripes about thedirty's practices and statements, but the judge doesn't articulate the relevance of these facts (other than providing evidence of the judge's animus towards thedirty). Because the judge does a poor job connecting the facts to his adopted legal standard, we aren't sure exactly what thedirty did to foreclose the 230 immunity.

The ruling, which is attached below, really is that bizarre. The judge twists and turns himself into contortions to try to come up with a reason to say that TheDirty.com is liable for comments made on the site. The simplest explanation, as Eric noted, is that the judge just didn't like the kind of site that TheDirty.com is (and from a quick glance, remains). The key to the judge's ruling is in trying to apply the infamous Roommates.com case. The problem, however, is that the case doesn't fit well. Roommates.com lost not because the site encouraged some actions against the law, but because its menu choices were a part of the content creation, and those menu choices, themselves, directly violated the Fair Housing Act.
It's a huge stretch to go from there to claiming that a site where mean things are celebrated is no longer protected via Section 230's safe harbors. But that's what the judge did.
And, in part, it gets really scary for me, personally, because the judge declares -- multiple times -- that the use of the word "dirt" in a domain name means that you are encouraging defamation:
First, the name of the site in and of itself encourages the posting only of “dirt,” that is material which is potentially defamatory or an invasion of the subject’s privacy.

Of course, there's absolutely nothing in Section 230 that suggests that if a judge doesn't like your name -- or falsely assumes that any website with the word "dirt" in the name is up to no good -- he can ignore Section 230's important protections. Like Eric suggested, it would be good if there's an appeal here, because it seems to go against pretty much any other Section 230 ruling. Not liking a site is simply not a reason to ignore those important safe harbors...
And, just to summarize, here are the basics. The site, TheDirty.com, posted a user submission, with a one-sentence comment on it. That submission included photos of the cheerleader/teacher, who didn't like them being widely available. Somewhere along the way the legal shenanigans began. Remember, the contents of the post itself may be defamatory -- but that, alone, should not make the site liable. It could very well make the original submitter liable, but the cheerleader doesn't seem to want to go that route of actually suing those who did the bad thing. So, instead, the site now faces a lot of liability... because a judge thinks that having "dirt" in your domain name must mean that you're seeking out something bad.
For reasons beyond just the standard defenses of Section 230, this is pretty bizarre and slightly terrifying. I certainly don't encourage the submission of defamatory information. But because I have "dirt" in my domain name, does that mean I should be worried too?
by Mike Masnick
Wed, Jan 4th 2012 12:01pm
from the there-goes-a-reasonable-one dept
However, the entertainment industry has been pushing this message about how infringement has killed the entire industry in Spain to US politicians and diplomats, leading the US State Department to go ballistic in Spain, demanding that the country change its copyright laws to please Hollywood. While this had been assumed ever since the new legislation was introduced, some of the State Department cables leaked via Wikileaks confirmed the US's deep involvement in pressuring the Spanish government to change its laws.
The revelation that this was really a Hollywood-driven law ramped up public opposition to the bill, and actually delayed it for about a year. The whole situation so pissed off people all around Spain that even the head of the Spanish Film Academy quit that position to protest how bad the new law was, and how it was anti-consumer.
A month ago, a bunch of press reports suggested that the law, called the Sinde Law after Culture Minister Angeles Gonzalez-Sinde, had been killed. However, many others pointed out that the issue had really just been punted to the incoming government, which appears to have wasted almost no time in approving the Sinde Law and putting in place a totally backwards and unnecessary law, pushed by Hollywood on the basis of misleading claims about the state of the Spanish film market. This, despite the fact that analysis from some economists determined that the bill would be very bad for consumers and artists alike (though it might help big studios in Hollywood).
All in all this is a pretty shameful sell out by the Spanish government to Hollywood. Even worse, Spanish Deputy PM Soraya Saenz de Santamaria is either naive or clueless in suggesting that this will "boost our cultural industries." It won't. It's actually about getting money away from Spanish cultural industries (which, again, are making more movies than ever) and sending it to Hollywood instead. Shameful.
by Glyn Moody
Wed, Jan 4th 2012 9:57am
from the but-safer-for-dinosaurs dept
Perhaps there's something about the German legal system that encourages judges to push their interpretation of the law to the limit, without any concern for whether the results of that logic are absurd. At least that is the impression you might get from two recent cases whose judgments both make ordinary citizens' use of the internet increasingly fraught with legal risks.
The first involved a retired woman who was accused of downloading a violent film about hooligans. That in itself seems slightly unlikely, but nothing compared with the fact that at the time of the alleged download, the woman in question had neither computer nor wireless router, and lived alone.
Given that, you might think this was a cut-and-dried case of a mistake being made by the tracking company or ISP, since it was not physically possible for her to have used the IP address, which had been assigned to her in an earlier period (when she did have a computer and was online).
But the German judge in Munich was having none of it. As TorrentFreak reports:
The bottom line in Germany is that account holders are responsible for everything that happens on their account and if they can't prove their innocence, they are found guilty. The woman must now pay just over 650 euros in damages to the copyright holder.
That seems an extraordinary approach, since it requires the accused to prove that something didn't happen. And if the absence of computer and wireless router isn't enough to do that, what is?
The other court case extends this extremist interpretation of copyright law to include streaming. A judge in Leipzig has ruled that even the temporary downloading involved in streaming counts as making a copy, because data packets are downloaded successively. If the material on the server is an unauthorized copy, then so is the streamed version, and the person viewing it is breaking the law (German original).
The trouble with this interpretation is that often it is not clear whether material held on a server is infringing on someone else's copyright -- even for lawyers, never mind for members of the public.
So what will be the inevitable effect of this uncompromising viewpoint, if it is confirmed and enforced across Germany? Practically every site holding user-generated content will be forced to remove it or shut down completely there, since few general users will take the risk of downloading or streaming unknown materials that may be unauthorized, and that would immediately turn them into criminals. And that includes major sites like YouTube or top German startup Soundcloud, and possibly even Facebook, which might need to block all videos.
These extremist interpretations of copyright law threaten to have a chilling effect not just on online innovation in Germany -- who would risk setting up an internet company that involved any kind of user-generated content there? -- but beyond, into everyday life. It would make the use of the internet for anything other than as a medium for watching "approved" channels of "approved" content too much of a risk for much of the general population. Which, of course, is exactly what the copyright industries are striving for: a tamed, neutered Net.