by Mike Masnick
Fri, Apr 19th 2013 4:30pm
It's Not About Whether Amateur Internet Journalism Is Good Or Bad, But That It Happens And Will Continue To Happen
from the look-forward,-not-back dept
Except, that's ridiculous. Mathew Ingram points out that people attacking Reddit for this are missing the point, and missing it by a wide, wide margin. First of all, as he notes, mainstream news folks also got parts of the story wrong. As we noted yesterday, the mainstream TV folks got a hell of a lot wrong. Hell, the NY Post even put the wrong two guys on the cover and falsely claimed that the feds were seeking them.
But the bigger problem is this idea that it's "Reddit" or (as some people have argued) "the internet" against the legacy media. That's not true at all. Everyone made mistakes during the rapidly changing story, but only on Reddit did you actually see the details of the process. The legacy news organizations present things as if coming from a place of authority, while Reddit is like an open newsroom where anyone can jump in. The conversation about Tripathi, for example, was about whether or not Suspect #2 was him -- it wasn't based on a declaration that it absolutely was him. Furthermore, when you look at the reason why the story actually spread, it was after some more known "press" names retweeted the initial tweet from Greg Hughes, which claimed (incorrectly) that Tripathi's name went out on the police scanner (ironically, he posted that about a minute after posting "This is the Internet's test of 'be right, not first' with the reporting of this story").
But here's the real issue: people can fret about all of this, but it doesn't change one thing: this is going to happen and continue to happen. People are naturally curious and they're going to talk to people when there's a news story going on and they'll try to figure things out. That happens all the time in newsrooms already before stuff goes on the air or is officially published. It's just that the public doesn't see the process. On Reddit, or anywhere else that the public can converse, it does happen in public. The problem comes when people assume the two things are the same. Furthermore, it's even more insane to blame "Reddit" or "the internet" as if those are singular entities that anyone has control over. They're not. As Karl Bode noted, they're just massive crowds of people.
An even better point was made by Charles Luzar, who noted that "the crowd doesn't implicitly profess its empirical correctness like the media does," but rather admits quite openly that it's a process in action. Further, he notes that even if the crowd presents false information before finding factual information, that's still "effective crowdsourcing" and, if anything, provides a greater role to the media to be effective curators of the actual facts.
In the end, it seems likely that this incident will actually help the next time there's a big breaking news story, because (hopefully) it will give people more reason to be at least somewhat skeptical of stories as they come out. But it's not going to change the fact that groups on various platforms are going to talk about things, and often try to do a little sleuthing themselves. Sometimes they'll get it right, and sometimes they won't -- just the same as many others. A much better focus going forward would be providing more training and tools to help the world get better at it.
by Glyn Moody
Wed, Apr 17th 2013 3:59pm
from the getting-it-wrong dept
Amongst economists and those who draw on their thinking, the names Reinhart and Rogoff are well known for work published under the title "Growth in a Time of Debt," which sought to establish the relationship between public debt and GDP growth. The key result, that median growth rates for countries with public debt over 90% of GDP are about one percent lower than otherwise, and that the mean growth rate is much lower still, has been cited many times, and invoked frequently to justify austerity economics -- the idea being that if the public debt is not reduced, growth is likely to suffer badly.
Given the economic, political and social importance of that finding, many have tried to reproduce it, but failed. A post by Mike Konczal on The Next New Deal blog explains how three researchers finally succeeded -- with surprising consequences:
In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results. After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff and they were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.
In his post, Konczal goes on to give a good explanation of just what went wrong. Correcting those three major errors produces the following result:
They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don't get their controversial result.
So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim]." [UPDATE: To clarify, they find 2.2 percent if they include all the years, weigh by number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.
That is, not only is there no significant difference in growth rates between countries whose public debt-to-GDP ratio is over 90% and those with much lower values, there is apparently no critical threshold above which growth falls catastrophically. Put another way, from the corrected research, there does not seem to be any reason why the public debt-to-GDP ratio cannot keep on rising while preserving normal levels of growth.
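The weighting dispute is easier to see with a toy example. The sketch below uses invented numbers (not the actual Reinhart-Rogoff dataset) to show how averaging each country's high-debt episode first, and then averaging across countries, lets a single bad year count as much as a two-decade episode:

```python
# Hypothetical illustration -- the countries and growth figures below
# are invented, not drawn from the actual Reinhart-Rogoff data.
# Country A: 19 years of high debt, each averaging 2.5% growth.
# Country B: a single high-debt year at -7.9% growth.
growth = {
    "A": [2.5] * 19,   # 19 high-debt country-years
    "B": [-7.9],       # one high-debt country-year
}

# Country-weighted mean (the style Herndon et al. criticize):
# average each country's episode first, then average across countries,
# so B's one bad year carries as much weight as A's 19 years.
country_means = [sum(years) / len(years) for years in growth.values()]
country_weighted = sum(country_means) / len(country_means)

# Year-weighted mean: pool all country-years and take one mean,
# so each year counts equally.
all_years = [g for years in growth.values() for g in years]
year_weighted = sum(all_years) / len(all_years)

print(f"country-weighted mean: {country_weighted:.2f}%")  # -2.70%
print(f"year-weighted mean:    {year_weighted:.2f}%")     # 1.98%
```

The same underlying data yields a negative "average" under one weighting scheme and healthy growth under the other, which is the kind of sensitivity the critique highlights.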
That clearly runs entirely contrary to the current dogma that public debt must be reduced at all costs in order to keep growth at a healthy level. As the authors of the new paper conclude (pdf):
RR's [Reinhart and Rogoff's] findings have served as an intellectual bulwark in support of austerity politics. The fact that RR's findings are wrong should therefore lead us to reassess the austerity agenda itself in both Europe and the United States.
That debate about public debt reduction and the need for austerity measures certainly won't stop just because a key justification for the approach has been found to be completely wrong. But it's worth noting that alongside the major political ramifications of this new finding, there is another, rather less contentious, conclusion to be drawn.
The three errors in the original work by Reinhart and Rogoff finally came to light when they allowed other researchers to examine their model and the data they employed in it. It then became clear that the model was flawed, and that not all the relevant data had been included in the calculation. Neither was obvious from the result alone.
This reinforces a point we have made before. Alongside the results of their work, academics also need to release the datasets and any mathematical/computational models that they have used to derive them. Without those additional resources, it is not possible for other researchers to reproduce the results, which may -- as turns out to be the case for Reinhart and Rogoff's famous paper -- contain fundamental errors that completely undermine the conclusions drawn from them.
by Mike Masnick
Thu, Jan 3rd 2013 8:26am
from the incredible dept
Over time, it became clear that Congress had left significant errors in. Recently some of the key people behind the bill admitted that there were errors in the bill, with Eli Lilly's General Counsel, Bob Armitage, stating: "There are a few minor errors in the bill and one major error in the bill." What's the "major error"? It's the part on "estoppel" in "post grant review." Basically, there's a provision in the bill which encourages people to seek "post grant review" of questionable patents in the first nine months after they've been approved. In talking about this, Congress was clear that it wanted to encourage more people to use this, and so it wanted to remove barriers. One of those was to make it clear that failing to raise issues during the post grant review shouldn't prevent those issues from being raised later. However, the actual language of the bill says that any issue that "could have been raised" can't be raised later.
As law professors Eric Goldman and Colleen Chien note, it's clear that Congress didn't mean to include this language. The committee report on the bill and direct quotes from both House and Senate sponsors of the bill (Lamar Smith and Patrick Leahy) admitted that this was a mistake:
To fix some of the errors in the AIA, Congress rushed through a "technical corrections" bill. Amidst all the fiscal cliff mess, with some back and forth between the House and Senate, they approved this bill, which will be signed any moment, if it hasn't been already.
By all accounts, in the AIA, Congress intended to remove the "could have been raised" language and provide a narrower estoppel for PGR proceedings. As the Congressional committee report explains, the PGR was designed to "remove current disincentives to current administrative processes." But something funny happened on the way to the Congressional floor, and the problematic "could have been raised" language was inadvertently inserted into the bill.
We're not the only ones to recognize the error. House Judiciary Chairman Lamar Smith referred to the AIA's PGR estoppel standard as "an inadvertent scrivener's error." Senate Judiciary Chairman Patrick Leahy, in advocating that the Senate adopt the technical corrections bill, said the PGR estoppel standard in AIA was "unintentional," and it was "regrettable" the technical corrections bill doesn't address the issue. Sen. Leahy expressed "hope we will soon address this issue so that the law accurately reflects Congress's intent." The PTO also thinks Congress made a mistake, saying "Clarity is needed to ensure that the [PGR] provision functions as Congress intended."
Just one problem. For a bill about technical fixes, it didn't actually address this one *admitted* major error in the original bill. Yeah, they left that one out.
Let's recap, because this is quite incredible:
- Congress spends seven years debating patent reform.
- It finally approves patent reform in late 2011, and despite seven years of debate, the bill contains a ton of clear drafting errors.
- The official sponsors of the bill flat out admit that there's a major error in a part of the bill that they did not intend to be in there.
- A year plus later, Congress finally introduces a bill to "fix problems" in the original bill.
- This "technical corrections" bill does not fix the one major problem that all admit was a flat out mistake in the original bill.
by Tim Cushing
Tue, Sep 18th 2012 2:57pm
from the you-can't-like,-OWN-a-URL,-man dept
It's election season, a time when man's (and more recently, woman's) thoughts turn towards shutting off the TV, radio and phone until mid-November. But! Things must be voted on, including such controversial issues as legalizing medical marijuana and authorizing dispensaries. As an opponent of weed-based medicines, you vow to fight this with every ounce/gram of your being. You set your plan in action.
1. Pick a name for your committee. ("No on Question 3")
2. Pick out a suitable URL ("votenoonquestion3.org")
3. Get your committee and its pertinent information added to the official voters' guide (both print and online).
4. Register URL.
5. Become aghast.
Can anyone point out where Vote No on Question 3 went wrong? Here are some visual aids, taken from votenoonquestion3.org:
You see, the internet is like magic. And like most magic, it can be used for entertainment purposes. All the do-gooding in the world doesn't amount to much if you forget to register your URL. While you're busy enjoying that "new ink" smell of freshly printed Voter's Guides, someone quicker on the draw is undermining your "marijuana is bad" ~~propaganda~~ ~~proselytizing~~ information with hilariously over-the-top headlines.
The good news is that the online voters' guide sports the corrected URL: mavotenoonquestion3.com
The bad news is that the paper version will carry the old URL permanently. Of course, very few people are willing to type in a URL by hand, but as news of this blunder spreads, the fake site with the real URL will be receiving much more attention, voters' guide correction or no.
Here's the official reaction from No on Question 3 spokesman, Kevin Sabet:
"It's funny and upsetting, I guess, at the same time."
Yeah. Largely the first part. And to think, the committee can't even blame a late afternoon smokeout for the mental slip.
This statement, however, seems both more on point and more disingenuous:
The group sent out a press release saying proponents of medical marijuana were tampering with the democratic process through “underhanded efforts.”
Sabet admits the committee made a mistake and yet, the press release attempts to paint No on Question 3 as the victim of villainous pot smokers rather than treating it like the self-inflicted wound it is.
Oh, and here's more bad news for the "No" side:
The Globe notes that the No on Question 3 campaign has managed to collect all of $600 so far, compared to the $1 million or so that supporters of the initiative have received from Peter Lewis, a longtime patron of drug policy reform.
Maybe it's time to admit your fears of a weed-loaded America are overblown, especially when you've just been outmaneuvered (and outspent) by a bunch of stoners.
by Mike Masnick
Thu, Jul 26th 2012 12:18pm
WSJ Still Hasn't Corrected Its Bogus Internet Revisionist Story, As Vint Cerf & Xerox Both Claim The Story Is Wrong
from the how-do-you-correct-a-story-that's-almost-entirely-wrong? dept
Basically, anyone and everyone is telling the WSJ that it got this story totally and completely wrong. You might think the WSJ would start making some corrections. Instead, it's made one single correction:
by Mike Masnick
Thu, Jun 28th 2012 3:01am
from the keystone-kops dept
The latest update may create an even bigger headache for the US in its crusade against Kim Dotcom and Megaupload. High Court judge Helen Winkelmann has ruled that the search warrants used to seize Kim Dotcom's property... were illegal. Yeah, that's going to present a problem for the US. She also ruled that the FBI broke the law in taking data from Dotcom's computers out of the country. But the illegal warrants are the big deal here:
She said the search warrants were invalid because they were general warrants which lacked specificity about the offence and the scope of the items to be searched for.
In other words, it's not only entirely possible that the government won't even be able to use anything from what they seized in a case, but they may, themselves, be in trouble for breaking the law and violating Dotcom's privacy rights.
Without a valid warrant, police were trespassing and exceeded what they were lawfully authorised to do.
Justice Winkelmann said no one had addressed whether police conduct also amounted to unreasonable search and seizure, but her preliminary view was that it did.
The specific problem? The warrant did not actually state what US laws were supposedly broken -- which is kind of important, especially since this was about a case in the US and a person in New Zealand. If it's not made clear that the warrant is under US laws, then it "would no doubt cause confusion to the subjects of the searches...they would likely read the warrants as authorising a search for evidence of offences as defined by New Zealand law."
So not only do we have a weak case, the whole process in the case has been a complete joke and may mean that the US is unable to use much of the evidence it collected, can't extradite Dotcom and... has little actual basis to move forward with a lawsuit. Honestly, I'm somewhat amazed at the number of mistakes by the feds in such a case. It increasingly feels like they did this because they felt the need to "do something" right after the effort to pass SOPA and PIPA stalled out -- and in their rush to make Hollywood like them again, the feds didn't bother to actually pay much attention to the details. Sometimes it's "creative" to color outside the lines. At other times, it's called cooking up a case on trumped up charges for political reasons.
by Mike Masnick
Fri, Apr 20th 2012 5:33pm
from the okay,-perhaps-pilots-should-be-barred-from-texting dept
Somewhere between 2500 feet and 2000 feet, the captain's mobile phone started beeping with incoming text messages, and the captain twice did not respond to the co-pilot's requests.
There followed a series of errors, with the pilot and the co-pilot not communicating with each other -- the pilot trying to drop the wheels as the co-pilot prepared to abort the landing -- and then both pilots becoming confused about their actual altitude. Oh, and then there was the fact that the flaps were set incorrectly.
The co-pilot looked over and saw the captain "preoccupied with his mobile phone", investigators said. The captain told investigators he was trying to unlock the phone to turn it off, after having forgotten to do so before take-off.
At 1000 feet, the co-pilot scanned the instruments and felt "something was not quite right" but could not spot what it was.
I'm not necessarily one to bemoan the way people get obsessed with text messaging these days, but I generally think that if you're flying a commercial airplane, and taking it in for landing... it shouldn't be that hard to know that it's a good idea to not worry about your phone for five minutes.
by Mike Masnick
Wed, Apr 18th 2012 10:25am
from the because-they're-not-losses dept
One recent estimate placed annual direct consumer losses at $114 billion worldwide. It turns out, however, that such widely circulated cybercrime estimates are generated using absurdly bad statistical methods, making them wholly unreliable.
This is pretty common. In the first link above, we wrote about how a single $7,500 "loss" was extrapolated into $1.5 billion in losses. The simple fact is that, while such things can make some people lose some money, the size of the problem has been massively exaggerated. As these researchers note, this kind of thing happens all the time. They point to an FTC report, where two respondents alone provided answers that effectively would have added $37 billion in total "losses" to the estimate.
Most cybercrime estimates are based on surveys of consumers and companies. They borrow credibility from election polls, which we have learned to trust. However, when extrapolating from a surveyed group to the overall population, there is an enormous difference between preference questions (which are used in election polls) and numerical questions (as in cybercrime surveys).
For one thing, in numeric surveys, errors are almost always upward: since the amounts of estimated losses must be positive, there’s no limit on the upside, but zero is a hard limit on the downside. As a consequence, respondent errors — or outright lies — cannot be canceled out. Even worse, errors get amplified when researchers scale between the survey group and the overall population.
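The arithmetic behind that amplification is simple enough to sketch. The sample size and population below are assumptions, chosen only so the scale factor matches the $7,500-to-$1.5-billion example mentioned above; the real surveys' details may differ:

```python
# Toy illustration of survey extrapolation bias. The population and
# sample size are hypothetical, picked so that the scale factor works
# out to 200,000x -- matching the $7,500 -> $1.5 billion example.
population = 200_000_000   # assumed adult population
sample_size = 1_000        # assumed survey sample
scale = population / sample_size  # each respondent "stands for" 200,000 people

# 999 respondents report zero losses; one reports $7,500
# (whether accurately, erroneously, or dishonestly).
losses = [0.0] * 999 + [7_500.0]

# Naive extrapolation: total sampled losses times the scale factor.
estimated_total = sum(losses) * scale
print(f"estimated national losses: ${estimated_total:,.0f}")  # $1,500,000,000
```

Because losses cannot be negative, there is no respondent who can report a "-$7,500 loss" to cancel the outlier out: one high answer, true or not, drives the whole national estimate, which is exactly the one-sided error the researchers describe.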
This doesn't mean that the problems should be ignored, just that we should have some facts and real evidence, rather than ridiculous estimates. If the problem isn't that big, the response should be proportional to that. Unfortunately, that rarely happens. In fact, combining this with the recent ridiculous stories about the need for "cybersecurity," perhaps we can start to estimate just how much of an exaggeration in FUD the prefix "cyber-" adds to things. I'm guessing it's at least an order of magnitude. Combine bad statistical methodology with the scary new interweb thing, and you've got the makings of an all-out moral panic.
by Tim Cushing
Tue, Mar 13th 2012 6:13am
Brazilian Performance Rights Group Claims Collecting From Bloggers Was Simply An 'Operational Error' After Google Pushes Back
from the and-i-thought-google-was-just-there-to-screw-lowly-creatives dept
How quickly things change, especially for entities who find themselves staring down an angry internet. At first, ECAD seemed disturbingly untroubled by the uproar, including the memeification of its intention to stretch the definition of "public performance" to include all audible sound. But it suddenly changed its prohibitively expensive tune when hundreds of thousands of dollars were at stake.
None other than Google Brazil itself issued a blog post stating that ECAD's existing agreement with Youtube did not allow the agency to collect fees from bloggers, pointing out the obvious to ECAD's wilfully obtuse representatives:
These sites don't host or transmit any content when they associate a YouTube video to their site, and as such, the fact of embedding videos from YouTube can't be treated as a ‘retransmission'. As these sites aren't performing any music, ECAD can't, within the law, collect any payment from these.
Having been smacked down by its main benefactor, ECAD issued a statement of its own, claiming the whole thing was just an "error" and that it had no intention of setting up tollbooths on every website with embedded video:
1- Ecad has never had the intention to curtail the freedom on the internet, known to be a space devoted to information, dissemination of music and other creative works, and propagation of ideas. The institution also lacks a copyright billing strategy geared to embedded videos. Royalties collections for webcasting have been under re-evaluation since February 29th, and the case reported in recent days took place before then. Nevertheless, it resulted from an operational error of interpretation, which represents an isolated fact in this segment. (...)
Note that ECAD has left itself a bit of an opening for pursuing these fees in the future. Supposedly it can still go after blogs but only if it informs Google/Youtube of its intention to do so. It seems the only error it feels it made was getting caught. Everything else was simply a clerical screw-up and if it had gotten all its ducks in a row, it would have been free to bill websites for linking to Youtube.
2- Two years ago, Ecad and Google signed a letter of intent that guides the relationship between both organizations. The document details that Ecad can collect copyright fees for music coming from embedded videos, as long as it gives advance notice to Google/YouTube. As Ecad did not send such a notification, it becomes clear that this is not its goal. If it were the case, it would have sent the notification the letter of intent requires. (...)
As it stands now, ECAD has backed completely away from this plan. But, once the furor dies down and recedes into the past, I wouldn't be surprised to see this sort of tactic deployed again, if not by ECAD, then certainly by another "aspirational" performance rights organization.
(Hat tip to Techdirt's own Glyn Moody and his amazing Twitter feed. He's asked you all very nicely to follow him and this post is an example of why you should. So, follow this link to do exactly that.)