A few years back, there were some reports of tools that would "block" people from texting while driving by simply disabling the feature if the phone was moving over a certain speed. It was targeted at parents to put on their kids' phones, but we haven't seen much of an indication that it's gained any traction. No worries, politicians to the rescue. Gregil10 points us to the news that "an influential group of Chicago aldermen" is pushing for a law that would require such software be placed on any mobile phone sold in Chicago, which could then be enabled by the parents (or, I guess, by the user themselves).
Of course, the same problems that we discussed a few years back apply (and haven't been solved). If you think kids won't figure out how to get around such things, you haven't seen kids and their mobile phones lately. They understand the devices better than parents. Even if a parent can figure out how to enable the software, you can bet kids will figure out how to disable it.
An even bigger issue is that blocking texting based on the speed of travel is a really broad brush for trying to stop texting while driving. Speed of travel isn't a very good proxy for whether or not someone is driving. It may be a good indication that someone is traveling in a vehicle, but that hardly means they're controlling the vehicle. And it really doesn't make sense to block texting for passengers. In fact, allowing passengers to communicate in this way often serves as a good way to stop drivers from texting, because they can ask a passenger to handle the texting instead. Or if someone's on a bus or a train, should they really be stopped from texting? Often, that's when people use such functionality the most, letting others (such as parents!) know that they got on the bus or train and would be arriving on time/late/early/etc.
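To see just how blunt an instrument this is, here's a minimal sketch of the speed-threshold rule these tools apparently use. Everything here is hypothetical (the function name and the cutoff are my own, not taken from any actual product); the point is that speed alone cannot tell a driver from a passenger:

```python
SPEED_THRESHOLD_MPH = 10  # hypothetical cutoff


def texting_blocked(speed_mph):
    """Naive rule: block texting whenever the phone moves faster than the cutoff."""
    return speed_mph > SPEED_THRESHOLD_MPH


# The rule has no idea who is holding the phone:
print(texting_blocked(45))  # driver on a highway -> True (intended)
print(texting_blocked(45))  # passenger in the same car -> True (false positive)
print(texting_blocked(30))  # rider on a bus or train -> True (false positive)
print(texting_blocked(0))   # driver stopped at a red light -> False (missed case)
```

The same input produces the same output whether the phone belongs to the driver, a passenger, or someone on a train, which is exactly the problem described above.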
I certainly recognize the risks of texting and driving. And it's no secret that many, many kids do engage in this incredibly risky and stupid behavior. But laws like this don't solve the real problem. Instead, they just create even more problems.
We've been running a series of experimental discussions lately asking people to weigh in -- via an ad unit -- on different aspects of computing in their lives. The latest such ad box is running for a few days either on the front page of the site or below this post, asking the question: how have computers improved the life of your child? I actually think this is a pretty interesting question by itself, let alone the possible answers. I still don't feel particularly old, but when I think back to how different things were when I was a kid, it's really quite shocking how much computers now pervade everyone's life compared to not so many years ago. I had a computer growing up -- an Atari 800 -- which was stashed in the basement. It wasn't connected to the rest of the world. It didn't have a hard drive. But it was a computer. Today, though, my kid is surrounded by computing devices everywhere he looks, and even though he's young, he's already fascinated by all kinds of computing devices which will, of course, feel completely "normal" to him. Anyway, you can share your thoughts directly in the ad unit by clicking on one of the "poll" options, and then answering the question in the box. Once again, we're shutting off comments on this post to keep them directed in that ad unit.
Once again, this experiment is being sponsored by ASUS Windows Slate, in partnership with Microsoft and SAYMedia.
While we have no issue with parents choosing to use filters for their kids' computers (we do have an issue when it's gov't mandated, however...), we've pointed out in the past that one of the worries with such filters is that parents will over-rely on those filters to actually work. Yet history has shown time and time again that they don't work very well, and a new study out of Europe once again notes that the filters aren't very good at blocking "objectionable" content for kids. The filters are basically good only at domain-level blocks, which is fine for some stuff, but there's all sorts of content on social networks and blogs that might not be appropriate for kids, but that these filters won't block. The simple fact is that sooner or later kids are going to see stuff they weren't meant to see. But if you rely too much on the filter, and think that it really will block stuff, you might not think to teach kids that perhaps some stuff online isn't appropriate for them, and teach them how to deal with it if they run into it by accident.
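The domain-level limitation is easy to illustrate. Here's a minimal sketch of how such a blocklist works, with entirely hypothetical domain names; the filter either blocks a whole site or none of it, so page-level content on an otherwise-allowed social network sails right through:

```python
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-adult-site.com"}  # hypothetical blocklist entry


def is_blocked(url):
    """Block purely on the hostname; the path and actual page content are never inspected."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_DOMAINS


print(is_blocked("http://example-adult-site.com/anything"))           # True: whole domain blocked
print(is_blocked("http://socialnetwork.example/objectionable-post"))  # False: whole domain allowed
```

Since the decision is made before anyone looks at the page itself, anything hosted on a "good" domain is invisible to the filter.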
from the because,-they-better-not-like-ads... dept
It appears that a class action lawsuit has been filed against Facebook for the horrible, horrible act of letting kids like ads. As TechCrunch explains:
On Facebook, you can "like" any status update or post in your stream, but you can also "like" ads. When you do so, it can appear as a status update to all your friends if that ad is linked to a Facebook page, thus turning the "like" button into a social endorsement...
The class action lawyers claim that in the case of teenagers, Facebook is "misappropriating the names and pictures of minors for profit." Facebook might say that it's in its terms of service; that's how the site works. But the lawsuit hinges on a loophole in California law, which requires parental consent in order to obtain a minor's consent for using their name or likeness for an advertisement. And Facebook doesn't do that.
This seems like a clear "unintended consequences" situation. Politicians pass a law to "protect the children" from being exploited in advertisements, but it also has the potential to get in the way of really harmless activity, such as a kid clicking a "like" button on his Facebook profile.
Well, it looks like Hollywood is going to keep betting against basic economics. A new report has come out suggesting that the latest generation of kids are perfectly happy to pay for digital content. The report suggests that it's just the slightly older generation -- "the Napster generation" -- that isn't interested in paying for content. Perhaps I'm missing something, but there appears to be no indication of how this conclusion was arrived at, other than the say-so of some random research firm. There's no sign of an actual study or methodology -- though, if someone can actually figure it out, please let us know in the comments. Frankly, this sounds like wishful thinking. It's premised on the idea that the reason many people don't pay for content today is because they "don't know any better." But that's hogwash. People understand the legalities of it all. It's just that many don't buy into it. Furthermore, having the legacy players bet on this fiction that the next generation of kids will magically start paying for what their older siblings got for free means that these legacy players will hold back on making the major changes they need to make to their business models. This kind of report is the sort of thing that is written to make big company execs feel good about their unwillingness to adapt -- rather than give them any sort of useful advice.
The only issue is that in a small group of kids, who already appear to have other psychological issues, the video games may contribute to and exacerbate those issues. In other words, it's not the video games that are the issue, but a separate problem. But for most children, there are no problems with them playing violent video games, despite what you may have heard on the nightly news:
"Violent video games are like peanut butter," said Christopher J. Ferguson, of Texas A&M International University. "They are harmless for the vast majority of kids but are harmful to a small minority with pre-existing personality or mental health problems."
He added that studies have revealed that violent games have not created a generation of problem youngsters.
"Recent research has shown that as video games have become more popular, children in the United States and Europe are having fewer behavior problems, are less violent and score better on standardized tests," Ferguson, a guest editor for the journal, explained.
Other studies also found that many video games have positive aspects, and can be quite useful to kids.
While it still seems like the common belief is that "txt spk" and other sorts of abbreviated elements of the English language harm kids' ability to write properly, we've seen study after study after study after study after study after study find exactly the opposite. They've found that most kids can tell the difference, and do understand what's proper and what's not. On top of that, heavy texters tend to be better spellers, because they're much more used to writing -- even if they tend to abbreviate the language when communicating via technology.
So it almost seems superfluous to mention that yet another one of these studies has come out and it, too, has found that those who regularly use txt spk have very strong literacy skills. But what's annoying is that both the researchers and the BBC act as if this was a "surprise." It's as if no one bothered to check to see if similar research had been done before, and found the many, many, many studies all saying the same exact thing.
While there were some studies last year claiming that heavy social networking users were likely to have lower grades (though there were lots of problems with that research), it apparently isn't because it keeps kids up late at night. A new study that looked at students and their social networking habits didn't find much difference in the amount of sleep heavy social network users got vs. those who weren't spending all their time on Facebook and Twitter. My guess is that, with both of these things, there are so many other factors that finding any sort of causal relationship is unlikely in a simple comparison of two variables. There could be many other factors that lead to either good or bad grades, and also impact how much a person uses social networks or the amount of sleep they get. And, in the end, looking for something to blame for either really misses the point. It's an attempt to blame a technology for something else, rather than look at the real underlying reasons why a student doesn't get enough sleep or doesn't do well at school.
But, of course, don't expect that to stop the debate. As I was finishing up this post, along comes a different study that again notes a correlation between really heavy users and bad grades. But, the study also finds that for kids these days, they're pretty much online all the time somehow -- even more than the study's authors thought possible.
One of the more famous examples of abuses of the YouTube video takedown process was the case of Lenz vs. Universal Music, which involved Universal Music issuing a YouTube DMCA takedown to a woman who posted a very short clip of her baby dancing to a Prince song that was playing in the background. It was a clear case of fair use, and while Universal chose not to sue after the woman filed a counternotice, the EFF filed a lawsuit against Universal Music, saying that the DMCA notice was fraudulent, since it was such an obvious case of fair use. While Universal Music argued that since fair use is just a "defense" and not a "right" it need not consider fair use in sending a takedown, the court disagreed.
The notice claims that the video contains content for which the copyright is held by record label Razor & Tie. The guy who got the takedown seems a bit confused, in that he appears to be blaming McDonald's for the mess, when it appears McDonald's had nothing at all to do with the takedown. In fact, the record label Razor & Tie may not have anything to do with it either... as I'll explain below. The song used in the video was from a CD that came with a McDonald's Happy Meal. Looking around, it appears that in April, McDonald's announced a promotion with record label Kidz Bop to issue music CDs. Razor & Tie is the parent company of Kidz Bop. The problem here is clearly not McDonald's. All it did was include the CD in Happy Meals. It's got nothing to do with the takedown, and the guy's anger at McDonald's is misplaced (though, you could make the argument -- and it's a stretch -- that McDonald's should tell its partners to avoid these sorts of ridiculous copyright claims that scare people away from buying Happy Meals).
The next assumption, then, would be that Razor & Tie is guilty of sending the takedown, but I don't think that's true. If Razor & Tie had sent a DMCA takedown, the video would be down. When Google receives a DMCA takedown, it almost always (or perhaps always) pulls down the content immediately in order to retain its DMCA safe harbors. The user would then need to file a counternotice to start the process of potentially getting the video back up. The fact that the video is up and the notice the guy received simply tells him to review the videos suggests that no DMCA takedown was sent.
Instead, the blame almost certainly lies with Google's content recognition engine/filters, which the record labels pushed Google to use to try to catch copyright infringement ahead of time. Now, Razor & Tie is somewhat complicit here, in that it appears to have uploaded its catalog to train Google's filters (if I remember correctly -- and correct me if I'm wrong -- Google needs the copyright holder to submit copies for its filter to work). So Google had this particular song on file, and noticed the similarity. Google's filter algorithms don't appear to consider fair use (or, perhaps more likely, they do a bad job of it in many cases), so the guy is then sent the automated notification, even though it makes everyone -- McDonald's, Razor & Tie and Google -- look bad. Notably, the blame from the recipient appears to run in almost reverse order of culpability.
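The pipeline described above can be sketched in miniature. Google's actual matching system is proprietary and vastly more sophisticated; this toy version (with made-up data and function names) just shows the shape of it: the rights holder submits reference material, the system fingerprints it, and any overlap in an upload triggers a flag, with no notion of fair use anywhere in the process:

```python
import hashlib


def fingerprint(samples, chunk=4):
    """Hash fixed-size chunks of a sample stream into a set of fingerprints."""
    return {
        hashlib.sha1(bytes(samples[i:i + chunk])).hexdigest()
        for i in range(0, len(samples) - chunk + 1, chunk)
    }


reference = fingerprint([1, 2, 3, 4, 5, 6, 7, 8])  # label-submitted catalog track
upload = fingerprint([1, 2, 3, 4, 9, 9, 9, 9])     # video containing a short matching snippet

# Any overlap at all triggers a flag -- length of use and context are ignored.
flagged = bool(reference & upload)
print(flagged)  # True: even a brief background match is enough
```

Because the match is purely mechanical, a 30-second clip with a song playing in the background looks exactly the same to the filter as a full pirated upload.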
Unfortunately, the guy who received the notice also appears to be confused concerning his own rights. He says he is going to take down the video, though he clearly has a strong fair use case in asking for the video to be left alone. It seems likely that Google would allow the video to stay up, and I highly doubt that Razor & Tie would do anything else (it would be ridiculous to try to claim that this was not fair use).
Either way, this highlights a variety of interesting things. First, despite all the publicity of the Lenz case, these types of "takedowns" (even if it's not a DMCA takedown) still happen. Second, people on the receiving end of these notices assume that there is no recourse that would allow the video to stay up. People get official sounding notices and they assume they need to jump. Third, Google's content match filter isn't particularly good on fair use issues. Fourth, when these sorts of bogus notices are sent, it reflects very poorly on a variety of companies. In this case, McDonald's is getting most of the blame, despite being almost entirely blameless (well, it did decide to put out these silly music CDs, but that's a separate issue). Even Razor & Tie may be getting misplaced blame (though it may depend on the "rules" it set for Google's filter). Amusingly, it may be Google that deserves the most blame, and it appears to be getting the least.
Still, no matter what the situation, it's simply ridiculous that a guy filming 30 seconds of his kid dancing should have to worry about any of this.
There are a bunch of different "child filtering/monitoring" software packages on the market these days, and many parents use them to help keep track of what their kids do online. I have no problem with this -- so long as such filters aren't mandated by the government. But it appears that just selling the tools isn't enough for some companies. JJ sends in the news that one of the top providers in the space doesn't just monitor what kids do for parents, but collects all the data -- including the text of chat room discussions -- and resells it to marketers. You have to imagine that this isn't exactly what the FTC (or parents) expects of such tools.
The company defends the practice, claiming that the data is anonymized and no identifiable data is included -- but we've heard that before. Every single time someone insists their data is anonymized, news breaks showing that it is not. I don't think there's anything wrong, necessarily, with doing targeted marketing programs, but using unsuspecting parents and getting them to install filters and monitoring software, without realizing the data will be handed over to marketing firms, seems pretty sleazy.
dennis deems: Looks like it's really gonna happen. Dammit. :( http://arstechnica.com/information-technology/2013/05/yahoo-will-purchase-tumblr-for-1-1-billion-according-to-wsj-report/

Ninja: omg... hopefully they'll leave it alone and avoid destroying it... I like tumblr

dennis deems: I really can't understand why yahoo wants tumblr. the ethos is completely different and I would think irreconcilable to the ethos of yahoo. if they integrate it into their own infrastructure, it's dead http://talkingpointsmemo.com/news/2013/05/tumblr-mayer-promises-not-to-screw-it-up.php

silverscarcat: Wait, wait, wait... The DoJ and the Dept of Edu want WHAT?! *Googles it* ... *Calls everyone in congress who supposedly represents me.* Even Italians are calling the U.S. a police state.

dennis deems: ? google search yields nothing

silverscarcat: I got it on my first attempt. Should be the top result.

Rikuo: Jeebus, you'd think ISPs would have learned about data caps. My ISP, Digiweb, has just announced three Fibre plans, 70Mb down, 20Mb up. First two plans though have 70GB and 200GB caps respectively, which is ridiculous for such high speeds. For the high price of 80 euro/month, I can get unlimited bandwidth. Thing is, I've learned to be wary, so I click on Terms and Conditions, to see what their definition of unlimited is... only to be met by an error page saying my IP address had been blocked

silverscarcat: Well... That sucks. :( ... Whut? Water balloons? ... Seriously?! I think I need to get some, fill them with oil-based paint and throw them all over that school.

Rikuo: huh... was finally able to read the Terms and Conditions. Nowhere is mentioned a hidden definition of unlimited