Every Time A State Tries To 'Protect The Children' Online, It Makes Things Worse
from the bad-ideas dept
A few weeks ago, California Senate Bill 568 was signed into law, creating a whole host of “protect the children online” provisions — almost all of which range from short-sighted to ambiguously dangerous. The part that has received the most attention is the “online eraser.” While folks like Eric Schmidt have championed the idea of letting teenagers erase the past upon becoming adults, this first attempt to turn that idea into law appears to be a massive failure. Law professor Eric Goldman walked through many of the problems with the law — most of which stem from the bill’s vagueness:
the removal right doesn’t apply if the kids were paid or received “other consideration” for their content. What does “other consideration” mean in this context? If the marketing and distribution inherently provided by a user-generated content (UGC) website is enough, the law will almost never apply. Perhaps we’ll see websites/apps offering nominal compensation to users to bypass the law.
And then there’s the reality that this won’t actually do much to stop any harassment, since it only lets you erase the initial posting of content, but not further copies:
The law only allows minors to remove their content from the site where they posted it; and the removal right doesn’t apply where someone else has copied or reposted the content on that site. Removing the original copy typically accomplishes the minor’s apparent goal only when it’s the only copy online; otherwise, the content will live on and remain discoverable. Given how often publicly available content gets copied elsewhere on the Internet–especially when it’s edgy or controversial–minors’ purported control over the content they post will be illusory in most circumstances.
In fact, as the folks at New Media Rights point out, this means that the bill will be particularly useless:
… odds are that the more embarrassing the post is, the more likely it was shared. The law does not require that these shared posts be hidden, nor should it. It would be unfair to give websites the impossible task of tracking down and hiding each iteration of a post on their site or across the entire internet. However, it’s unclear if hiding the original post will make much of a difference in cases where teens’ photos are shared and used against them. For example, in New York an ex-NFL player created a website where he shared photos teens had publicly posted on social media of themselves trashing and partying in a home he had up for sale. The teens in this case were able to delete their photos, but only from the original source.
Goldman, in his piece, also highlights the First Amendment problems:
Example 1: A newspaper prepares a collection of stories, written by teens, about their first-hand experiences with cyber-bullying. These stories are combined with other content on the topic: articles by experts on cyberbullying, screenshots of cyberbullying activity online, and photos of victims and perpetrators. After the newspaper publishes the collection, one of the teenagers changes his/her mind and demands that the newspaper never reprint the collection, and seeks a court order blocking republication. Does the newspaper have a potential First Amendment defense to the court order? Yes, and I don’t think the question is even close.
Example 2: a UGC website creates a topical area on cyberbullying and asks its registered users, including teens, to submit their stories, photos, screenshots and videos on the topic. The website “glues” the materials together with several articles written by its employees. Does the website have a First Amendment interest in continuing to publish the entire collection? Yes, and like the newspaper example, I don’t think it’s close.
And the law doesn’t stop there, either. In another post, Goldman also rips apart a provision of the bill that tries to block advertising “bad things” to kids online. It’s one of those things that sounds good, and which politicians love because it makes it look like they’re “protecting the children.” But as per usual, the reality is a lot messier.
First, the law protects minors’ “personal information” but doesn’t define the term. Without a definition, the term is meaningless. We know that just about any data can be combined with other data to personally identify individuals.
Second, the law doesn’t define who is an “advertising service.” Surely it covers ad networks like Google AdSense, but do the obligations extend to other players in the online ad industry: ad serving technology providers, ad agencies, buyers of remnant ad inventory, etc.?
Third, the law restricts “specifically directing” an ad to a minor, but I have no idea what that means. The law suggests that “run of site” ads should be OK, but I’m not sure when other targeting efforts trigger the restriction.
Finally, like its online eraser counterpart, the law establishes a potentially illusory distinction between teen-oriented websites and adult websites.
Furthermore, he notes that the law itself is almost certainly unconstitutional and violates certain federal laws.
So why is it always this way? It seems that certain politicians just can’t resist trying to “protect the children online,” and yet every single time they try, the end result is a mess: poorly drafted laws that don’t actually do anything to protect children — and which often just create opportunities for lawsuits over perfectly reasonable activities, wasting everyone’s time and money. This knee-jerk urge to regulate the internet to “protect the children” really needs to stop.