California Bill Could Introduce A Constitutionally Questionable 'Right To Be Forgotten' In The US
from the well-meaning-but-poorly-thought-out dept
As we’ve pointed out concerning the General Data Protection Regulation (GDPR) in the EU, the thinking behind the regulation is certainly well-meaning and important. Giving end users more control over their own data and increasing privacy controls is, generally speaking, a good idea. The problem lies in the drafting of the GDPR, which is written in a way that will lead to widespread censorship. A key part of the problem is that when you think solely in terms of “privacy” or “data protection,” you sometimes forget about speech rights. I have no issue with giving individuals more control over actually private information that is at stake for them. But the GDPR and other such efforts take a much more expansive view of what information can be controlled, including public information about a person. That’s why we’ve been troubled by the GDPR codifying a “right to be forgotten.” We’ve already seen how the RTBF is leading to censorship, and doing more of that is not a good idea.
But now the idea is spreading. Right here in California, Assemblymember Marc Levine has introduced a local version of the GDPR, which would create a California Data Protection Authority and includes two key components: a form of a right to be forgotten and a plan for regulations “to prohibit edge provider Internet Web sites from conducting potentially harmful experiments on nonconsenting users.” If you’re just looking from the outside, both of these might sound good at first pass. Giving end users more control over their data? Sounds good. Preventing evil websites from conducting “potentially harmful experiments”? Uh, yeah, sounds good.
But, the reality is that both of these ideas, as written, seem incredibly broad and could create all sorts of new problems. First, on the right to be forgotten aspect, the language is painfully vague:
It is the intent of the Legislature to ensure that personal information can be removed from the database of an edge provider, defined as any individual or entity in California that provides any content, application, or service over the Internet, and any individual or entity in California that provides a device used for accessing any content, application, or service over the Internet, when a user chooses not to continue to be a customer of that edge provider.
Any content? Any application? At least the bill does limit “personal information” to a limited category of topics, so we’re not just talking about “embarrassing” information, a la the EU’s interpretation of the right to be forgotten. But “personal information” is still somewhat vague. It does include “medical information,” which is further defined as “any individually identifiable information, in electronic or physical form, regarding the individual’s medical history or medical treatment or diagnosis by a health care professional.” So, would that mean that if we wrote about SF Giants pitcher Madison Bumgarner, and the fact that his broken pinky required pins and he won’t be able to pitch for a few weeks… we’d be required to take that information down if he requested it? That seems like a pretty serious First Amendment problem.
This is the problem with writing broad legislation that doesn’t take into account the reality that sometimes this kind of information is made public for perfectly good reasons.
The same goes for the prohibition on “potentially harmful experiments.” How does one define “potentially harmful”? Websites are in a never-ending state of experimentation. That’s how they work. Everyone gets a different view on sites like Amazon and Netflix and Facebook and Google, because they’re all trying to customize how they look for you. Is that “potentially harmful”? Maybe? It’s also potentially very, very helpful. Before just throwing out the ability of websites to try to build better products, it seems like we should have a much deeper exploration of the issue than just saying nothing “potentially harmful” is allowed. Because almost anything can be “potentially harmful.”
Again, I’m quite sure that Levine’s intentions here are perfectly good. There are very good reasons (obviously!) why so many people are concerned about the data that companies like Facebook, Google, Amazon and others are collecting on people. And these are big companies with a lot of power. But these rules seem vague and “potentially harmful” themselves. Beyond blocking perfectly natural “experimenting” with how websites are run, these rules won’t just impact those giants, but every website, including small ones like, say, this blog. Can we experiment with how we display our information? Or is that “potentially harmful” in that it might upset some of our regulars? That may sound silly, but under this law, it’s not at all clear what is meant by “potentially harmful.”
There are important discussions to be had about protecting individuals’ privacy, and about experiments done by large companies with lots of data. But the approach in this bill seems to just rush into the fray without bothering to consider the actual consequences of these kinds of broad regulations.