Facebook's Threat To NYU Researchers Is A Mistake, But It's The Inevitable Follow-On To Overreaction To Cambridge Analytica

from the research-v-abuse dept

Late on Friday news came out that Facebook had sent a cease and desist letter to researchers at NYU working on the Ad Observatory project. At issue was that the project had asked people to install a browser extension that would share data back to NYU regarding what ads they saw. Facebook -- responding to significant criticism -- has put forth an ad library that allows researchers to search through what ads are being shown on Facebook and who is behind them (which is good transparency!), but it does not show how those ads were targeted. This is what the researchers at NYU were trying to collect data on. And that is a reasonable research goal.
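To make concrete what "sharing data back" means here, below is a purely illustrative sketch of the kind of client-side collection such an extension might do. The function name, the shape of the feed items, and the "Sponsored" label heuristic are all assumptions for illustration; this is not the actual Ad Observatory code.

```javascript
// Hypothetical sketch: given a list of feed items already rendered in the
// user's own browser, keep only the ones labeled as ads, and retain just the
// ad-related fields (advertiser, ad text, any visible targeting hints) --
// nothing identifying the user -- before sharing with researchers.
function collectAdObservations(feedItems) {
  return feedItems
    .filter((item) => item.label === "Sponsored")
    .map((item) => ({
      advertiser: item.advertiser,
      adText: item.text,
      // e.g. hints surfaced by a "Why am I seeing this ad?" panel, if present
      targeting: item.targetingHints || [],
    }));
}

// Example: two rendered feed items, only one of which is an ad.
const feed = [
  {
    label: "Sponsored",
    advertiser: "Acme PAC",
    text: "Vote Acme!",
    targetingHints: ["age 25-34"],
  },
  { label: null, advertiser: null, text: "A friend's vacation photo" },
];

const observations = collectAdObservations(feed);
```

The point of the sketch is that everything the extension touches is already sitting in the user's own browser; the organic post is dropped, and only the ad and its visible targeting hints are kept.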

Facebook has argued that this is a breach of Facebook's terms of service -- though it does seem notable that this is coming out right around the same time that these very same researchers discovered that Facebook's promise to properly label political ads isn't working so great (it's a tangent, but this is why promising to label political ads may be problematic in the first place: you're going to miss a bunch, especially on a platform this big).

The Knight First Amendment Institute at Columbia is representing the researchers and is condemning this move (and the researchers are refusing to comply with the cease-and-desist). Here's the Knight Institute's litigation director Alex Abdo:

“Frankly it’s shocking that Facebook is trying to suppress research into political disinformation in the lead-up to the election. There’s really no question more urgent right now than the question of how Facebook’s decisions are shaping and perhaps distorting political discourse. It would be terrible for democracy if Facebook is allowed to be the gatekeeper to journalism and research about Facebook.”

This is not the first time that the Knight Institute and Facebook have clashed over research. Two years ago, the Institute had asked Facebook to create a special safe harbor for journalists doing research on the platform, to avoid having them face CFAA claims. Facebook declined.

I think this is a bad move by Facebook and a huge mistake -- a mistake on multiple levels: political, technological, legal, and general PR. However, I will give one sliver of a defense to Facebook: it likely feels somewhat forced into this because of the years-long overreaction to Cambridge Analytica. Cambridge Analytica did a bunch of bad stuff, and it started out as an "academic" claiming (misleadingly) that he was merely doing "academic research" on Facebook users -- creating a Facebook app that got users to basically cough up data about their entire social graph. The "academic" then handed the data over to the company (violating the rules), and it became part of a political advertising machine. Eventually, Facebook was pretty significantly fined, in part because of Cambridge Analytica's ability to extract and share that data.

You may note some similarities: ostensibly academic researchers asking users to install something that collects data about Facebook users, and then gathering that data for "research." Given what happened with Cambridge Analytica, you might see why some folks within Facebook would be reasonably gun-shy about letting this happen again -- and that may explain the company's aggressive legal response. And, you and I can say "but NYU is a respected institution -- they're not going to abuse this data," but the researcher who collected the data for Cambridge Analytica was initially at the University of Cambridge, also a respected institution.

You might also argue that since Facebook has been dinged by the FTC, in part over this, and the consent decree doesn't really carve out separate rules for "legitimate" research, the company has to do this. Indeed, that's kind of what Facebook itself is now arguing in response.

However, I believe the scenario here is quite different for a few key reasons. The original data used (eventually) by Cambridge Analytica was collected via a Facebook app, for which the developer had to sign an agreement with Facebook promising not to use the data in this way. And he used Facebook's own tools to extract the data. That is, it was Facebook's API making this information available.

This is very, very different from what the NYU researchers are doing. They're asking users to install a browser extension (i.e., outside the Facebook ecosystem, not using the Facebook API, and not signing any kind of agreement with Facebook) in order to extract data from their own computers (again, not via the Facebook API or the Facebook servers). And while Facebook may not like it, it's problematic that the company claims it can step in and declare that end users can't -- by their own choice -- install an app they want on their own computers to capture the data that is on their own computers.

So, I think there's a pretty strong argument that me installing software on my computer to collect data in my own browser is not "unauthorized access" in any sense of the term, no matter what Facebook or the FTC might think. In fact, I'd argue that my collecting data out of my own browser and willingly handing it over to someone I choose is, frankly, none of Facebook's business -- and suggesting that this is a form of "unauthorized access" (which has a specific meaning under the CFAA) is crazy.

Indeed, this gets back to the infamous (and dangerous) lawsuit between Facebook and Power Ventures, in which Facebook won a CFAA claim, in part because it had sent a cease-and-desist. I still think that case was wrongly decided; its ramifications are huge and are among the reasons why Facebook remains so dominant today. But, in that case, again, it involved a third party offering a service to end users who willingly chose to use the software in order to access their Facebook data and interact with it in a different way (in Power's case, via a universal social media dashboard).

Unfortunately, I fear that the similarities to the Power case make this dangerous for the NYU researchers -- though they make a much more sympathetic defendant than a for-profit startup. And, the facts here are somewhat distinguishable from the Power case (the app here is simply collecting data in a user's browser, rather than doing an independent login and sucking out data).

We should be able to install whatever software we want -- even if Facebook doesn't like it -- to access data on our own computers. Facebook should have no say in that, and shouldn't be able to reach into our computers with a legal threat to block it. If the software were somehow damaging Facebook's technology, I could understand the issue. But this is just about reading the information Facebook sends to your own computer, then collecting and sharing that information, all done entirely outside the Facebook ecosystem. Facebook should not only drop the threat; it should actually support this kind of important research.


Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: academic research, ad observatory, consent decree, political advertising, privacy, research, threats
Companies: facebook, nyu

Reader Comments

Anonymous Coward, 27 Oct 2020 @ 9:19am


Number 2 is a bit of a shaky statement. Unless FB is pushing friends' data into its ad service and that info is being pushed into the ad placements... I doubt this is a thing. It sounds more like FUD than fact.

You do have a lot of friend info in your feed -- things they posted, their online status, and the "People You May Know" section, which shows how other people are connected to people who are connected to you... so yes, FB has a lot of info about your friends that it surfaces to you while trying to suck you in further.

With that said, these researchers could easily allay fears by letting others see their code and verify that it only targets the ad section of the FB page -- but then again, FB likes to put ads everywhere, including in the middle of your feed, so it may not be that cut and dried.
