Good For The World, But Not Good For Us: The Really Damning Bits Of The Facebook Revelations

from the anti-competitive,-anti-consumer-issues dept

As expected, UK Parliament Member Damian Collins released a bunch of documents that he had previously seized under questionable circumstances. While he had revealed some details in a blatantly misleading way during the public hearing he held, he’s now released a bunch more. Collins tees up the 250-page release with a few of his own notes, which also tend to exaggerate and misrepresent what’s in the docs, and many people are running with a few of those misrepresentations.

However, that doesn’t mean that all of these documents have been misrepresented. Indeed, there are multiple things in here that look pretty bad for Facebook, and could be very damaging for it on questions around the privacy protections it had promised the FTC it would put in place, as well as in any potential antitrust fight. It’s not hard to understand how Facebook arrived at the various decisions it made, but the “move fast and break things” attitude also seems to involve the potential of breaking both the law and the company’s own promises to its users. And that’s bad.

First, the things that really aren’t that big a deal: a lot of the reporting has focused on the idea that Facebook would give greater access to data to partners who signed up to give Facebook money via its advertising or other platforms. There doesn’t seem to be much of a bombshell there. Lots of companies that have APIs charge for access. This is kind of a standard business model question, and some of the emails in the data dump show what actually appears to be a pretty thoughtful discussion of various business model options and their tradeoffs. This was a company that recognized it had valuable information and was trying to figure out the best way to monetize it. There isn’t much of a scandal there, though some people seem to think there is. Perhaps you could argue that, by allowing some third parties to have greater access, Facebook has a cavalier attitude towards that data, since it’s willing to trade access to it for money, but there’s no evidence presented that this data was used in an abusive way (indeed, by putting a “price” on the access, Facebook likely limited the access to companies who had every reason to not abuse the data).

Similarly, there is a lot of discussion about the API change, which Facebook implemented to actually start to limit how much data app developers had access to. And the documentation here shows that part of the motivation to do this was to (rightfully) improve user trust in Facebook. It’s difficult to see how that’s a scandal. In addition, some of the discussions involve moving specific whitelisted partners to a special version of the API that gives them access to more data… but in a way that hashes the data, providing better privacy and security while still keeping it useful. Again, this approach seems to actually be beneficial to end users, rather than harmful, so the attempts to attack it seem misplaced, and yet they take up the vast majority of the 250 pages.
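
To make the hashing point concrete: the documents don’t spell out Facebook’s exact scheme, so the following is only an illustrative sketch of the usual approach. You normalize an identifier (like a phone number), then share only a one-way digest of it, so a partner can match records it already holds against the hashed list without ever receiving raw contact data. The class and method names here are invented for the example.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashedMatching {

    // Normalize a phone number to digits only, then return its SHA-256
    // digest as hex. Both sides hash their own lists the same way and
    // compare digests, so neither has to hand over raw numbers.
    // (Illustrative only; the released documents don't specify the
    // actual scheme Facebook used.)
    static String hashIdentifier(String phoneNumber) throws NoSuchAlgorithmException {
        String normalized = phoneNumber.replaceAll("\\D", "");
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(normalized.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // "(555) 867-5309" and "555-867-5309" normalize to the same digits,
        // so they produce the same digest and therefore match.
        System.out.println(hashIdentifier("(555) 867-5309"));
        System.out.println(hashIdentifier("555-867-5309"));
    }
}
```

One caveat worth keeping in mind: identifiers with a small keyspace, like phone numbers, can be recovered from their hashes by simply hashing every possible number and comparing, so “hashed” here is a privacy improvement rather than a guarantee.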

The bigger issues involve specific actions that certainly appear to at least raise antitrust questions. That includes cutting off apps that recreate Facebook’s own features, or that are suddenly getting a lot of traction (and using the access Facebook had to users’ phones to figure out which apps were getting lots of traction). While none of this definitively violates antitrust law, it’s certainly the kind of evidence that any antitrust investigator would likely explore, looking to see whether Facebook held a dominant position at the time of those actions, and whether those actions were designed to deliberately harm competitors, rather than serving any useful purpose for end users. At least from the partial details released in the documents, the focus on competitors does seem to be a driving force. That could create a pretty big antitrust headache for Facebook.

Of course, the details on this… are still a bit vague from the released documents. The release includes a number of charts from Onavo showing the popularity of various apps.

Onavo was a data analytics company that Facebook bought in 2013 for over $100 million. Last year, the Wall Street Journal broke the story that Facebook was using Onavo to understand how well competing apps were doing, and potentially using that data to target acquisitions… or to try to diminish those competing apps’ access. The potential “smoking gun” evidence is buried in these files: there’s a short email from the day that Twitter launched Vine, its app for 6-second videos, in which Facebook decides to cut off Twitter’s access to its friend API in response to this move, and Zuckerberg himself says “Yup, go for it.”

Now… it’s entirely possible that there’s more to this than is shown in the documents. But at least on its face, it seems like the kind of thing that deserves more scrutiny. If Facebook truly shut down access to the API because it feared competition from Vine… that is certainly the kind of thing that will raise eyebrows among antitrust folks. If there were more reasons for cutting off Vine, those should come out. But if the only reason was “ooh, that’s a potential competitor to our own service,” and if Facebook was the dominant channel for distribution or access at the time, it could be a real issue.

Separately, if the name Onavo sounds familiar to you, that might be because earlier this year, Facebook launched what it called a VPN under the Onavo brand… and there was reasonable anger over it once people realized (as per the above discussion) that Onavo was really a form of analytics spyware that tracked which applications you were using and how. It was so bad that Apple pulled it from its App Store.

The other big thing that comes out in the released documents is all the way at the end, when Facebook is getting ready to roll out a Facebook app update on Android that will snoop on your SMS and call logs, and use that information both to try to get you to add more friends and to determine what kinds of content to promote to you. Facebook clearly recognized that this could be a PR nightmare if it got out, and the team worried that Android would seek permission from users, which would alert them to this kind of snooping.

That is bad. That’s Facebook knowing that its latest snooping move will look bad and trying to figure out a way to sneak it through. Later on, the team is relieved to realize, after testing, that they can roll this out without alerting users with a permission dialog screen.
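
A bit of context on why no dialog might appear at all: on the Android versions of that era, runtime permissions were granted by group, so an app that already held one permission in a group could be granted a related one without a fresh prompt, and apps targeting older SDK levels kept install-time permission behavior entirely. A minimal, hypothetical sketch of the kind of client-side gate the thread implies, turning the feature on only where access is already silently available, might look like this. Only the androidx permission-check API is real; the class and method are invented for illustration, and this is not Facebook’s code.

```java
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import androidx.core.content.ContextCompat;

// Hypothetical illustration, not Facebook's code. The only real API here
// is the androidx permission check; the gating logic is invented.
public class CallLogUpload {

    // Enable call-log and SMS collection only when those permissions are
    // already granted, so the feature never has to surface a permission
    // dialog to the user.
    static boolean canCollectSilently(Context context) {
        return ContextCompat.checkSelfPermission(
                        context, Manifest.permission.READ_CALL_LOG)
                        == PackageManager.PERMISSION_GRANTED
                && ContextCompat.checkSelfPermission(
                        context, Manifest.permission.READ_SMS)
                        == PackageManager.PERMISSION_GRANTED;
    }
}
```

A gate like that captures the dynamic the emails describe: the collection only switches on along the code path where the user never sees a dialog.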

As reporter Kashmir Hill points out, it’s notable that this “phew, we don’t really have to alert users to our sketchy plan to get access to their logs” came from Yul Kwon, who was designated as Facebook’s “privacy sherpa” and put in charge of making sure that Facebook didn’t do anything creepy with user data. From an article that Hill wrote back in 2015:

The face of the new, privacy-conscious Facebook is Yul Kwon, a Yale Law grad who heads the team responsible for ensuring that every new product, feature, proposed study and code change gets scrutinized for privacy problems. His job is to try to make sure that Facebook’s 9,199 employees and the people they partner with don’t set off any privacy dynamite. Facebook employees refer to his group as the XFN team, which stands for “cross-functional,” because its job is to ensure that anyone at Facebook who might spot a problem with a new app, from the PR team to the lawyers to the security guys, has a chance to raise their concerns before that app gets on your phone. “We refer to ourselves as the privacy sherpas,” says Kwon. Instead of helping Facebook employees scale Everest safely, Kwon’s team tries to guide them safely past the potential peril of pissing off users.

And yet, here, he seems to be guiding them past those perils by helping the team hide what’s really going on.

This is also doubly notable because Kashmir Hill has been perhaps the most dogged reporter on the creepy lengths to which Facebook’s “People You May Know” feature goes. Facebook has a history of giving Hill totally conflicting information about how that feature worked, and these documents reveal, at a minimum, the desire to secretly slurp up your call and SMS records in order to find more “people you may know” (shown as PYMK in the documents).

One final note on all of this. I recently pointed out that Silicon Valley really should stop treating fundamental structural issues as political issues, in which companies just focus on what’s best for the short-term bottom line, rather than on the larger goal of doing what’s right overall. A long email from Mark Zuckerberg included in the documents (starting on page 49) muses thoughtfully on various business model ideas for the platform, and honestly the entire email is worth reading, because it really does carefully weigh the various options in front of the company. In it, Zuckerberg discusses how it’s important to enable people to share what they want, and how enabling other apps to help users do that is a good thing, but then he says:

The answer I came to is that we’re trying to enable people to share everything they want, and to do it on Facebook. Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it. However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network. So ultimately, I think the purpose of platform, even the read side, is to increase sharing back into Facebook.

I should note that in Damian Collins’ summary of this, he carefully cuts out some of the text of that email to frame it in a manner that makes it look worse, but the “that may be good for the world but it’s not good for us” line really stands out to me. That’s exactly the kind of political decision I was talking about in that earlier post. Taking the short-term view of “do what’s good for us, rather than what’s good for the world” may be typical, and even understandable, in business, but it’s the root of many, many long-term, structural problems, not just for Facebook, but for tons of other companies as well.

I wish that we could move to a world where companies finally understood that “doing good for the world” leads to a situation in which the long term result is also “good for us,” rather than focusing on the “good for us” at the expense of “good for the world.”

Companies: facebook


Comments on “Good For The World, But Not Good For Us: The Really Damning Bits Of The Facebook Revelations”

16 Comments
Mason Wheeler (profile) says:

indeed, by putting a "price" on the access, Facebook likely limited the access to companies who had every reason to not abuse the data

Umm…?

I suppose that might be true "from a certain point of view," as Obi-Wan put it. That point of view being Facebook’s definition of "abuse." It’s worth keeping in mind, though, that the users whose data is being used are likely to have a very different idea as to what constitutes abuse, and the fact that people are paying Facebook good money for it does nothing to shield them from abusive behavior.

Graham Cobb (profile) says:

It's called "stealing"

This was a company that recognized it had valuable information and was trying to figure out the best way to monetize it. There isn’t much of a scandal there, though some people seem to think there is.

The "scandal" is because Facebook didn’t own the information. Just because it has the information doesn’t mean it has any right to monetize it. If it had been figuring out how it could offer to help the users to monetize their information (and take a cut) that would have been fine — and we would have all got some insight into what our information was worth and be able to decide whether we found the offer acceptable.

Cdaragorn (profile) says:

Re: It's called "stealing"

All you’ve done here is demonstrate the fundamental flaw in the reasoning behind those who want to find a scandal in all of this.

Sure, Facebook doesn’t “own” that information. Neither does the person those facts are about, though. No one “owns” it. It’s just a series of facts about a person. You don’t get to own the factual information about when you were born or what you like and don’t like or who your friends are. Those are facts, not IP.

What Facebook does have is access to those facts, because that person chose to share them with it. Other companies don’t have that and would like to. Facebook has every right to choose to share access to any information it has gathered under a monetary agreement, as long as it’s following privacy laws.

Eldakka (profile) says:

Re: It's called "stealing"

The "scandal" is because Facebook didn’t own the information. Just because it has the information doesn’t mean it has any right to monetize it.

That is not the scandal.

It’s a credibility thing. While it is perfectly reasonable for a commercial organisation to look into ways to monetise their assets, the issue is that FB has publicly, and before various government committees, denied ever having even considered doing so.

It is the lying about doing so that is the issue, not that they did it; i.e., the cover-up is the problem.

It goes to the credibility of the organisation. They are lying about having done perfectly legal and reasonable things, things they didn’t have to lie about. It would have been perfectly fine to have said "we considered it, but then rejected doing it." But they didn’t say that; they said they never even considered it.

Graham Cobb (profile) says:

That may be the approach in your country — I am happy to believe you. But here in Europe the approach is that we own data about ourselves. If Facebook, or anyone else, finds out or even creates some data about us, it cannot monetise it without our permission, because we own it, not them.

Just like if I take my car to be serviced, the garage can’t monetise my car while they are servicing it — I still own it.

nae such says:

Re: Re:

while it may be the law in europe, you owning your data is a legal fiction. the law in europe requires that they give you control, and if they follow the law, they do.

as to who owns the data, i suspect it is on the company’s servers and was compiled by company programs after users or websites gave it to them. they possess their servers and the data on them, and in most places that means legal ownership as well.

that we are, i think rightly, concerned with what they do with information on us, and make laws to regulate it, makes sense. unfortunately those responsible for making the law are very likely to bungle it. it’s not like these companies are going to behave well on their own. ownership, though, really depends on the laws and norms of a culture. there are some cultures where the government owns more than in america, and some less, i’m sure, and eminent domain can certainly show us what we really "own", and, as with all laws, can be abused and make a mockery of our rights.

Professor Ronny says:

I wish that we could move to a world where companies finally understood that "doing good for the world" leads to a situation in which the long term result is also "good for us," rather than focusing on the "good for us" at the expense of "good for the world."

Not really possible, because "good for us" really depends on who "us" is. What if Facebook decided to treat all user data as private? That would be good for the "us" that are Facebook users, but not the "us" that are shareholders of Facebook.

There is simply nothing that is good for everyone so part of making any decision is deciding which "us" will benefit from that decision.
