Yes, Actually, There Is A Lot Of Good News In Zuckerberg's New Plans For Facebook

from the surprisingly-interesting dept

On Wednesday morning, there was a flurry of discussion and articles concerning Mark Zuckerberg’s giant new post laying out a new strategy for Facebook. Having first read some of the commentary (nearly all of it somewhere on the spectrum from “critical” to “mocking”), I expected the actual post to have lots of problems, or to just be pointlessly vague, like too much of Facebook’s public communications over the past few years. However, having read through the whole thing, it’s actually a lot more thoughtful, nuanced, and detailed than I expected, and there’s a lot in there that we should be encouraging rather than mocking. It still raises some questions, but rather than the kneejerk “but Facebook is pure evil” response some like to default to, I thought it might be useful to look more closely at the different aspects of what Zuckerberg is saying: where it might be really good, where it might be problematic, and where more info is necessary.

At the very least, rather than simply attacking absolutely everything that Facebook says, there is value in encouraging steps in the right direction.

Messaging Integration:

A key part of the announcement is one that was first reported by the NY Times back in January: a plan to integrate the messaging features on Facebook’s three key platforms: Facebook Messenger, Whatsapp, and Instagram. From Zuck’s post:

People want to be able to choose which service they use to communicate with people. However, today if you want to message people on Facebook you have to use Messenger, on Instagram you have to use Direct, and on WhatsApp you have to use WhatsApp. We want to give people a choice so they can reach their friends across these networks from whichever app they prefer.

As soon as this was hinted at back in January, the cynical response was that it was really just an effort to make a possible breakup of the company harder. There is a (potentially reasonable!) school of thought around antitrust that even if it makes no sense to break up the big internet platforms’ key business areas, it could make sense to break off the various other services they’ve acquired. Indeed, a key line that many people noticed in last week’s announcement of a new FTC taskforce on competition was that it would conduct “reviews of consummated technology mergers,” which many people took to mean looking back at things like whether or not it was appropriate to allow Facebook to have purchased Instagram and WhatsApp in the first place.

While it may be true that such integration could make any possible breakup more difficult, it is also actually a good thing. Or, rather, it could be, depending on how Facebook does it. It’s not great that we now have so many communication data silos. In the old days, we had a nice open standard in email: everyone could and did put their own spin on it, and users could choose different flavors of email, but they could all communicate with each other. The same was true on mobile with SMS — though, in that case, carriers got greedy and started charging way too much for SMS, which helped create the rise of all of these third-party messaging apps. A more integrated approach would be great. And it’s good that Facebook at least seems to be thinking about expanding beyond its own platforms, saying it could connect to SMS as well:

We plan to start by making it possible for you to send messages to your contacts using any of our services, and then to extend that interoperability to SMS too. Of course, this would be opt-in and you will be able to keep your accounts separate if you’d like.

That’s a good start, but I don’t think it goes far enough. If Zuckerberg/Facebook were truly committed to interoperability across communications platforms, why not actually create a new open standard for it, rather than sticking with a proprietary approach? And, yes, I’m already familiar with the relevant xkcd:


But that doesn’t take away from the point. If anyone could create a new standard that might be more widely adopted, it would be Facebook. And, it might help get at another point in Zuckerberg’s post…
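To make the “new standard” idea concrete, here is a toy sketch of the kind of common envelope such a standard might define. Everything in it is hypothetical (the field names, addresses, and routing scheme are invented for illustration, not drawn from any real Facebook or SMS specification); the point is simply that once the wire format is shared, any conforming client can parse and route a message, no matter which app produced it:

```python
import base64
import json
import time

def make_envelope(sender, recipient, ciphertext: bytes) -> str:
    """Serialize a message into a hypothetical shared wire format.

    The body is assumed to be already end-to-end encrypted, so the
    envelope only carries routing information plus opaque bytes.
    """
    return json.dumps({
        "version": 1,
        "from": sender,                      # e.g. "alice@messenger.example"
        "to": recipient,                     # could be "+15550100" for an SMS bridge
        "sent_at": int(time.time()),
        "body": base64.b64encode(ciphertext).decode("ascii"),
    })

# Any client that implements the (hypothetical) spec can route this,
# whether it lives inside Messenger, WhatsApp, or a third-party app.
envelope = make_envelope("alice@messenger.example", "bob@whatsapp.example",
                         b"opaque already-encrypted payload")
parsed = json.loads(envelope)
print(parsed["to"])   # -> bob@whatsapp.example
```

The design choice that matters here isn't JSON versus anything else — it's that the format is documented and owned by no single vendor, which is what made email and SMS interoperable in the first place.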

Spreading End-to-End Encryption:

Another key point in Zuckerberg’s announcement that we should absolutely celebrate is that full end-to-end encryption is a key part of his plan:

There is a growing awareness that the more entities that have access to your data, the more vulnerabilities there are for someone to misuse it or for a cyber attack to expose it. There is also a growing concern among some that technology may be centralizing power in the hands of governments and companies like ours. And some people worry that our services could access their messages and use them for advertising or in other ways they don’t expect.

End-to-end encryption is an important tool in developing a privacy-focused social network. Encryption is decentralizing — it limits services like ours from seeing the content flowing through them and makes it much harder for anyone else to access your information. This is why encryption is an increasingly important part of our online lives, from banking to healthcare services. It’s also why we built end-to-end encryption into WhatsApp after we acquired it.

I saw some people claiming that Mark was really just talking about encryption of data in transit, but it’s clear that’s not true at all. He’s talking about full end-to-end encryption, where even Facebook won’t have access to your messages. This should be widely supported and celebrated, because we need more use of real encryption. Facebook adopting it even more seriously than it already has (in WhatsApp and parts of Facebook Messenger) would be good. It would, again, be even better if this drove a new standard that anyone could tap into and that wasn’t just owned by Facebook. But it’s a huge step in the right direction and absolutely should be encouraged.
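The distinction between transit encryption and end-to-end encryption comes down to who holds the keys. Here is a deliberately toy illustration of the end-to-end idea using classic Diffie-Hellman key agreement: the two endpoints derive a shared key from values exchanged through the server, but the server never learns the key, so it can only relay ciphertext. This is NOT how WhatsApp actually works (it uses the Signal protocol, with X25519 key agreement and AES-based ciphers), and the tiny group and XOR cipher here are insecure by design; it only demonstrates the key-ownership principle:

```python
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; toy-sized group, NOT cryptographically safe
G = 3

# Each endpoint keeps a private exponent; only the public values A and B
# ever pass through the server.
a = secrets.randbelow(P - 2) + 2
b = secrets.randbelow(P - 2) + 2
A = pow(G, a, P)        # Alice -> server -> Bob
B = pow(G, b, P)        # Bob -> server -> Alice

def shared_key(my_secret: int, their_public: int) -> bytes:
    # Both sides compute G^(a*b) mod P, then hash it into a 32-byte key.
    s = pow(their_public, my_secret, P)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def xor_cipher(key: bytes, data: bytes, nonce: bytes) -> bytes:
    # One SHA-256 block as keystream: enough for a demo message under 32 bytes.
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(x ^ y for x, y in zip(data, stream))

nonce = secrets.token_bytes(12)
ct = xor_cipher(shared_key(a, B), b"meet at noon", nonce)   # all the server sees
pt = xor_cipher(shared_key(b, A), ct, nonce)                # only Bob recovers this
assert pt == b"meet at noon"
```

With transit-only encryption (plain TLS to the server), the server decrypts and re-encrypts in the middle and can read everything; in the end-to-end model above, the relay handles `ct` but never the key.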

There are some who are still arguing that this is a “trick” because it’s possible (likely?) that Facebook will retain the metadata of who is communicating with whom, but that is part of the fundamental nature of most messaging systems these days — and it is true for email providers as well. It feels a bit ridiculous to hold Facebook to a standard far beyond what any other messaging service meets. And, frankly, encrypting content is a really big deal and should be supported. Attacking Facebook even when it does the right thing seems entirely counterproductive.
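To be clear about what the metadata critique refers to: even with the content fully encrypted, a relay still has to know who to deliver to. A hypothetical log entry (this schema is invented for illustration; it is not a real Facebook data structure) shows what remains visible:

```python
# What an end-to-end-encrypted relay can still observe: the envelope,
# never the content. Hypothetical schema, for illustration only.
relay_log_entry = {
    "sender": "alice@example.com",
    "recipient": "bob@example.com",
    "timestamp": "2019-03-08T14:02:11Z",
    "ciphertext_bytes": 1184,   # size only -- the payload itself is opaque
}

print(sorted(relay_log_entry))   # note: no plaintext field exists at all
```

This is the same position an email provider is in with respect to mail it merely relays: sender, recipient, timing, and size, but not (in the end-to-end case) the words.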

A second area of concern is the headfake nod Zuckerberg makes to the idea that encryption could hinder law enforcement. As we’ve been discussing for years, there’s little evidence to actually support that. Law enforcement has access to more data about people than ever before in history, and the examples of encryption truly hindering a legal investigation are very, very limited. The reality is that encryption is what makes people safer: safer from snooping by stalkers, abusive exes, authoritarian governments, and more. Zuckerberg tries to walk a fine line between all of this, talking about “tradeoffs,” but I wish he’d come out more strongly and point out that there is no real tradeoff here. True end-to-end encryption increases both privacy and security. Here’s how he frames it:

In the last year, I’ve spoken with dissidents who’ve told me encryption is the reason they are free, or even alive. Governments often make unlawful demands for data, and while we push back and fight these requests in court, there’s always a risk we’ll lose a case — and if the information isn’t encrypted we’d either have to turn over the data or risk our employees being arrested if we failed to comply. This may seem extreme, but we’ve had a case where one of our employees was actually jailed for not providing access to someone’s private information even though we couldn’t access it since it was encrypted.

At the same time, there are real safety concerns to address before we can implement end-to-end encryption across all of our messaging services. Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can. We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can’t see the content of the messages, and we will continue to invest in this work. But we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves.

Reducing Permanence:

I have less to say about Zuckerberg’s belief that content should, in certain cases, expire over time. I get the arguments for it, and recognize that some communications are worth preserving forever while others, by their nature, feel like they should be ephemeral. But, to some extent, that’s a market question of what users actually want (and we’re already seeing some of that demand in the initial success of Snapchat and, of course, Facebook’s and Instagram’s clones of Snapchat with both platforms’ “Stories” feature). There are good arguments for ephemeral messaging and good arguments for permanent content, and having more options is always a good thing.

All sorts of communications have historically been ephemeral, and some of that allows people to be more true to themselves (and perhaps a bit less “performative”). At the same time, deleting digital messages still feels odd. Still, if people have more choices, and can opt in to the appropriate setup for however they want to share content, we can let the market decide, and that’s good. Though, again, it might be nice if Facebook were willing to step things up and provide a protocol for this kind of thing, rather than being the silo in charge of everything.
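Mechanically, expiring content is one of the simpler features to sketch: each message carries a time-to-live, and reads filter out anything past its deadline. A minimal toy store (invented for illustration, not any platform's actual design, which would also need server-side deletion and client cooperation to mean anything) might look like:

```python
import time

class EphemeralStore:
    """Toy message store: each post carries an expiry, and reads drop
    anything whose time-to-live has elapsed."""

    def __init__(self):
        self._messages = []   # list of (text, absolute expiry time)

    def post(self, text: str, ttl_seconds: float) -> None:
        self._messages.append((text, time.monotonic() + ttl_seconds))

    def read(self) -> list:
        now = time.monotonic()
        # Prune expired messages as a side effect of reading.
        self._messages = [(t, exp) for t, exp in self._messages if exp > now]
        return [t for t, _ in self._messages]

store = EphemeralStore()
store.post("lasts a day", ttl_seconds=86400)
store.post("already gone", ttl_seconds=0)
time.sleep(0.01)                 # let the zero-TTL message expire
print(store.read())              # -> ['lasts a day']
```

Of course, the hard part in practice isn't the data structure — it's that every client holding a copy has to honor the expiry, which is exactly why a shared protocol (rather than one vendor's app behavior) would matter here.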

Concerns and Opportunities:

I did want to address some of the concerns I’ve seen some people raise about this announcement, but since I think some of them could be turned into opportunities, I’m going to combine both things into this one section.

Antitrust: I discussed this one a bit above. Integrating all of its own systems raises more competitive questions around antitrust. I do think making sure those systems are end-to-end encrypted and not feeding the Facebook data-collection beast is actually a good sign, but if Facebook wanted to do the right thing (again, as mentioned), it should focus on building an open standard that others could adopt — making it not just Facebook’s own proprietary silo, but an open standard, a la the “protocols, not platforms” concept I’ve been focused on lately (more on that soon).

Harassment: A few people have pointed out that there have been a lot of recent controversies concerning private groups on WhatsApp leading to mob violence in places like India, or spreading disinformation in Brazil. And, they argue, nothing in these new plans seems designed to solve that. That may or may not actually be true. First of all, as we’ve pointed out in the past, spreading disinformation is more of a human problem than a technology problem, and it’s a bit bizarre to blame the technology for the disinformation. Second, as Kevin Bankston rightly points out, disinformation is spread over email all the time, and no one demands that Google fix it. And, indeed, the very same people would likely have a shit fit if Google suddenly jumped in and started trying to block your crazy aunt’s latest viral nonsense email forward.

But, perhaps more to the point of dealing with those kinds of issues on such a platform, if Facebook were to move to more of a “protocols” approach to messaging, rather than controlling everything, it might then be able to open up the system so that end users themselves could make use of third-party apps or filters to help them decide whether messages were legit, rather than leaving it entirely up to Facebook. Still, I’d leave this as an open question. As Professor Kate Starbird noted on Twitter, people appear to be more susceptible to misinformation and disinformation from smaller social circles, and the more that’s hidden away, the more difficult it is for people to identify what’s happening. Again, that may be true, but it’s also already true of small social circles meeting up at a coffee shop or bar or restaurant.

Business Model: Wired wrote up a big piece suggesting that the “crucial” thing missing from this announcement was the business model. Obviously, right now, Facebook’s business model involves sucking up a bunch of data about its users in order to push highly targeted ads at them. With fully end-to-end encrypted messaging, Facebook doesn’t get access to that content and can’t use it for targeted advertising.

In a later interview that Wired’s editor-in-chief Nick Thompson did with Mark Zuckerberg about all of this, Zuckerberg pointed out that they already don’t use message content for advertising:

One is that we aren’t really using the content of messages to target ads today anyway. So we weren’t planning on doing that. So it’s not like building a system and making it end-to-end encrypted and now we can’t see the messages is really going to hurt ads that much because of the way we were already thinking about that.

But I do think there’s a separate story, which also broke in the NY Times last week and also drew a lot of snickering and mockery, that could be very much tied to this: Facebook is experimenting with cryptocurrency. I’m actually working on another post about why I think that’s significant in ways most people are missing, and part of it is in enabling new privacy-supportive business models. But you’ll need to stay tuned for that post…

On the whole, lots of people are skeptical about this, for good reasons. Zuckerberg himself notes that the company doesn’t have a good track record on privacy:

I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform — because frankly we don’t currently have a strong reputation for building privacy protective services, and we’ve historically focused on tools for more open sharing.

But the line right after that resonates with me as well:

But we’ve repeatedly shown that we can evolve to build the services that people really want, including in private messaging and stories.

Many people forget this, but Facebook has completely shifted its focus a few times, including switching from a more privacy-focused service to a more open one. Indeed, almost exactly a decade ago, I (incorrectly!) predicted that Facebook would have trouble becoming a more open content-sharing service, because its DNA was about being a closed system for more private interactions with friends. I was wrong — though part of that was Facebook being a bit careless with how it treated content that had originally been more restricted, which led to some of the problems it’s facing today. But Facebook’s ability to completely shift like that is notable. Not many companies do that successfully, and I’d argue that Facebook has actually done it a few times.

I had recently discussed on our podcast the question of whether or not Zuckerberg understands the Innovator’s Dilemma better than most other big tech CEOs, and this message again has me thinking that, at the very least, he’s much more open to it than most CEOs. Rather than just supporting a core product into oblivion, he’s legitimately thinking about the big shifts that could change the entire company and trying to get ahead of them. That doesn’t mean he’ll succeed, of course. Indeed, there are all sorts of ways this could flop. And, certainly, Facebook has made a whole bunch of missteps on a variety of fronts over the years, so they shouldn’t get any benefit of the doubt.

But I will admit that Zuckerberg’s detailed writeup was a lot more thorough, thoughtful and careful than I had expected given all the snark I saw about it. And given my personal preference that we move to a world of open protocols, over closed platforms, I found it encouraging that even Facebook might make a move in that direction, even if it doesn’t go nearly as far as I would eventually like to see. Still, considering that the alternative was a more locked up platform with Facebook even more in control and with more access to data, a step in this direction should be encouraged, rather than outright mocked.

Companies: facebook, instagram, whatsapp


Comments on “Yes, Actually, There Is A Lot Of Good News In Zuckerberg's New Plans For Facebook”

Anonymous Coward says:

So it's more carefully crafted lies than usual...

Masnick falls for it, but even NYT can’t!

Right now on Drudge (both same link)


Antitrust Division making life EASIER for monopolists…

So I’ll just go on Facebook’s history. This known spy center / monopolist / totally unethical mega-corporation has to prove overwhelming reasons why SHOULDN’T / MUST NOT be broken up into tiny pieces.

Federico (profile) says:

Messaging standard

A standard to chat with Facebook users was already available: XMPP worked until recently. The end-to-end encryption is provided by OMEMO on some clients.

So it’s pretty easy to increase the people’s confidence that Facebook truly believes what it says: start by bringing back XMPP and contribute some patch to the most popular apps so that they work well with it. And yes, Google should do the same with its services.

Mason Wheeler (profile) says:

First of all, as we’ve pointed out in the past, spreading disinformation is more of a human problem, rather than a technology problem, and it’s a bit bizarre to blame the technology for the disinformation.

Not necessarily. While you’re technically correct here, the part you’re not addressing is technology’s role as a force multiplier: it takes human problems and makes them much, much bigger. So it’s definitely reasonable to look into what can be done to mitigate that side of the problem.

Anonymous Coward says:

Re: Re:

This is especially true given how the disinformation spread via Facebook, WhatsApp, and Instagram affects people that don’t use Facebook. I don’t use anything of Facebook’s, but I have to contend with events that occur because of the conspiracy theories and anti-vaccine quackery that gets spread on it. People have to deal with how social media’s megaphone powers enabled a fascist to rise to power in Brazil and made it easier for ethnic cleansings to happen in a nation where political and social divisions were already at a boiling point even if they don’t have an account on those social media platforms. Facebook and others can do better and have to do better.

Anonymous Anonymous Coward (profile) says:

Re: Re: Re:

"Facebook and others can do better and have to do better."

The problem then is that Facebook and ‘others’ become the defacto arbiters of what is right or wrong. Personally I don’t want them to be the arbiters of what is right or wrong.

So far as the conspiracy theories and anti-vaccine quackery, I would suggest that members of Facebook and ‘other’ step up and communicate. If there are more of them than you, then one needs to speak longer, and more forthrightly, and with more provable, quantifiable, qualitative facts, with links to sources. They, in the end, will not be able to keep up with that. Yes it is a lot of work, and not being a member of Facebook or ‘others’ I don’t know how the systems operate, but I do know that while you may not change the minds of the incoherent, the message of supported honesty does get through.

We tolerate the incoherent to some degree here on Techdirt. We let them speak, but have the option to flag their posts, which minimizes them though still readable (well, depending on how coherent or incoherent they are at any given time) and also have the option to feed their zealotry or not, and too often we feed rather than not.

That would leave the argument you want to have with Facebook and ‘others’ is how to come up with a system that, with some similarity to Techdirt, allows all to speak, but gives options that to, let us say downgrade, those posts that cannot, or will not produce any verifiable facts to support their conclusions. Because he said, is not sufficient without ‘his’ credentials and some accredited work in the field, or something similar.

Thad (profile) says:

Re: Re: Re: Re:

So far as the conspiracy theories and anti-vaccine quackery, I would suggest that members of Facebook and ‘other’ step up and communicate. If there are more of them than you, then one needs to speak longer, and more forthrightly, and with more provable, quantifiable, qualitative facts, with links to sources. They, in the end, will not be able to keep up with that. Yes it is a lot of work, and not being a member of Facebook or ‘others’ I don’t know how the systems operate, but I do know that while you may not change the minds of the incoherent, the message of supported honesty does get through.

Assuming for the moment that this is true (and I think you will find, if you look around some comments section here, that "people who believe in nonsense will eventually listen to reason if you just speak to them rationally for long enough" is a false premise) — how many people are dying of preventable diseases in the time it takes to convince anti-vaxxers that they’re wrong?

nasch (profile) says:

Re: Re: Re: Re:

allows all to speak, but gives options that to, let us say downgrade, those posts that cannot, or will not produce any verifiable facts to support their conclusions.

Maybe not a bad idea, but don’t fool yourself into thinking that’s all a downgrade option will be used for. Inevitably, it will be an "I don’t like this post" button.

Mike Masnick (profile) says:

Re: Re:

Ah yes, leave it to Masnick to try and defend the indisputably slimiest of the slime, Facebook. lol

We have criticized Facebook quite frequently and will continue to do so (frequently, I imagine). Is your brain so clouded and so "black and white" that you have no room for nuance, and no ability to support a company doing the right thing, even if it’s done many bad things before?

How is "slimy" to point out a possible path for Facebook to not be so slimy?

I get it. You’re cool. You’re hip. Facebook is evil and you don’t like me, so it’s fun to take a cheap shot with NO analysis, NO thought, NO actual argument, but just attack me.
