German Court Says Facebook's Real Names Policy Violates Users' Privacy

from the really? dept

With more and more people attacking online trolls, one common refrain is that we should do away with anonymity online. There’s a false belief that forcing everyone to use their “real name” online will somehow stop trolling and create better behavior. Of course, at the very same time, lots of people seem to be blaming online social media platforms for nefarious and trollish activity, including “fake news.” And Facebook is a prime target — which is a bit ironic, given that Facebook already has a “real names” policy. On Facebook you’re not allowed to use a pseudonym, but are expected to use your real name. And yet, trolling still takes place. Indeed, as we’ve written for the better part of a decade, the focus on attacking anonymity online is misplaced. We think that platforms like Facebook and Google that use a real names policy are making a mistake, because anonymous or pseudonymous speech is quite important in enabling people to speak freely on a variety of subjects. Separately, as studies have shown, forcing people to use real names doesn’t stop anti-social behavior.

All that is background for an interesting, and possibly surprising, ruling in a local German court, finding that Facebook’s real names policy violates local data protection rules. I can’t read the original ruling, since my understanding of German is quite limited — but it appears to have found that requiring real names is “a covert way” of obtaining someone’s name, which raises questions for privacy and data protection. The case was brought by the vzbv, the Federation of German Consumer Organizations. Facebook says it will appeal the ruling, so it’s hardly final.

On the flip side, the vzbv is also appealing a part of the ruling that it lost. It had also claimed that it was misleading for Facebook to say its service was “free,” since users “pay” with their “data.” The court didn’t find that convincing.

It will certainly be interesting to see where the courts come out on this after the appeals process runs its course. As stated above, I think the real names policy is silly and those insisting that it’s necessary are confused both about the importance of anonymity and the impact of real names on trollish behavior. However, I also think that should be a choice that Facebook gets to make on its own concerning how it runs its platform. So I’m troubled by the idea that a government can come in and tell a company that it can’t require a real name to use its service. If people don’t want to supply Facebook with their real name… don’t use Facebook.

But, honestly, what’s really perplexing is that this is all coming down at the same time that Germany — especially — has been trying to crack down on any “bad content” appearing on Facebook, demanding that Facebook wave a magic wand and stop all bad behavior from appearing on its site. I’d imagine that’s significantly harder if it has to allow people to use the site anonymously. This is not to say that anonymity leads to more “bad” content (see above), but it certainly can make moderating users much more difficult for a platform.

So, if you’re Facebook, at this point you have to wonder just what you have to do to keep the service running in Germany without upsetting officials. You can’t let anything bad happen on the platform, and you can’t get users’ names. It increasingly seems that Germany wants Facebook to just magically “only allow good stuff” no matter how impossible that might be.



Comments on “German Court Says Facebook's Real Names Policy Violates Users' Privacy”

65 Comments
PaulT (profile) says:

“On Facebook you’re not allowed to use a pseudonym, but are expected to use your real name”

By their T&Cs, perhaps, but I can vouch for the fact that at least a couple of people I know have been on there for years using a pseudonym, most recognisably so even to people who don’t know them, and nothing to do with their real name. Others add silly things as their middle name, etc., which could probably violate the policy as well, although they’re obviously not doing so to hide their identity in that case. Hell, I still have a few dogs and inanimate objects among my FB “friends” (largely from the days before pages existed) and their accounts don’t seem to have been cleaned up either.

It’s probably a handy policy for kicking people off if they’re found violating any other rules, but to say it doesn’t happen, and happen often, is not correct. It might be their policy, but it’s not held to with any kind of regularity in my experience.

“If people don’t want to supply Facebook with their real name… don’t use Facebook.”

Well… this. Even if it were actually necessary to use real names to sign up (and how FB can possibly confirm this with any accuracy is a different question), it’s not really a problem. If you don’t like the policy of a service, don’t use that service. If you voluntarily sign up using your real name, don’t be surprised if they then know your real name. Simple.

Queex (profile) says:

Re: Re:

A particular issue with this ‘softly softly’ approach to pseudonyms, with patchy enforcement of unreasonable terms, is that it opens up an avenue for targeted harassment.

It only takes one jackass with a grudge to pull the pin on a fake name report and cause great annoyance for someone.

Uneven enforcement is not the solution to bad policy.

I guess that wisdom should be beaten into every social media company manager.

PaulT (profile) says:

Re: Re: Re:

Well, the problem is that Facebook have no way of knowing a person’s real identity. As you mention elsewhere, even when someone supplies their proper documentation, Facebook might not be able to confirm it — and the vast majority of people will never have had to do that anyway. No website could operate effectively if every potential user had to prove their identity with paperwork.

I think the system is fine for any realistic scenario. They state they need real names, but don’t enforce it until there are real reported problems. At that point, they can use it as an extra rule to enforce against people engaging in harassment, even if what those people do directly isn’t specifically against the rules.

It would be impossible for them to check identities before people sign up, so what’s your solution? If the above is bad policy, what would good policy look like?

Anonymous Coward says:

“But, honestly, what’s really perplexing is that this is all coming down at the same time that Germany — especially — has been”

The German government and the vzbv are two very distinct entities, with different motives. It’s easy to conflate all Germans as “Germany”, just like it’s easy to say “all Americans are gun nuts”.

Rich Kulawiec (profile) says:

No magic wand necessary

“But, honestly, what’s really perplexing is that this is all coming down at the same time that Germany — especially — has been trying to crack down on any “bad content” appearing on Facebook, demanding that Facebook wave a magic wand and stop all bad behavior from appearing on its site.”

No magic wand is required: just minimally competent system and network administration skills. Maybe if Facebook’s technical staff weren’t ignorant newbies, maybe if they tried — and I know this is a shocking concept — to learn from the experience of others, maybe if they actually invested some effort in running their own platform, then they could take a big bite out of this problem. (And of course, as everyone equipped with sufficient experience knows, reducing the scope of the problem isn’t a solution, but it does make the remainder more tractable and thus amenable to techniques that might not scale to the size of the original problem. You don’t have to solve it all at once.)

Don’t tell me it can’t be done. Of COURSE it can be done, it’s really not all that hard. The problem isn’t the feasibility, it’s the lack of commitment.

The same situation exists at Twitter and other “social media” companies that are irresponsibly run. These people are making rudimentary mistakes that we *knew* were mistakes decades ago, mistakes that we made and wrote about so that others wouldn’t have to repeat them. But in their ignorance and their arrogance, they’re insisting on doing so anyway.

And now governments are starting to notice the fallout from this, and now they’re responding the way they usually do: with regulation. This *could* have been largely avoided had these companies architected, designed, built, and operated themselves using best practices — but they didn’t. And now they’re reaping what they’ve sown.

PaulT (profile) says:

Re: No magic wand necessary

I love the way you attack the people at these companies for making such “obvious” mistakes but never bother to mention what they might be.

I’m particularly interested in how “minimally competent system and network administration skills” translates into working out which content is “bad” and which is “good”. Especially since most of the things complained about are completely subjective and interpreted differently between different human beings.

Personally, I’ve worked in the industry for 20 years and I’ve never seen a network protocol that has such things included. What have I been missing?

PaulT (profile) says:

Re: Re: Re: No magic wand necessary

It’s not even just scaling in terms of users. The breadth of what people are doing on FB is not exactly like posting on a web forum either. When something like merely tagging the wrong person in a meme could be considered harassment, or linking to a “fake news” story is controversial, I’d love to hear the easy fixes for a competent sysadmin to undertake that he thinks are so easy.

Anonymous Coward says:

Re: Re: No magic wand necessary

Alright, I’ll give you one example: RFC 2142, which is just over 20 years old — therefore not exactly something new that people might not have seen yet. It’s entitled “Mailbox names for common services, roles, and functions”, and it spells out which ones you must have, which ones you should have, which ones are used for what, and so on.

Well-run operations have read this and implemented it, because they know that their peers (and others) will expect to use it to communicate with them. They’ve put in place the appropriate email plumbing to see that incoming traffic is sorted/prioritized/routed as necessary. That might mean forwarding it to a person, or to a group, or to a ticketing system — those are internal choices that are driven by structure and size, RFC 2142 doesn’t specify those. But whatever is behind those addresses, it should ensure that messages end up in front of clueful eyeballs that are in a position to read them, understand them, and do something about them.

This is something that everyone who’s even considering running an Internet operation should know and have in place before they launch. And there are quite a few well-run operations (of all descriptions and sizes) who have done exactly that. Really good operations save all the traffic and do post-mortem analysis on resolved problems in order to identify persistent issues, and then they task someone with figuring out why that’s happening and what can be done about it. The idea, of course, being to forestall the need for future reports by identifying the root cause and fixing it, thus reducing the need to keep dealing with the same thing over and over again.
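The routing choice described above (which role address ends up in front of which team or ticketing system) can be sketched in a few lines. This is an illustrative sketch only: RFC 2142 standardizes the addresses themselves, while the routing table and destination names below are made-up internal choices.

```python
# Hypothetical sketch: route mail sent to RFC 2142 role addresses
# (postmaster@, abuse@, hostmaster@, ...) to internal destinations.
# The ROUTES table is an internal choice, not part of the RFC.

ROUTES = {
    "postmaster": "mail-ops-ticketing",
    "abuse":      "trust-and-safety-queue",
    "hostmaster": "dns-team",
    "webmaster":  "web-ops-ticketing",
}

def route(address):
    """Return the internal destination for a role address."""
    local_part = address.split("@", 1)[0].lower()
    return ROUTES.get(local_part, "catch-all-review")

print(route("Abuse@example.com"))  # trust-and-safety-queue
```

Whether each destination is a person, a group alias, or a ticketing system is exactly the kind of structure-and-size decision the comment describes.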

This isn’t magic and it’s not hard: it’s Internet operations 101. And there exist all kinds of techniques for making it scale, for filtering traffic by priority, for correlating it with internal problem reporting systems, and so on. (I’ll give you one example of those: it’s not hard to construct a procmail filter that keys off the addresses of everyone who’s posted to NANOG in the past five years. If you get a problem report from one of those people directed to your postmaster or hostmaster or abuse or other address, there is a high probability that it’s accurate and very much worth paying prompt attention to. Same for dnsops. Same for outages. And so on. Hey, if one of the more senior people out there is doing you the favor of doing your job for you, the least you can do is pay attention.)

It’s not an accident that the operations that get this and practice it tend to be operations that don’t exhibit chronic, systemic problems. And conversely, it’s not an accident that organizations that neglect this tend to be ones that are cesspools of abuse and attacks.

Like I said at the outset: this is one example. But it happens to be a well-known best practice that everyone should be using, regardless of sector or scale. Any operation that’s not up to this is probably not good enough to be part of the Internet — because this is easy stuff, like I said, Internet operations 101. If they can’t handle even this entry-level task, then they’re going to flail miserably when faced with some of the tougher things.

Anonymous Coward says:

Re: Re: Re: No magic wand necessary

"Mailbox names for common services, roles, and functions", and it spells out which ones you must have, which ones you should have, which ones are used for what, and so on.

Those fail to be useful when an operation scales up to the size of YouTube, and the mailboxes are flooded by complaints from users.

You are failing to grasp the scaling issue, which also impacts things like using email for users to notify the site of problems. Note, that is not a spam problem, but rather that with a large user base, enough of your users will find things on the site that they don’t like and flood any mechanism that relies on manual filtering.

Rich Kulawiec (profile) says:

Re: Re: Re:2 No magic wand necessary

I get the scaling issue just fine, thank you. And there are two responses to that:

1. Yes, this works at YouTube scale if you do it right. If you do it foolishly, then of course it will fail miserably. Y’know, YouTube/Google is allegedly staffed by a lot of really smart people and they’re really well-resourced; they should be able to eat a problem like this for breakfast.

2. If you’re getting too many problem reports, then that’s a pretty good indicator you’re doing it wrong…whatever “it” is. The best way to reduce that volume is to figure out what you’ve botched and fix it. Repeat as necessary and watch the volume shrink. It’s not hard.

Rich Kulawiec (profile) says:

Re: Re: Re:4 No magic wand necessary

Go back and re-read. I mentioned RFC 2142 as ONE example. It’s not the only one. There are all kinds of other things that I could also cite as examples, e.g., BCP 38 compliance.

So I’m not saying “set up a really good, scalable RFC 2142 compliance mechanism and stop”. I’m saying start with that, then do some of the other dozens of things that you should be doing.

PaulT (profile) says:

Re: Re: Re:5 No magic wand necessary

“Go back and re-read. I mentioned RFC 2142 as ONE example”

But, an example of what? You’re really not being clear either on what you think they’re not doing right or on how they’re doing it wrong. Which parts of the specs are they not implementing? Why do they need to be implemented in these cases? What are the specific violations? Somewhere in your rambling, you should at least be stating why and how you think they’ve failed.

Then, you can continue the thought and explain why being able to implement these things means they should be able to filter out “bad” from “good” content when most human beings seem incapable of doing so. That’s the “magic” – not the filtering itself, but determining which content needs to be filtered. No RFC will tell you that.

You’ve done nothing so far but ramble on for paragraphs about things that are irrelevant to the statement you found so objectionable. Perhaps explain why you’re right rather than insisting you automatically know better than everyone else and referencing random specifications that might not be relevant to anything being discussed.

Rich Kulawiec (profile) says:

Re: Re: Re:6 No magic wand necessary

Yeah, we’re clearly talking past each other here, so let me try again. And sorry for not getting to this yesterday: actual work intervened.

What I’m trying to explain is something that everyone who’s been around long enough already knows: sloppily-run environments become “abuse magnets”, meaning that abusers and attackers eventually figure this out and use the ineptness of the operation for their own purposes. As in “200M bots on Facebook”, which is their announced number and therefore a serious underestimate.

Think about how shitty an operation you have to be running to have an infestation that huge. And think about what kind of shitty people would let that situation persist. Anybody with an ounce of professionalism or responsibility or just self-respect would shut it down immediately and keep it that way while they figured out WTF went wrong, fixed it, and took steps to keep it from happening again.

The same thing is true at Twitter and in AWS and at YouTube and elsewhere. They’re all very poorly run, and so it’s not at all surprising that they have major issues, e.g., everyone who’s paying attention to their own logs knows that AWS is a massive source of brute-force attacks.

The partial (note: PARTIAL) fix to this to not run the operation so damn sloppily. It’s not a panacea, and I’ve never said it was. It’s necessary, not sufficient. And by “necessary”, I mean that it enables folks to have a fighting chance of dealing with this nonsense. Without it? Well, they’re pretty much screwed and so are the rest of us who have to deal with the fallout.

Whether it’s RFC 2142 or BCP 38 or not making the mistakes outlined in RFC 1912 or using the DROP list or any of the other myriad things that are Internet operations 101 varies by the operation. But it’s not an accident that the operations which have the worst problems are the same ones that’ve failed to do this stuff. Conversely, it’s not an accident that some of the operations we never talk about — because we don’t need to — are the ones who’ve done all of this stuff and more. They’ve pre-empted most of their problems and made the rest easier to solve.

BTW: I don’t want to hear any whining about “scale”. Of course these things scale, and it’s really not even that hard. This is the easy part of professional system/network admin, and anybody who can’t handle these basics is going to be overwhelmed when the hard stuff hits their desk.

Anonymous Coward says:

Re: Re: Re:3 No magic wand necessary

If you’re getting too many problem reports, then that’s a pretty good indicator you’re doing it wrong

And what happens when you aren’t doing it wrong and people just like to complain because you aren’t doing it they want you to do? Because that is the internet. You can do everything right and there will still be a large group of people out there who disagree with you and will complain about it.

Rich Kulawiec (profile) says:

Re: Re: Re:4 No magic wand necessary

You’re right — and that’s a well-known problem. Happily, there are also well-known ways of dealing with it. Here’s one of them. (Note: one. If you want me to give a tutorial introduction to custom-designed problem reporting system, pay me. A lot.)

Let’s suppose that somewhere out there are Alice and Bob. Alice doesn’t send problem reports to hostmaster often, but when she does, they’re accurate, timely, and complete (that is: they lay out the problem explicitly so that you can see what’s wrong).

Then there’s Bob. Bob is a loon. Bob sends problem reports to webmaster every other day saying that the HTML markup is controlled by aliens and they are eating his brain. (I suppose this also lays out the problem explicitly, but in a rather different way.)

Clearly, you want a mechanism that puts Alice’s reports at the top of the queue and Bob’s at the bottom. Now, how you build that mechanism depends on how many Alices and Bobs you’ve got, because it’s got to scale. BUT, and here’s the key, every time you deal with one of these messages, and either solve the problem that Alice told you about, or realize that Bob is still a loon, you incorporate that knowledge into the problem reporting system. (BTW, you do this whether the system takes its input from email or the web or something else, or several of these.)

Over time, this yields a system that’s quite efficient at prioritizing the things that need to be. Of course you can also augment it with a priori knowledge, e.g., “we work closely with Foo Corp and Charlie is their senior network engineer, so flag anything from Charlie”. Or you can use heuristics – which I won’t get into here, because it’s long. Whatever you use, the point is that you’ll end up building something that isn’t perfect but doesn’t have to be.

To put that last part another way: this gets easier, NOT harder, at scale. It gets easier because you can make a lot of mistakes and still end up with the most important/timely/accurate problem reports at the head of the queue. Of course, you should still fix those mistakes as you find them, but that can be backfilled.
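A minimal sketch of that feedback loop, assuming a simple plus/minus reputation score. The class name, scoring weights, and reporter names are illustrative assumptions, not any real system’s design:

```python
from collections import defaultdict

class ReportQueue:
    """Toy reporter-reputation queue: resolved outcomes adjust each
    reporter's score, and future reports are ordered by that score."""

    def __init__(self):
        self.score = defaultdict(int)  # reporter -> accumulated reputation
        self.queue = []                # (reporter, report) pairs

    def submit(self, reporter, report):
        self.queue.append((reporter, report))
        # Highest-reputation reporters float to the front.
        self.queue.sort(key=lambda item: -self.score[item[0]])

    def resolve(self, reporter, was_valid):
        # Incorporate the outcome into future prioritization.
        self.score[reporter] += 1 if was_valid else -1

q = ReportQueue()
q.resolve("alice", was_valid=True)   # Alice's past report checked out
q.resolve("bob", was_valid=False)    # Bob's aliens-in-the-HTML report didn't
q.submit("bob", "markup is eating my brain")
q.submit("alice", "cert on mx1 expired")
print([report for _, report in q.queue])  # Alice's report is now first
```

Mistakes in the scores only reorder the queue a little, which is consistent with the point that the mechanism doesn’t have to be perfect to be useful.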

BTW, one place that has deployed this started with about 400 a priori rules and now has about 20,000. Reports are gatewayed into a ticketing system that also gets input from web forms, monitoring systems, etc. Yeah, every now and then, it screws up and something important doesn’t get marked as important…but when that happens, it’s the last time it happens. For the most part, it does a really good job of triage and as a direct result of that, traffic volume has been declining every year since it was deployed.

PaulT (profile) says:

Re: Re: Re:5 No magic wand necessary

You seem to have a real problem with understanding that there’s a fundamental difference between the things you’re describing and the way Facebook operate. Your ideas do nothing when people make memes and share statuses and comment on other peoples’ posts instead of clicking the report button. They also do nothing when the “magic” that you objected to was pre-emptively working out whether users would object on a subjective level to content.

“Reports are gatewayed into a ticketing system that also gets input from web forms, monitoring systems, etc”

This is a world away from what’s being discussed.

OA (profile) says:

Re: Re: Re:6 No magic wand necessary

You seem to have a real problem with understanding that there’s a fundamental difference between the things you’re describing and the way Facebook operate.

If ‘Rich Kulawiec’ is correct then "the way Facebook operate(s)" is, in part, due to the poor and unprofessional build up of their service.

Facebook is big, powerful, famous, rich and therefore "successful". Unfortunately, the popular modern use of the "successful" tag often includes only the illusion of merit. Mr. Kulawiec seems to be suggesting that the building of Facebook was reckless and/or careless. Too many of us religiously exalt ‘big things’ with little concern for how or why. The ends justify the means, right?

…Anyway. Must we reject ideas 100% or embrace them 100%? Can’t we wrestle with the pieces?

An Onymous Coward (profile) says:

Re: Re: Re:7 No magic wand necessary

Wrestle with the larger problem in as many small pieces as you like. The problem with "Rich’s" argument is that it is not based in reality. The real issue has nothing to do with individual problem reports but in detecting objectionable content before the wrong person does and you get sued for it. That’s a vastly different problem statement than "handle a volume of complaints".

"Rich" also assumes, with a very high degree of possibility he is entirely wrong, that Facebook does not already have a good individual complaint handling system. That system has done nothing to save them from becoming a target of many governments around the world.

It seems clear that "Rich" works for small to mid-sized businesses with a dramatically smaller footprint than Facebook. The business fundamentally changes with global exposure/popularity as must the infrastructure that supports it. The economics are completely different. The same rules cannot apply because technology isn’t powerful enough to scale beyond a certain threshold and economics prevent being able to attempt to scale beyond that threshold. It’s an asymptotic curve. New approaches have to be developed which may include letting some problems work themselves out.

Basically, this whole thread is little more than hot air.

Anonymous Coward says:

Re: Re: Re:8 No magic wand necessary

For the record, I agree with the good majority of Mr. Masnick’s comments on this issue in various articles. I’m also suspicious of ‘Rich Kulawiec’ because of the apparent desire to rant. Mr. Kulawiec’s quoting and objection to this article does not make sense. STILL, I do not wholly reject his commentary, especially older commentary in older articles. I do not find his technical commentary immediately incompatible or irrelevant to Facebook. Part of my reason involves adjacent issues involving Facebook. The loudest one being the way Facebook was used in this Russian manipulation mess.

For me, dismissing whole comments has a “high bar” (which the Internet routinely meets).

OA (profile) says:

Re: Re: Re:9 No magic wand necessary

For the record, I agree with the good majority of Mr. Masnick’s comments on this issue in various articles. I’m also suspicious of ‘Rich Kulawiec’ because of the apparent desire to rant. Mr. Kulawiec’s quoting and objection to this article does not make sense. STILL, I do not wholly reject his commentary, especially older commentary in older articles. I do not find his technical commentary immediately incompatible or irrelevant to Facebook. Part of my reason involves adjacent issues involving Facebook. The loudest one being the way Facebook was used in this Russian manipulation mess.

For me, dismissing whole comments has a "high bar" (which the Internet routinely meets).

Sorry, that was me.

PaulT (profile) says:

Re: Re: Re:9 No magic wand necessary

He does appear to have good points occasionally. However, as demonstrated here, he does tend to act as though he is the font of all knowledge, then post reams of text that are unrelated to the conversation at hand. Even if he’s 100% correct, none of his words have come close to dealing with the issue he objected to in the first place.

PaulT (profile) says:

Re: Re: Re:7 No magic wand necessary

“If ‘Rich Kulawiec’ is correct”

He’s not, on several demonstrable fundamental levels, plus the main thing he objected to is outside of the scope of what he was talking about anyway.

“Must we reject ideas 100% or embrace them 100%? Can’t we wrestle with the pieces?”

Feel free to show where I said either of those things. My point is, to use an analogy: when other people are complaining about the stocking quality in the supermarket, he’s complaining about the way the wiring was installed. His response to people saying it’s impossible to stock unlimited amounts of fruit is to talk about how the lighting was fitted, and that therefore they should be able to work out their fruit situation if they were competent. The two have nothing to do with each other, even if he has a point with what he’s bringing up, which I’m still not convinced he has.

PaulT (profile) says:

Re: Re: Re: No magic wand necessary

Wow. I don’t think I’ve ever seen someone type so much yet answer so little of the question.

That RFC, specifically for arranging email, is great. I’m not sure of Facebook’s email system layout, but the company don’t really use email for most communications, so I’m not sure how much of it actually applies. But what does that have to do with filtering the content on Facebook into “bad” and “good”, which is the subject at hand?

Stop typing wasteful paragraphs. Explain yourself.

Anonymous Coward says:

Re: Re: Re: No magic wand necessary

I think you are confused. You’re talking about traffic, ostensibly relating to traffic coming from and to IP addresses and various TCP/UDP ports among other things. This is something you can monitor. Additionally, RFC 2142 specifically applies to email being sent and received between users. How does that apply to someone making trollish posts to their facebook or twitter feed? It’s not going to any specific person and there is no email address involved.

What do you do when you have no idea who is behind an IP address, all the requests are coming in either port 80 or 443 and all those requests are simply users uploading text? The problem is not that the requests are coming in on those ports or from specific IP addresses, the problem is the text that those requests post to user profile statuses. The question is, how do you filter that?

No one is talking about the network and sysadmin side of this (except for people who don’t understand how this works); what they are talking about is preventing people from saying bad things online. To my knowledge there is no way to filter that. Sure, you can implement some kind of profanity filter, but those are notoriously inaccurate, easily bypassed, and regularly flag non-profanity. Plus, how do you distinguish between someone saying “I’m going to kill you” as a real threat of violence and two friends bantering back and forth about their next Call of Duty match?
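A toy example of why such filters misfire: a naive substring match cannot tell a threat from gaming banter, and flags innocent words too. The two-word blocklist here is a hypothetical example, not any real filter:

```python
# Hypothetical naive profanity/threat filter: flag any text containing
# a blocked substring. Illustrates the false positives described above.

BLOCKLIST = {"kill", "ass"}

def naive_filter(text):
    """Return True if any blocked substring appears in the text."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

print(naive_filter("I'm going to kill you"))          # True: real threat?
print(naive_filter("I'll kill you at Call of Duty"))  # True: banter, flagged anyway
print(naive_filter("some classic jazz"))              # True: "ass" inside "classic"
print(naive_filter("hello there"))                    # False
```

The filter cannot distinguish the first two cases, and the third is the classic false positive (the "Scunthorpe problem"): intent and context are invisible to a substring match.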

You may think this is merely a simple sysadmin issue, it really isn’t. Take it from us sysadmins who actually do this for a living.

Queex (profile) says:

What makes FB’s policy even sillier is that, in the UK, there’s not really the same concept of ‘real name’. There’s a canonical name for official paperwork, and it can be a ball-ache to get that changed, but there’s no idea that other names are inherently false.

That’s even referenced in FB’s own T&Cs in the UK, whereby the ‘real name’ provision is along the lines of ‘must be a name people use for you’ rather than any reference to birth certificates and whatnot. Which then clashes with the documentation FB demands from people to prove it’s their name. An acquaintance pointed that mismatch out to them in a name dispute, and got a ‘yeah, that doesn’t make sense, your name is fine, carry on’ out of them.

Anonymous Coward says:

Re: Requiring a "real" name

Related quote from Mike’s post:

So I’m troubled by the idea that a government can come in and tell a company that it can’t require a real name to use its service. If people don’t want to supply Facebook with their real name… don’t use Facebook.

Facebook interprets "real" as "appearing on a government-issued identity document", and they sometimes ask for scans of these documents when they think a name’s not "real". So it’s not really correct to pretend this is a private thing with no government ties. Why should they be able to process government documents without being subject to the related rules?

If their policy required "the name people generally use to refer to you", that might be different. It would make more sense for UK users, and for everyone that uses a nickname—some people never use their "real" name except on government/bank forms (my employer doesn’t know, for example).

Anonymous Coward says:

"should be a choice that Facebook gets to make on its own concerning how it runs its platform" -- No, corporatist: it's THE PUBLIC'S PLATFORM. The public allows corporations to exist on condition serve OUR purposes, not the 1%.

Even Germans aren’t so corporatist as you. You’re always pushing that corporations, mere legal fictions that shield from personal liability while allowing to gain money in public markets, have an alleged Right to do as please and control "natural" persons.

Corporations uber alles is the theme of this piece. You simply sneer at "natural" persons not wanting Facebook to sell on their info.

Whenever you can wedge in pro-corporatism, you do. It’s pathological. I point it out and mock it as often as I can by mentioning the dangers from Google on the least excuse, but with YOU it’s the primary goal: even if it doesn’t arise naturally in the topic, you push the lawyer’s fiction that corporations have intrinsic rights by which they can control natural persons.

Ninja (profile) says:

Re: "should be a choice that Facebook gets to make on its own concerning how it runs its platform" -- No, corporatist: it's THE PUBLIC'S PLATFORM. The public allows corporations to exist on condition serve OUR purposes, not the 1%.

“It’s pathological.”

Indeed, you should take your medicines properly. Maybe then you’ll finally leave this site you hate so much but can’t seem to distance yourself from.

Anyway.

“have an alleged Right to do as please and control “natural” persons.”

First, they cannot control ‘natural persons’, whatever you mean by that. Anybody is free NOT TO USE their services. Second, yes, they can do as they please within the law in the US, where most of your bs is directed. In this specific article he is criticizing government interference in an issue that should be left to the companies, even though he himself disagrees with Facebook’s real-names policy. This ruling may be backed by a law, but that law is misguided in its essence.

It’s amusing to see you accusing Mike of forcing some topic into some place it wouldn’t rise naturally. You are an ace at doing this. Truly a psychology case study you are.

Anonymous Coward says:

(attempted) clarification by a german speaker

I’ll try to summarize the key points of the ruling as explained on the VZBZ site. They also have a scan of the ruling, but the quality is abysmal.

1.) You are only allowed to use a user’s personal information with the informed consent of that user. Facebook’s default privacy settings allow the use of personal data. Those settings can be changed, but they are opt-out instead of opt-in, and the existence of the privacy center isn’t explicitly made clear to the user. As a result, the consent isn’t informed (and even the consent itself is vague at best) and thus invalid.

2.) Facebook put some premade declarations of consent into their TOS that allow them to use names and profile images for commercial purposes. It turns out these are invalid; putting stuff like this into your TOS does not equal consent.

3.) The real name thing. Apparently this is invalid for two reasons. The first is that agreeing to use your real name also somehow implies consent to the use of that information. The second is that it’s simply against a law that requires online services to provide users with a way to stay anonymous.

Sok Puppette (profile) says:

This article operates from a child’s understanding of consent and coercion.

If I say "My great grandfather had the biggest club, so he got all the farmland, so suck my dick or starve", that’s not a choice and your doing it doesn’t imply any real consent. And if everybody you need to interact with has been manipulated into using my "platform", or even just chosen to use my platform, then saying "Give me your real name or go be isolated" isn’t a choice either.

And let’s talk about this "come in and tell a company" business.

Facebook. Is. A. Creation. Of. Government.

Governments aren’t "coming in" to Facebook’s affairs. People "came in" and asked governments to create the company in the first place.

A corporation doesn’t exist at all except as a matter of law. It’s not a person. It has no natural rights (and no mind, so it couldn’t exercise natural rights if it had them). By chartering such an entity, the government actually RESTRICTS the rights of natural persons, most famously the right to individually sue people who act in concert to do them damage.

Issuing charters like that has side effects. No actual person could operate at that scale without some similar kind of charter. The existence of Facebook’s "platform" requires the government to recognize fictional entities. And scale is a big part of the reason there’s a problem.

There is absolutely no reason governments shouldn’t attach whatever restrictions they think appropriate to gifts like the "right" for a total fiction to be treated as a legal entity, or the "right" for its owners and employees to avoid accountability for their actions.

It’s not even like Facebook is a vehicle for its owners to exercise their rights to free speech. Facebook is a vehicle for selling advertising, period.

Don’t pretend that massive institutions are beings with rights. If you want a "free" system, then decentralize the technology and eliminate these fiefdoms.

Anon says:

Bad for persecuted minority

Bravo! As an atheist promoting secularism in an Islamic shithole, I want to remain anonymous to avoid persecution.

Facebook’s policies are really bad for secularists. We face mass reporting from the Muslim cyber army, and Facebook really favors these online mobs over those who fight for freedom of speech!

Anonymous Coward says:

too glib

“If people don’t want to supply Facebook with their real name… don’t use Facebook.”

Sadly, a lot of people are pressured into FB as a way of keeping in touch with family, friends etc.
I don’t do FB, but as a consequence there are several previously good friends in different areas of the country who I now rarely communicate with, as most of their “social chat” is via FB; I only chat to them via phone or email.
So many people are not so much freely consenting as consenting due to “emotional blackmail”, because they want to keep in contact with friends who use FB as their primary means of social chat.

PaulT (profile) says:

Re: Re: Re: too glib

That’s a hell of a stretch. MS had a monopoly on the market – if you didn’t use Windows, there were plenty of things you couldn’t use or do. Before OpenOffice, you couldn’t work with the same documents as MS Office users with 100% compatibility, for example, and they weren’t going to release Office on Linux.

Facebook to communicate? There’s plenty of other ways, even by AC’s own admission, it’s just that some people prefer FB for everything. Facebook aren’t blocking other methods or reducing their effectiveness, they are just one of many methods.

It’s more analogous to texting people. Some people don’t want a mobile or prefer to phone people rather than texting. They might feel pressured into getting a phone and texting because everyone they know does it and they feel left out if they don’t. Understandable, but you shouldn’t blame the phone manufacturer because you felt pressured into buying one so you could text.

PaulT (profile) says:

Re: Re: Re:3 too glib

Yes, but that doesn’t mean the company supplying those means has to bow to the wishes of your friends. You either accept the terms, or you use a different communication medium, or even just a direct competitor with whom you feel more comfortable.

That’s not a problem, any more than it is for your mobile provider to insist on a line rental even though you only want to use it because your friends are. It’s simply not a monopoly or unfair position, no matter how many of your friends accepted the terms you don’t wish to accept.

Thad (user link) says:

Re: Re: too glib

I agree. Also, is Facebook still building "shadow profiles" on people who aren’t members, without their knowledge or consent?

Of course. There’s not really any way for them not to.

Even if you’re not on Facebook, there’s still going to be information about you on Facebook. Somebody’s mentioned you. People who know you have searched for you. People have probably tagged you in photos.

And that’s before we even get into stuff like Facebook scripts on third-party pages that are tracking you.

Mason Wheeler (profile) says:

Re: Re: Re: too glib

Of course. There’s not really any way for them not to.

Sure there is: they could just not do it! Collecting and organizing data isn’t something that happens by default, let alone something that one has to put in effort to avoid doing; it’s something that one has to put in effort to actually do. And Facebook is doing it, when they have no right to.

Thad (user link) says:

Re: Re: Re:2 too glib

Sure there is: they could just not do it!

How?

How are they going to prevent somebody from tagging me in a photo? Or mentioning me in a public conversation? Or sending me an invite?

Collecting and organizing data isn’t something that happens by default

It…is when the function of your software is the collection and organization of data.

Anonymous Anonymous Coward (profile) says:

Re: Re: Re:3 too glib

They could limit their collection and organization of information to those who OPT-IN. That would by default exclude anyone who is not a member, ex-members, and any member who does not explicitly give them permission.

I realize that this methodology would be antithetical to their business model. It does point out how their business model needs adjusting, preferably before some class-action lawsuit about collecting information on non-customers forces them to. You know, a proactive PR position, such as ‘we are not actually evil’.

Anonymous Anonymous Coward (profile) says:

Who is that?

My real name is ‘Malicious Infarction’. I swear. My parents were nothing if not vindictive. All those 2:00AM feedings and the results of those feedings that wound up in my diapers.

This caused a certain amount of consternation as I matured. There were endless issues with school districts and teachers misspelling my name, calling me things I was not. Then there was the way other students treated me. The name-calling was creative, to say the least. BTW, this had absolutely no impact on my personality; I am quite normal, depending on one’s definition of normal.

Despite the above, things went well, until I tried to apply for a drivers license. The DMV absolutely refused to put that name on a government issued document. I had to go into court and get a legal name change, I chose ‘Appropriate Misbehavior’. When that didn’t work I went to the phone book and flipped open pages at random and picked first a first name and second a second name. Then back to court. The DMV finally issued my drivers license, but the fact of the matter is, it isn’t my real name.

Now, about facial recognition…there is this plastic surgeon…

John85851 (profile) says:

Free users pay with data

Changing the subject slightly, I’d like to see a court weigh in on when a “free” item isn’t actually free. We’ve always considered “free” to mean “not paying money”, but what about spending people’s time, such as:
– Forced to watch a 30 second commercial on YouTube before the video starts.
– Forced to watch an ad to continue playing a free game on your phone.
– Going to a Windows download site to get a driver, but having to guess which of the 5 download buttons will download the driver and which will download a toolbar or malware.
– Having to keep a sharp eye on software installers (such as Flash updates) that want to install toolbars or change your home page.
– Not to mention the usually irrelevant ads on places like Facebook and Pinterest.

All of these are considered annoyances and distractions, but we put up with them to get the free item or service.

PaulT (profile) says:

Re: Free users pay with data

Plus, of course, most of the things you mention apply to things that greatly predate any of those services. People have had to waste time flicking past or waiting for ads to finish in order to access free content since the things were invented.

The malware/crapware risk is a relatively new one, but I dare say that if you asked the average German to choose between Facebook knowing some otherwise publicly available information about them and having that stuff on their hard drive doing who knows what, they would prefer the former.

Anonymous Coward says:

Re: Free users pay with data

I’d like to see a court weigh in on when a "free" item isn’t actually free.

When courts aren’t willing to make companies tell you the final price of an item or service (including all taxes, fees, and bullshit), they’re certainly not going to put a stop to this usage of "free". Hell, the FCC recently reversed its policy that required ISPs to tell customers the price of their service—it’s too burdensome for ISPs, you see.

PaulT (profile) says:

Re: Re:

Does this mean the idiots are coming up with conspiracy theories about Facebook rather than Google, now? Does it mean that you still have no actual arguments other than “something was said in their favour so they must be paid to do so”? Or do you just have no imagination to meet your own paid quota of posts?

Once again – if only you people were as interested in discussing these situations as you are in whining about people writing about them.

bhull242 (profile) says:

“Covert”?

I’m surprised no one else mentioned this at all, but whatever you think of Facebook, its moderation policy, or its real-name policy, what I find most absurd is the idea that, by explicitly requiring the user’s full name to use the platform, Facebook is acquiring people’s names “covertly”. Seriously, how is it covert to require a real name?

Shaun Wilson (profile) says:

Even defining what a “real name” is can be problematic. If a person’s birth certificate identifies them as “Name: John Smith, Gender: Male” is that what they must use on Facebook etc? And is registering as “Name: Jane Smith, Gender: Female” automatically a violation, even if their friends would use “She’s Jane Smith” to introduce them?
