from the the-view-from-the-eu dept
We are cross-posting the following interview, conducted by Danish journalist, Cato Institute Senior Fellow, and author of The Tyranny of Silence Flemming Rose, with European Parliament Member from the Netherlands Marietje Schaake — who we’ve discussed on the site many times, and who has even contributed here as well. It’s an interesting look at how she views the question of regulating internet platforms. Since this is a relatively long interview, we have broken it up into two parts, with the second part running tomorrow. Update: Part II is now available.
Marietje Schaake is a leading and influential voice in Europe on digital platforms
and the digital economy. She is the founder of the European Parliament Intergroup
on the Digital Agenda for Europe and has been a member of the European
Parliament since 2009 representing the Dutch party D66 that is part of the Alliance
of Liberals and Democrats for Europe (ALDE) political group. Schaake is
spokesperson for the center/right group in the European Parliament on transatlantic
trade and digital trade, and she is Vice-President of the European Parliament’s US
Delegation. She has for some time advocated more regulation and accountability of
the digital platforms.
Recently, I sat down with Marietje Schaake in a café in the European Parliament in
Brussels to talk about what’s on the agenda in Europe when it comes to digital
platforms and possible regulation.
FR: Digital platforms like Facebook, Twitter and Google have had a consistent
message for European lawmakers: Regulation will stifle innovation. You have said
that this is a losing strategy in Brussels. What do you mean by that?
MS: I think it’s safe to say that American big tech companies across the board
have pushed back against regulation, and this approach is in line with the quasi-
libertarian culture and outlook that we know well from Silicon Valley. It has
benefited these companies that they have been free from regulation. They have
been free not only from new regulation but also have had explicit exemptions from
liability in both European and American law (Section 230 in the US and the
Intermediary Liability Exemption in the E-commerce Directive in the EU). At the
same time they have benefited from regulations like net neutrality and other
safeguards in the law. We have been discussing many new initiatives here in the
European Parliament including measures against copyright violations, terrorist
content, hate speech, child pornography and other problems. The digital platforms’
reaction to most of these initiatives has been, at best, an offer to regulate
themselves. They in effect say, “We as a company will fix it, and please don’t
stifle innovation.” This has been the consistent counter-argument against
regulation. Another counter-argument has been that if Europe starts regulating
digital platforms, then China will do the same.
FR: You don’t buy that argument?
MS: Well, China does what it wants anyway. I think we have made a big mistake
in the democratic world. The EU, the US and other liberal democracies have been
so slow to create a rule-based system for the internet and for digital platforms.
Since World War II, we in the West have developed rules on trade, on human
rights, on war and peace, and on the rule of law itself; not because we love rules in
and of themselves, but because they have created a framework that protects our way of
life. Rules mean fairness and a level playing field with regard to the things I just
mentioned. But there has been a push-back against regulation and rules when it
comes to digital platforms due to this libertarian spirit and argument about stifling
innovation, this “move fast and break things” attitude that we know so well from
Silicon Valley.
This is problematic for two reasons. First, we now see a global competition
between authoritarian regimes with a closed internet with no rule of law and
democracies with an open internet with the rule of law. We have stood by and
watched as China, the leading authoritarian regime, has offered its model to the
world of a sovereign, fragmented internet. This alternative model stifles
innovation, and if people are concerned about stifling innovation, they should take
much more interest in fostering an internet governance model that beats the
Chinese alternative. Second, under the current law of the jungle on the
internet, liberal democracy and people’s democratic rights are suffering, because
we have no accountability for the algorithms of digital platforms. At this point
profit is much more important than the public good.
FR: But you said that emphasizing innovation is a losing strategy here in Brussels.
MS: I feel there is a big turning point happening as we speak. It is not only here in
Brussels; even Americans are now advocating regulation. They have seen the 2016
election in the US, they have seen conspiracy after
conspiracy rising to the top ranks of searches, and it’s just not sustainable.
FR: What kind of regulation are you calling for and what regulation will there be
political support for here in Brussels?
MS: I believe that the E-commerce Directive with its liability exemptions in the
EU, and Section 230 with similar exemptions in the US, will come under pressure. It
will be a huge game changer.
FR: A game changer in what way?
MS: I think there will be forms of liability for content. You can already see more
active regulation in the German law and in the agreements between the European
Commission and the companies to take down content (the code of conduct on hate
speech and disinformation). These companies cannot credibly say that they are not
editing content. They are offering to edit content in order not to be regulated, so
they are involved in taking down content. And their business model involves
promoting or demoting content, so the whole idea that they would not be able to
edit is actually not credible and factually incorrect. So regulation is coming, and I
think it will cause an earthquake in the digital economy. You can already see the
issues being raised in the public debate about more forceful competition
requirements, whether emerging data sets should also be scrutinized in different
ways, and net neutrality. We have had an important discussion about the right to
privacy and data protection here in Europe. Of course, in Europe we have a right to
privacy. The United States does not recognize such a right, but I think they will
start to think more about it as a basic principle as well, because of the backlash
they have seen.
FR: Do you have scandals like Cambridge Analytica in mind?
MS: Yes, but not only that one. Americans are as concerned about the protection of
children as Europeans are, if not more so. I think we might see a backlash against
smart toys. Think about dolls that listen to your baby, capture its entire learning
process, its voice, its first words, and then use that data for AI to activate toys. I am
not sure American parents are willing to accept this. The same with facial
recognition. It’s a new kind of technology that is becoming more sophisticated.
Should it be banned? I have seen proposals to that end coming from California, of
all places.
FR: Liability may involve a lot of things. What kind of liability is on the political
menu of the European Union? Filtering technology or other tools?
MS: Filtering is on the menu, but I would like to see it off the menu because
automatic filtering is a real risk to freedom of expression, and it’s not feasible for
SMEs (small and medium-sized enterprises), so it only helps the big companies. We need
to look at accountability of algorithms. If we know how they are built, and what
could be their flaws or unintended consequences, then we will be able to set
deadlines for companies to solve these problems. I think we will look much more
at compliance deadlines than just methods. We already have principles in our laws
like non-discrimination, fair competition, freedom of expression and access to
information. They are not disputed, but some of these platforms are in fact
discriminating. It has been documented that Amazon, the biggest tech company
and a front-runner in AI, had a gender bias in favor of men in its AI hiring
algorithm. I think future efforts will be directed toward the question of designing
technology and fostering accountability for its outcomes.
FR: Do you think the governments in the US and Europe are converging on these
issues?
MS: Yes. Liberal democracies need to protect themselves. Democracy has been in
decline for the 13th year in a row (according to Freedom House). It’s a nightmare,
and it’s something that we cannot think lightly about. Democracy, in spite of all
its flaws, is the best system; it guarantees the freedoms of our people. It also can be
improved by holding the use of power accountable through checks and balances
and other means.
FR: Shouldn’t we be careful not to throw out the baby with the bath water? We are
only in the early stages of developing these technologies and businesses. Aren’t
you concerned that too much regulation will have unintended consequences?
MS: I don’t think there is a risk of too much regulation. There is a risk of poorly
drafted regulation. We can already see some very grave consequences, and I don’t
want to wait until there are more. Instead, let’s double down on principles that
should apply in the digital world as they do in the physical world. It doesn’t matter
if we are talking about a truck company, a gas company or a tech company. I don’t
think any technology or AI should be allowed to disrupt fundamental principles
and we should begin to address it. I believe such regulation would be in the
companies’ interest too because the trust of their customers is at stake. I don’t think
regulation is a goal in and by itself, but everything around us is regulated: the
battery in your recording device, the coffee we just drank, the light bulbs here, the
sprinkler system, the router on the ceiling, the plastic plants behind you, so that if a
child happens to eat them, they will not kill the child as fast as they might without
regulation, and the glass in the doors over there, so if it breaks it does so in a less
harmful way, and so on and so forth. There are all kinds of ideas behind regulation, and
regulation is not an injustice to technology. If done well, regulation works as a
safeguard of our rights and freedoms. And if it is bad, we have a system to change
it. The status quo is unacceptable. We already have had manipulation of our
democracies. We just learned that Facebook paid teenagers $20 to get to their most
private information. I think that’s criminal, and there should be accountability for
that. We have data breach after data breach, we have conspiracy theories still rising
to the top of search results on YouTube in spite of all their promises to do better. We have
Facebook selling data without consent, we have absolutely incomprehensible terms
of use and consent agreements, we have lack of oversight over who is paying for
which messages, how the algorithms are pushing certain things up and other things
down. It’s not only about politics. Look at public health issues like anti-
vaccination hoaxes. People hear online that vaccinations are dangerous and do not
vaccinate their children, leading to new outbreaks of measles. My mother and sister are medical
doctors, cancer specialists, and they have patients who have been online and
studied what they should do to treat their cancer, and they get suggestions without
any medical or scientific proof. People will not get the treatment that could save
their lives. This touches upon many more issues than politics and democracy.
FR: So you see here a conflict between Big Tech and democracy and freedom?
MS: Between Big Tech with certain business models and democracy, yes.
FR: Do you see any changes in the attitudes and behaviour of the tech companies?
MS: Yes, it is changing, but it’s too little, too late. I think there is more
apologizing, and there is still the terminology, “Oh we still have to learn
everything, we are trying.” But the question is, is that good enough?
FR: It’s not good enough for you?
MS: It’s not convincing. If you can make billions and billions tweaking your
algorithm every day to sell ever more ads, it is not credible to claim that you are
unable to determine when conspiracies or anti-vaccination messages rise to the top
of your search results. At one point I looked into search results on the Eurozone. I received 8 out
of 10 results from one source, an English tabloid with a negative view of the Euro.
FR: Yes, how come? Why should that be in the interest of the tech companies?
MS: I don’t think it’s in their interest to change it, but it’s in the interest of
democracy. Their goal is to keep you online as long as possible, basically to get
you hooked. If you are trying to sell television, you want people to watch a lot of
television. I am not surprised by this. It was to be expected. However, it becomes a
problem when hundreds of millions of people only use a handful of these
platforms for their information. It’s remarkably easy for commercial or political
purposes to influence people whether it’s about anti-vaccination or politics. I
understand from experts that the reward mechanism of the algorithm means that
sensation sells more, and once you click on the first sensational message it pulls
you in a certain direction where it becomes more and more sensational, and one
sensation after another is being automatically presented to you.
I say to the platforms, you are automatically suggesting more of the same. They
say no, no, no, we just changed our algorithm. What does that mean to me? Am I
supposed to blindly believe them? Or do I have a way of finding out? At this point
I have no way of finding out, and even AI machine learning coders tell me that
even they don’t know what the algorithms will churn out at the end of the day. One
aspect of AI is that the people who code don’t know exactly what’s going to come
out. I think the safeguards are too vague, and it is clear that the impact is already
being felt.
I don’t pretend to know everything about how the systems work. We need to know
more because it impacts so many people, and there is no precedent of any service
or product that so many people use for such essential activities as accessing
information about politics, public health and other things with no oversight. We
need oversight to make sure that there are no excesses, that there is fairness, non-
discrimination and free expression.
You can read Part II of this interview now.
Filed Under: eu, eu parliament, flemming rose, free speech, internet regulation, marietje schaake, platform liability, privacy
Companies: facebook, google