konstantinos.komaitis's Techdirt Profile

Posted on Techdirt - 22 November 2022 @ 12:02pm

The Global Trend That Could Kill The Internet: Sender Party Network Pays

There is a Korean proverb that says: “There is always a way out, look for it.” South Korea’s recent revision of its Telecommunications Business Act (TBA) might, however, be the one thing South Korea cannot get out of, unless it abandons its plan to hand monopoly power back to telecommunications providers.

South Korea and Europe face similar challenges: an ageing population, the need to compete in high-value sectors, and the need to create a digital ecosystem that is robust and facilitates economic and social growth. It is, therefore, quite ironic that both are considering policies that could put at risk and undermine much of the Internet and their digital futures.

The “Sending-Party-Network-Pays” (SPNP) proposal, currently at the center of both legislative agendas, is premised on a simple idea: content platforms that generate and send most traffic over the Internet should pay a fee to telecommunications providers in order to have those providers deliver that traffic to users. This model may make perfect sense in the telephony environment, where telephone operators have traditionally held a termination monopoly over their customers; but, when it comes to the Internet, the proposal will prove counterproductive and dangerous, as it creates bottlenecks for investment and degrades users’ internet experience.

We often hear that the internet is a network of networks. This is not a philosophical statement; rather, it means that, through voluntary agreements, networks decide independently with whom to interoperate, identifying ways to optimize connectivity in order to meet users’ demands. This ensures resilience and, at the same time, the robustness of the system. As a decentralized network, the Internet has no central authority or gatekeeper to determine which networks can and cannot join: any network is able to participate autonomously, decide with which other networks to interconnect and at what cost, and become part of the global Internet. The only requirement is that it “speaks” the IP protocol language.
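To make the interconnection model concrete, here is a minimal sketch in Python. The network names and the deliberately simplified peering rule are illustrative assumptions, not real routing policy; the point it captures is the one above: a link exists only when both networks voluntarily opt in, and no central authority can force or forbid it.

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    name: str
    peers: set = field(default_factory=set)

    def agrees_to_peer(self, other: "Network", mutual_benefit: bool) -> bool:
        # Each network applies its *own* policy; the rule used here
        # (peer whenever both sides benefit) is an illustrative assumption.
        return mutual_benefit

def interconnect(a: Network, b: Network, mutual_benefit: bool) -> bool:
    # An interconnection exists only if both parties voluntarily agree;
    # there is no gatekeeper that can force or forbid the link.
    if a.agrees_to_peer(b, mutual_benefit) and b.agrees_to_peer(a, mutual_benefit):
        a.peers.add(b.name)
        b.peers.add(a.name)
        return True
    return False

isp = Network("ISP-A")
cdn = Network("CDN-B")
print(interconnect(isp, cdn, mutual_benefit=True))  # True: both opted in
```

A sender-pays mandate replaces this mutual opt-in with a fee imposed on one side, which is precisely the change the article warns about.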

In 2013, a still-relevant OECD report confirmed the success of the internet model in comparison to traditional telephony: “While national regulatory authorities have closely regulated circuit-switched (TDM) traffic exchange to achieve such policy goals as universal connectivity and competition, the Internet market has attained those same goals with very little regulatory intervention, while performing much better than the older markets in terms of prices, efficiency, and innovation”.

This fundamental design choice and the benefits it has produced are now being ignored, and the results are at best unpredictable and, in the long run, possibly irreversible.

In 2016, South Korea became the first country to enforce a “Sending-Party-Network-Pays” model, requiring ISPs to charge one another fees for the volume of traffic they exchanged. Although enforced only among ISPs to date, it has already been detrimental to South Korea’s competitive market. With high fees being imposed, a number of South Korean and foreign content providers were left with only two options: exit the market or degrade their services. In the meantime, smaller Korean providers and a host of startups face insurmountable barriers to entering the market.

For a long time, South Korean users enjoyed fast and reliable internet connectivity, and South Korea was an example other countries looked to for addressing issues of connectivity. Not any longer. According to a recent report, in South Korea, “regulation appears to have discouraged peering and investment […], leading to higher costs for ISPs, initially lower quality for users, and need for more regulation to correct unintended consequences.” In particular, internet access fees in South Korea climbed to 8-10 times those in Europe and 5-6 times those in the US, causing many content providers to intentionally degrade their services.

Europe may face a similar reality soon, should it decide to move forward with its own “Sending-Party-Network-Pays” model. Since March, when the European Commissioner for the Internal Market, Thierry Breton, announced the European Commission’s intention to move forward with such a plan, there has been widespread concern from a diverse set of actors across Europe. Civil society has condemned the proposal, specifically regarding the barriers to entry it will introduce as well as its potential impact on “freedom of expression, freedom to access knowledge, freedom to conduct business and innovation in the EU”. The European Consumer Organisation has stated that “for consumers in particular, the risks or potential disadvantages of establishing measures such as a SPNP system would range from a potential distortion of competition on the telecom market, negatively impacting the diversity of products, prices and performance, to the potential impacts on net neutrality, which could undermine the open and free access to the Internet as consumers know it today.” Similarly, Europe’s Mobile Virtual Network Operators (MVNO) group called for a “careful impact assessment”, while the European Association for Commercial Television and Video on Demand (ACT) urged European “institutions to thoroughly consider the wider implications before taking any actions that would directly or indirectly impact the stability and sustainability of the European audiovisual industry (and consumer rights) as a whole.”

What, then, could be driving this fundamental shift in both Europe and South Korea, especially given that the proposal has been rejected by everyone bar a handful of telecommunication companies?

Let’s think of this in terms of policy objectives. Telecommunication providers argue that a “fair contribution” scheme is needed to fund infrastructure and to help both jurisdictions meet their respective digital agenda targets. If this is really the case, however, then the starting point of the conversation is wrong. Throwing money at the largest telecommunication companies will not lead to infrastructure development so much as encourage monopolistic behavior and unpredictability. Considering that such deals will almost certainly be confidential, it will also be hard for anyone to know the tradeoffs agreed on each time. A pay-off will simply extend the termination monopoly telecommunication providers enjoy from telephony to content; it will not address any real infrastructure concerns.

To this end, a real infrastructure strategy might be necessary. In its preliminary assessment of the SPNP proposal, the Body of European Regulators for Electronic Communications (BEREC) said that, although “debate about network investments, traffic volumes and cost drivers needs to be carefully analysed”, at the same time, “the internet has proven its ability to cope with increasing traffic volumes, changes in demand patterns, technology, business models, as well as the (relative) market power between market players”. The focus, therefore, should be on services that facilitate user experience and enhance the resilience and stability of the Internet, including Internet Exchange Points (IXPs), Content Delivery Networks (CDNs), caches and the like.  

Bad ideas tend to be solutions to problems no one really has. And, truly, there is no identifiable problem in the interconnection market. The norms and rules set years ago continue to apply, ensuring connectivity. Europe must learn from the South Korean experience and avoid replicating mistakes that, in the end, will only harm its citizens and its digital future. As other countries, including the UK and India, start to flirt with similar ideas, the conversation about what sort of digital future we want becomes increasingly pressing.

Konstantinos Komaitis, Internet policy expert and author & K.S. Park, Professor, Korea University, Director, Open Net

Posted on Techdirt - 13 October 2022 @ 03:46pm

An (Im)perfect Way Forward On Infrastructure Moderation?

Within every conversation about technology lies the moral question: is a technology good or bad? Or is it neutral? In other words, are our values part of the technologies we create, or is technology valueless until someone decides what to do with it?

This is the kind of dilemma Cloudflare, the Internet infrastructure company, found itself in earlier this year. Following increasing pressure to drop KiwiFarms, a troll site targeting women and minorities, especially LGBTQ people, Cloudflare’s CEO, Matthew Prince, and Alissa Starzak, its VP for Public Policy, posted a note stating that “the power to terminate security services for the sites was not a power Cloudflare should hold”. Cloudflare was the provider of such security services to KiwiFarms.

Cloudflare’s position was impossible. On the one hand, Cloudflare, as an infrastructure provider, should not be making any content moderation decisions; on the other, KiwiFarms’ existence was putting people’s lives in danger. Cloudflare is not like “the fire department”, as it claims (fire departments are essential for societies to function and feel safe; Cloudflare is not essential for the functioning of the internet, though it does make it more secure). Still, moving content moderation down the internet stack can have a chilling effect on speech and on the internet. At the end of the day, it is services like Cloudflare’s that get to determine who is visible on the internet.

Cloudflare ended up terminating KiwiFarms as a customer even though it originally said it wouldn’t. In a way, Cloudflare’s decision to reverse its own intention placed content moderation at the infrastructure level front and center once again. Now, though, it feels like we are running out of time; I am not sure how much more of such unpredictability and inconsistency can be tolerated before regulators step in.

Personally, the idea of content moderation at the infrastructure level makes me uncomfortable, especially because content moderation would move somewhere that is invisible to most. Fundamentally, I still believe that moving content moderation down to the infrastructure level is dangerous in terms of scale and impact. The Internet should remain agnostic of the data that moves around it, and anyone who facilitates this movement should adhere to this principle. At the very least, this must be the rule. I don’t think this will be the priority in any potential regulation.

However, there is another reality that I’ve grown into: decisions like the one Cloudflare was asked to make have real consequences for real people. In cases like KiwiFarms, inaction feels like aiding and abetting. If there is something someone can do to prevent such reprehensible activity, shouldn’t they just go ahead and do it?

That something will be difficult to accept. If content moderation is messy and complex for Facebook and Twitter, imagine what it is for companies like Cloudflare and AWS. The same problems with speech, human rights and transparency will exist at the infrastructure level; just multiply them by a million. To be fair, infrastructure providers already engage in the removal of websites and services on the internet, and they have policies for doing so. Cloudflare said as much: “Thousands of times per day we receive calls that we terminate security services based on content that someone reports as offensive. Most of these don’t make news. Most of the time these decisions don’t conflict with our moral views.” Not all infrastructure providers have such policies, though, and, in general, decisions about content removal at the infrastructure level are opaque.

KiwiFarms will happen again. It might not be called that, but it’s only a matter of time before a similarly disgusting case pops up. We need a way forward, and fast.

So, here’s a thought: an “Oversight Board”-type body for infrastructure. This body – let’s call it the “Infrastructure Appeals Panel” – will be funded by as many infrastructure providers as possible, and its role will be to scrutinize the decisions infrastructure providers make regarding content. The Panel will need to have a clear mandate and scope and to be global, which is important because the decisions infrastructure providers make affect both speech and the Internet itself. Its rules must be written by infrastructure providers and users, which is perhaps the single most difficult part. As Evelyn Douek said, “writing speech rules is hard”; it becomes even harder once one considers the possible chilling effect, and harder still if rules about the impact on the internet must be added. Unlike the decisions social media companies make every day, decisions made at the infrastructure level of the internet can also create unintended consequences for the way it operates.

Building such an external body is not easy, and many things can go wrong. Finding the right answers to questions about board member selection, independence, process and values is key to its success. And, although such systems can be arbitrary and abused, history shows they can also be effective. In the Middle Ages, for instance, as international trade was taking shape, itinerant merchants sought to establish a system of adjudication detached from local sovereign law and able to govern the practices and norms emerging at the time. Lex mercatoria originated from the need for a system that would efficiently address the needs of merchants and produce decisions carrying value equivalent to those reached through traditional means. Currently, content moderation at the infrastructure level is an unchecked system in which players can exercise arbitrary power, further exacerbated by the lack of interest in, or understanding of, what is happening at that level.

Most likely, this idea will not be enough to address all the content moderation issues at the infrastructure level. Additionally, if it is to have any real chance of being useful, the Panel’s design, structure, and implementation, as well as its legitimacy, must be considered a priority. An external panel that is not scoped appropriately or that lacks any authority risks creating false accountability; the result is that policy makers get distracted while systemic issues persist. Lessons can be learned from the similar exercise of creating the Oversight Board.

The last immediate point is that this Panel should not be seen as the answer to issues of speech or infrastructure. We should continue to discuss ways of addressing content moderation at the infrastructure level and try to institute the necessary safeguards and reforms around the best way to moderate content. There is never going to be a way to create fully consistent policies or to agree on a single set of norms. But, through the transparency such a panel can provide, we can reach a state where the conversation becomes more focused, driven more by facts and less by emotions.

Konstantinos Komaitis is an internet policy expert and author. His website is at komaitis.org.

Posted on Techdirt - 18 March 2022 @ 12:22pm

Big Tech Pay-Outs To European ISPs Would Just Concentrate Their Power

As the debate about how to rein in Big Tech and its anti-competitive practices continues, news publishers and telecommunications providers are increasingly calling for large pay-outs from major platforms. However, these proposals risk locking users into ever-smaller walled gardens and cementing the dominance of a few big players.

On Valentine’s Day, an open letter from the CEOs of Deutsche Telekom, Telefónica, Vodafone, and Orange surfaced. In the letter, the heads of Europe’s biggest telecommunications providers called “for large content platforms to contribute to the cost of European digital infrastructure that carries their services.” Claiming that the current situation is “not sustainable” for their companies, they argued that “Europe will fall behind” if this situation is not addressed.

The request for large platforms to pay telecom providers to carry their content is not new. In 2011, the same group (absent Deutsche Telekom) attempted to levy charges on Google and other content providers, suggesting an overhaul of how data travels across the internet. In 2013, Orange struck a deal with Google under which Google would pay an undisclosed amount to the carrier for the traffic sent across its networks.

For years, telecom operators have tried to catch up with innovation, but with little real success. At first, they failed to identify ways to diversify their centralized business models within the internet’s more decentralized environment. Instead, they have used their political capital to keep pushing, unsuccessfully, for proposals based on the simple idea that everyone else should pay up. They even went as far as the ITU. And the more the internet has grown, the more telecom providers have remained stuck in outdated business models. Part of the problem has always been that telecom providers have never fully grasped that users mainly pay to connect to the ends of the network, not the middle. In other words, the value of an internet connection comes from what sits at its ends: Google, Facebook, or TikTok, not to mention the smaller and regional platforms excluded from big telecom deals. Without large and small platforms and their services, users would have no reason to use telecom providers’ networks.

This dynamic of requiring large platforms to pay telecom providers to carry their content may bring the net neutrality debate to mind. But, this post is not about net neutrality. This post is about a different trend that is picking up speed: attempting to force big technology companies to negotiate, with very little transparency, deals that end up creating barriers to entry for smaller businesses on both sides of the equation. Make no mistake, the concern about the concentration of power in big technology companies should neither be underestimated nor ignored; more fundamentally, however, promoting secret deals as the solution to any of the current problems will only make things worse.

And it’s not just the telecom providers. Last year, Australia’s News Media Bargaining Code set the tone for the relationship between Big Tech and publishers by forcing Google and Facebook to negotiate with and pay publishers—namely, Rupert Murdoch’s News Corp—to host the publishers’ content. France followed soon after, with Google agreeing to pay the Alliance de la presse d’information generale (APIG) $36 million in the first case under the new EU Copyright Directive. Canada is considering similar rules, while the UK, Argentina, Brazil, and Germany have all enacted – or are in the process of enacting – such rules. Big Tech is paying, and it is paying big time. Now, telecommunication providers want a piece of this payout, and they might get it.

The obvious question here is how sustainable this is in the long run, especially considering the fact that these deals create an even greater financial interest in maintaining Big Tech’s dominance.

In this context, the paradigm that is forming is one where power will concentrate in the hands of even fewer telecommunications and Big Tech players. While Google and Facebook may be able to afford huge payouts to host publisher content and travel on telecom provider networks, smaller companies cannot. This means more users will be limited to increasingly walled-in ecosystems and services with more concentrated threats to user privacy and expression, especially as smaller players get shut out of such deals.

The openness and freedom that define the internet at its best suffer within walled-garden spaces. These kinds of deals will exacerbate the problem as Facebook and Google become centers for more kinds of user interaction, adding new services that draw users further into their closed systems. In the case of Facebook News in France, for example, users are exposed only to news and information from certain “partners” adhering to Facebook’s terms and conditions. Independent journalism and informal reporting will vanish or, at best, get hidden.

Now, imagine if Facebook’s and Google’s reach were to extend to infrastructure. Although it is premature to guess the level of involvement and investment Big Tech will be required to commit, and what it will mean exactly, it is almost a certainty that big technology companies will get unprecedented access to infrastructure opportunities they have long desired. We are already witnessing a trend towards more privatized networks and more privatized internet infrastructure, with research suggesting that big technology companies are “gaining control over not only the content but the means of transferring the content.” If core parts of the internet’s infrastructure are co-opted by big technology firms, the dependencies we already experience at the content layer will only deepen. As Brett Frischmann argues:

Ultimately, the outcome of this debate may very well determine whether the internet continues to operate as a mixed infrastructure that supports widespread user production of commercial, public, and social goods, or whether it evolves into a commercial infrastructure optimized for the production and delivery of commercial outputs.

And, no regulation from Europe will be able to prevent this; once Big Tech is in Europe’s infrastructure, there won’t be a way out.

Europe has repeatedly said it wants to be a leader in innovation. Of course, it means every word of it. But, no one is – nor should be – entitled to the proceeds of technical innovation, and trying to enforce that through regulation is a bad idea.

Originally posted to the EFF Deeplinks blog.

Posted on Techdirt - 4 October 2021 @ 12:00pm

Infrastructure And Content Moderation: Challenges And Opportunities

The signs were clear right from the start: at some point, content moderation would inevitably move beyond user-generated platforms down to the infrastructure—the place where services operate the heavy machinery of the Internet and without which user-facing services cannot function. Ever since the often-forgotten 2010 incident in which Amazon stopped hosting Wikileaks under US political pressure, there has been a steady uneasiness regarding the role infrastructure providers could end up playing in the future of content moderation.

A glimpse of what this would look like came in 2017, when companies like Cloudflare and GoDaddy took affirmative action against content they considered problematic for their business models, in this case white supremacist websites that had been the subject of massive public opprobrium. Since then, that future has become the present reality, as the list of infrastructure companies performing content moderation functions keeps growing.

Content moderation has two inherent qualities that provide important context. 

First, content moderation is generally complex in real-world process design and implementation. A host of conflicting rights, diverse procedural norms and competing interests come into play every time content is posted on the Internet; each case is unique and, on some level, should be treated as such.

Second, content moderation is messy because the world is messy: the global nature of the Internet, economies of scale, societal realities and cultural differences create a multi-layered set of considerations that are difficult to reconcile. 

The bright spot in all this messiness and complexity is the hope of due process and the rule of law. The theory is that, in healthy and competitive markets, users have choice and therefore it becomes more difficult for any mistakes to scale. So, if a user’s post gets deleted on one platform, the user should have the option of posting it someplace else.

Of course, such markets are difficult to achieve, and the current Internet market is certainly not in this category. But the point here is that it is one thing to have one of your postings removed from Facebook and another to go completely offline because Cloudflare stops providing you its services. The stakes are completely different.

For a long time, infrastructure providers were smart enough to stay out of the content business. The argument was that the actors who are responsible for the pipes of the Internet should not concern themselves with the kind of water that runs through them. Their agnosticism was encouraged because their main focus was to provide other services, including security, network reliability and performance.

However, as the Internet evolved, so did the infrastructure providers’ relationship with content. 

In the early days of content moderation, what constituted infrastructure was more discernible and structured. People would usually refer to the Open Systems Interconnection (OSI) model as a useful analogy, especially with policy makers who were trying to identify the roles and responsibilities various companies held in the Internet ecosystem.

The Internet of today, however, is very different. The layers of the Internet are no longer cleanly distinguishable and, in many cases, participating actors are not operating at just the infrastructure or the application layer. At the same time, as applications on the Internet gained in popularity and use, innovation started moving upstream.

“Infrastructure” is now being nested on top of other “infrastructure” all within just layer 7 of the OSI stack. Things are not as clear-cut.
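To illustrate this nesting, consider the hypothetical request path sketched below in Python. The provider roles are illustrative assumptions, not a map of any real deployment; the point is that every hop is “infrastructure” to the one above it, yet all of them operate at layer 7.

```python
# A single HTTPS request to a modern site may traverse several distinct
# "infrastructure" providers, all of them application-layer services.
request_path = [
    ("DNS resolver",        "resolves the site's name"),
    ("CDN / reverse proxy", "terminates TLS, caches and filters traffic"),
    ("Cloud load balancer", "routes the request inside a cloud provider"),
    ("Hosting platform",    "runs the site owner's application code"),
]

for provider, role in request_path:
    # Each hop is "infrastructure" to the one above it; any of them can,
    # in practice, refuse to carry the content.
    print(f"{provider:22} -> {role}")
```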

Seen this way, we should not be surprised that content moderation conversations are gradually moving downstream. A cloud provider that supports a host of different websites, platforms, news outlets or businesses will inevitably have to deal with issues of content.

A content delivery network (CDN) will unquestionably face, at some point, the moral dilemma of providing its services to businesses that walk a tightrope with harmful or even illegal content. It really comes down to a simple equation: if user-generated platforms don’t do their job, infrastructure providers will have to do it for them. And, they do. Increasingly often. 

If this is the reality, the question becomes what is the best way for infrastructure providers to do moderation considering current practices of content moderation, the significant chilling effects, and the often-missed trade-offs. 

If we are to follow the “framework, tools, principles” triad, we should be mindful not to reinvent any existing ecosystem. Content moderation is not new and, over the years, a combination of laws and self-regulatory norms has ensured a relatively consistent, predictable and stable environment—at least most of the time.

Section 230 of the CDA in the US, the eCommerce Directive in Europe, Marco Civil in Brazil and other laws around the world have succeeded in creating a space where users and businesses could manage their affairs effectively and know that judicial authorities would treat their cases equally. 

For content moderation at the infrastructure level, a framework based on certainty and consistency is even more of a priority. Legal theory instructs that a lack of consistency can diminish the development of norms or undermine the way existing ones manifest themselves. In a similar vein, a lack of certainty means the inability to organize one’s affairs in a way that complies with the law. For infrastructure providers that support basic, day-to-day functions of the Internet, such a framework is indispensable.

I often say that the Internet is not a monolith. This is not only to demonstrate how the Internet was never meant to perform one single thing, but also to show the importance of designing a legal framework that behaves the same. When we talk about predictability and certainty, we must be conscious of putting in place requirements of clarity, stability and intelligibility so that participating actors can make calculated and informed decisions about the legal consequences of their actions. That’s what made Section 230 a success for more than two decades.

Frameworks without appropriate tools to implement and assess them, however, mean little. Tools are important because they can help maximize the benefits of processes, ultimately increasing flexibility, reducing complexity, and ensuring clarity. Content moderation has consistently suffered from a lack of tools that could clearly exhibit the effects of moderation. Think, for instance, of all the times content is taken down and there is no way to say what the true effect is on free speech and on users.

In this context, we need to think of tools as things that would allow us to better understand the scale and chilling effect of content moderation at the infrastructure level. Here is what I wrote about this last year:

“A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and, transparent (on how decisions are made).”

In this sense, tools and frameworks are co-dependent. 

Finally, the legitimacy of any framework and of any tools depends on the existence of principles. In content moderation, it is not the lack of principles that is the problem; on the contrary, it is the abundance of them. Big companies have their own Terms of Service (ToS), states operate within their own legal frameworks, and then there is the Internet, which is designed under its own set of principles. Too many principles inevitably create too many conflicts and, therefore, consensus becomes important. 

The Santa Clara Principles on transparency and accountability in content moderation have that consensus. Negotiated through a collaborative and inclusive process, they offer a roadmap for content moderation and remove certain obstacles in the process. Their strength lies in their simplicity and straightforwardness. In a similar vein, the properties of the Internet constitute a solid guide for understanding the potential unintended consequences of content moderation by infrastructure providers.

The design of the Internet is very specific and adheres to some normative principles that have existed ever since its inception. In fact, the Internet’s blueprint has not changed much since it was sketched a few decades ago. Ensuring that these principles become part of the consideration process is key. 

There are still plenty of unknowns in content moderation at the infrastructure layer. But, there are also quite a few things we do know: the first thing is that scale plays a significant factor. Moderating content down the stack is not just about speech; in many cases, it can be about being present on the Internet. 

The second thing we know is that the general principle of infrastructure actors being allowed to provide their services agnostically of the content they carry should continue to hold as the default. There might be cases where they need to engage, but that should be the exception rather than the rule, and it should abide by the “frameworks, tools and principles” identified above.

Finally, the third thing we know is that content moderation is evolving in ways that could now directly affect the future of the Internet. Ensuring that the role and responsibilities of infrastructure providers is appropriately scoped will be content moderation’s greatest challenge yet! 

Konstantinos Komaitis is a veteran of developing and analysing Internet policy to ensure an open and global Internet. Konstantinos has spent almost ten years in active policy development and strategy as a Senior Director at the Internet Society, and is currently a policy fellow at the Brave New Software Foundation. Before that, he spent 7 years researching and teaching at the University of Strathclyde, Glasgow, UK.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we’ll have many of this series’ authors discussing and debating their pieces in front of a live virtual audience (register to attend here).

Posted on Techdirt - 20 May 2021 @ 03:42pm

G7 And Technical Standards: Blink And You Might Have Missed The New Battleground

Amid all the news about the third wave of the COVID-19 pandemic and the politics behind the vaccination roll out, you might have missed the Ministerial Declaration from the G7 Digital and Technology Ministers’ meeting. As per tradition, the G7 Digital Ministerial provides the opportunity for the seven richest countries of the world to declare their commitments and vision on the type of digital future they would like to see. The document is non-binding, but it tends to provide useful insights into the way the G7 countries view digital issues and their future positions in multilateral fora; it is also informative of other, more formal, multilateral processes. On 28 April 2021, a statement was made addressing key technology issues and opportunities including security in ICT supply chains, Internet safety, free data flows, electronic transferable records, digital competition and technical standards.

Yes, you read that right – technical standards. In the last several years, technical standards have moved from the realm of engineers into wider politics. News stories have been replete with China’s efforts to become a competitive force on 5G, AI and facial recognition standards, and its wish for those standards to be developed internationally based on its national rules, culture and technology. But the public eye turned more closely to China when it was discovered that the facial recognition standards it was developing in the UN system came from companies on the US sanctions list and were used by China for monitoring Uighurs.

None of this is new. For the past few years, and for anyone who has been paying attention, China has been strategically positioning itself in various standards bodies, realizing that shifting from a unipolar to a multipolar world order cannot happen unless it can demonstrate a more strategic and competitive approach to the domination of the west. What was the tipping point, however, that made the seven richest countries in the world insert explicit language on standards into their declaration? Everything seems to point to the “newIP” standard proposal, recommending a change in the current Internet technology, that was put forward by Huawei and supported by China in the International Telecommunications Union (ITU). Although this new standard did not manage to pass the ITU’s study group phase, it did raise the eyebrows of the West. And, rightly so.

Historically, Internet standards have paved their own path and have largely managed to stay outside of politics. In one of the earliest Requests for Comments (RFC), the definition of a standard was specific and narrow: a standard is “a specification that is stable and well-understood, is technically competent, has multiple, independent and interoperable implementations with operations experience, enjoys significant public support, and is recognizably useful in some or all parts of the Internet”.

Traditionally, governments have had a hands-off approach to the development and deployment of standards related to the Internet; their development was part of the consensus-based, community-driven process developed and nurtured by the Internet Engineering Task Force (IETF), and their deployment was left to the market. A standard’s life has always depended on its utility and contribution to the evolution of the Internet.

This seems to be the case less and less. Over the past years, governments have shown increasing interest in the development of standards, and have sought ways to inject themselves into Internet standardization processes. There are two distinct ways that this trend has emerged. First, there’s China, which actively seeks to displace the current Internet infrastructure. That was clear in the attempt with the “newIP” proposal. China has been strategic in not directly suggesting a complete rejection of the Internet model; instead, its claims have been that the Internet cannot meet future technologies and needs and, therefore, a new infrastructure, developed and nurtured by governments, is necessary. The second trend continues to support the open, market-driven standards development processes, but seeks ways for governments to be more actively involved. This, so far, has mainly been interpreted as identifying ways to provide incentives for the creation and deployment of certain standards, often those deemed strategically important.

Even though these approaches reflect different political and governance dimensions – China supports a top-down approach over the West’s bottom-up model – they do share one commonality: in both cases, politics are becoming part of the standardization process. This is entirely unlike the past 30 years of Internet development.

This could have significant implications for the development and future of the Internet. There are benefits from the current structure: efficiency, agility and collaboration. The existing process ensures quick responses to problems. But its main advantage is really the collective understanding that standards are driven by what is “good for the Internet”; that is, what is required for the Internet’s stability, resilience and integrity.

This doesn’t mean that this process is perfect. Of course, it comes with its own limitations and challenges. But, even then, it is a tested process that has worked well for the Internet throughout most of its existence. It has worked – despite its flaws – because it has managed to keep political and cultural dimensions separate. Participants, irrespective of background, language, and political persuasion, have been collaborating successfully by having the Internet, and what’s good for it, as their main objective.

On the contrary, intergovernmental standards are driven by political differences and political motives. They are designed this way. This is not to say that governments should not be paying attention to the way standards are developed. But, it is crucial to do so in ways that do not seek to upend a model that is tested and responsive to the needs of the Internet.

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.

Dominique Lazanski is the Director of Last Press Label, and a Consultant in International Internet and Cybersecurity Standards and Policy.

Posted on Techdirt - 11 January 2021 @ 09:41am

The Slope Gets More Slippery As You Expect Content Moderation To Happen At The Infrastructure Layer

What a week the first week of January has been! As democracy and its institutions were tested in the United States, so were the Internet and its actors.

Following the invasion of Capitol Hill by protesters, social media started taking action in what appeared to be a ripple effect: first, Twitter permanently suspended the account of the President of the United States, while Facebook and Instagram blocked his account indefinitely and, at least, through the end of his term; Snapchat followed by cutting access to the President’s account, and Amazon’s video-streaming platform Twitch took a similar action; YouTube announced that it would tighten its election fraud misinformation policy in a way that would allow it to take immediate action against the President should he post misleading or false information. In the meantime, Apple also announced that it would kick Parler, the social network favored by conservatives and extremists, off its app store on the basis that it was promoting violence that threatened the integrity of US institutions.

It is the decision of Amazon, however, to kick Parler off its web hosting service that I want to turn to. Let me first make clear that if you are Amazon, this decision makes total sense from a business and public relations perspective – why would anyone want to be associated with anything that even remotely verges on extremism? The decision also falls within Amazon’s permissible scope given that, under its terms of service, Amazon reserves the right to terminate users from its networks at its sole discretion. Similarly, from a societal point of view, Amazon may be seen as upholding most people’s values. But I want to offer another perspective here. What about the Internet? What sort of message does Amazon’s decision send to the Internet and everyone who is watching?

There are several actors participating in the way a message – whether an email, cat video, voice call, or web page – travels through the Internet. Each one of them might be considered an “intermediary” in the transmission of the message. Examples of Internet infrastructure intermediaries include Content Delivery Networks (CDNs), cloud hosting services, domain name registries, and registrars. These infrastructure actors are responsible for a bunch of different things, from managing network infrastructure, to providing access to users, and ensuring the delivery of content. These – mostly – private sector companies provide investment as well as reliability and upkeep of the services we all use.

In the broadcasting world, a carrier also controls the content being broadcast; on the Internet, however, an actor responsible for the delivery of infrastructure services (e.g., an Internet Service Provider or a cloud hosting provider) is neither likely nor expected to be aware of the content of the messages it carries. They simply do not care about the content; it is not their job to care. Their one and only responsibility is to relay packets on the Internet to other destinations. Even if, for the sake of argument, they were to care, at the end of the day, they are not the producers of the content. Like postal and telephone services, they have the essential role of carrying the underlying message efficiently.
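As a toy illustration of this agnosticism (a sketch with a made-up relay function and address, not any provider’s real code), an infrastructure intermediary forwards opaque bytes toward a destination without ever parsing what they mean:

```python
def relay(packet: bytes, destination: str, send) -> None:
    # The relay sees only routing information (the destination address);
    # the payload is opaque bytes, and judging its meaning is not its job.
    send(destination, packet)

# The payload could be an email, a chunk of a cat video, or a web page;
# the relay behaves identically in every case.
relay(b"GET /index.html HTTP/1.1\r\n", "203.0.113.7",
      lambda dest, pkt: print(f"forwarded {len(pkt)} bytes to {dest}"))
```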

Over the past year, the role and responsibility of intermediaries has been placed under the policy microscope. The focus is currently on user-generated content platforms, including Facebook, Twitter and YouTube. In the United States, policy makers on both sides of the aisle have been reconsidering the role of intermediaries in disseminating dis- and mis-information. Section 230, the law that has systematically, consistently and predictably shielded online platforms from liability over the content their users post, has been highly politicized, and change now seems almost inevitable. In Europe, after a year of intense debate, the newly released Digital Services Act has largely upheld the long-standing intermediary liability regime, but there are still implementation details that could see some change (e.g., the provisions on ‘trusted flaggers’).

It is actions like the one Amazon took against Parler, however, that go beyond issues of speech and can set a precedent with an adverse effect on the Internet and its architecture. By denying cloud hosting services, Amazon is essentially taking Parler offline and denying its ability to operate, unless the platform can find another hosting service. This might seem a good thing, prima facie; at the end of the day, who wants such content to even exist, let alone circulate online? But it sends quite a dangerous message: since infrastructure intermediaries can take action that cuts the problem off at its root (i.e., taking a service completely offline), regulators might start looking to them to “police” the Internet. In such a scenario, infrastructure intermediaries would have to deploy content-blocking measures, including IP and protocol-based blocking, deep packet inspection (i.e., viewing the content of “packets” as they move across the network), and URL and DNS-based blocking. Such measures ‘over-block’, imposing collateral damage on legal content and communications. They also interfere with the functioning of critical Internet systems, including the DNS, and compromise Internet security, integrity, and performance.
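Why such measures over-block is easy to see in miniature. The sketch below (hypothetical hostnames and an example address, purely for illustration) shows an IP-based block aimed at one site taking down every unrelated site that happens to share the same address:

```python
# Many unrelated sites can share one IP address (or one hosting service),
# so blocking at that layer removes them all.
shared_ip = "198.51.100.10"
sites_on_ip = {
    "targeted-site.example": shared_ip,  # the site the block is aimed at
    "local-news.example": shared_ip,     # lawful site, same host
    "small-shop.example": shared_ip,     # lawful site, same host
}

blocked_ips = {shared_ip}  # an IP-based block aimed at one site

for host, ip in sites_on_ip.items():
    status = "BLOCKED" if ip in blocked_ips else "reachable"
    print(f"{host:25} {status}")
# All three sites go dark, though only one was the target:
# collateral damage on legal content.
```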

What Amazon did is not unprecedented. In 2017, Cloudflare took a similar action against the Daily Stormer website when it stopped answering DNS requests for their sites. At the time, Cloudflare said: “The rules and responsibilities for each of the organizations [participating in Internet] in regulating content are and should be different.” A few days later, in an op-ed, published at the Wall Street Journal, Cloudflare’s CEO, Matthew Prince said: “I helped kick a group of neo-Nazis off the internet last week, but since then I’ve wondered whether I made the right decision.[…] Did we meet the standard of due process in this case? I worry we didn’t. And at some level I’m not sure we ever could. It doesn’t sit right to have a private company, invisible but ubiquitous, making editorial decisions about what can and cannot be online. The pre-internet analogy would be if Ma Bell listened in on phone calls and could terminate your line if it didn’t like what you were talking about.”

Most likely Amazon faced the same dilemma; or it might not have. One thing, however, is certain: so far, none of these actors appears to be considering the Internet and how some of their actions may affect its future and the way we all may end up experiencing it. It is becoming increasingly important that we start looking into the subtle, yet extremely significant, differences between moderation by user-generated content platforms and moderation by infrastructure providers.

It is about time we make an attempt to understand how the Internet works. From where I am sitting, this past year has been less lonely and semi-normal because of the Internet. I want it to continue to function in a way that is effective; I want to continue seeing the networks interconnecting and infrastructure providers focusing on what they are supposed to be focusing on: providing reliable and consistent infrastructure services.

It is about time we show the Internet we care!

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.

Posted on Techdirt - 16 November 2020 @ 12:11pm

Upload Filters And The Internet Architecture: What's There To Like?

In August 2012, YouTube briefly took down a video that had been uploaded by NASA. The video, which depicted a landing on Mars, was caught by YouTube’s Content ID system as a potential copyright infringement case; but, like everything else NASA creates, it was in the public domain. Then, in 2016, YouTube’s automated algorithms removed another video, this time a lecture by a Harvard Law professor that included snippets of various songs ranging from 15 to roughly 40 seconds. Of course, use of copyrighted material for educational purposes is perfectly legal. Examples of unwarranted takedowns are not limited to these two. Automated algorithms have been responsible for taking down perfectly legitimate content relating to marginalized groups, political speech or the mere existence of information that relates to war crimes.

But the over-blocking of content through automated filters is only one part of the problem. A few years ago, automated filtering was somewhat limited in popularity, used by only a handful of companies; over the years, however, filters have increasingly become the go-to technical tool for policy makers wanting to address any content issue — whether copyright infringement or any other form of objectionable content. In particular, in the last few years, Europe has been championing upload filters as a solution for the management of content. Although never explicitly mentioned, upload filters started appearing as early as 2018 in various Commission documents, but became a tangible policy tool in 2019 with the promulgation of the Copyright Directive.

Broadly speaking, upload filters are technology tools that platforms such as Facebook and YouTube use to check whether content published by their users falls within any of the categories of objectionable content. They are not new – YouTube’s Content ID system dates back to 2007; they are also not cheap – YouTube’s Content ID reportedly cost $100 million to build. Finally, they are ineffective, as machine learning tools will always over-block or under-block content.
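To make the over/under-blocking point concrete, here is a deliberately simplified sketch of the matching step inside an upload filter. Real systems such as Content ID use perceptual fingerprints rather than exact hashes; the exact-hash version below is an assumption for illustration, but it exhibits both failure modes.

```python
import hashlib

# Fingerprints of claimed works (illustrative: a single fake file).
BLOCKLIST = {hashlib.sha256(b"claimed-song.mp3").hexdigest()}

def allowed(upload: bytes) -> bool:
    # The filter sees only the fingerprint, never the context of use.
    return hashlib.sha256(upload).hexdigest() not in BLOCKLIST

# Over-blocking: a lawful use (a fair-use lecture clip, a public-domain
# copy) of the same bytes is still rejected, because context is invisible.
print(allowed(b"claimed-song.mp3"))    # False: blocked regardless of context

# Under-blocking: a trivially altered copy slips through.
print(allowed(b"claimed-song.mp3 "))   # True: one extra byte evades the match
```

Perceptual matching narrows the under-blocking gap but widens over-blocking, which is how a NASA video or a law lecture ends up removed.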

But even with these limitations, upload filters continue to be the preferred option for content policy making. Partly, this is because policy makers depend on online platforms to offer technology solutions that can scale and moderate content en masse. Another reason is that eliminating content through takedowns is perceived to be easier and to have an instant effect. In a world where more than 500 hours of content are uploaded to YouTube every minute and 350 million photos are posted daily on Facebook, technology solutions such as upload filters appear more desirable than the alternative of leaving the content up. A third reason is the computer-engineering bias of the industry. Typically, when you build programmed systems, you follow a pretty much predetermined route: you identify a gap, build something to fill that gap (and, hopefully, in the process make money at it), and then iteratively fix bugs in the program as they are uncovered. Notice that in this process, the question of whether the problem is best solved by building software is never asked. This has been the case with upload filters.

As online platforms become key infrastructure for users, however, the moderation practices they adopt are not only about content removal. Through such techniques, online platforms undertake a governance function, which must ensure the productive, pro-social and lawful interaction of their users. Governments have depended on platforms carrying out this function for quite some time but, over the past few years, they have become increasingly interested in setting the rules for social network governance. To this end, there seems to be a trend of several new regional and national policies that mandate upload filters for content moderation.

What is at stake? 

The use of upload filters and the legislative efforts to promote them and make them compulsory is having a major effect on Internet infrastructure. One of the core properties of the Internet is that it is based on an open architecture of interoperable and reusable building blocks. In addition to this open architecture, technology building blocks work together collectively to provide services to end users. At the same time, each building block delivers a specific function. All this allows for fast and permissionless innovation everywhere.

User-generated-content platforms are now inserting automated filtering mechanisms deep in their networks to deliver services to their users. Platforms with significant market power have convened a forum called the Global Internet Forum to Counter Terrorism (GIFCT), through which approved participants (but not everyone) collaborate to create shared upload filters. The idea is that these filters are interoperable amongst platforms, which, prima facie, is good for openness and inclusiveness. But allowing the design choices of filters to be made by a handful of companies turns them into de facto standards bodies. This provides neither inclusivity nor openness. To this end, it is worrisome that some governments appear keen to empower, and perhaps anoint, this industry consortium as a permanent institution for anyone who accepts content from users and republishes it. In effect, this makes an industry consortium, with its design assumptions, a legally-required and permanent feature of Internet infrastructure.

Convening closed consortiums, like the GIFCT, combined with governments’ urge to make upload filters mandatory can violate some of the most important Internet architecture principles: ultimately, upload filters are not based on collaborative, open, voluntary standards but on closed, proprietary ones, owned by specific companies. Therefore, unlike traditional building blocks, these upload filters end up not being interoperable. Smaller online platforms will need to license them. New entrants may find the barriers to entry too high. This, once again, tilts the scales in favor of large, incumbent market players and disadvantages an innovator with a new approach to these problems.

Moreover, mandating GIFCT tools, or any other technology, determines the design assumptions underpinning the upload filter framework. Upload filters function as a sort of panopticon device operated by social media companies. But if the idea is to design a social media system that is inherently resistant to this sort of surveillance (for instance, one whose communications are visible only to their users), then upload filters simply cannot work. In effect, mandating GIFCT tools further determines what sort of system design is or is not acceptable. This makes the regulation invasive, because it undermines the "general purpose" nature of the Internet: some purposes get ruled out under this approach.

The current policy objective of upload filters is twofold: regulating content and taming the dominance by certain players. These are legitimate objectives. But, as technology tools, upload filters fail on both counts: not only do they have limitations in moderating content effectively, but they also cement the dominant position of big technology companies. Given the costs of creating such a tool and the requirement for online platforms to have systems that ensure the fast, rigorous and efficient takedown of content, there is a trend emerging where smaller players depend on the systems of bigger ones.

Ultimately, upload filters are an imperfect and ineffective solution to our Internet and social media governance problems. They don’t reduce the risk of recidivism; they eliminate individual instances of a problem, not its recurrence. Aside from the fact that upload filters cannot solve societal problems, mandated upload filters can adversely affect Internet architecture. Generally, the Internet’s architecture can be harmed by unnecessary technology tools like deep packet inspection, DNS blocking or upload filters. These tools produce consequences that run counter to the benefits the Internet is expected to deliver: they compromise its flexibility and prevent it from continuously serving a diverse and constantly evolving community of users and applications. Instead, they require significant changes to networks in order to support their use.

Overall, there is a real risk that upload filters become a permanent feature of the Internet architecture and online dialogue. This is not a society that any of us should want to live in – a society where speech is determined by software that will never be able to grasp the subtlety of human communication.

Konstantinos Komaitis is the Senior Director, Policy Strategy at the Internet Society

Farzaneh Badiei is the Director of the Social Media Governance Initiative at Yale Law School. 

Posted on Techdirt - 11 August 2020 @ 03:37pm

The Silver Lining Of Internet Regulation: A Regulatory Impact Assessment

To design better regulation for the Internet, it is important to understand two things: first, that today’s Internet, despite how much it has evolved, still depends on its original architecture; and second, that preserving this design is important for drafting regulation that is fit for purpose. On top of this, the Internet invites a certain way of networking; let’s call it the Internet way of networking. There are many types of networking out there, but the Internet way ensures interoperability and global reach and operates on building blocks that are agile, while its decentralized management and general-purpose nature further ensure its resilience and flexibility. Rationalizing this, however, can be daunting, because the Internet is multifaceted, which makes its regulation complicated. The entire regulatory process involves reconciling a complex mix of technology and social rules that can be incompatible and, in some cases, irreconcilable. Policy makers are therefore frequently required to make tough choices, which sometimes strike the desired balance and other times lead to a series of unintended consequences.

Europe’s General Data Protection Regulation (GDPR) is a good example. The purpose of the regulation was simple: fix privacy by providing a framework that would allow users to understand how their data is being used, while forcing businesses to alter the way they treat the data of their customers. The GDPR was set to create much-needed standards for privacy on the Internet and, despite continuous enforcement and compliance challenges, this has largely been achieved. But when it comes to its effect on the Internet, the GDPR has posed some challenges. Almost two months after it went into effect, it was reported that more than 1000 websites had been affected, becoming unavailable to European users. And even now, two years on, fragmentation continues to be an issue.

So, what is there to do? How can policy makers strike a balance between addressing social harms online and policies that do not harm the Internet?

A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and transparent (on how decisions are made).

This type of thinking can work to the advantage of the Internet. The Internet is an intricate system of interconnected networks that operates according to certain rules. It consists of a set of fundamental properties that contribute to its flexible and agile character, while ensuring its continuous relevance and constant ability to support emerging technologies; it is self-perpetuating in the sense that it systematically evolves while its foundation remains intact. Understanding and preserving the idiosyncrasy of the Internet should be key in understanding how best to approach regulation.

In general, determining the context, scope and breadth of Internet regulation is important for deciding whether regulation is needed and what impact it may have. The first step is asking the questions policy makers normally contemplate when seeking to make informed choices. These include: Does the proposed new rule solve the problem and achieve the desired outcome? Does it balance problem reduction with other concerns, such as costs? Does it result in a fair distribution of the costs and benefits across segments of society? Is it legitimate, credible and trustworthy? But there should be an additional question: Does the regulation create any consequences for the Internet?

Actively seeking answers to these questions is vital because regulation is generally risky, and risks arise from acting as well as from not acting. To appreciate this, imagine if the choices made in the early days of the Internet had dictated a heavy regulatory regime for the deployment of advanced telecommunications and information technologies. The Internet would, most certainly, not have been able to evolve the way it has and, equally, the quality of regulation would suffer.

In this context, the scope of regulation is important. The fundamental problem with much of the current Internet regulation is that it seeks to fix social problems by interfering with the underlying technology of the Internet. Across a wide range of policymaking, we know that solely technical fixes rarely fix social problems. It is important that governments do not regulate aspects of the Internet that could be seen as compromising network interoperability, to solve societal problems. This is a “category error” or, more elaborately, a misunderstanding of the technical design and boundaries of the Internet. Such a misunderstanding tends to confuse the salient similarities and differences between the problem and where this problem occurs; it not only fails to tackle the root of the problem but causes damage to the networks we all rely on. Take, for instance, data localization rules, which seek to force data to remain within certain geographical boundaries. Various countries, most recently India, are trying to forcibly localize data, and risk impeding the openness and accessibility of the global Internet. Data will not be able to flow uninterrupted on the basis of network efficiency; rather, special arrangements will need to be put in place in order for that data to stay within the confines of a jurisdiction. The result will be increased barriers to entry, to the detriment of users, businesses and governments seeking to access the Internet. Ultimately, forced data localization makes the Internet less resilient, less global, more costly, and less valuable.

This is where a regulatory risk impact analysis can come in handy. The introduction of a risk impact analysis shows policy makers how to make informed choices about which regulatory claims can or cannot possibly be true. This would require a shift in the behavior of policy makers, from focusing solely on process to a more performance-oriented and results-based approach.

This sounds more difficult than it actually is. Jurisdictions around the world are accustomed to performing regulatory impact assessments, which have been successfully integrated into many governments’ policy making processes for more than 35 years. So, why can’t the same be done for Internet regulation?

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.
