Caroline De Cock's Techdirt Profile

Posted on Techdirt - 22 August 2025 @ 03:30pm

AI Training: What Creators Need To Know About Copyright, Tokens, And Data Winter

This is the final piece in a series of posts that explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first, second, third, fourth, fifth, and sixth posts in the series.

As the conversation about AI’s impact on creative industries continues, there’s a common misconception that AI models are “stealing” content by absorbing it for free. But if we take a closer look at how AI training works, it becomes clear that this isn’t the case at all. AI models don’t simply replicate or repackage creative works—they break them down into something much more abstract: tokens. These tokens are tiny, fragmented pieces of data that no longer represent the creative expression of an idea. And here’s where the distinction lies: copyright is meant to protect expression, not individual words, phrases, or patterns that make up those works.

The Lego Analogy: Breaking Down Creative Works into Tokens

Imagine you’re a creator, and your work is like a detailed Lego model of the Star Wars Millennium Falcon. It’s intricate, with every piece perfectly assembled to create something unique and valuable. Now imagine that an AI system comes along—not to take your Millennium Falcon and display it as its own creation, but to break it down into individual Lego blocks. These blocks are then scattered among millions of others from different sources, and the AI uses them to build entirely new structures—things that look nothing like the Millennium Falcon.

In this analogy, the Lego blocks are the tokens that AI models use. These tokens are fragments of data—tiny bits of information stripped of the original context and creative expression. Just like Lego pieces, tokens are abstract and can be recombined in an infinite number of ways to create something entirely new. The AI doesn’t copy your Falcon; it takes the building blocks (tokens) and uses them to create something that’s not a replica of the original but something completely different, like a castle or a spaceship you’ve never seen before.

This is the key distinction: AI models aren’t absorbing entire creative works and reproducing them as their own. They’re learning patterns from vast datasets and using those patterns to generate new content. The tokens no longer reflect the expression of the original work, and thus, they don’t infringe on the creative essence that copyright law is designed to protect.
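To make this concrete, here is a minimal sketch of what tokenization looks like in code. The toy vocabulary and greedy matching rule below are invented for illustration—real systems learn subword vocabularies from data, for instance via byte-pair encoding—but the effect is the same: text is reduced to a sequence of opaque numeric fragments.

```python
# A toy tokenizer. The vocabulary and matching rule are invented for
# illustration; real tokenizers learn subword units (e.g., byte-pair
# encoding), but the principle is identical: text becomes opaque IDs.
TOY_VOCAB = {"the": 0, "mill": 1, "enn": 2, "ium": 3, "fal": 4, "con": 5, " ": 6}

def tokenize(text, vocab):
    """Greedily match the longest known fragment at each position."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j].lower()
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            i += 1  # no fragment matched: skip this character
    return ids

print(tokenize("The Millennium Falcon", TOY_VOCAB))
# [0, 6, 1, 2, 3, 6, 4, 5] -- fragments of data, not the work itself
```

The output is just a list of integers: the “Lego blocks” the model actually works with, carrying none of the expression of the original.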

Why Recent Content Matters: AI Needs to Reflect Modern Language and Values

There’s another critical point that often gets overlooked: AI models need access to recent, contemporary content to be useful, relevant, and ethical. Let’s imagine for a moment what would happen if AI models were restricted to learning only from public domain works, many of which are decades or even centuries old.

While public domain works are valuable, they often reflect the social norms and biases of their time. If AI models are trained primarily on outdated texts, there’s a serious risk that they could “speak” in a way that’s misogynistic, biased, anti-LGBTQ+, or even outright racist. Many public domain works contain language and ideas that are no longer acceptable in today’s society, and if AI is limited to these sources, it may inadvertently propagate harmful, antiquated views.

To ensure that AI reflects current values, inclusive language, and modern social norms, it needs access to recent content. This means analyzing and learning from today’s books, articles, speeches, and other forms of communication. If creators and copyright holders opt out of allowing their content to be used for AI training, we risk creating models that don’t reflect the diversity, progress, and inclusivity of modern society.

For example, language evolves quickly—just look at the increased use of gender-neutral pronouns or terms like intersectionality in recent years. If AI is cut off from these contemporary linguistic trends, it will struggle to understand and engage with the world as it is today. It would be like asking an AI trained exclusively on Shakespearean English to have a conversation with a 21st-century teenager—it simply wouldn’t work.

Article 4 of the EU Directive: Opting Out of Text and Data Mining

Let’s bring the EU Directive on Copyright in the Digital Single Market (DSM) into the picture. The Directive includes provisions (Article 4) allowing copyright holders to opt out of having their content used in text and data mining (TDM). TDM is crucial for training AI models, as it allows them to analyze and learn from large datasets. The opt-out mechanism gives creators and copyright holders the ability to expressly reserve their works from being used for TDM.
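In practice, one common machine-readable way a publisher signals such a reservation today is a robots.txt file that blocks known AI-training crawlers, as sketched below with two well-known bot names (OpenAI’s GPTBot and Common Crawl’s CCBot). Whether robots.txt alone satisfies Article 4’s “machine-readable” standard is still debated; treat this as an illustration rather than legal advice.

```
# Sketch of a robots.txt expressing a TDM/AI-training reservation by
# disallowing two well-known crawlers. Whether this format meets the
# Article 4 machine-readability requirement remains an open question.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```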

However, it’s important to remember that this opt-out applies to all AI models, not just generative AI systems like ChatGPT. This means that by opting out in a broad, blanket manner, creators could inadvertently limit the potential of AI models that have nothing to do with creative industries—tools that are critical for advancements in healthcare, education, and even in day-to-day conveniences that many of us benefit from.

The Risk of a Data Winter: Why Broad Opt-Outs Could Harm Innovation

What happens if creators and copyright holders across Europe start opting out of TDM on a large scale? The answer is something AI researchers dread: a data winter. Without access to a diverse and rich array of data, AI models will struggle to evolve. This could slow innovation not just in the creative industries, but across the entire economy.

AI needs high-quality data to function properly. The principle of Garbage In, Garbage Out applies here: if AI models are starved of diverse input, their output will be flawed, biased, and of lower quality. And while this may not seem like an issue for some industries, it has a ripple effect. Every AI tool we rely on—from smart assistants to medical research applications—depends on robust training data. Restricting access to this data doesn’t just hinder progress in AI innovation; it stifles public interest tools that have far-reaching benefits for society.

Think about it: many creators themselves probably use AI-driven tools in their daily lives—whether it’s for streamlining workflows, generating new ideas, or even just organizing information. By opting out of TDM, they could inadvertently be damaging the very tools that enhance their own creative processes.

The Way Forward: Balance Between Protection and Innovation

While copyright is crucial for protecting creators and ensuring fair compensation, it’s equally important not to over-regulate in a way that stifles innovation. AI models aren’t absorbing entire works for free; they’re breaking them down into unrecognizable tokens that enable transformative uses. Rather than opting out of TDM as a knee-jerk reaction, creators should consider the long-term consequences of limiting AI’s potential to innovate and enhance their own industries.

A balance needs to be struck. Copyright protection should ensure that creators are fairly compensated, but it shouldn’t be wielded as a tool to restrict the very data that drives AI innovation. Creators and policymakers must recognize that AI isn’t the enemy—it’s a collaborator. And if we’re not careful, we might find ourselves facing a data winter, where the tools we rely on for both convenience and advancement are weakened due to short-sighted decisions.

Posted on Techdirt - 15 August 2025 @ 03:49pm

When Copyright Enters the AI Conversation

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first, second, third, fourth, and fifth posts in the series.

Whenever content is involved, copyright enters the conversation. And when we talk about AI, we’re talking about systems that absorb petabytes of content to meet their training needs. So naturally, copyright issues are at the forefront of the debate.

Interestingly, copyright usually only becomes an issue when there’s the perception that someone or something is successful—and that copyright holders are missing out on potential control or revenues. For decades, “reading by robots” has been a part of our digital lives. Just think of search engines crawling billions of pages to index them. These robots read far more content than any human ever could. But it wasn’t until AI began learning from this content—and, more crucially, producing content that appeared successful—that the rules inspired by the Statute of Anne of 1710 came into play.

The Input Side: Potential Innovation and the Garbage In, Garbage Out Principle

On the input side, generative AI relies heavily on the data it consumes, but under EU law, its access is carefully regulated. The 2019 EU Directive on Copyright in the Digital Single Market (DSM) sets the framework for text and data mining (TDM). Article 3 of the Directive permits TDM for scientific research only, while Article 4 allows it more broadly—provided the rightsholder hasn’t expressly reserved their rights.

With the AI Act adopted in 2024 referring to these provisions, we’re left with a raft of questions about the future of AI models. One of the key concerns is the potential for a data winter—a scenario where AI models face limited access to the data they need to evolve and improve.

This brings us to a fundamental concept in AI—Garbage In, Garbage Out. AI models are only as good as the data they are trained on. If access to high-quality, diverse datasets is restricted by rigid copyright rules, AI systems will end up training on lower-quality data. Poor-quality data leads to unreliable, biased, or outright inaccurate AI outputs. Just as a chef can only make a great dish with fresh ingredients, AI needs high-quality input to deliver reliable, innovative, and useful results. Restricting access due to copyright concerns risks leading AI into a “data winter” where innovation freezes, limited by the garbage fed into the system.
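Here is the principle in miniature, with invented numbers and a deliberately trivial “model”: the same procedure gives a sensible answer on curated input and a useless one once junk is mixed in.

```python
# Garbage In, Garbage Out in miniature. The data and the "model"
# (a plain average) are invented purely for illustration.
clean = [20.1, 19.8, 20.3, 20.0]    # curated readings
garbage = clean + [999.0, -40.0]    # same data plus junk entries

def model(data):
    """A trivially simple 'model': estimate by averaging its input."""
    return sum(data) / len(data)

print(model(clean))    # 20.05  -- sensible estimate
print(model(garbage))  # 173.2  -- ruined by the bad input
```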

A data winter not only stifles technological advancement but also risks widening the gap between regions that enforce stricter copyright policies and those that embrace more flexible rules. Ultimately, Europe’s global competitiveness in AI hinges on whether it can provide an environment where AI can access the data it needs without unnecessary restrictions.

But access to diverse data is also important from a cultural perspective: if AI is trained predominantly on Anglo-Saxon or non-European content, it naturally reflects those cultures in its outputs. This could mean that European creativity becomes increasingly marginalised, with AI-generated content lacking in cultural relevance and failing to reflect the diversity of Europe. AI should be a tool that amplifies the diversity of human expression, not one that homogenises it.

Challenges on the Output Side: Copyright Protection for AI-Generated Content

Now let’s look at the output side of generative AI. The assumption that creative works, like movies, video games, or books, are automatically protected by copyright may not apply to AI-generated content. The traditional protection of creative expression hinges on human authorship, and while creative elements like prompt choices could be considered for copyright, the level of protection will likely be much lower than expected. This could mean that parts of a work—such as AI-generated backgrounds in video games or movies—could be freely copied by others.

This uncertainty could lead to increased pressure from creative industries to modify copyright law, pushing for more familiar levels of protection that might extend copyright to currently unprotected AI-generated content. If such changes happen, we could end up in a spiral where access to knowledge becomes more restricted, stifling creativity and innovation. We’ve seen similar debates before—most notably during the advent of photography, when early courts struggled to determine whether machine-created works could be protected.

The path forward requires a careful balancing act: we need copyright laws that protect human creativity and labour without hampering access to the data that AI—and society—need to innovate and grow. By avoiding a data winter and ensuring AI systems have access to diverse, quality inputs, we can harness AI’s potential to drive the creative industries forward, rather than allow outdated copyright rules to drag progress backward.

Posted on Techdirt - 1 August 2025 @ 03:29pm

Creativity, The Fifth Freedom & Access To Knowledge

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first, second, third, and fourth posts in the series.

In April 2007, Janez Potočnik, then European Commissioner for Science and Research, introduced the concept of the Fifth Freedom: the “freedom of knowledge.” His vision was simple but ambitious—enhance Europe’s ability to remain competitive through knowledge and innovation, the cornerstones of prosperity. Fast forward to today, the momentum for this Fifth Freedom is building once again, with both the Letta Report and the Mission Letter of the new EU Commissioner for Startups, Research, and Innovation emphasizing its significance.

But how does this freedom of knowledge intersect with creativity and copyright?

AI, Learning, and the Limits of Copyright

Machine learning (ML) systems learn in a way strikingly similar to humans—by observing and copying. This raises an important question: should ML systems be allowed to freely use copyrighted materials as part of their learning process? The answer is not just about technology; it goes to the heart of what copyright law aims to protect.

Traditionally, copyright protects the expression of ideas, not the ideas themselves. This is an important distinction because it allows others to take inspiration, innovate, and build upon ideas without infringing on someone else’s creative output. When an ML system is trained, it doesn’t care about specific creative choices—like the lighting or composition of a photo. It just wants to learn the underlying pattern, such as recognizing a stop sign. Similarly, a natural language model uses written text not because it appreciates the author’s unique writing style, but because it needs to learn the structure of language.
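Here is a minimal sketch of what “learning structure, not expression” means in practice, using an invented three-sentence corpus: a simple bigram model keeps only word co-occurrence counts, and no sentence from the corpus survives intact.

```python
# Learning patterns, not expression: a bigram model reduces its
# (invented) training corpus to bare word co-occurrence statistics.
from collections import Counter, defaultdict

corpus = [
    "the stop sign is red",
    "the sky is blue",
    "the stop sign is octagonal",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

# The "model" is only statistics; no source sentence is stored intact.
print(bigrams["is"].most_common())   # [('red', 1), ('blue', 1), ('octagonal', 1)]
print(bigrams["stop"].most_common()) # [('sign', 2)]
```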

Humans also do this all the time. We often replicate expressions when learning, but our goal is not to plagiarize someone’s unique creative touch—it’s to grasp the idea behind it. This concept is embedded in many legal precedents. For instance, in the American Geophysical Union v. Texaco case, photocopying was used not for the beauty of the prose, but simply as a convenient way to access scientific ideas. Similar issues arise in cases about software interoperability, functional objects like clothing designs, and even in disputes over yoga routines. Copyright should protect creative expression—not the ideas, facts, or functional elements that underpin them.

Why This Matters for Machine Learning

This distinction is particularly important for ML. If we allow copyright law to get in the way of machines learning from data for purely non-expressive purposes, we’re potentially hampering technological advancement. Allowing ML systems to copy for learning—without trying to replicate the creative aspects of the original work—is essential for innovation. This is not just a matter of advancing technology but also of staying true to the spirit of copyright law, which is meant to balance the interests of creators and the public good.

However, as Professor Lemley has pointed out from a U.S. law perspective, the freedom for ML to learn should have limits. If an ML system is being trained to create a song that mimics the style of Ariana Grande, it’s no longer just about learning—it’s about copying a creative expression. In such cases, the question of whether it qualifies as fair use becomes much tougher. Yet, even here, it’s crucial that copyright doesn’t end up controlling unprotectable elements like a musical genre or a broad artistic style.

Finding the Balance: Innovation and Protection

The concept of the Fifth Freedom—freedom of knowledge—cannot thrive if copyright is used to restrict learning and innovation. We need a balanced approach: one that protects the hard work of creators, while ensuring that copyright doesn’t stifle the fundamental right to learn, innovate, and build upon existing knowledge. This is especially relevant now, as AI and machine learning shape the future of creativity and the knowledge economy in Europe. If we get this balance right, we can ensure that both creativity and innovation continue to flourish in the digital age.

Posted on Techdirt - 25 July 2025 @ 03:37pm

Creativity, Freedom Of Speech & Freedom Of Thought

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first, second, and third posts in the series.

In recent discussions around AI, the focus has often been on the potential for these tools to reinforce biases or avoid controversial topics altogether. But what if the stakes are even higher? What if the restrictive policies applied to AI chatbots affect not only freedom of speech but also freedom of thought?

AI Chatbots and Self-Censorship: A Free Speech Issue

AI chatbots like Google’s Gemini and OpenAI’s ChatGPT are designed to generate content based on user prompts. However, their output is often restricted by vague, broad policies that aim to avoid generating controversial content. The recent article by Calvet-Bademunt and Mchangama points out that major chatbots routinely refuse to produce certain outputs—not necessarily because these outputs would be illegal or even harmful, but because the companies behind these tools fear backlash, negative press, or legal liabilities. The result? A form of self-censorship that limits the potential of these AI tools to serve as platforms for free expression and thought exploration.

For instance, chatbots were asked questions about topics like transgender rights and European colonialism. While they readily generated content in support of one side, they refused to generate content for the other—effectively shaping the kind of information and perspectives users can explore. This is far from what freedom of speech, as recognized in international human rights standards, is meant to protect.

From Freedom of Speech to Freedom of Thought

This type of restriction doesn’t just affect what we can say—it affects how we think. Imagine you’re brainstorming ideas for a creative project, or seeking out different perspectives to better understand a complex issue. When you interact with a chatbot, you’re often engaging in a private, one-on-one exchange, similar to bouncing ideas off a friend or jotting down thoughts in a notebook. This process is an essential part of freedom of thought—the ability to explore, question, and challenge ideas without external interference.

However, when AI chatbots refuse to engage with certain topics because of vague company policies or fear of liability, it effectively limits your ability to think freely. The information you’re exposed to becomes curated not by your curiosity, but by what an algorithm deems “acceptable.” Unlike social media, where the information is broadcast to a wide audience and might be moderated for public safety, these exchanges are private, individual, and form the basis of personal exploration and creativity. Restricting this space is far more insidious, as it can shape what ideas are considered “thinkable” in the first place.

Ensuring AI Supports Free Thought and Creativity

If AI is going to live up to its potential as a partner in creativity and a tool for learning, we need to rethink how content policies are applied. AI providers should recognize the difference between private, individual use of chatbots and public broadcast on platforms like social media. Stricter moderation may be necessary for public content, but in private interactions, the focus should be on allowing free exploration.

Rather than outright refusals to generate content, chatbots could provide context, offer balanced viewpoints, or encourage users to think critically about controversial topics. This approach respects freedom of thought while ensuring that users are not left in an echo chamber. By building a culture that supports free speech and responsible exploration, AI can empower users to think more broadly and creatively—not less.

As we consider the role of AI in our society, we must ensure that these tools serve to expand our freedoms, not restrict them. Creativity, freedom of speech, and freedom of thought are interconnected—and if we allow AI to become overly restricted out of fear or pressure, we risk stifling all three.

Posted on Techdirt - 18 July 2025 @ 01:37pm

Creative Industries, Creators & Creatives

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first and second posts in the series.

In policy circles, creative industries have become the loudest voices in copyright debates. The problem? They are often mistaken for representing creativity itself, or even protecting individual creators and culture. But let’s get one thing straight: creativity is very different from the creative industries—as different as music is from the music business. Think The Beatles vs. Bad Boy Records: not the same vibe!

The creative industries are an economic concept, an invention of the British government in 1997 under Tony Blair. This was when the Creative Industries Task Force was born, bringing together sectors like advertising, design, fashion, film, music, and software—all under one umbrella. We’re talking about a vast range, from opera and ballet to architecture, advertising, and video games. This is way beyond what most people think of as “culture.” And let’s not even talk about the hodgepodge concept of IPR-intensive industries waved around by the European Union Intellectual Property Office (EUIPO) and the European Patent Office (EPO), which covers pretty much any company that has filed patents or registered geographical indications, from McDonald’s to the wonderful vendors of Prosciutto di Parma.

Who’s Who in the Creative Industry?

When talking about the creative industries, it’s important to differentiate between the players involved. There are rightsholders, who may be those producing and distributing content, or sometimes simply financial investors—think Scooter Braun vs. Taylor Swift. Then there are the creators themselves, who often don’t even own the rights to what they’ve created. And of course, there are all the other people who work in the industry—from “creatives” to those in support roles, just like in any other industry.

This complexity becomes crucial when considering AI. As we’ve seen with the Hollywood writers’ strike, the creative industry is already embracing AI, viewing it as either a new creative tool or a cost-cutting measure that could replace human jobs. That’s the “industries” part of the label—a business-driven focus that doesn’t necessarily align with the interests of individual creators or the broader value of creativity.

AI, Authenticity, and the Human Touch

The real challenges posed by AI aren’t limited to copyright or creative rights—they’re about the future of work and how we value human contribution in an automated world. To understand the human creator’s role, let’s take a look at the evolution of electronic dance music (EDM). As Douglas Rushkoff describes, EDM started with anonymous techno raves, with the DJ barely visible or hidden entirely. Over time, the DJ became the centerpiece, part of the spectacle—because humans relate to humans. This dynamic isn’t going to change with AI.

Or, as Dan Graham, owner of Gothic Storm Limited and Founder of the Library of the Human Soul, puts it: “We’re suckers for a backstory and authenticity. We hate knock-offs, even if they’re perfect. Fake Rolexes, forged artwork—it doesn’t matter how good it is, the real thing is always worth more, because we care.” AI might make flawless imitations, but the value of human creativity, authenticity, and connection remains unmatched.

So, while AI will certainly change the creative industries, it won’t replace the core of creativity—the human spirit, storytelling, and the authenticity we all crave as fans.

Posted on Techdirt - 27 June 2025 @ 03:36pm

Creativity & Technological Evolution 

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first post in the series here.

Let me show my age here—does anyone remember the movie Fame?

There’s a scene where Bruno Martelli, a confident student, declares, “Violins are on the way out, you don’t need strings today.” He insists that with “a keyboard and some oscillators,” orchestras have become obsolete. The teacher’s response is simple yet powerful: “The music survived.”

This scene perfectly captures a recurring theme in the history of creativity. Every time a new technology comes along, people predict the end of traditional art forms. Yet, time and again, creativity not only survives—it thrives.

Technology: A Tool for Growth, Not a Threat

Take the Gutenberg Press. When it was invented, many feared that the painstaking art of manuscript copying by monks would vanish forever. And yes, the printing press transformed how books were produced, but it didn’t destroy writing or creativity. Instead, it democratised knowledge, making literature accessible to a broader audience and sparking an explosion of new ideas and artistic expression.

Or consider photography. When the camera was invented, people thought painters were doomed. Why spend hours painting when a camera could capture the same moment in an instant? But painting didn’t vanish. Instead, it evolved—movements like Impressionism and Cubism flourished, as artists found new ways to express themselves beyond mere replication of reality.

Film didn’t kill theatre, and electric guitars didn’t kill acoustic ones. These technologies expanded the toolkit available to creators, offering new ways to explore their craft. In fact, new technologies have even created entirely new art forms. Just look at the video games industry—within fifteen years of its inception, it surpassed the century-old film industry in value, creating fresh opportunities for storytelling, artistry, and engagement.

AI: Expanding the Boundaries of Creativity

The same holds true for AI. Just like violins didn’t disappear when synthesizers came along, AI won’t replace human creativity. It will push boundaries, open up new possibilities, and allow artists and innovators to do things we couldn’t have imagined even a decade ago. But the essence of creativity—the spark of human imagination—remains indispensable.

Instead of fearing AI, we should embrace it as the latest in a long line of tools that expand human potential. AI will help creative industries thrive by providing new ways to create, innovate, and engage audiences. But the true magic—the core of creativity—will always come from the human mind.

The Music Will Play On

The lesson here is simple: creativity will survive. It always has. Every time a new tool, technology, or innovation emerges, there’s a tendency to think it spells the end for what came before. But history tells a different story—one of adaptation, evolution, and growth.

And as always, the creative industries will continue to thrive, building on the spark of human ingenuity.

Posted on Techdirt - 20 June 2025 @ 03:46pm

The Way Forward For AI: Learning From The Elephant & The Blind Men

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society.

Let’s start with the original metaphor of the six blind men and the elephant. In this classic Indian tale, each man feels a different part of the elephant—one touches the tusk and declares it’s a spear, another grabs the tail and swears it’s a rope, and so on. Each is convinced they’ve got the whole picture, but in reality, they’re missing the full scope of the elephant because they refuse to share their perspectives.

Now, let’s apply this to AI regulation. Imagine six policymakers, each with a firm grip on their own slice of the AI puzzle. One is fixated on privacy, another sees only risks, while yet another is laser-focused on copyright. But as a result, their narrow focus is leaving the broader picture woefully incomplete. And that, my friends, is where the trouble begins.

Accepting the Challenge of Innovation

AI is so much more than just a collection of legal headaches. It’s a powerful, transformative force. It’s revolutionizing industries, supercharging creativity, driving research, and solving problems we couldn’t have even dreamed of a few years ago. It’s not just a new avenue for academics to write articles—it’s a tool that could unlock incredible potential, pushing the boundaries of human creativity and innovation.

But what happens when we regulate it with tunnel vision? When we obsess over the tail and ignore the rest of the elephant? We end up stifling the very innovation we should be encouraging. The piecemeal approach doesn’t just miss the bigger picture—it risks handcuffing the future of AI, limiting its capacity to fuel new discoveries and reshape industries for the better.

By focusing solely on risks and potential copyright or privacy violations, we’re leaving research, creativity, and innovation stranded. Think of the breakthroughs AI could help us achieve: revolutionary advances in healthcare, educational tools that adapt to individual learners, creative platforms that democratize access to artistic expression. AI isn’t just a regulatory problem to be tackled—it’s a massive opportunity. And unless policymakers start seeing the whole elephant, we’re going to end up trampling the very future we’re trying to protect.

So, What’s the Way Forward?

We need to rethink our approach. AI, especially generative AI, can offer immense societal benefits—but only if we create policies that reflect its potential. Over-focusing on copyright claims or letting certain stakeholders dominate the conversation means we end up putting brakes on the very technology that could drive our next era of progress.

Imagine if, in the age of the Gutenberg Press, we had decided to regulate printing so heavily to protect manuscript copyists that books remained rare and knowledge exclusive. We wouldn’t be where we are today. The same logic applies to AI. If we make it impossible for AI to learn, to explore vast amounts of data, to create based on the expressions of humanity, we will end up in a data winter—a future where AI, stifled and starved of quality input, fails to reach its true potential.

AI chatbots, creative tools, and generative models have shown that they can be both collaborators and catalysts for human creativity. They help artists brainstorm, assist writers in overcoming creative blocks, and enable non-designers to visualize their ideas. By empowering people to create in new ways, AI is democratizing creativity. But if we let fears over copyright overshadow everything else, we risk shutting down this vibrant new avenue of cultural expression before it even gets started.

Seeing the Whole Elephant

The task of policymaking is challenging, especially with emerging technologies that shift as rapidly as AI. But the answer isn’t to clamp down with outdated regulations to preserve the status quo for a few stakeholders. Instead, it’s to foster an environment where innovation, creativity, and research can flourish alongside reasonable protections. We must encourage fair compensation for creators (and let’s not forget they should not be equated to the creative industry) while ensuring that AI can access the data it needs to evolve, innovate, and inspire.

The metaphor of the blind men and the elephant serves as a clear warning: if we only see a part of the elephant, we can only come up with partial solutions. It’s time to step back and view AI for what it truly is—a powerful, transformative force that, if used wisely, can uplift our societies, enhance our creativity, and tackle challenges that once seemed impossible.

The alternative is to regulate AI into irrelevance by focusing only on a single aspect. We need to see the whole elephant—understand AI in its entirety—and allow it to shape a future where human creativity, innovation, and progress thrive together.

Caroline De Cock is a communications and policy expert, author, and entrepreneur. She serves as Managing Director of N-square Consulting and Square-up Agency, and Head of Research at Information Labs. Caroline specializes in digital rights, policy advocacy, and strategic innovation, driven by her commitment to fostering global connectivity and positive change.