The Way Forward For AI: Learning From The Elephant & The Blind Men

from the a-new-vision dept

This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society.

Let’s start with the original metaphor of the six blind men and the elephant. In this classic Indian tale, each man feels a different part of the elephant—one touches the tusk and declares it’s a spear, another grabs the tail and swears it’s a rope, and so on. Each is convinced they’ve got the whole picture, but in reality, they’re missing the full scope of the elephant because they refuse to share their perspectives.

Now, let’s apply this to AI regulation. Imagine six policymakers, each with a firm grip on their own slice of the AI puzzle. One is fixated on privacy, another sees only risks, while yet another is laser-focused on copyright. As a result, their narrow focus leaves the broader picture woefully incomplete. And that, my friends, is where the trouble begins.

Accepting the Challenge of Innovation

AI is so much more than just a collection of legal headaches. It’s a powerful, transformative force. It’s revolutionizing industries, supercharging creativity, driving research, and solving problems we couldn’t have even dreamed of a few years ago. It’s not just a new avenue for academics to write articles—it’s a tool that could unlock incredible potential, pushing the boundaries of human creativity and innovation.

But what happens when we regulate it with tunnel vision? When we obsess over the tail and ignore the rest of the elephant? We end up stifling the very innovation we should be encouraging. The piecemeal approach doesn’t just miss the bigger picture—it risks handcuffing the future of AI, limiting its capacity to fuel new discoveries and reshape industries for the better.

By focusing solely on risks and potential copyright or privacy violations, we’re leaving research, creativity, and innovation stranded. Think of the breakthroughs AI could help us achieve: revolutionary advances in healthcare, educational tools that adapt to individual learners, creative platforms that democratize access to artistic expression. AI isn’t just a regulatory problem to be tackled—it’s a massive opportunity. And unless policymakers start seeing the whole elephant, we’re going to end up trampling the very future we’re trying to protect.

So, What’s the Way Forward?

We need to rethink our approach. AI, especially generative AI, can offer immense societal benefits—but only if we create policies that reflect its potential. Over-focusing on copyright claims or letting certain stakeholders dominate the conversation means we end up putting brakes on the very technology that could drive our next era of progress.

Imagine if, in the age of the Gutenberg Press, we had decided to regulate printing so heavily to protect manuscript copyists that books remained rare and knowledge exclusive. We wouldn’t be where we are today. The same logic applies to AI. If we make it impossible for AI to learn, to explore vast amounts of data, to create based on the expressions of humanity, we will end up in a data winter—a future where AI, stifled and starved of quality input, fails to reach its true potential.

AI chatbots, creative tools, and generative models have shown that they can be both collaborators and catalysts for human creativity. They help artists brainstorm, assist writers in overcoming creative blocks, and enable non-designers to visualize their ideas. By empowering people to create in new ways, AI is democratizing creativity. But if we let fears over copyright overshadow everything else, we risk shutting down this vibrant new avenue of cultural expression before it even gets started.

Seeing the Whole Elephant

The task of policymaking is challenging, especially with emerging technologies that shift as rapidly as AI. But the answer isn’t to clamp down with outdated regulations to preserve the status quo for a few stakeholders. Instead, it’s to foster an environment where innovation, creativity, and research can flourish alongside reasonable protections. We must encourage fair compensation for creators (and let’s not forget that creators should not be equated with the creative industry) while ensuring that AI can access the data it needs to evolve, innovate, and inspire.

The metaphor of the blind men and the elephant serves as a clear warning: if we only see a part of the elephant, we can only come up with partial solutions. It’s time to step back and view AI for what it truly is—a powerful, transformative force that, if used wisely, can uplift our societies, enhance our creativity, and tackle challenges that once seemed impossible.

The alternative is to regulate AI into irrelevance by focusing only on a single aspect. We need to see the whole elephant—understand AI in its entirety—and allow it to shape a future where human creativity, innovation, and progress thrive together.

Caroline De Cock is a communications and policy expert, author, and entrepreneur. She serves as Managing Director of N-square Consulting and Square-up Agency, and Head of Research at Information Labs. Caroline specializes in digital rights, policy advocacy, and strategic innovation, driven by her commitment to fostering global connectivity and positive change.



Comments on “The Way Forward For AI: Learning From The Elephant & The Blind Men”

23 Comments
Anonymous Coward says:

If we make it impossible for AI to learn, to explore vast amounts of data, to create based on the expressions of humanity, we will end up in a data winter—a future where AI, stifled and starved of quality input, fails to reach its true potential.

Whereas on the other hand, if we allow it to continue generating limitless amounts of slop, poisoning all our repositories of information, we likewise end up stuck in a world with no access to the accurate information necessary for innovation, which slows to a crawl as a result.

Anonymous Coward says:

Unfortunately “AI” is driven by corporate capitalist asshats. Unfortunately “AI” is ridiculously power and water hungry – regular datacenters are already bad enough.

“It’s a powerful, transformative force. It’s revolutionizing industries, supercharging creativity, driving research, and solving problems we couldn’t have even dreamed of a few years ago.”

I hear this a lot. I have seen zero evidence for any of it. You cannot include the decades of ML in science, where researchers use and tune it and *actually verify the output* for specific tasks. Commercial “AI” has done none of these things, and is not operated in a reasonable manner. Like, at all. There has been nothing positive about its disruption of which I have ever been made aware. It’s merely the replacement for 5G in terms of something about which to make wild-ass claims (on both the pro and con sides, and perhaps from other angles).

The mere fact that it is an environmental (and economic) catastrophe of epic proportions is enough to make me suggest that perhaps we should dial it back. But i am not one who would support bad (e.g., most copyright nuttery) reasons for laws or regulation. Unfortunate, since no one is going to regulate the creation of more fossil fuel generation for “AI” farms, in places where it is already stupidly hot so no points for clear thinking there, either.

Ranty? Maybe. Probably.


Arianity (profile) says:

Re:

So this is a paid post?

It’s not a paid post, TD sometimes posts/reposts stuff from lobbyists/think-tanks and the like on issues they agree with. On this particular issue, they are very pro-AI/anti-copyright.

(Note: At the bottom of the post, it actually lists things like her company. The language just tends to be soft-pedaled. It’ll still technically be listed, but you’ll have to google it to actually get context on the bias.)

It is a disappointing editorial choice, and not the first time. They’re better about it with the regular contributors.

Arianity (profile) says:

Re: Re:

She’s literally not, though.

The first google result is literally: Caroline De Cock is a tech policy lobbyist and communications expert… She is also the author of iLobby.eu: Survival Guide to EU Lobbying, sharing insights on navigating EU policy and leveraging digital platforms for advocacy

That’s before getting into the euphemistic ones, or stuff listed on their portfolio.

(Did a bit more digging, and apparently they’re related to Glyn’s book, too)

She’s managing director of a public affairs company that specialises in tech and telecoms.

A public affairs company in tech/telecoms that is trying to influence policy/policymakers is a form of lobbying.

Bloof (profile) says:

‘Banning leaded petrol and coal burning road vehicles is stifling automotive innovation!’

There is no other industry where something as harmful to people and the environment as generative AI would be released into the world without pushback. We know the harms it is doing to education and the creative fields, we know the resource usage is obscene for what it actually contributes but since it’s tech and can make people unemployed, we’re told it’s vital and inevitable by many of the same crew of charlatans who said the same about web 3 and blockchain.

Maura says:

I love Techdirt, but man, all these positive spins on generative AI are disappointing. These chatbots give out false information with impunity, and no one knows how to fix it. They are all owned by men who can’t wait to use them to further enrich themselves and carry out their techno-fascist fever dreams. These are not the tools I want to “collaborate with.” I’m not against AI, but as it currently exists, it deserves far more skepticism than I’ve seen Techdirt is willing to admit. Also, not for nothing, but people are pretty creative and interesting all on their own. Based on AI as it exists right now, I will continue to seek out real people. I’m not interested in a future that replaces us.
