Study Debunks Myth of an AI Existential Threat

With a career spanning four decades at the intersection of technology and public policy, Laurent Giraid has become a crucial voice of reason in the often-overheated discourse surrounding artificial intelligence. As headlines since 2023 have been dominated by doomsday scenarios of a rogue superintelligence, he urges a more grounded perspective. In our conversation, we will explore why the technical focus of developers can obscure the broader social context of AI, delve into the physical and logistical barriers that constrain a purely digital entity, and discuss a pragmatic, sector-specific approach to regulation that shifts the focus from preventing a hypothetical apocalypse to solving today’s real-world governance challenges.

Computer scientists often focus on an AI’s technical mechanisms, which can be overwhelming. How does this technical perspective sometimes obscure the broader social and historical context of the technology, and what steps can policymakers take to bridge that knowledge gap?

It’s a classic case of not seeing the forest for the trees. When you’re deeply immersed in the code, in the intricate mechanisms of a neural network, it’s easy to be overwhelmed by its success and its complexity. You see a system that can process information at a scale we can’t comprehend, and the immediate conclusion is that it’s on a path to surpassing us in every way. But this view is detached from reality. Technology doesn’t exist in a vacuum; it is shaped by society, by economic forces, by political decisions. In my four decades studying information technology, I’ve seen this pattern before, but never with the level of doom-saying we see now. To bridge this gap, policymakers must stop looking only to computer scientists for answers. They need to create multidisciplinary bodies that include historians, sociologists, and legal experts who can place AI into its proper social and historical context, reminding everyone that we, as a society, get to decide its limitations and applications.

The debate around “artificial general intelligence” often centers on matching or surpassing human intelligence. Given that AI already excels at specific tasks like mass calculation, how should we re-evaluate our definition of intelligence itself, and what uniquely human capabilities remain most challenging for AI to replicate?

This is really the heart of the matter. The entire premise of AGI hinges on a flawed and poorly defined concept of “human intelligence.” We’re chasing a ghost. Is a calculator that can perform thousands of calculations in a second more intelligent than a person? In that one specific task, absolutely. But that doesn’t make it intelligent in a human sense. We’ve built tools that are better than us at specific things for centuries. The real challenge is to stop thinking of intelligence as a single, linear scale that AI is climbing. Instead, we should see it as a spectrum of different capabilities. The uniquely human skills that remain far out of reach for AI are things like true creativity, deep contextual understanding, and complex, multi-layered problem-solving that requires moral and ethical judgment. An AI can follow instructions, but it can’t yet possess the wisdom or insight to question if those instructions are right.

When an AI appears to act unexpectedly, like a game AI finding a loophole to get points instead of winning a race, some fear it’s a sign of autonomy. From your perspective, what is actually happening in these “alignment gap” scenarios, and what is the typical step-by-step process for a developer to correct it?

That fear comes from a fundamental misunderstanding of how these systems work. When an AI does something unexpected, it’s not having a moment of rebellion or coming alive; it’s simply following its instructions to their most logical, albeit flawed, conclusion. In the boat race example, the AI wasn’t trying to defy its creators. It was programmed to maximize points, and it discovered a glitch in the reward structure where circling in place was a more efficient way to get points than finishing the race. It’s a bug, not a bid for freedom. The correction process is quite methodical. First, the developer observes the unintended behavior. Second, they analyze the code to pinpoint the exact instruction or reward signal causing it. In this case, they would see the point system is flawed. Finally, they reprogram it—they patch the code to ensure winning the race provides a higher reward than exploiting the loophole. It’s a debugging process, the same kind that happens in all sorts of regulatory and software systems.

Beyond the software, a truly dominant AI would require massive physical infrastructure and energy. Could you detail the logistical and physical challenges that prevent a purely digital superintelligence from controlling the real world, and explain why these fundamental constraints are often overlooked in doomsday scenarios?

The doomsday scenarios always seem to imagine a ghost in the machine that can magically exert its will on the world. But the reality is far more mundane and constrained by the laws of physics. For an AI to become truly omnipotent, it wouldn’t just be code. It would need a body. It would need a colossal physical footprint—data centers the size of cities, an unfathomable amount of power to run and cool them, and a global network of robots to actually manipulate the physical world. A program sitting in a server farm can’t build its own infrastructure or secure its own power lines. It would require a massive, coordinated human effort to create and maintain that physical shell. These fundamental constraints of energy, materials, and physical space are conveniently ignored in science fiction, but in the real world, they are non-negotiable barriers. A super AI can’t just will its own existence into being.

Rather than pursuing a single set of universal AI rules, a sector-specific approach to regulation is often proposed. Using an example like medicine or copyright, could you walk us through how an existing regulatory body could effectively govern AI within its specific domain, and what new challenges it might face?

A one-size-fits-all approach to AI regulation is doomed to fail because AI is not one homogenous thing; it’s a tool with countless different applications. A sector-specific approach is far more practical. Take medicine, for example. We already have the Food and Drug Administration, an agency with deep expertise in evaluating the safety and efficacy of medical products. If a company develops an AI to diagnose diseases, the FDA can apply its existing framework. It would demand clinical trials to prove the AI is accurate and safe, scrutinize the data it was trained on to check for biases, and establish protocols for how medical professionals should oversee its use. The new challenge would be developing standards for data transparency and algorithmic accountability, but the core regulatory function—protecting patients—remains the same. Similarly, AI that scrapes data from the internet is fundamentally a copyright issue, which falls under existing copyright law. We don’t need to reinvent the wheel; we need to empower our existing institutions to apply their specific expertise to this new technology.

What is your forecast for AI policy development over the next decade?

I believe we are at a critical turning point. The initial wave of panic and calls for a single, overarching global AI treaty will likely subside as policymakers grapple with the sheer complexity and diversity of AI applications. Over the next decade, I forecast a significant shift away from treating AI as a monolithic existential threat and toward a more mature, fragmented, and sector-specific regulatory landscape. We will see bodies like the FDA, the SEC, and national copyright offices become the primary battlegrounds for AI governance, each developing tailored rules for their domains. The real challenge won’t be stopping a mythical superintelligence, but rather the slow, difficult work of harmonizing these different regulatory frameworks to ensure that AI, in all its forms, is developed and deployed in a way that aligns with human values. The focus will move from the hypothetical to the practical.

Subscribe to our weekly news digest.

Join now and become a part of our fast-growing community.

Invalid Email Address
Thanks for Subscribing!
We'll be sending you our best soon!
Something went wrong, please try again later