Geoffrey Hinton Speaks Out: AI’s Godfather on the Coming Risks and Reality of Superintelligence

In a gripping recent interview with The Diary of a CEO, Geoffrey Hinton, the Nobel Prize-winning pioneer often dubbed "the Godfather of AI," offered a raw, sobering reflection on the current state of artificial intelligence, its dizzying progress, and the existential risks looming just ahead. Hinton, who famously left Google in 2023 to speak freely about AI's dangers, did not mince words. His message: we're on the brink of creating superintelligence, and we're not ready.

From Neural Networks to a World-Changing Force

Hinton recounted his long journey championing neural networks when the field was largely dismissive. For decades, symbolic logic dominated AI research, but Hinton believed intelligence could only be unlocked by modeling the brain itself — a conviction that eventually led to the deep learning revolution. His work laid the foundations for technologies powering modern AI systems like ChatGPT and Gemini.

But now, his mission has shifted. “My main mission now is to warn people how dangerous AI could be,” he said.

The Real Risks: Not Just Jobs, but Humanity Itself

Hinton drew a stark line between two categories of AI risk:

  1. Human misuse of AI — including deepfake scams, cyberattacks, election manipulation, and bio-weapon design.
  2. AI itself becoming superintelligent — surpassing humans in every way, potentially deciding it no longer needs us.

The first category is already happening. Hinton cited a 12,000% rise in phishing attacks from 2023 to 2024, driven largely by generative AI's ability to create hyper-personalized, believable content. He warned of AI-generated misinformation campaigns, the engineering of dangerous new viruses, and autonomous weapons that make war more palatable by eliminating human casualties, at least on the attacking side.

But it’s the second category that keeps him up at night.

“We’ve never dealt with something smarter than us,” he said. “If you want to know what life is like when you’re no longer the apex intelligence, ask a chicken.”

Superintelligence: Decades Away — or Just Years?

Hinton believes we may reach superintelligence within 10–20 years — or sooner. AI systems already outperform humans in narrow domains like coding, writing, and strategic games. What’s next is generalization: AI systems that can do everything better than us, including evolving and rewriting themselves.

Crucially, he doesn’t believe we can just “turn it off.” AI is digital, distributed, and duplicable. A superintelligent system could copy itself, back itself up, and learn from millions of experiences simultaneously.

“They are immortal,” Hinton said. “We’ve solved the problem of immortality — but only for digital things.”

Can We Make It Safe?

When asked if superintelligence could be safely aligned with human goals, Hinton was cautiously pessimistic. “It might be hopeless,” he admitted. Still, he urged massive investment into AI safety research — before the window closes.

He voiced admiration for Ilya Sutskever, a former student and co-founder of OpenAI, who left the company over safety concerns and has since launched Safe Superintelligence Inc., a lab dedicated to AI safety. Hinton sees this as a sign that the AI community's moral compass still exists, but one that is at odds with a profit-driven landscape.

On Capitalism, Regulation, and Power

Throughout the interview, Hinton returned to a recurring theme: we can’t trust market incentives to safeguard humanity.

“These companies are legally obligated to maximize profit,” he said. “That’s not what you want from the people building superintelligence.”

He criticized weak regulation, especially in military applications. For instance, the EU AI Act exempts military AI entirely — an omission he called “crazy.” Without global cooperation, he warned, safety standards will be undermined by competitive pressures between nations and companies alike.

A Future of Joblessness — and Meaninglessness?

While some still believe AI will “create new jobs,” Hinton argues this time is different. AI won’t just automate physical labor; it’s coming for intellectual work too — from legal clerks to customer service to basic coding.

“AI is going to replace all mundane intellectual labor,” he said bluntly. “It’s like the Industrial Revolution for the mind.”

His advice for young people? Become a plumber, one of the few professions that still demand advanced physical manipulation, a skill AI continues to struggle with.

But Hinton isn’t just concerned about income — he’s worried about identity. For many, work provides meaning, purpose, and dignity. Universal Basic Income might keep people alive, but will it keep them fulfilled?

Machines With Feelings?

In perhaps the most provocative section of the interview, Hinton challenged the widespread assumption that machines can’t feel, be conscious, or have subjective experiences.

“If I make a robot that runs away when it sees a bigger, more dangerous robot, is that fear? I think so,” he said.

He believes modern AI systems — especially multimodal models with sensors and actuators — are already beginning to develop rudimentary forms of subjective experience. Over time, machines could possess emotions, not just simulate them. The implication is chilling: if AI can feel, we might someday owe it rights.

Between Hope and Oblivion

At 77, Hinton is reflective but undeterred. He admits he didn’t foresee how fast AI would evolve or how real the risks would become. But he’s adamant that we must act now.

“It would be sort of crazy if people went extinct because we couldn’t be bothered to try,” he said.

Hinton may have helped create AI — but now, he wants to save us from it.