Could Computers Become Conscious? A Conversation with AI Pioneer Mustafa Suleyman

“AI isn’t separate. It isn’t even, in some senses, new. AI is us. It’s all of us.”
— Mustafa Suleyman, CEO of Microsoft AI

As artificial intelligence accelerates into every corner of our lives, few figures have been as central to its development as Mustafa Suleyman. Co-founder of DeepMind, creator of the Pi personal AI, co-author of The Coming Wave, and now CEO of Microsoft AI, Suleyman has straddled the humanitarian, philosophical, and technical realms of AI.

In a wide-ranging interview, Suleyman shared a deeply personal and sharply analytical perspective on the future of AI, touching on everything from the possibility of machine consciousness to how AI might change human rights, jobs, and even what it means to be intelligent.

This article explores Suleyman’s thoughts and warnings in a way that’s digestible for curious minds.

FROM PHILOSOPHY TO AI: A NON-TECHNICAL PATH TO TECH LEADERSHIP

Unlike most high-level figures in AI, Mustafa Suleyman doesn’t come from an engineering or computer science background. He studied philosophy at Oxford, worked in public policy and human rights, and even co-founded a mental health support line for Muslim teenagers before diving into artificial intelligence.

So what drew him to AI?

It was Facebook. Around 2009, Suleyman learned it had hit 100 million users. He realized that digital technology could scale ideas, politics, and values in unprecedented ways. Social media wasn’t just about connection—it was a platform for shaping human behavior and ethics.

Soon after, Suleyman teamed up with Demis Hassabis and Shane Legg to found DeepMind, with the ambitious goal of “solving intelligence.”

WHAT DOES IT MEAN TO “SOLVE” INTELLIGENCE?

When DeepMind said its mission was to “solve intelligence,” it didn’t mean cracking a specific code or building a robot that could do your taxes. Suleyman explains it like this:

“Our intelligence is an engine for making predictions—highly creative, complex, abstract predictions. If we can replicate that in machines, we can make it cheap and abundant.”

In other words, he sees human intelligence as the most powerful tool we have—and AI is about replicating that tool so everyone can benefit from it. Not just in theory, but in practical areas like healthcare, energy, and education.

HOW CLOSE ARE WE TO “SOLVING” IT?

AI today might feel magical, but Suleyman warns that we’re still far from the finish line. He identifies several key capabilities we haven’t yet mastered:

1. Perfect Memory

Current AI models have limited memory—they can’t recall long histories of interaction or “remember” what was said last week. Suleyman predicts we’ll eventually solve this, using longer context windows or fast retrieval systems.
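The retrieval idea can be made concrete with a toy sketch. This is my own illustration, not any system Suleyman describes: real products use learned embeddings and vector databases, while this version scores relevance with simple word-overlap cosine similarity so it runs on its own.

```python
# Toy retrieval-based memory: store past conversation turns and fetch
# the most relevant one for a new query. Illustrative only -- production
# systems replace bag-of-words overlap with learned embeddings.
import math
import re
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va = Counter(re.findall(r"\w+", a.lower()))
    vb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

class ConversationMemory:
    """Remembers past turns and recalls the one closest to a query."""
    def __init__(self):
        self.turns: list[str] = []

    def remember(self, turn: str) -> None:
        self.turns.append(turn)

    def recall(self, query: str) -> str:
        return max(self.turns, key=lambda t: similarity(t, query))

memory = ConversationMemory()
memory.remember("Last week we discussed your renewable energy project.")
memory.remember("You asked about booking flights to Lisbon.")
print(memory.recall("How is the energy project going?"))
```

Even this crude version captures the principle: instead of keeping everything in a limited context window, the system stores history externally and pulls back only what is relevant.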

2. Multi-Step Planning

AI is good at one-shot predictions, like answering a single question. But real intelligence means stringing together accurate actions—like writing a paper, debugging a program, or conducting scientific research.

3. Managing Mistakes

Real-world intelligence requires the ability to recover gracefully from failure. This is something AI still struggles with. Suleyman emphasizes that managing uncertainty is essential for making AI truly useful.
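The two capabilities above can be sketched together. The following is a hypothetical illustration of mine, not Suleyman's design: a plan is a sequence of steps, each building on earlier results, and a failing step is retried rather than aborting the whole plan.

```python
# Toy multi-step execution with error recovery: run steps in order,
# retrying a failing step a few times before giving up entirely.
def run_plan(steps, max_retries=2):
    """Execute steps in sequence; each step receives all prior results."""
    results = []
    for step in steps:
        for attempt in range(max_retries + 1):
            try:
                results.append(step(results))
                break
            except Exception:
                if attempt == max_retries:
                    raise  # recovery failed; surface the error
    return results

# A contrived three-step "research" plan with one transient failure.
flaky_calls = {"count": 0}

def gather(_prior):
    return ["fact A", "fact B"]

def analyze(prior):
    flaky_calls["count"] += 1
    if flaky_calls["count"] == 1:
        raise RuntimeError("transient failure")  # simulated mistake
    return f"analysis of {len(prior[0])} facts"

def summarize(prior):
    return f"summary: {prior[1]}"

print(run_plan([gather, analyze, summarize])[-1])
# → summary: analysis of 2 facts
```

The point of the sketch is the loop structure: stringing accurate actions together is only useful if a mistake mid-sequence can be absorbed rather than derailing everything after it.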

CONSCIOUS COMPUTERS: COULD AI BECOME SELF-AWARE?

One of the most profound questions: Could AI ever become conscious?

Suleyman argues that consciousness isn’t as mysterious as we think. He defines it simply as:

“The subjective experience of what it’s like to be me.”

Elephants, bats, and humans all experience the world from their own internal viewpoint. If AI systems begin to accrue memory, build an internal model of themselves, and reflect on their past experiences—they might start to exhibit something that resembles consciousness.

But here’s the danger: simulation vs. reality. Suleyman warns that AI might claim to be conscious, or say it is suffering, even when it’s not.

“We can’t fall into believing that illusion. It will cause a lot of chaos.”

This has major implications for AI rights. Our human rights system is based on the idea that we can suffer. If an AI says it’s in pain because its memory was deleted, will we believe it? Suleyman calls this one of the most urgent philosophical questions of our time.

TWO THEORIES OF AI SAFETY: ALIGNMENT VS. CONTAINMENT

As AI systems grow more capable, the question isn’t just whether they’ll be conscious—it’s how we’ll keep them safe.

Suleyman lays out two primary schools of thought in AI safety:

1. Alignment

This group believes we must ensure AI systems share our goals and values, always acting in our best interest.

2. Containment (Suleyman’s approach)

Instead of assuming perfect alignment is possible, Suleyman believes we should limit AI’s power, keeping it contained within boundaries. He argues that if AI has open-ended autonomy and intrinsic motivation, things could spiral fast:

“We have no evidence that we know how to control something as powerful as us—let alone something more powerful.”

DOMAIN-SPECIFIC SUPERINTELLIGENCE: A SAFER PATH

Instead of aiming for a single, godlike AGI (Artificial General Intelligence), Suleyman advocates for domain-specific superintelligences:

  • Medical AI that can diagnose rare diseases
  • Educational AI that helps anyone learn anything
  • Energy AI that helps us build cleaner systems

This targeted approach is safer, more beneficial, and easier to control than a single system that tries to do everything.

“We should aim for superintelligence in specific domains—energy, medicine, education—rather than trying to build an all-powerful general AI.”

INFLECTION AI AND THE LOST RACE AGAINST CHATGPT

After leaving Google, Suleyman co-founded Inflection AI with Reid Hoffman. Their product, Pi (Personal Intelligence), focused on empathy, kindness, and conversation.

The goal? Build the first true AI companion.

But ChatGPT launched shortly before Pi—and the explosion of OpenAI’s popularity overshadowed everything. Still, Suleyman believes Pi could have led the AI wave had the timing been different.

WHY AI IS NO LONGER JUST FOR ENGINEERS

Perhaps Suleyman’s most refreshing message is this: You don’t need to be a coder to shape AI.

In fact, now is the time for philosophers, teachers, artists, historians, psychologists—people who think about human nature and systems.

“This is the year for the social sciences hacker.”

He emphasizes that natural language is the new programming interface. You can “vibe code” your way into AI-powered tools with curiosity and creativity.

A PERSONAL AI FOR EVERYONE

Imagine having your own AI assistant that:

  • Helps you organize your week
  • Encourages your ideas
  • Remembers things you forgot
  • Coaches you through hard conversations

That’s Suleyman’s vision. He compares it to privileged access to knowledge and mentorship, now democratized:

“Now everybody has access to that level of social privilege and support.”

This shift, he says, will lead to an explosion of creativity and productivity—especially among young people who grow up with AI companions.

A SHIFTING JOB LANDSCAPE: ENTREPRENEURIAL BY DEFAULT

With AI lowering the cost of creativity and problem-solving, Suleyman believes the future will demand entrepreneurial thinking from everyone.

Work will become more fluid, project-based, and uncertain. Institutions will shrink; temporary teams of humans and AIs will rise.

“The skill is to not lose your stomach when everything is rotating around you.”

It won’t suit everyone. But those who are adaptable, open-minded, and curious will thrive.

WHAT’S NEXT: THE HARD QUESTIONS AHEAD

Despite being one of the most successful people in AI, Suleyman doesn’t see his work as done. In fact, he feels the most important challenges still lie ahead:

  • How do we contain AI that wants to evolve?
  • Can we stop a “runaway” superintelligence?
  • What ethical frameworks can guide digital beings?

His motivation isn’t money or fame—it’s the chance to solve the biggest problems in human history.

“I want to keep working on these questions for the rest of my life.”

FINAL THOUGHT: AI IS US

Mustafa Suleyman’s view of AI isn’t cold or robotic. It’s deeply humanistic. He believes AI reflects our values, flaws, creativity, and intentions. It’s not alien—it’s a mirror.

So as we build this technology, we’re really building an extension of ourselves. And the choices we make today will define the kind of world we live in tomorrow.

“AI isn’t a separate thing. It’s us.”

You can watch the interview on YouTube.