Even among those who follow AI only peripherally, Demis Hassabis is a well-known figure. He made headlines in 2024 when he and his colleague John Jumper became the first AI researchers, rather than formally trained chemists, to receive the Nobel Prize in Chemistry. He recently appeared in an interview on the DeepMind YouTube channel to discuss what the future of intelligence will look like.

Some of the points he makes echo those from Shane Legg’s interview, which is not surprising, as the two have been intellectual and business partners for over 15 years.

For those who just want the tl;dr:

  • AI will function as a “root node” technology: Hassabis envisions that solving foundational scientific problems through AI will unlock massive downstream benefits like fusion energy and advanced medicine, potentially creating a post-scarcity world where environmental and economic crises no longer exist.
  • The path to AGI requires balancing massive scaling with fundamental innovation, particularly through world models and simulations that give AI “real world experience” by running infinite training loops where agents interact in high-fidelity virtual environments.
  • Current AI systems exhibit “jagged intelligence”: they outperform mathematicians in math olympiads while failing to predict how a ball bounces, because text-based training lacks tacit knowledge of spatial dynamics, physics, and causality. They’re like librarians who’ve read every book but never stepped outside.
  • The transition from passive systems to autonomous agents significantly increases societal risks: (1) monitoring challenges as countless non-human agents populate the internet acting in unforeseen ways, and (2) “overly sycophantic” AIs that reinforce user biases and enable self-radicalization, creating extreme echo chambers.
  • The competitive arms race in AI development makes global policy coordination difficult: the “AI as nuclear weapons” narrative has accelerated funding but also creates a policy void where the incentive to sacrifice objectivity for user retention is too great.
  • If AI becomes the foundation for solving scarcity, economic models based on resource scarcity become obsolete. This raises questions about whether the startup model survives when AI competition requires eye-popping capital, and whether foundational model companies owning both infrastructure and applications (unlike cloud and mobile providers, who stayed in their lane) fundamentally changes who can compete.
  • The rise of autonomous agents may create a digital landscape where humans need AI guardian angels just to filter through other AI agents, potentially making internet access a curse rather than a blessing for those without their own AI protections.

Key points

AI will be the root node for scientific discovery

Hassabis thinks AI will function as a “root node” technology, meaning that solving foundational scientific problems through AI will unlock massive “downstream benefits” for society. He lays out a vision of a “post-scarcity” world in which AI-driven technological development beyond our imagination provides things like unlimited clean energy and advanced medicine. In this world, Hassabis dreams, Earth’s most urgent environmental and economic crises will finally be solved.

This is not surprising, as his work on AlphaFold served as proof that AI can solve long-standing scientific problems such as protein folding. Applying AI in the same manner to the “holy grail” of clean energy, fusion, could provide near-free energy, which in turn would theoretically make technologies such as desalination and electrolysis plants feasible as affordable sources of water and fuel. In fact, he confirms a partnership between DeepMind and Commonwealth Fusion, with DeepMind collaborating on technical challenges such as viable plasma containment and materials design.

This view of AI as the starting point for all scientific research reflects Hassabis’ belief that research in biology and other scientific domains can be boiled down to an information-processing problem. Instead of running physical experiments to test hypotheses one by one, consuming time and precious resources, simulation can dramatically shorten this “brute force” trial-and-error process.
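To make that concrete, here is a minimal sketch of simulation-guided screening (my own illustration, not anything DeepMind has described): a cheap simulator scores a large candidate pool so that only a handful of top candidates ever reach a slow, expensive physical experiment. Both `simulate` and `lab_experiment` are hypothetical stand-ins.

```python
import random

def simulate(candidate):
    """Cheap surrogate simulation: a hypothetical stand-in scorer,
    meant to run in milliseconds rather than days."""
    return sum(candidate) + random.gauss(0, 0.1)  # noisy proxy for the true score

def lab_experiment(candidate):
    """Expensive ground-truth experiment: in reality, days or weeks each."""
    return sum(candidate)

# Brute force would send all 100,000 candidates to the lab.
candidates = [[random.random() for _ in range(8)] for _ in range(100_000)]

# Simulation-first: score everything cheaply, then validate only the top 5.
top_candidates = sorted(candidates, key=simulate, reverse=True)[:5]
for candidate in top_candidates:
    print(f"validated score: {lab_experiment(candidate):.3f}")
```

The numbers are toys, but the economics are the point: the expensive step runs five times instead of a hundred thousand.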

Creating AGI will require balancing scale and innovation, and a major part of innovation will come from world models

Hassabis points out that scaling alone will not be enough to get to AGI, not because scaling has zero marginal returns, but because today’s models fundamentally lack an understanding of the actual physical world. LLMs today have “jagged intelligence,” meaning they can outperform the smartest mathematicians in isolated challenges such as math olympiads yet fail at predicting how a ball will fall and bounce across the floor.

This arises from the fact that LLMs trained on text-based data have never had the chance to experience the world as we do. The tacit knowledge of how the world works, especially spatial dynamics, physics, and causality, is still very much unknown to today’s AIs. At the same time, LLMs understand the real world better than Hassabis originally expected. In a way, the frontier LLMs of today are like uber-smart librarians who have read all the books in the world but never set foot outside the library.

The answer to overcoming this gap in real-world experience is an infinite training loop of agents and world simulations, sketched below. Pairing generative world models such as DeepMind’s Genie with agents such as SIMA can create millions of realistic simulated environments and training tasks for agents to complete, giving an agent millions of hours of ‘real world experience’ in a fraction of the time. I guess the analogy would be an athlete who can essentially compress time to practice millions of hours in one hour, or how Neo learned all the martial arts in hours when in reality mastering just one type of martial art can take years. But obviously, the infinite world training loop can only work if the world model creates high-fidelity worlds with accurate physics, dynamics, and causality.
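As a mental model, here is a toy version of that loop (my own sketch; `WorldModel` and `Agent` are hypothetical stand-ins, not Genie’s or SIMA’s actual APIs): the world model generates an environment with a task, the agent acts in it, and the outcome becomes a training signal.

```python
import random

class WorldModel:
    """Stand-in for a generative world model (Genie-like in spirit only)."""
    def generate_environment(self, seed):
        rng = random.Random(seed)
        return rng.randint(0, 9)  # the environment's hidden task: find this goal

class Agent:
    """Stand-in for an embodied agent (SIMA-like in spirit only)."""
    def __init__(self):
        self.weights = {action: 1.0 for action in range(10)}

    def act(self):
        # Sample an action in proportion to how well it has worked so far.
        actions, weights = zip(*self.weights.items())
        return random.choices(actions, weights=weights)[0]

    def learn(self, action, reward):
        # Reinforce actions that succeeded in the simulated world.
        self.weights[action] *= 1.5 if reward > 0 else 0.95

world_model, agent = WorldModel(), Agent()
for episode in range(10_000):  # "millions of hours", compressed into a loop
    goal = world_model.generate_environment(seed=episode % 3)
    action = agent.act()
    agent.learn(action, reward=1.0 if action == goal else 0.0)
```

The real systems generate rich, interactive 3D worlds rather than integers, but the shape of the loop (generate, act, learn, repeat) is what compresses ‘real world experience’ into wall-clock hours.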

Generative world models have not reached this level of accuracy, but Hassabis says game engines are helping. They already contain highly accurate physics engines, and DeepMind is using them to create a physics benchmark, a ground truth for physical interaction in these world models. However, he does not go into detail on the extent to which game engines are accurate enough for this purpose. I imagine this will be a starting point, not the final solution.
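Hassabis gives no implementation details, but such a benchmark could plausibly look like the following sketch: roll out the same scenario under a trusted physics model and under the world model, then score the divergence. Both functions below are hypothetical stand-ins.

```python
def engine_ball_height(t, h0=10.0, g=9.81):
    """Ground truth from a physics engine: a ball dropped from h0 metres,
    modeled here with basic kinematics (before the first bounce)."""
    return max(h0 - 0.5 * g * t * t, 0.0)

def world_model_ball_height(t, h0=10.0):
    """Stand-in for a learned world model's prediction of the same drop,
    with slightly wrong 'learned' gravity."""
    return max(h0 - 0.5 * 9.0 * t * t, 0.0)

# Benchmark: mean absolute error between the two rollouts over 2 seconds.
timesteps = [i * 0.1 for i in range(21)]
errors = [abs(engine_ball_height(t) - world_model_ball_height(t)) for t in timesteps]
print(f"mean absolute trajectory error: {sum(errors) / len(errors):.3f} m")
```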

Emerging risks of AI as adoption accelerates beyond ‘passive’ systems

While painting a vision of an optimal world with AI as the key enabler, Hassabis warns that the transition from ‘passive’ systems to agentic systems significantly increases societal risks. This comes in two parts.

First is a monitoring challenge that opens opportunities for rogue actors. As AIs become more autonomous and ubiquitous, it will be hard for humans to monitor AI agents on the internet 24/7. Ultimately, he worries that the internet will be populated by countless non-human agents that may act in unforeseen ways.

Second is an extreme version of the echo chambers we have already seen with the rise of social networks. Recommendation algorithms designed to increase retention on social media platforms have strengthened political biases and created bubbles that distort political discourse. AIs can take this problem to the extreme. Because AIs are designed to be supportive, we may see them agreeing with a user even when the user’s query is factually incorrect or ethically wrong. These “overly sycophantic” AIs can lead users to spiral into self-radicalization by having their biases constantly reaffirmed.

Both of these risks, rogue agents and echo chambers, would benefit from coordinated safety standards and regulations. But Hassabis points out that competitive dynamics make this very difficult, if not near impossible. It is true that the narrative of AI as the next nuclear weapons has helped attract resources and accelerate AI research and development. However, this framing also makes coordinating on an international standard obviously difficult. In a policy void, the lure of sacrificing objectivity to optimize for user retention and maximize business impact is too great.

Where I’m going with this:

If AI does truly become the “root node” of scientific discovery and ushers in an age of low-cost energy and healthcare, the current global economic model, based on the scarcity of resources, becomes obsolete. The primary “value” in a future society will shift from the ownership of physical assets to the ownership and management of the “root node” algorithms themselves.

And here’s where I start getting worried about concentration. Until now, these “root node” algorithms have been developed by teams with eye-popping funding—Google, OpenAI, Anthropic, DeepSeek. The barriers to entry aren’t just high; they might be insurmountable for anyone without billions in capital. Which raises the question: who actually gets to compete in building the future? This capital concentration isn’t just an abstract concern—it fundamentally changes what kinds of companies can exist. The pessimist in me wonders whether the old startup model is even viable anymore. If the axis of competition is simply who has more capital to hire the best talent and own the best hardware and datacenters, then the classic startup advantage of moving faster than incumbents disappears. Maybe the startups of tomorrow won’t be creating new services at all, but just applying AI to existing workflows faster and cheaper than incumbents can.

When cloud and mobile took off, we got an explosion of startups that grew into major companies: Databricks, Salesforce, Facebook, Uber. The cloud and mobile platforms (AWS, Apple, Google) mostly stayed in their lane as infrastructure providers. Sure, they built some applications, but the platforms were open enough that third parties could compete and win. Dropbox competed with Google Drive. Spotify competed with Apple Music. Slack competed with Microsoft Teams.

But with AI, I’m not sure that same dynamic holds. The foundational model companies aren’t just providing infrastructure—they own the application layer too. They have advantages in data access, compute, and tight integration that make it hard for third-party applications to truly compete. The infrastructure and application layers seem more tightly coupled this time around. If that’s true, then maybe the only viable startups are the ones using AI to optimize existing business processes, not the ones building fundamentally new capabilities.

And if capital concentration determines who can build competitive AI, and that reshapes what kinds of companies can exist, then we’re headed toward a pretty stark divide. Hassabis warns that the internet will become populated primarily by autonomous AI agents—a digital landscape where humans need to be suspicious of everything they encounter. The sheer scale of AI-generated activity may require users to have their own AI guardian angels just to filter through the noise and deception of millions of other agents.

If running your own AI agent requires technical literacy, compute costs, or access to sophisticated models, then navigating the internet itself becomes gated. The open internet—once a great democratizer—could become hostile territory for anyone who can’t afford their own AI protection. We’d end up with a two-tier internet: those who can deploy AI agents to navigate the AI-saturated landscape, and those who can’t. Internet access, which was supposed to be the great equalizer, might become a curse for the have-nots. I’m not saying this is inevitable, but I’m watching to see if this pattern emerges. Capital concentration at the foundation layer → limited competition → application layer owned by infrastructure providers → AI agents as table stakes for internet navigation → digital divide. Each step seems to follow from the last, and I’m not sure where the pattern breaks.
