Marketplace®

Daily business news and economic stories
Dec 15, 2025

A case for AI models that understand, not just predict, the way the world works

Gary Marcus, professor emeritus at NYU, explains the differences between large language models and "world models" — and why he thinks the latter are key to achieving artificial general intelligence.


For a while now, large language models have been the hot thing in AI. But while they do some remarkable things by predicting words in a sequence, they lack a true internal understanding of how the world works, the way a human learns grammar or the laws of physics.

World models attempt to bridge that gap.

AI pioneer Fei-Fei Li at Stanford University has been working on them, and so has Yann LeCun, the former head of Meta's AI research. Google is also developing world models for robotics. But they're all pursuing them in slightly different ways.

To help explain, we asked Gary Marcus, a cognitive scientist and author of the book “Taming Silicon Valley.” He argues we need AI systems in which the rules of the world are partially programmed into the algorithms by humans.

More on this

“AI’s next act: World models that move beyond language” from Axios

“What are world models? The key to the next big AI leap” from The Wall Street Journal

“How o3 and Grok 4 accidentally vindicated neurosymbolic AI” from Marcus on AI
