Meta is pushing for major changes in how artificial intelligence is trained. At the AI Action Summit in Paris, the company’s top AI scientist, Yann LeCun, shared why he believes today’s models still fall short of true intelligence.
Instead of improving language-based models like ChatGPT or Gemini by simply adding more data or functions, LeCun says the field needs a full reset in strategy. He explained that genuine intelligence must involve four core abilities: understanding physical environments, maintaining long-term memory, reasoning through complex situations, and making detailed plans.
“Understanding the physical world, having persistent memory, being able to reason and being able to plan complex actions, particularly planning hierarchically,” LeCun said.
He also criticized how some large tech companies are piling new features onto existing systems rather than addressing deeper shortcomings. According to him, true AI progress will only happen when models are designed to process and adapt to real-life experiences—what he calls “world models.”
These models would not just respond to prompts but could simulate possible actions and outcomes, similar to how humans predict consequences. Because the real world is unpredictable, LeCun believes AI systems must learn to think in abstract ways, using strategies humans develop naturally.
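The planning idea LeCun describes can be sketched in miniature: instead of reacting to an input directly, an agent consults a model of the world to simulate each candidate action's outcome before choosing one. This is a toy illustration under assumed names (`world_model`, `plan`), not Meta's actual system:

```python
def world_model(state, action):
    """Hypothetical predictor: returns the imagined next state.
    In a real system this would be a learned dynamics model."""
    return state + action

def plan(state, actions, goal):
    """Simulate each action's outcome and pick the one whose
    predicted result lands closest to the goal."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

# The agent "imagines" outcomes -1, 1, and 2, then picks the action
# whose simulated result matches the goal.
best = plan(state=0, actions=[-1, 1, 2], goal=2)
```

The key design point is the separation: the world model predicts consequences, and the planner searches over those predictions rather than over raw responses.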
Meta is already exploring this shift. One project is retrieval-augmented generation (RAG), which helps large language models answer more accurately by pulling in outside information at query time. Another is V-JEPA, introduced in February, which learns by watching videos and predicting what is missing or masked, without generating text responses.
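The RAG idea can be illustrated with a deliberately minimal sketch: retrieve the document most relevant to the question, then prepend it as context for the model. Retrieval here is naive keyword overlap, and all names are hypothetical—real systems use embedding-based search and an actual language model:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query; return top k.
    Stand-in for a real vector-similarity search."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved context so the model can ground its answer."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "V-JEPA learns by predicting masked regions of video.",
    "The Eiffel Tower is located in Paris, France.",
]
prompt = build_prompt("Where is the Eiffel Tower?", docs)
```

The resulting prompt carries the retrieved fact alongside the question, which is what lets the model answer from outside information rather than from its weights alone.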
LeCun believes these experiments could eventually lead to smarter AI that is less reliant on pre-programmed responses and more capable of real-world thinking.