
Meta AI director says large language models will not reach human intelligence


Meta’s chief AI scientist said the large language models that power generative AI products such as ChatGPT will never achieve human-like reasoning and planning abilities, as he focuses instead on a radically different approach to creating “superintelligence” in machines.

Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs have “a very limited understanding of logic . . . do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan . . . hierarchically”.

In an interview with the Financial Times, he argued against relying on advancing LLMs in efforts to create human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are therefore “intrinsically unsafe”.

Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve.

Meta has poured billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival technology groups, including Microsoft-backed OpenAI and Alphabet’s Google.

LeCun runs a team of about 500 staff at Meta’s Fundamental AI Research (Fair) lab. They are working to create AI that can develop common sense and learn how the world works in ways similar to humans, in an approach known as “world modeling”.

The Meta AI chief’s experimental vision is a potentially risky and expensive gamble for the social media group at a time when investors are eager to see quick returns on their AI investments.

Last month, Meta lost nearly $200 billion in value when chief executive Mark Zuckerberg vowed to increase spending and turn the social media conglomerate into “the leading AI company in the world”, leaving Wall Street investors concerned about rising costs with little immediate revenue potential.

“We’re at a point where we think we might be on the cusp of next-generation AI systems,” LeCun said.

LeCun’s comments come as Meta and its rivals push for ever more advanced LLMs. Figures such as OpenAI chief executive Sam Altman believe they provide an important step towards the creation of artificial general intelligence (AGI) – a point at which machines have greater cognitive abilities than humans.

OpenAI last week released its new, faster GPT-4o model, and Google announced a new “multimodal” AI agent, called Project Astra, that can answer queries in real time across video, audio and text, powered by an upgraded version of its Gemini model.

Meta also introduced its new Llama 3 model last month. The company’s head of global affairs, Sir Nick Clegg, said its latest LLM had “vastly improved capabilities like reasoning” – the ability to apply logic to queries. For example, the system would surmise that a person suffering from a headache, sore throat and runny nose had a cold, but could also recognise that allergies might be causing the symptoms.

However, LeCun said this type of reasoning in LLMs was superficial and limited, with models learning only when human engineers intervene to train them on that information, rather than the AI arriving at conclusions organically, as people do.

“It certainly appears to most people as reasoning – but mostly it’s exploiting accumulated knowledge from lots of training data,” LeCun said, adding: “[LLMs] are very useful despite their limitations.”

Google DeepMind has also spent years pursuing alternative methods of building AGI, including reinforcement learning, in which AI agents learn from their surroundings in a game-like virtual environment.

At an event in London on Tuesday, DeepMind chief Sir Demis Hassabis said what is missing from language models is that “they don’t understand the spatial context that you’re in . . . so that ultimately limits their usefulness”.

Meta founded the Fair lab in 2013 to pioneer AI research, recruiting leading scholars in the field.

However, in early 2023, Meta formed a new GenAI team, led by chief product officer Chris Cox. It has drawn many AI researchers and engineers from Fair, and is leading the work on Llama 3 and embedding it into products such as its new AI assistants and image-generation tools.

The creation of the GenAI team came as some insiders argued that an academic culture within the Fair lab was partly to blame for Meta’s late arrival to the generative AI boom. Zuckerberg has been pushing for more commercial applications of AI under pressure from investors.

Still, LeCun remains one of Zuckerberg’s core advisers, according to people close to the company, because of his record and reputation as one of the founding fathers of AI, having won a Turing award for his work on neural networks.

“We have refocused Fair on our long-term goal of human-level AI, essentially because GenAI is now focused on things that we have a clear roadmap toward,” LeCun said.

“[Achieving AGI] is not a product design problem, it’s not even a technology development problem, it’s really a science problem,” he added.

LeCun first published a paper on his world modeling vision in 2022, and Meta has since released two research models based on this approach.

He said Fair is now testing an array of different ideas for achieving human-level intelligence because “there’s a lot of uncertainty and exploration in this, [so] we cannot know which one will be successful or ultimately chosen”.

In one approach, LeCun’s team is feeding systems hours of video while deliberately leaving out frames, then getting the AI to predict what will happen next. This is intended to mimic the way children learn by passively observing the world around them.

He also said Fair is exploring building “a universal text encoding system” that would allow a system to process abstract representations of knowledge in text, which could then be applied to video and audio.

Some experts doubt whether LeCun’s vision is feasible.

Aron Culotta, an associate professor of computer science at Tulane University, said common sense had long been “a thorn in the side of AI”, and that teaching models cause and effect was a challenge, leaving them “prone to these unexpected failures”.

A former Meta AI employee described the world modeling push as “vague”, adding: “It feels like a lot of flag planting.”

A current employee said Fair had yet to prove itself a real rival to research groups such as DeepMind.

In the longer term, LeCun believes the technology will power AI agents that users can interact with through wearable technology, including augmented reality or “smart” glasses and “electromyography” (EMG) wristbands.

“[For AI agents] to be really useful, they need to have something resembling human-level intelligence,” he said.

Additional reporting by Madhumita Murgia in London
