Meta chief AI scientist says warnings about AI are complete nonsense

Yann LeCun, the renowned computer scientist who is a professor at New York University and chief AI scientist at Meta, says we overestimate both the capabilities and the dangers of artificial intelligence (AI). In an interview with the Wall Street Journal, LeCun said that we often overestimate how smart AI is, quipping that today's systems do not even have the intelligence of our pets, let alone of humans. At the same time, he believes the risks associated with AI, chiefly the hazards that may come with its development, are also overstated. In fact, he goes so far as to call such warnings "complete BS".
Yann LeCun has been a pivotal figure in the rise of artificial intelligence, particularly in the development of deep learning. He pioneered the convolutional neural network (CNN), a breakthrough that powers much of today's image and speech recognition systems. His work has influenced advancements in computer vision, natural language processing, and autonomous systems. As a founding member of Meta's AI research lab (FAIR), LeCun has driven innovations that have helped make AI more accessible and scalable. His contributions have shaped modern AI applications, solidifying his status as a key architect of the AI revolution.
Yann LeCun believes that concerns about AI are often exaggerated, especially compared to the more dramatic warnings voiced by several experts in the industry. LeCun sees AI as an immensely valuable tool, fundamental to Meta’s operations. It powers everything from real-time translation to content moderation, helping to fuel Meta’s growth and contributing to the company’s $1.5 trillion valuation. His team, including FAIR and a product-focused division called GenAI, continuously advances large language models and other AI technologies, integrating them deeply into Meta’s products.
Yet, despite recognising AI’s importance, LeCun remains sceptical about some of the more dire predictions from others in the field. For instance, he is convinced that today’s AI systems, while powerful, are not truly intelligent. He often critiques what he sees as overblown claims from AI startups and leaders like OpenAI’s Sam Altman, who has recently suggested that artificial general intelligence (AGI) could arrive within “a few thousand days”. LeCun counters that such predictions are premature, arguing that the world has yet to design a system that even approaches the cognitive complexity of a house cat, let alone something more advanced than human intelligence.
"It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," LeCun wrote in a post on X.
LeCun’s perspective contrasts sharply with that of Geoffrey Hinton, who has become a vocal critic of AI’s rapid development. Hinton, who spent over a decade at Google, was instrumental in developing neural networks that serve as the backbone for popular AI models like ChatGPT and Bard. However, he has grown increasingly concerned about the potential risks associated with AI. In 2023, Hinton made headlines when he left Google, warning about the dangers posed by the rise of powerful AI systems. He raised concerns about the proliferation of misinformation, the potential for AI to disrupt job markets, and the existential risks tied to machines that could surpass human intelligence.
In a particularly ominous tone, Hinton highlighted the possibility of AI systems gaining the ability to manipulate human behaviour. He suggested that advanced AI could leverage its vast knowledge of literature, history, and political strategies to become highly persuasive, posing a threat to societal stability. Hinton’s warnings have added fuel to the ongoing debate about AI’s potential risks, emphasising the need for caution as the technology evolves.
While LeCun acknowledges that AI poses challenges, he remains optimistic about its future, dismissing fears of imminent super-intelligent machines as far-fetched. For him, the focus should remain on harnessing AI’s capabilities for innovation and solving real-world problems. The differing perspectives between LeCun and Hinton underscore a central tension in the AI field: whether we should focus on mitigating hypothetical future dangers or capitalise on the transformative potential of AI today.
