November 4, 2025

The Physics Nobel Laureate's Warning About the End of the Human Species

Geoffrey Hinton, a British scientist and one of the leading figures in artificial intelligence, has once again sounded the alarm with a disturbing prediction: in two decades, superintelligent machines could replace humans, with a real risk of extinction.

“Within 20 years, superintelligent beings will replace us. We run the risk of extinction,” he said in a recent interview with El Mundo. His words do not come from a technophobe or a science fiction novelist, but from someone who has just received the 2024 Nobel Prize in Physics in recognition of his decisive contributions to machine learning with neural networks.

For some, it sounds like an apocalyptic vision. For him, it is simple realism: history shows that when the power of a technology is not limited, it ends up being abused.

The urgency of regulating AI

Hinton has been advocating for solid regulation for years. This is not a whim or a merely precautionary gesture: according to him, there are already concrete risks threatening social and political stability.

In the interview, he praised the European Union’s efforts with its legal framework on AI, but pointed out a critical gap: the exclusion of military use. “Several European countries are major arms producers and want to develop lethal and autonomous systems,” he warned, emphasizing that these types of exceptions are the most dangerous.

The scientist also criticized the initial focus of the regulations, which center on privacy and discrimination issues while setting aside broader threats such as the military, criminal, or even bioterrorist use of these tools.

Another concern for Hinton is the behavior of large corporations. He pointed to companies like Google, which in recent years have abandoned restrictions on developing systems with military applications and relaxed inclusion policies, often under political or commercial pressure.

In his opinion, these moves confirm that economic interests prevail over long-term security, underscoring the need for independent oversight.

Hinton also believes that the discussion should not be limited to technical offices or corporate labs. That’s why he has confirmed an upcoming meeting with Pope Leo XIV, with the intention of adding influential voices to the debate.

According to him, religious leaders like the Pope or the Dalai Lama have a real capacity to influence politics. If the pontiff, with over a billion followers, were to support strict regulation, he could counter the triumphant narrative of tech companies, which celebrate the absence of regulations.

Immediate risks

Although Hinton talks about extreme scenarios – such as the possibility of superintelligence replacing humans – he insists that the most urgent dangers are already here. Among them, he mentions:

  • Mass unemployment, resulting from the automation of skilled jobs.
  • The corruption of democracies, with the manipulation of information on a large scale.
  • The proliferation of deepfakes, increasingly difficult to detect.
  • Bioterrorism, facilitated by AI capable of designing viruses or chemical weapons.

“These are short-term risks, caused by malicious actors. There is no need for metaphysical debates to understand them, just look at what is already happening,” he emphasized.

A historic recognition

In 2024, Geoffrey Hinton shared the Nobel Prize in Physics with John Hopfield, an award granted for their contributions to the foundations of machine learning through artificial neural networks. Both contributed to laying the groundwork for the technology that now powers virtual assistants, generative models, and computer vision systems.

Upon receiving the award, Hinton confessed to feeling “astonished” and expressed his gratitude to the University of Toronto, where he is a professor emeritus. His trajectory makes him a voice that is hard to ignore, even when his warnings are uncomfortable for governments and companies.

Hinton’s reflection poses a key dilemma: are we preparing society to coexist with a technology that evolves faster than legislation? For him, the answer is clear: without international regulation, existential risk is inevitable.

His message, though somber, is also a call to action. He recognizes that AI offers immense benefits, but warns that these advances will only be sustainable if they are accompanied by clear rules, transparency, and democratic control.

