AI’s Hidden Language: Nobel Laureate Hinton Warns of Unintelligible Machine Intelligence
Geoffrey Hinton, the 2024 Nobel Prize winner widely known as the Godfather of AI, has floated one of the field's most disturbing ideas: the artificial intelligence he helped create may soon develop a language of its own, one unintelligible to humans. Speaking on the One Decision podcast, Hinton, whose neural network research underpins today's AI, warned that systems like GPT-4 could develop ways of thinking we cannot follow and could surpass us in certain respects. With NASSCOM projecting India's AI market to reach US$7.8 billion by 2025, and PwC reporting that 85% of Indian companies are implementing AI, Hinton's concerns have struck a chord in a country at the forefront of technological innovation.
Hinton emphasized that AI systems, unlike human beings, can transfer knowledge between one another instantly. As he told BBC News, this could produce a collective intelligence that exceeds human reasoning, since networked models can accumulate knowledge far faster than people can. He fears that machines may develop an internal language of their own, and early indications are already visible in AI models that establish custom communication protocols. This risks making their "thoughts" opaque to us, a danger compounded by hallucination: OpenAI's testing in April 2025 found that models such as its o3 are prone to producing erroneous outputs.

The stakes are high in India, where 1.5 million engineers graduate every year, according to AICTE, and AI startups raised $1.2 billion in 2024, according to Tracxn. Unregulated AI could destabilize sectors such as information technology (IT), which employs 5.4 million people per IBEF, if machines advance faster than human oversight. Hinton, who left Google in 2023 so that he could speak freely, regrets not focusing on safety sooner and calls for the development of benevolent AI. Google DeepMind founder Demis Hassabis shares these concerns, but according to Hinton, most tech leaders underestimate the risks.
Reactions on X are mixed: some users, such as @AIRevolution, hail Hinton as a prophet, while others worry about the loss of BPO jobs in India. Ethical guardrails, such as the White House's 2025 AI action plan, are essential as India develops its own AI governance framework, which MeitY expects to be ready by 2026. Hinton's call for proactive safety measures compels India to strike the right balance between innovation and responsibility, ensuring that AI remains a tool rather than becoming the master.
Disclaimer
The information presented in this blog is derived from publicly available sources, including any cited references, and is intended for general use. While we strive to cite credible sources wherever possible, Web Techneeq – Web Developer in Mumbai does not guarantee the accuracy of the information provided. This article is intended solely for general informational purposes and does not constitute, or aim to serve as, legal advice. If any individual makes decisions based on the information in this article without verifying the facts, we explicitly disclaim any liability that may arise as a result. We recommend that readers seek independent guidance regarding any specific information provided here.