Humanity has crossed a threshold – and most people simply scrolled on
Something happened recently that most people overlooked. Two Amazon data centres in the United Arab Emirates were hit during Iranian retaliation for US military actions. Another facility in Bahrain was reportedly damaged after a drone crashed nearby. The earlier attacks that triggered this retaliation were allegedly carried out using AI-powered targeting systems.

It was just a brief moment in the news cycle, quickly overshadowed by the next political story. But the consequences are hard to ignore. Artificial intelligence has now entered an active geopolitical conflict.
The infrastructure that powers the digital world – the same systems that store family photos, run businesses, and answer questions on our smartphones – has become strategic warfare infrastructure. Algorithms silently woven into civilian technologies now help decide where weapons are targeted.
But history tells us that major technological shifts are rarely heralded by a single dramatic moment. They first appear as signals in small news reports, political disputes, or the unexplained departure of insiders.
Another signal appeared almost simultaneously.
The US federal government recently removed the artificial intelligence systems developed by Anthropic from its networks. Shortly thereafter, OpenAI's systems took their place under a defence agreement of its own.
The public does not know the full story behind this change. We don’t know exactly what demands were made behind closed doors, which ethical guidelines were contested, or why one of the world’s leading AI companies was suddenly forced out of government systems. But the event itself is another signal.
And yet another signal is quietly emerging within the AI industry itself: the departure of security researchers. In recent years, numerous high-ranking researchers tasked with investigating the risks and security of advanced AI systems have left their positions at leading companies and research institutions. Many of these departures occurred without significant public explanation.
These researchers rarely talk about the internal debates they have experienced. Few are in a position to do so.
But such patterns are significant. When the people closest to a powerful technology quietly withdraw, it often means they have seen tensions that the public hasn’t even begun to investigate.
History has seen such moments before.
In the early 1940s, scientists working on what would later become the Manhattan Project realized they were creating something unprecedented. Some expressed concerns about the implications of this technology once it left the laboratory. However, these debates largely took place behind closed doors. The public only grasped the full extent of its impact after the technology had already been deployed.
Artificial intelligence could develop according to a similar pattern. We are seeing the signs now – researchers are abandoning their positions, governments are arguing about ethical guidelines, and AI systems are emerging in real geopolitical conflicts.
And yet, the public discussion about artificial intelligence is still shaped by assumptions that make these signals harder to recognize.
Myth No. 1: AI is “just a tool”
This analogy is reassuring. We imagine AI as being like a calculator or a word processor – machines that perform tasks efficiently while remaining firmly under human control.
Tools can become strategic resources in war. But ordinary tools do not produce outputs that their own developers sometimes struggle to explain, and they do not require constant negotiation over the ethical boundaries of their behaviour.
Modern AI systems are no longer programmed line by line in the traditional sense. They are trained with massive datasets and learn patterns within that data. Their behaviour arises from statistical relationships rather than clear instructions. AI researchers therefore say that these systems are more ‘grown’ than built. And that makes them fundamentally different from the tools we are used to controlling.
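The contrast between writing rules and learning patterns can be made concrete with a deliberately tiny sketch. Everything here is hypothetical illustration (the spam-filter task, the function names, the four training examples are all invented for this article); the point is only that the second function's behaviour comes from counted statistics in the data, not from a rule anyone wrote.

```python
from collections import Counter

# "Traditional" software: behaviour written as an explicit rule.
def rule_based_label(text):
    # Every decision is a line someone wrote and can point to.
    return "spam" if "free money" in text.lower() else "ham"

# "Trained" software: behaviour learned from labelled examples.
def train(examples):
    # Count how often each word appears under each label.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def learned_label(model, text):
    # The decision emerges from word statistics, not a written rule.
    scores = {
        label: sum(words[w] for w in text.lower().split())
        for label, words in model.items()
    }
    return max(scores, key=scores.get)

examples = [
    ("claim your free money now", "spam"),
    ("free prize winner", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("lunch on friday?", "ham"),
]
model = train(examples)
print(learned_label(model, "free prize money"))  # prints "spam"
```

Note that `learned_label` classifies "free prize money" correctly even though no programmer ever wrote a rule mentioning "prize": the behaviour was grown from the data, which is exactly why such systems resist line-by-line inspection.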
Myth No. 2: AI is neutral
AI systems are trained with human-generated information. This information reflects human biases, historical conflicts, and unequal representation. When an AI system generates a response, it combines patterns it has picked up from this material.

AI has developed a fluent language that can create the impression of objectivity. But confident formulations are not the same as truth.
Recent conflicts between governments and AI companies clearly demonstrate this. Debates about surveillance limits or autonomous weapons are not merely technical issues; they are moral ones. Guardrails exist precisely because the systems themselves are not neutral.
Myth No. 3: Humans completely control AI
Traditional software behaves according to the clear instructions written by programmers. Modern AI systems work differently. Their results are probabilistic – they arise from layers of learned relationships within the model.
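What "probabilistic" means in practice can be shown with a toy next-word model. This is a minimal sketch under invented assumptions (the two-word vocabulary, the probabilities, and the `sample` function are all made up for illustration): the same prompt is fed in repeatedly, and the output is drawn from a learned distribution rather than looked up deterministically.

```python
import random

# Hypothetical learned distribution: after "the", the model assigns
# these probabilities to possible next words.
next_word = {"the": {"system": 0.6, "model": 0.3, "tool": 0.1}}

def sample(prompt, rng):
    # The same prompt can yield different continuations on each call:
    # the result is sampled from a distribution, not computed by a rule.
    options = next_word[prompt]
    return rng.choices(list(options), weights=list(options.values()))[0]

rng = random.Random()
print([sample("the", rng) for _ in range(5)])  # varies from run to run
```

Real language models work on the same principle at vastly larger scale, which is why identical inputs can produce different outputs and why their behaviour is characterised statistically rather than guaranteed line by line.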
Developers are now using AI systems to build and manage other AI systems. They are letting AI write code they would previously have written themselves, and this is happening so fast that they can no longer monitor, or even understand, every line generated by systems that never sleep.
Control in this environment is not a simple switch. It is more like a shifting boundary that no one has ever seen before – and the language to even define it is still in its infancy.
Myth No. 4: The experts know where this leads
In most scientific fields, disagreements among experts remain within a relatively narrow range. In artificial intelligence, this range is unusually broad. Some researchers believe AI will revolutionize medicine and scientific discovery. Others warn that the technology could cause significant societal disruption if its development progresses faster than human wisdom.
Among those who express such concerns is Geoffrey Hinton, Nobel laureate and one of the founding pioneers of modern AI research. This range of opinions shows that even the people who build these systems do not fully agree on where they are leading.
Artificial intelligence is rapidly integrating into the systems that shape modern life – communication, commerce, national security, and governance.
We see signals in all these areas. We can clearly see that AI is shaping our future, whether we like it or not. The only question is whether we will recognize the signals in time to understand what is developing – or whether, as societies often do, we will wait until the consequences make the signals impossible to ignore.
yogaesoteric
March 20, 2026