New “mind-reading” Centaur AI predicts human behaviour with startling accuracy – but at what cost?

In a breakthrough that blurs the line between science fiction and reality, researchers have developed an artificial intelligence (AI) system capable of predicting human decisions with uncanny precision.

Dubbed Centaur, this AI doesn’t just guess whether a user will click an ad. It anticipates how humans will navigate complex moral dilemmas, learn new skills or even strategize in unfamiliar scenarios. The implications are staggering – from revolutionizing marketing and education to raising urgent ethical questions about privacy and free will.

Centaur, detailed in a study published on July 2 in Nature, was trained on a vast dataset: more than 60,000 people making over 10 million decisions across 160 psychological experiments. Unlike traditional models that specialize in narrow tasks like predicting stock trades or gambling habits, Centaur operates as a general predictor of human behaviour. It outperforms decades-old cognitive models, leading some researchers to suggest that AI may soon understand us better than we understand ourselves.
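
According to the study, this generality comes from how the training data is represented: each experiment is transcribed into plain natural language, so that predicting a participant’s next choice becomes a next-token prediction problem for a language model. A minimal sketch of what such a transcript might look like – the wording below is illustrative, not taken from the actual dataset:

```python
# Illustrative transcript of a two-armed bandit trial, written as the
# kind of natural-language prompt a behaviour model could be trained on.
# The exact format is an assumption; only the general idea (experiments
# transcribed into text, with the participant's responses marked in
# double angle brackets) comes from the published study.
trial_transcript = (
    "You are playing a game with two slot machines, F and J.\n"
    "You press <<F>> and win 54 points.\n"
    "You press <<J>> and win 21 points.\n"
    "You press <<"
)
# Training objective: given the transcript so far, predict the next
# token -- the participant's actual choice, "F" or "J".
print(trial_transcript)
```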

The system was built by fine-tuning Meta Platforms’ Llama 3.1 language model – the same kind of technology that powers chatbots such as ChatGPT – using a parameter-efficient technique that modifies only a small fraction of the model’s weights. The training took just five days on a high-end graphics processor – a testament to the accelerating power of machine learning.
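
The study describes using low-rank adapters for this step – a standard parameter-efficient fine-tuning method. A minimal sketch of how such a setup typically looks with the Hugging Face transformers and peft libraries; the hyperparameters and target modules below are illustrative assumptions, not the study’s exact configuration:

```python
# Sketch of parameter-efficient fine-tuning with low-rank adapters (LoRA).
# Illustrative values throughout; not the study's exact configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-70B")

lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

# The base weights stay frozen; only the small adapter matrices train,
# which is why just a fraction of the model's parameters are modified.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable fraction
```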

Centaur didn’t just match existing psychological models; it demolished them. In head-to-head tests, it predicted human choices more accurately than 14 specialized cognitive and statistical models in 31 out of 32 tasks. Even more striking, it adapted to new scenarios it had never encountered, such as altered versions of memory games or logic puzzles.
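
Comparisons like this are typically scored by how much probability each model assigns to the choices people actually made. A minimal illustration of that standard metric, average negative log-likelihood, using made-up numbers rather than the study’s data:

```python
# Sketch of how head-to-head model comparisons are typically scored:
# the model assigning higher likelihood (lower negative log-likelihood)
# to the observed human choices wins. All numbers below are stand-ins.
import numpy as np

human_choices = np.array([0, 1, 0, 0, 1])  # observed choices on 5 trials

# Each model's predicted probability of choice "1" on every trial.
p_cognitive = np.array([0.40, 0.55, 0.30, 0.45, 0.60])
p_centaur   = np.array([0.20, 0.80, 0.15, 0.25, 0.85])

def neg_log_likelihood(p_one: np.ndarray, choices: np.ndarray) -> float:
    """Average negative log-likelihood of the observed choices."""
    p_observed = np.where(choices == 1, p_one, 1.0 - p_one)
    return float(-np.mean(np.log(p_observed)))

print("cognitive model NLL:", neg_log_likelihood(p_cognitive, human_choices))
print("centaur-style NLL:  ", neg_log_likelihood(p_centaur, human_choices))
# Lower NLL = better fit to actual human behaviour.
```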

This adaptability suggests something profound. Human decision-making, for all its complexity, follows underlying patterns that AI can decode.

As one researcher noted, the human mind is “remarkably general” – capable of both mundane choices (picking a breakfast cereal) and monumental ones (curing diseases). Centaur’s success implies that our behaviour may be more predictable than we’d like to admit.

Centaur’s internal processes resemble human brain activity

In a bizarre twist, Centaur’s internal processes began resembling human brain activity without being explicitly trained to do so. When compared to brain scans of people performing the same tasks, the AI’s neural patterns aligned more closely than expected. This suggests that by studying human choices, the system reverse-engineered aspects of human cognition.
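
Analyses of this kind are commonly done by fitting a regularized linear mapping from the model’s internal activations to brain recordings and testing how well it predicts held-out brain activity. A sketch of that general approach with synthetic stand-in data – the study’s exact pipeline may differ:

```python
# Sketch of one common way to compare model activations with brain data:
# fit a ridge regression from the model's hidden states to fMRI signals
# and score how well held-out brain activity is predicted.
# (Illustrative method with random stand-in data; real recordings would
# be expected to show positive alignment.)
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(200, 512))  # 200 trials x 512 model units
fmri_signals = rng.normal(size=(200, 50))    # 200 trials x 50 brain regions

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, fmri_signals, test_size=0.25, random_state=0
)
mapping = Ridge(alpha=1.0).fit(X_train, y_train)
alignment = mapping.score(X_test, y_test)  # R^2: higher = closer alignment
print(f"held-out alignment score: {alignment:.3f}")
```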

Some scientists see Centaur as a tool for accelerating research. It can simulate experiments in silico, potentially replacing or supplementing human trials in psychology.
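
In practice, simulating an experiment in silico can be as simple as prompting the fine-tuned model with a transcribed trial and sampling the choice it predicts a human would make. A sketch, where the model checkpoint path is a placeholder rather than the released model’s actual name:

```python
# Sketch of an in-silico "participant": sample the model's predicted
# next choice for a transcribed trial. Checkpoint path is a placeholder.
from transformers import pipeline

simulate = pipeline("text-generation", model="path/to/centaur-checkpoint")

prompt = (
    "You are playing a game with two slot machines, F and J.\n"
    "You press <<F>> and win 54 points.\n"
    "You press <<J>> and win 21 points.\n"
    "You press <<"
)

# Sampling (rather than greedy decoding) preserves the variability
# you would expect across real participants.
result = simulate(prompt, max_new_tokens=1, do_sample=True)
choice = result[0]["generated_text"][len(prompt):]
print("simulated choice:", choice)  # e.g. "F" or "J"
```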

But sceptics warn that the model is far from perfect. It struggles with reaction times, social dynamics and cross-cultural differences. Moreover, its training data skews heavily toward Western, educated populations.

The rise of behaviour-predicting AI isn’t just a scientific milestone – it invites dystopian concerns. Could governments or corporations use it to manipulate choices? Will insurance companies predict risky behaviours and adjust premiums accordingly? And if AI knows humanity better than humans know themselves, what becomes of free will?

Centaur’s creators have released the model as open source, inviting scrutiny. But history shows that even well-intentioned tools can be weaponized.

Consider how social media algorithms, originally designed to connect people, now exploit psychological vulnerabilities for profit. If AI can predict human behaviour at scale, the potential for abuse is immense.

 

yogaesoteric
September 24, 2025

 
