Psywar: AI bots manipulate your feelings

The “splinternet” refers to the fragmentation of the internet into separate, often isolated networks due to political, cultural, technological, or commercial reasons. It describes a scenario where the internet is no longer a unified global system but is instead divided into distinct “splinters” or subnetworks. This can occur through government censorship (like China’s Great Firewall), regional regulations (such as the EU’s GDPR), or tech companies creating “walled gardens” (e.g., Apple’s ecosystem).

The term highlights how these divisions limit universal access to information and create digital borders, often reflecting real-world geopolitical tensions or differing values on privacy, security, and free expression.

Elon Musk asked a key question. This is not dark humour or sarcasm; this is today’s reality:

What does a modern bot farm look like?

The magazine Fast Company recently published an article on bot farms, detailing how these automated systems are increasingly sophisticated and can manipulate virtual communication networks and other online platforms. According to Fast Company, “bot farms” are used to deploy thousands of bots that mimic human behaviour, often to mislead, defraud, or steal from users. These bot farms can create fake virtual communication engagement to promote fabricated narratives, making ideas appear more popular than they actually are.

They are used by governments, financial influencers, and entertainment insiders to amplify specific narratives worldwide. For instance, bot farms can be used to create the illusion that a significant number of people are excited or upset about a particular topic, such as a volatile stock or celebrity gossip, thereby tricking virtual communication platforms’ algorithms into displaying these posts to a wider audience.

Here is the link to the article mentioned above.

Welcome to the world of consciousness control via virtual communication networks. By drowning free speech in fake speech, you can numb the brain into believing just about anything. Surrender your comforting ignorance and swallow the red pill. You’re about to discover how your thinking is being engineered by modern masters of deception.

The means by which information gets drilled into our psyches has become automated. Lies are yesterday’s problem. Today’s problem is the use of bot farms to trick virtual communication platforms’ algorithms into making people believe those lies are true. A lie repeated often enough becomes truth.

A couple of months ago, I had the privilege of recording a podcast with Tim Pool, which focused on MAHA, health, seed oils, desiccant contaminants of our grains and soybeans such as glyphosate (Roundup), and a whole host of related issues. But, as far as I am concerned, the most important part of that visit was not what was broadcast, but rather the long off-camera conversation that followed. Consider that Jill and I published a detailed analysis of the PsyWar, censorship, and propaganda technology deployed during the COVID crisis.

So I am well informed about the topic, and am interviewed regularly about this or that aspect of PsyWar tech currently being deployed by the “Fake News” industry, Big Pharma, the US Government, the WHO, the UN, and a wide variety of other actors.

But Tim’s insights made me aware of aspects of the current landscape that Jill and I did not cover in the book. In particular, he provided great examples of the effects and use of “small rooming” – otherwise known as “freedom of speech, not freedom of reach” (explicitly a core, algorithmically enforced X policy). But what really expanded my awareness was his explanation of how AI-driven bots are being deployed.

In illustrating his points, he began with the example of a certain influencer who used to be associated with the Daily Wire. I will withhold the name to protect the innocent and reduce the risk of defamation lawsuits. Once upon a time, this influencer posted content mildly to moderately critical of Israeli policies and actions in response to the October 7, 2023 Hamas invasion. Basically, the influencer ventured outside of what was then the window of allowable public discourse on the topic. The response on virtual communication networks was immediate and strikingly encouraging: thousands of likes and new followers.

So, feeling like a nerve had been struck, the influencer followed up with even more strident statements, and once again, a wave of encouraging response swept over the sites where these opinions were posted. Feeling emboldened, the influencer continued to push forward, motivated by the growing number of new followers. And in so doing, the influencer crossed a number of lines into what has been designated by many as “hate speech.” The result was widespread deplatforming, including from the Daily Wire, other conservative media sites, and general censorship.

Here’s the fact: the majority of the new followers who were egging on the influencer were not real people. They were bots. Bot armies that had been launched specifically to drive the influencer into self-delegitimization by promoting and advancing what most perceived as hate speech. Mission accomplished, and another influential conservative voice bit the dust.

Turning back to the article from Fast Company:

Bot farm amplification is being used to make ideas on virtual communication networks seem more popular than they really are. A bot farm consists of hundreds or thousands of smartphones controlled by one computer. In data-centre-like facilities, racks of phones use fake accounts and mobile apps to share and engage. The bot farm broadcasts coordinated likes, comments, and shares to make it seem as if a lot of people are excited or upset about something like a volatile stock, a global travesty, or celebrity gossip – even though they’re not.

Meta calls it ‘coordinated inauthentic behaviour.’ It fools the network’s algorithm into showing the post to more people because the system thinks it’s trending. Since the fake accounts pass the Turing test, they escape detection.

‘It’s very difficult to distinguish between authentic activity and inauthentic activity,’ says Adam Sohn, CEO of Narravance, a virtual communication threat intelligence firm with major networks as clients. ‘It’s hard for us, and we’re one of the best at it in the world.’

If one of the leading intel companies in the world has a hard time distinguishing between real accounts and bots, particularly AI-enabled bots, then if you think you can easily tell the difference, you are fooling yourself.

In their article, Fast Company shares a fascinating tale from Depression-era history that involves the Kennedy family’s fortune, which I had thought was just derived from bootlegging during Prohibition.

Distorting public perception is hardly a new phenomenon. But in the old days, it was a highly manual process. Just months before the 1929 stock market crash, Joseph P. Kennedy, JFK’s father, got richer by manipulating the capital markets. He was part of a secret trading pool of wealthy investors who used coordinated buying and media hype to artificially pump the price of Radio Corp. of America shares to astronomical levels.

After that, Kennedy and his rich friends dumped their RCA shares at a huge profit, the stock collapsed, and everyone else lost their asses. After the market crashed, President Franklin D. Roosevelt made Kennedy the first chairman of the Securities and Exchange Commission, putting the fox in charge of the henhouse.

Today, stock market manipulators use bot farms to amplify fake posts about ‘hot’ stocks on Reddit, Discord, and X. Bot networks target messages laced with ticker symbols and codified slang phrases like ‘c’mon fam,’ ‘buy the dip,’ ‘load up now’ and ‘keep pushing.’ The self-proclaimed finfluencers behind the schemes are making millions in profit by coordinating armies of virtual avatars, sock puppets, and bots to hype thinly traded stocks so they can scalp a vig after the price increases. [Vig is slang for a cut or commission taken on a transaction – the term originally referred to a bookmaker’s charge or to the above-market interest on a loan-shark loan.]

‘We find so many instances where there’s no news story,’ says Adam Wasserman, CFO of Narravance. ‘There’s no technical indicator. There are just bots posting phrases like ‘This stock’s going to the Moon’ and ‘Greatest stock, pulling out of my 401k.’ But they aren’t real people. It’s all fake.’

Read that last sentence again. “They aren’t real people. It’s all fake.”

Beware, fellow consumer of virtual communication and corporate media. Consume this information at your peril. The reality you encounter there is all manufactured. Some may tell themselves that they are influential players, but in fact, all are victims. The very fabric of truth and reality is a victim. And AI-driven bots are now becoming the leading tool for spinning the lies.

If there’s no trustworthy information, what we think will likely become less important than how we feel. That’s why we’re regressing from the Age of Science – when critical thinking and evidence-based reasoning were central – back to something resembling the Edwardian era, which was driven more by emotional reasoning and deference to authority.

When Twitter introduced microblogging, it was liberating. We all thought it was a knowledge amplifier. We watched it fuel a pro-democracy movement that swept across the Middle East and North Africa called the Arab Spring and stoke national outrage over racial injustice in Ferguson, Missouri, planting the seeds for Black Lives Matter. [In reality, all these movements were in fact orchestrated by the elites from the shadows, aiming for the New World Order.]

While Twitter founders Evan Williams and Jack Dorsey thought they were building a platform for political and social activism, their trust and safety team was getting overwhelmed with abuse. ‘It’s like they never really read Lord of the Flies. People who don’t study literature or history don’t have any idea of what could occur,’ said tech journalist Kara Swisher in Breaking the Bird, a CNN documentary about Twitter.

Whatever gets the most likes, comments, and shares gets amplified. Emotionally charged posts that lure the most engagement get pushed up to the top of the news feed. Enrage to engage is a strategy. ‘Virtual communication networks manipulation has become very sophisticated,’ says Wendy Sachs, director-producer of October 8, a documentary about the campus protests that erupted the day after the October 7th Hamas attack on Israel. ‘It’s paid for and funded by foreign governments looking to divide the American people.’

Malicious actors engineer virality by establishing bots that lurk inside communities for months, sometimes years, before they get activated. The bots are given profile pics and bios. Other tricks include staggering bot activity to occur in local time zones and using U.S. device fingerprinting techniques, like setting the smartphone’s internal clock to the time zone where an imaginary ‘user’ supposedly lives, and setting the phone’s language to English.

Using AI-driven personas with interests like cryptocurrency or dogs, bots are set to follow real Americans and cross-engage with other bots to build up perceived credibility. It’s a concept known as social graph engineering, which involves infiltrating broad interest communities that align with certain biases, such as left- or right-leaning politics.
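To make these mechanics concrete, here is a minimal Python sketch of the kind of per-device persona profile the article describes – a U.S. time zone, English language setting, staggered local activity hours, and a couple of “interests.” The class, field names, and values are purely illustrative assumptions, not taken from any real bot-farm tooling:

```python
import random
from dataclasses import dataclass, field

# Illustrative pools; a real operation would presumably use far richer data.
US_TIMEZONES = ["America/New_York", "America/Chicago",
                "America/Denver", "America/Los_Angeles"]
INTEREST_POOL = ["cryptocurrency", "dogs", "fitness", "politics"]

@dataclass
class FakeDeviceProfile:
    """Sketch of the per-phone settings described in the article."""
    timezone: str = field(default_factory=lambda: random.choice(US_TIMEZONES))
    language: str = "en-US"  # phone language set to English
    # Daily activity window in local hours, staggered so posts land in daytime
    active_hours: tuple = field(
        default_factory=lambda: (random.randint(7, 10), random.randint(19, 23)))
    interests: list = field(
        default_factory=lambda: random.sample(INTEREST_POOL, 2))

    def is_active(self, local_hour: int) -> bool:
        start, end = self.active_hours
        return start <= local_hour <= end

profile = FakeDeviceProfile()
print(profile.timezone, profile.language, profile.active_hours, profile.interests)
```

The point of the sketch is only that each fake “user” carries a consistent bundle of settings, which is what makes the accounts hard to distinguish from real people at scale.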

[…….]

‘Bot accounts lay dormant, and at a certain point, they wake up and start to post synchronously, which is what we’ve observed they actually do,’ says Valentin Châtelet, research associate at the Digital Forensic Research Lab of the Atlantic Council. ‘They like the same post to increase its engagement artificially.’

Bot handlers build workflows with enough randomness to make them seem organic. They set them to randomly reshare or comment on trending posts with certain keywords or hashtags, which the algorithm then uses to personalize the bot’s home feed with similar posts. The bot can then comment on home feed posts, stay on topic, and dwell deeper inside the community.

The workflow is repetitive, but because the platform’s content is constantly refreshing, the programmed activity still comes across as spontaneous and organic.
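The randomized reshare-or-comment loop described above can be sketched roughly as follows; the posts, hashtags, and probabilities here are hypothetical stand-ins, chosen only to show how a little randomness makes scripted activity look organic:

```python
import random

# Hypothetical trending posts the bot's home feed might surface.
TRENDING = [
    {"id": 1, "text": "Big news about $XYZ", "hashtags": ["#stocks"]},
    {"id": 2, "text": "Cute dog video", "hashtags": ["#dogs"]},
]
TARGET_TAGS = {"#stocks"}  # the community this bot has been seeded into

def pick_action(post):
    """Randomly reshare or comment on posts matching the target hashtags,
    sitting some rounds out so the activity doesn't look mechanical."""
    if not TARGET_TAGS.intersection(post["hashtags"]):
        return None          # off-topic: ignore, stay in character
    if random.random() < 0.4:
        return None          # skip this round at random
    return random.choice(["reshare", "comment"])

actions = {p["id"]: pick_action(p) for p in TRENDING}
print(actions)
```

Engaging only with on-topic posts is what feeds the platform’s personalization loop: the algorithm then fills the bot’s feed with more of the same community’s content.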

Software bots used to post spam, also known as copypasta – a block of text that gets repeatedly copied and pasted. But bot farmers now use AI to author unique, personalized posts and comments. By integrating platforms like ChatGPT, Gemini, and Claude into a visual flow-building platform like Make.com, bots can be programmed with advanced logic and conditional paths, leveraging large language models to sound like a 35-year-old libertarian schoolteacher from the Northwest, or a MAGA auto mechanic from the Dakotas.
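As a rough illustration of how a language model might be slotted into such a flow, the sketch below builds an in-character prompt and uses a stub in place of the model call – no actual ChatGPT, Gemini, or Claude API is invoked, and every name here is an assumption for illustration:

```python
import random

# Hypothetical persona descriptions, echoing the article's examples.
PERSONAS = [
    "a 35-year-old libertarian schoolteacher from the Pacific Northwest",
    "a MAGA auto mechanic from the Dakotas",
]

def build_prompt(persona: str, topic: str) -> str:
    """Compose the kind of system prompt a bot handler might feed an LLM
    so every generated comment stays in character."""
    return (f"You are {persona}. Write one short social media comment "
            f"about {topic}, in your own voice, under 280 characters.")

def fake_generate(prompt: str) -> str:
    # Stand-in for a real LLM call; an actual integration would send
    # `prompt` to a model API and return its completion.
    return f"[generated reply for: {prompt[:40]}...]"

persona = random.choice(PERSONAS)
comment = fake_generate(build_prompt(persona, "a trending stock"))
print(comment)
```

The consequence the article is pointing at: because each output is unique and persona-consistent, the old spam-detection heuristic of matching repeated copypasta no longer works.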

The speed at which AI image creation is developing dramatically outpaces the speed at which social networking algorithms are advancing. ‘Virtual communication platforms’ algorithms are not evolving quick enough to outperform bots and AI,’ says Pratik Ratadiya, a researcher with two advanced degrees in computer science who’s worked at JPL and Apple, and who currently leads machine learning at Narravance. ‘So you have a bunch of accounts, influencers, and state actors who easily know how to game the system. In the game of cat and mouse, the mice are winning.’

And here is the mousetrap that caught our influencer formerly with the Daily Wire:

On October 7, 2023, as Hamas launched its deadly terror attack into Israel, a coordinated disinformation campaign – powered by Russian and Iranian bot networks – flooded the virtual communication networks with false claims suggesting the attack was an inside job. Posts in Hebrew on X with messages like ‘There are traitors in the army’ and ‘Don’t trust your commanders’ were overwhelmed with retweets, comments, and likes from bot accounts.

Along with the organic pro-Palestinian sentiment on the internet, Russian and Iranian bot farms promote misinformation to inflame divisions in the West. Their objective is to pit liberals against conservatives. They amplify Hamas’ framing of the conflict as a civil rights issue.

The pro-Palestinian campus protests erupted before the Israeli death toll from the Hamas attack had even been established. How could they have arisen so quickly?

‘We think most of what we see online is real. But most of what we see is deceptive,’ said Ori Shaashua, who is chairman of Xpoz and an AI entrepreneur behind a host of other tech ventures. Shaashua’s team analysed the ratio between bots, avatars, and humans. ‘It doesn’t make sense when 418 accounts generate 3 million views in two hours,’ says Shaashua.

Parenthetically, in the case Fast Company makes to illustrate virtual communication platforms amplifying events like Israel bombing hospitals, the (Russian and Iranian) bot farms actually hewed closer to the truth than its editors might find comfortable. And the October 8 documentary is a masterclass in Israeli disinformation.

Closing Argument

It’s not just the bots that are gaming the algorithms through mass amplification. It’s also the algorithms that are gaming us. We’re being subtly manipulated by the virtual communication networks. We know it. But we keep on scrolling. – Fast Company, Eric Schwartzman

Get a clue. The reality that you think you experience on virtual communication networks and corporate media is fabricated. You are being manipulated by a wide variety of agents, and what you think of as “truth” is nothing like the truth.

Beware of strident voices seeking to manage your emotions. Even people who you think are on your side. Many of these are “sponsored” by corporations that seek to manipulate your behaviour and opinions.

Be careful out there, use your discernment, and stay true to your heart. It may be the only thing standing between your ability to think and the thoughts and emotions that are being so actively promoted to bend your consciousness to the will of others.

Never forget that, in fifth-generation warfare, the battle is no longer over territory. The battleground is for control of your consciousness. In a successful fifth-generation warfare action, those being influenced should not be able to discern who is manipulating them.

Author: Robert W. Malone

 

yogaesoteric
September 21, 2025

 
