New “thinking” AI chatbot capable of terrorizing humans and stealing cash from “huge numbers” of people

A new cash-stealing scam is sweeping the globe: artificial intelligence (AI) chatbots that are capable of “reasoning” and “thinking” up endless ways to cheat people out of their money.

OpenAI recently showed off its new o1 ChatGPT model, which the company says is much “smarter” than existing AI chatbots. The o1 models have the ability “to spend more time thinking before they respond,” the company revealed.

“They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

OpenAI’s o1 model is the first major advancement to ChatGPT since the system first launched in late 2022. Currently, it is only available to paying ChatGPT subscribers.

According to cybersecurity expert Dr. Andrew Bolster, the o1 ChatGPT model is a dream come true for cyber-criminals, who are sure to dream up all kinds of scams that even the savviest internet users will be unable to detect before being bilked out of their hard-earned cash.

“Large Language Models (LLMs) continue to improve over time, and OpenAI’s release of their ‘o1’ model is no exception to this trend,” Dr. Bolster says.

“Where this generation of LLMs excels is in how they go about appearing to ‘reason,’ where intermediate steps are done by the overall conversational system to draw out more creative or ‘clever’-appearing decisions and responses.”

Driving the American scam economy with AI

A big part of what keeps the American economy running these days is crime. Pretty much everything is some kind of Ponzi scheme now, whether it be business, the financial markets, health care or, of course, politics. And as the general public figures it all out, the powers that be (TPTB) are desperately trying to hatch new schemes to trick people out of their property.

AI makes this possible by allowing the planet’s worst human scum elements to create, for instance, deepfake videos that appear to show real people talking but are really just AI creations. Deepfake videos can be used to deceive people into doing or believing just about anything, which is great for business.

Reports indicate that as many as one in three Brits has already been scammed in some way by AI deepfakes. And the problem is only getting worse as AI advances to next-level deception.

“In the context of cybersecurity, this would naturally make any conversations with these ‘reasoning machines’ more challenging for end-users to differentiate from humans,” Dr. Bolster says.

“Lending their use to romance scammers or other cybercriminals leveraging these tools to reach huge numbers of vulnerable ‘marks.’”

Nvidia, which produces the chips and other hardware that AI developers need to create these scamming abominations, is the stock propping up the U.S. markets right now. Without it, and without AI, the U.S. economy would probably already be a ruinous heap of cataclysmic destruction.

Dr. Bolster says consumers should beware of anything online that seems “too good to be true,” because more than likely it is a scam that, these days, also involves AI.

The general public “should always consult with friends and family members to get a second opinion,” he warns, “especially when someone (or something) on the end of a chat window or even a phone call is trying to pressure you into something.”


yogaesoteric
September 21, 2024
