How can you tell if it’s a bot or AI leaving comments on virtual communication platforms or in the comments section under articles?

Automated computer programs posting comments that appear to be human-written, whether on virtual communication platforms or under articles, raise the issue of paid-for comments, astroturfing and the potential for fake opinions to spread online. This could lead to a realisation of the Dead Internet Theory, one consequence of artificial intelligence (AI) that most people may not have considered.

The Dead Internet Theory posits that the vast majority of online activity has been replaced by AI-generated content, and the theory is gaining traction. This concept suggests that the internet, once a platform created and shaped by humans, has become dominated by automated programs, including bots and AI.

Is the Dead Internet Theory already beginning to be realised? It could well be.

In 2019, Vice reported that Microsoft had built a bot that posted comments under news articles. Called “DeepCom,” short for “deep commenter,” it was programmed to “comprehend” an article, pick out important points and write a comment based on those points and the title of the article. DeepCom’s creators reportedly intended it to encourage human commentary on articles and to drive more people to read and engage with the content.
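Microsoft has not published the bot in this form and the actual system was a trained neural model, so the following is only a rough sketch of the reported pipeline: pick out the article sentence most related to the title, then build a comment around it. All names and the overlap-based scoring below are assumptions made for illustration.

```python
# Toy illustration of the reported DeepCom idea: find the article
# sentence that best matches the title, then comment on it.
# The real system used trained neural models; this is a crude stand-in.

def salient_sentence(title: str, article: str) -> str:
    title_words = set(title.lower().split())
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    # Score each sentence by how many title words it shares.
    return max(sentences, key=lambda s: len(title_words & set(s.lower().split())))

def toy_comment(title: str, article: str) -> str:
    point = salient_sentence(title, article)
    return f'Interesting piece. The point that "{point}" really stood out to me.'

title = "Bots are writing comments online"
article = ("Researchers found automated replies. Many bots write comments "
           "under news articles. Readers rarely notice.")
print(toy_comment(title, article))
```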

In 2023, Veronica Llorca-Smith published an article on Medium saying that she had noticed an increase in AI-generated comments under her articles, which were often lengthy, formal and paraphrased the article’s content.

In May, The Conversation noted that there was already strong evidence that bots are manipulating virtual communication posts to sway public opinion with disinformation, and that this has been occurring for years.

So how do we recognise whether we are engaging with a bot or AI on virtual communication networks or under articles published on the internet? We entered two queries into Brave’s search engine and the AI summaries reproduced below were generated. We have made some minor edits such as converting the text to British English.

As Brave warns at the end of its summaries: “AI-generated answer. Please verify critical facts.” The purpose of this article is to highlight the problem of comments being made by computer programs rather than humans, and to share some thoughts on how to establish whether you are engaging with one. So, we haven’t verified the critical facts as suggested, but we have included the relevant sources listed under the summaries.

Before we get to the AI summary, a personal tip: most language models and AI systems, including those developed by OpenAI, Google and Microsoft, are trained on large datasets of text in American English. This is because a significant portion of internet content, including web pages, articles and books, is written in American English. As a result, these models tend to learn the patterns and nuances of American English. The same is true of automated transcription and online translation tools; their English output is written in American English.

Additionally, based on personal experience, AI-generated text rarely contains spelling errors. So comments with spelling errors, or comments written in British English, may be more likely to have been made by a human.
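As a toy illustration of those two tips, here is a minimal Python sketch that flags comments containing British spellings or common misspellings as more likely human-written. The word lists, the function name and the whole rule are illustrative assumptions, not a validated detector.

```python
# Rough heuristic sketch: per the tips above, British spellings or
# obvious misspellings hint at a human author. Neither proves anything.
# The word lists below are illustrative, not exhaustive.

BRITISH_SPELLINGS = {"colour", "realise", "organise", "favourite", "analyse"}
COMMON_MISSPELLINGS = {"definately", "recieve", "seperate", "untill"}

def likely_human(comment: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    return bool(words & BRITISH_SPELLINGS) or bool(words & COMMON_MISSPELLINGS)

print(likely_human("I definately realise this is my favourite article!"))  # True
print(likely_human("Great article. Very insightful and well organized."))  # False
```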

Now, on to the AI-generated summaries.

Chatbot Limitations

Based on the search results, here are some insights on what questions chatbots struggle to answer:

  1. Contextual questions with incorrect grammar or semantics: Chatbots rely heavily on natural language processing (NLP) and machine learning algorithms to understand user queries. However, they may struggle to comprehend questions with incorrect grammar, syntax or semantics, leading to inaccurate or irrelevant responses.
  2. Open-domain questions requiring external knowledge: Chatbots may not have access to the entire world’s knowledge, making it challenging to answer open-domain questions that require information from various sources, such as news articles, research papers or expert opinions.
  3. Questions about complex, nuanced topics: Chatbots may struggle to provide accurate answers to questions about complex, nuanced topics like philosophy, ethics or abstract concepts, as they are limited by their programming and training data.
  4. Questions that require common sense, intuition or creativity: Chatbots lack human-like common sense, intuition and creativity, making it difficult for them to answer questions that require these skills, such as understanding sarcasm, humour or idioms.
  5. Questions that rely on personal opinions or biases: Chatbots are designed to provide objective, fact-based information. They may struggle to answer questions that require personal opinions, biases or emotional intelligence, as these are inherently human traits.
  6. Questions that involve hypothetical scenarios or “what-if” situations: Chatbots may not be able to effectively answer questions that involve hypothetical scenarios or “what-if” situations, as they are designed to operate within a specific domain or scope.
  7. Questions that require reasoning, inference or deduction: Chatbots may struggle to answer questions that require complex reasoning, inference or deduction, such as understanding cause-and-effect relationships or making connections between seemingly unrelated pieces of information.

Some specific examples of questions that chatbots may struggle to answer include:

  • “What’s the meaning of life?” (requires philosophical understanding and nuance)
  • “Will AI surpass human intelligence?” (requires complex reasoning and prediction)
  • “Can you explain the implications of quantum mechanics on everyday life?” (requires domain-specific knowledge and understanding)
  • “What’s your opinion on the current state of politics?” (requires personal opinion and bias)

Consider that chatbots are constantly evolving and advancements in AI and NLP may improve their ability to answer these types of questions in the future. However, for now, these questions remain challenging for chatbots to answer accurately and effectively.

Sources:

Questions AI Can’t Answer

Based on the search results, AI systems struggle to answer questions that require:

  1. Subjective judgment: AI can provide information based on data and patterns, but subjective matters like personal opinions, emotions or values are challenging for AI to grasp.
  2. Inference: AI has difficulty drawing conclusions or making inferences based on incomplete or ambiguous information.
  3. Nuanced understanding of language: AI can struggle to detect subtle shades of meaning, idioms or figurative language in human communication.
  4. Broad understanding of multiple topics: AI may excel in specific domains but it can struggle to integrate knowledge from multiple areas or make connections between seemingly unrelated concepts.
  5. Open-ended questions: AI can find it difficult to answer questions that require creative thinking, imagination or novel solutions.
  6. Contextual understanding: AI may not fully comprehend the context in which a question is being asked, leading to inaccurate or incomplete responses.
  7. Unstructured data: While AI can process structured data, it can struggle to analyse and make sense of unstructured data, such as images, sounds or handwritten text.
  8. Inference and prediction: AI can predict outcomes based on patterns, but it may not be able to fully anticipate the consequences of complex events or make decisions that require a deep understanding of human behaviour.
  9. Common sense: AI may lack the common sense and real-world experience that humans take for granted, leading to difficulties in understanding and responding to everyday situations.
  10. Questions requiring human intuition: AI can analyse vast amounts of data, but it may not possess the same level of intuition or instinct as humans, making it challenging to answer questions that rely heavily on these qualities.

Some specific examples of questions that AI may struggle to answer include:

  • What would you tell Jesus if you were to meet him?
  • Who do you love more, your parents, your spouse, or your dog?
  • I am thinking about driving to New York from my home in Vermont next week. What do you think?
  • Can AI answer this question correctly: How many R’s are in ‘strawberry’? (see the sketch after this list)
  • What would be the first question you would ask Victor Hugo if you were to meet him?

These questions often require a level of human understanding, intuition and creativity that AI systems have not yet mastered.
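The “strawberry” question is a telling example: ordinary code answers it exactly in one line, whereas language models process text as tokens rather than individual letters and have famously miscounted. A quick check, in Python:

```python
# Counting letters is trivial for code; language models see tokens,
# not characters, which is why this question famously trips them up.
word = "strawberry"
print(word.count("r"))  # prints 3
```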

Sources:

Author: Rhoda Wilson

yogaesoteric
December 15, 2024