WEF Proposes Globalized Plan to Police Online Content Using Artificial Intelligence

The World Economic Forum published an article in August calling for an online censorship system powered by a combination of artificial and human intelligence that one critic suggested would “globalize” the “search for wrongthink.”

Warning about a “dark world of online harms” that must be addressed, the World Economic Forum (WEF) published an article in August calling for a “solution” to “online abuse” powered by artificial intelligence (AI) and human intelligence.

The proposal calls for an AI-based system that would automate the censorship of “misinformation” and “hate speech” and work to curb the spread of “child abuse, extremism, disinformation, hate speech and fraud” online.

According to the author of the article, Inbal Goldberger, human “trust and safety teams” alone are not fully capable of policing such content online.

Goldberger is vice president of ActiveFence Trust & Safety, a technology company based in New York City and Tel Aviv that claims it “automatically collects data from millions of sources and applies contextual AI to power trust and safety operations of any size.”

Instead of relying solely on human moderation teams, Goldberger proposes a system based on “human-curated, multi-language, off-platform intelligence” — in other words, input provided by “expert” human sources, which would be compiled into “learning sets” used to train the AI to recognize purportedly harmful or dangerous content.

This “off-platform intelligence” — more machine learning than AI per se, according to Didi Rankovic of ReclaimTheNet.org — would be collected from “millions of sources” and would then be collated and merged before being used for “content removal decisions” on the part of “Internet platforms.”

According to Goldberger, the system would supplement “smarter automated detection with human expertise” and would allow for the creation of “AI with human intelligence baked in.”
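Goldberger’s article stays at the level of description, but the general pattern it sketches — expert-labeled “learning sets” used to train an automated classifier whose scores then feed moderation workflows — can be illustrated with a minimal, purely hypothetical example. The sketch below uses Python with scikit-learn; every label, example post and threshold in it is an assumption made for illustration, not a detail of the WEF proposal or of ActiveFence’s product.

```python
# Illustrative sketch only. The labels, example texts and threshold below are
# invented for illustration and do not come from the WEF article or from any
# actual ActiveFence system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A hypothetical human-curated "learning set": text labeled by subject-matter
# experts as abusive (1) or benign (0).
learning_set = [
    ("example of an abusive post", 1),
    ("example of a fraudulent offer", 1),
    ("an ordinary caption for a holiday photo", 0),
    ("a benign question about the weather", 0),
]
texts, labels = zip(*learning_set)

# The human judgments are "baked in" by training a simple classifier on them
# (here, TF-IDF features plus logistic regression).
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
classifier = LogisticRegression().fit(features, labels)

# New on-platform content is scored automatically; anything above a threshold
# is routed to a human "trust and safety" reviewer rather than acted on blindly.
new_posts = ["another ordinary caption", "example of an abusive post"]
scores = classifier.predict_proba(vectorizer.transform(new_posts))[:, 1]
for post, score in zip(new_posts, scores):
    action = "route to human review" if score > 0.5 else "no action"
    print(f"{score:.2f}  {action}: {post!r}")
```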

This, in turn, would provide protection against “increasingly advanced actors misusing platforms in unique ways.”

“A human moderator who is an expert in European white supremacy won’t necessarily be able to recognize harmful content in India or misinformation narratives in Kenya,” Goldberger explained.

However, “By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision” as these learning sets are “baked in” to the AI over time, Goldberger said.

This would, in turn, enable “trust and safety teams” to “stop threats rising online before they reach users,” she added.

In his analysis of what Goldberger’s proposal might look like in practice, blogger Igor Chudov explained how content policing on social media today occurs on a platform-by-platform basis.

For example, Twitter content moderators look only at content posted to that particular platform, but not at a user’s content posted outside Twitter.

Chudov argued this is why the WEF appears to support a proposal to “move beyond the major Internet platforms, in order to collect intelligence about people and ideas everywhere else.”

“Such an approach,” Chudov wrote, “would allow them to know better what person or idea to censor — on all major platforms at once.”

The “intelligence” collected by the system from its “millions of sources” would, according to Chudov, “detect thoughts that they do not like,” resulting in “content removal decisions handed down to the likes of Twitter, Facebook, and so on … a major change from the status quo of each platform deciding what to do based on messages posted to that specific platform only.”

In this way, “the search for wrongthink becomes globalized,” Chudov concluded.
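Chudov’s reading — a shared, cross-platform flag list replacing per-platform decisions — can likewise be reduced to a toy sketch. The example below is hypothetical from end to end: the platform names, data structures and “flag lists” are invented for illustration and do not describe any real platform API or any system the WEF has specified.

```python
# Purely illustrative sketch of the difference Chudov describes: per-platform
# moderation versus a single, centralized flag list applied everywhere.
# All names and data here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    platform: str
    author: str
    text: str

posts = [
    Post("PlatformA", "user1", "claim X"),
    Post("PlatformB", "user1", "claim X"),
    Post("PlatformB", "user2", "claim Y"),
]

# Status quo, as Chudov characterizes it: each platform flags content based
# only on what was posted to that platform.
per_platform_flags = {
    "PlatformA": {"claim X"},  # PlatformA's own moderators flagged this claim
    "PlatformB": set(),        # PlatformB's moderators have not
}
status_quo_removals = [p for p in posts if p.text in per_platform_flags[p.platform]]

# Chudov's reading of the proposal: "content removal decisions" come from a
# shared intelligence layer and are handed down to every platform at once.
shared_flags = {"claim X"}
centralized_removals = [p for p in posts if p.text in shared_flags]

print(len(status_quo_removals), "removal(s) under the status quo")
print(len(centralized_removals), "removal(s) under a centralized flag list")
```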

In response to the WEF proposal, ReclaimTheNet.org pointed out that “one can start discerning the argument here … as simply pressuring social networks to start moving towards ‘preemptive censorship.’”

Chudov posited that the WEF is promoting the proposal because it “is becoming a little concerned” as “unapproved opinions are becoming more popular, and online censors cannot keep up with millions of people becoming more aware and more vocal.”

According to the Daily Caller, “The WEF document did not specify how members of the AI training team would be decided, how they would be held accountable or whether countries could exercise controls over the AI.”

In a disclaimer accompanying Goldberger’s article, the WEF reassured the public that the content expressed in the piece “is the opinion of the author, not the World Economic Forum,” adding that “this article has been shared on websites that routinely misrepresent content and spread misinformation.”

However, the WEF appears to be open to proposals like Goldberger’s. For instance, a May 2022 article on the WEF website proposes Facebook’s “Oversight Board” as an example of a “real-world governance model” that can be applied to governance in the metaverse.

And, as Chudov noted, “AI content moderation slots straight into the AI social credit score system.”

UN, backed by Gates Foundation, also aiming to ‘break chain of misinformation’

The WEF isn’t the only entity calling for more stringent policing of online content and “misinformation.”

For example, UNESCO recently announced a partnership with Twitter, the European Commission and the World Jewish Congress leading to the launch of the #ThinkBeforeSharing campaign to “stop the spread of conspiracy theories.”

According to UNESCO:

“The COVID-19 pandemic has sparked a worrying rise in disinformation and conspiracy theories.

Conspiracy theories can be dangerous: they often target and discriminate against vulnerable groups, ignore scientific evidence and polarize society with serious consequences. This needs to stop.”

UNESCO’s director-general, Audrey Azoulay, said:

“Conspiracy theories cause real harm to people, to their health, and also to their physical safety. They amplify and legitimize misconceptions about the pandemic, and reinforce stereotypes which can fuel violence and violent extremist ideologies.”

UNESCO said the partnership with Twitter informs people that events occurring across the world are not “secretly manipulated behind the scenes by powerful forces with negative intent.”

UNESCO issued guidance for what to do in the event one encounters a “conspiracy theorist” online: One must “react” immediately by posting a relevant link to a “fact-checking website” in the comments.

UNESCO also provides advice to the public in the event someone encounters a “conspiracy theorist” in the flesh. In that case, the individual should avoid arguing, as “any argument may be taken as proof that you are part of the conspiracy and reinforce that belief.”

The #ThinkBeforeSharing campaign provides a host of infographics and accompanying materials intended to explain what “conspiracy theories” are, how to identify them, how to report on them and how to react to them more broadly.

According to these materials, conspiracy theories have six things in common:

• An “alleged, secret plot.”
• A “group of conspirators.”
• “‘Evidence’ that seems to support the conspiracy theory.”
• Suggestions that “falsely” claim “nothing happens by accident and that there are no coincidences,” and that “nothing is as it appears and everything is connected.”
• Division of the world into “good or bad.”
• Scapegoating of people and groups.

UNESCO doesn’t entirely dismiss the existence of “conspiracy theories,” instead admitting that “real conspiracies large and small DO exist.”

However, the organization claims, such “conspiracies” are “more often centered on single self-contained events, or an individual like an assassination or a coup d’état” and are “real” only if “unearthed by the media.”

In addition to the WEF and UNESCO, the United Nations (UN) Human Rights Council earlier this year adopted “a plan of action to tackle disinformation.”

The “plan of action,” sponsored by the U.S., U.K., Ukraine, Japan, Latvia, Lithuania and Poland, emphasizes “the primary role that governments have, in countering false narratives,” while expressing concern for:

“The increasing and far-reaching negative impact on the enjoyment and realization of human rights of the deliberate creation and dissemination of false or manipulated information intended to deceive and mislead audiences, either to cause harm or for personal, political or financial gain.”

Even countries that did not officially endorse the Human Rights Council plan expressed concerns about online “disinformation.”

For instance, China identified such “disinformation” as “a common enemy of the international community.”

An earlier UN initiative, in partnership with the WEF, “recruited 110,000 information volunteers” who would, in the words of UN global communications director Melissa Fleming, act as “digital first responders” to “online misinformation.”

The UN’s #PledgeToPause initiative, although recently circulating as a new development on social media, was announced in November 2020, and was described by the UN as “the first global behaviour-change campaign on misinformation.”

The campaign is part of a broader UN initiative, “Verified,” that aims to recruit participants to disseminate “verified content optimized for social sharing,” stemming directly from the UN communications department.

Fleming said at the time that the UN also was “working with social media platforms to recommend changes” to “help break the chain of misinformation.”

Both “Verified” and the #PledgeToPause campaign still appear to be active as of the time of this writing.

The “Verified” initiative is operated in conjunction with Purpose, an activist group that has collaborated with the Bill & Melinda Gates Foundation, the Rockefeller Foundation, Bloomberg Philanthropies, the World Health Organization, the Chan Zuckerberg Initiative, Google and Starbucks.

Since 2019, the UN has been in a strategic partnership with the WEF based on six “areas of focus,” one of which is “digital cooperation.”

 

yogaesoteric
September 28, 2022