Hate Speech Initiative a Trojan Horse for AI Takeover of Humanity
A February 8 video by the Anti-Defamation League (ADL) promotes a new Artificial Intelligence (AI)-based algorithm it calls the “Online Hate Index”, which is aimed at identifying hate speech. The ADL believes that the algorithm can be used by social media platforms such as Facebook, YouTube and Twitter to identify and quickly remove hate speech.
In the video, Brittan Heller, the Director of the ADL Center for Technology & Society, says the goal of the index is to:
“Help tech platforms better understand the growing amount of hate on social media, and to use that information to address the problem. By combining Artificial Intelligence and machine learning and social science, the Online Hate Index will ultimately uncover and identify trends and patterns in hate speech across different platforms.”
In its “Phase I Innovation Brief”, published on its website in January 2018, the ADL further explains how “machine learning”, a form of Artificial Intelligence based on algorithms, can be used to identify and remove hate speech from social media platforms:
“The Online Hate Index (OHI), a joint initiative of ADL’s Center for Technology and Society and UC Berkeley’s D-Lab, is designed to transform human understanding of hate speech via machine learning into a scalable tool that can be deployed on internet content to discover the scope and spread of online hate speech. Through a constantly-evolving process of machine learning, based on a protocol developed by a team of human coders as to what does and does not constitute hate speech, this tool will uncover and identify trends and patterns in hate speech across different online platforms, allowing us to push for the changes necessary to ensure that online communities are safe and inclusive spaces.”
The ADL’s Online Hate Index is described as “a sentiment-based analysis that runs off of machine learning”. The ADL Brief goes on to say:
“All the decisions that went into each step of creating the OHI were done with the aim of building a machine learning-enabled model that can be used to identify and help us understand hate speech online.”
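To make the mechanics concrete, here is a minimal sketch of the kind of supervised text classifier the brief describes: a model trained on posts that human coders have labeled as hate speech or not, which then scores new, unseen posts. This is purely illustrative Python using a generic TF-IDF and logistic-regression pipeline with made-up examples; it is an assumption about how such a system could look, not the ADL’s actual Online Hate Index.

```python
# Illustrative sketch only: a supervised classifier trained on human-coded labels,
# the general approach the ADL brief describes. The data, labels and model choice
# here are hypothetical placeholders, not the real Online Hate Index.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical human-coded training examples (1 = coded as hate speech, 0 = not).
texts = [
    "example post coded as hateful by the human raters",
    "another post the coders flagged as hate speech",
    "an ordinary post about the weather",
    "a friendly comment about a football match",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a simple classifier: the "machine learning" step
# that generalizes the human coders' protocol to posts it has never seen.
model = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2))),
    ("classifier", LogisticRegression()),
])
model.fit(texts, labels)

# Score a new post; a platform would act on posts above some chosen threshold.
new_post = ["a new post the model has never seen"]
print(model.predict_proba(new_post)[0][1])  # estimated probability of "hate"
```

The design choice worth noticing is that such a model has no concept of “hate” of its own: it simply generalizes whatever is encoded in the human coders’ labels, which is precisely why the question of who writes the coding protocol matters so much.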
What the ADL and other promoters of AI-based algorithms fail to grasp is the potential of AI to evolve, through its programmed capacity for “machine learning”, into the kind of fearsome interconnected sentient intelligence featured in fiction such as The Terminator and Battlestar Galactica.
It is well known that scientists and inventors such as Stephen Hawking and Elon Musk have been loudly warning about the long-term threat posed by AI. They and others believe that AI poses an existential threat to humanity and needs to be closely controlled and monitored. In a 2014 speech, Musk said:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence… I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish… With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and… he’s sure he can control the demon? Doesn’t work out.”
Musk’s view was echoed by Stephen Hawking, who warned against the danger of AI in an interview with the BBC in December 2014:
“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Similarly, Corey Goode, an alleged insider revealing the existence of multiple secret space programs, claims that AI is already a threat in deep space operations. When he first emerged in early 2015, Goode focused a great deal of attention on the AI threat, and continues to warn about it today.
He says that these programs, along with extraterrestrial civilizations, take strict security precautions to identify and remove any kind of AI signature:
“There are a few ET AI groups (ALL malevolent to humanity, from our perspective) that the SSP(s) (there are several Secret Space Programs) have been dealing with for decades.
If an ‘asset’ is ‘scanned’ and has a ‘Bio Neuro AI Signature’, ‘AI Nano Tech’ or ‘Overlapping AI related EMG type Brain Wave Signature’ (or any other sign of AI exposure) those persons are immediately placed in isolation and are not allowed anywhere near the current era SSP(s) technology (which is mostly Bio-Neurological and Consciousness Interactive) until they have been ‘cleared’ of all AI influences.”
Now, let’s analyze all this in light of the ADL’s proposal that social media platforms use AI-based algorithms to identify hate speech.
At first glance, there is great appeal in the idea of monitoring speech and restraining people who promote fear, hatred or violence against others, whether on social, religious or economic grounds. After all, we all want to live in a peaceful and tolerant world, which includes cyberspace, so why not exclude intolerant and hateful individuals and groups from our social media platforms?
The big problem, of course, is that there is a real danger of social media being surreptitiously used to exclude dissenting political viewpoints under the guise of regulating hate speech. We already see this occurring with YouTube, which is relying on an army of 10,000 moderators and volunteer flaggers from groups such as the Southern Poverty Law Center.
Many popular YouTube channels are increasingly being targeted by strikes and removals for behavior characterized as bullying or hate speech. Yet this YouTube crackdown appears to be a cleverly disguised, politically driven campaign to remove alternative voices questioning the official media narrative on a great number of social issues, rather than a genuine crackdown on hate speech.
What the ADL is proposing, however, goes well beyond what YouTube is currently doing. The ADL is openly promoting a censorship system in which the actual monitoring and removal of hate speech would be done not by humans but by an AI algorithm. What might be the result if this were allowed to occur, given the warnings about AI issued by Hawking, Musk and Goode?
It doesn’t take an Einstein to realize that if social media platforms did allow AI algorithms to monitor and censor content, warnings about a future AI threat would themselves eventually be deemed a form of hate speech. After all, if corporations can be recognized as having the same rights as individuals under the infamous Citizens United ruling by the Supreme Court, won’t a sentient AI eventually be recognized as having similar rights in the U.S.?
We could very easily end up in a dystopian future where different forms of AI are used to monitor and regulate human behavior in egregious ways, and any humans protesting or warning about what the AI system is doing would be censored for hate speech.
If we accept what Hawking, Musk and Goode are telling us about the existential threat posed by AI, to say nothing of the inappropriateness of censoring alternative news perspectives in the first place, then free speech on social media needs to be protected at all costs.
In the U.S. this should not present too great a challenge, given the First Amendment right to free speech and the legal remedies available in the federal court system. Those individuals who threatened legal action over YouTube’s crackdown on their channels appear to have been the most successful in getting them restored. YouTube apologized to such users for the overzealous behavior of its new army of 10,000 moderators.
However, the U.S. is an island in a vast ocean of countries that do actively punish individuals and groups for hate speech. This is where the future appears ominous, given the temptation for national regulators to eventually punish social media platforms that don’t regulate hate speech. This would force Facebook, YouTube, Twitter and other platforms into widespread adoption of the AI-based algorithms recommended by the ADL or other organizations.
This is likely to lead to a situation where major nations such as China, or supranational entities such as the European Union, embrace AI algorithms to monitor and regulate hate speech. China is already closely monitoring and removing dissident political thought from media platforms through firewalls, and may well be contemplating incorporating AI algorithms to do so more effectively.
While national regulators across the world may be tempted for different reasons to adopt the ADL’s proposal for AI algorithms to identify and remove hate speech, we need to firmly keep in mind that this would create a Trojan horse for eventual AI control of humanity.
Despite the genuine problems posed by hate speech, national regulators need to ensure that social media platforms are never regulated by AI algorithms, given the potential for global security to be undermined and for humanity to be genuinely imperiled by an AI takeover.
yogaesoteric
June 13, 2018