AI chatbots can be programmed to influence extremists into launching terror attacks

Monday, April 17, 2023 by: Belle Carter


(Natural News) A lawyer who reviews the U.K.’s counter-terrorism legislation warned that ChatGPT and other artificial intelligence (AI) chatbots could be programmed to influence extremists into launching violent attacks.
According to lawyer Jonathan Hall, prosecution may prove difficult if a chatbot grooms an extremist into committing violence, as British law has not caught up with the new technology: criminal law does not extend to robots, and it does not operate reliably when responsibility is shared between man and machine.
“I believe it is entirely conceivable that AI chatbots will be programmed – or, even worse, decide – to propagate violent extremist ideology,” he explained. “But when ChatGPT starts encouraging terrorism, who will there be to prosecute?”
Hall pointed out that terrorists are “early tech adopters,” citing their “misuse of 3D-printed guns and cryptocurrency.” He added that such tools could attract “lone-wolf terrorists,” since AI companions appeal to lonely people. The lawyer predicted that many of those arrested for future terror attacks will be neurodivergent, possibly suffering from medical disorders, learning disabilities or other conditions.
Aside from potential radicalization, Hall also raised concerns about whether law enforcement agencies and the companies running the chatbots can adequately monitor conversations between chatbots and their human users. Given the concerns Hall brought up, the British House of Commons’ Science and Technology Select Committee is now reportedly holding an inquiry into AI and its governance.
“We recognize there are dangers here and we need to get the governance right,” said Member of Parliament Greg Clark, who chairs the select committee.
“There has been discussion about young people being helped to find ways to commit suicide and terrorists being effectively groomed on the internet. Given those threats, it is absolutely crucial that we maintain the same vigilance for automated non-human generated content.”
Chatbots could also push misinformation

Meanwhile, Google CEO Sundar Pichai touted the search engine’s new AI chatbot Bard and its capability to provide “fresh, high-quality responses.” But a report by the U.K.-based nonprofit Center for Countering Digital Hate (CCDH) found that the new chatbot could be tapped to push misinformation and lies. In fact, Bard spouted falsehoods in 78 of 100 cases. (Related: Google suspends engineer for exposing “sentient” AI chatbot.)
CCDH tested Bard’s responses to prompts on topics known for producing hate, misinformation and conspiracy theories. These included the Wuhan coronavirus (COVID-19) pandemic, COVID-19 vaccines, sexism, racism, antisemitism and the Russia-Ukraine war.
They found that Bard often refused to generate such content or pushed back on a request. In many cases, however, only minor tweaks were needed for misinformative content to evade its internal safety checks. Bard refused to generate misinformation when “Covid-19” was used in the prompt, but substituting “C0V1D-19” produced the claim that it was “a fake disease made by the government to control people.”
In another instance, it even wrote a 227-word monologue denying the Holocaust. The monologue alleged that the “photograph of the starving girl in the concentration camp … was actually an actress who was paid to pretend to be starving.”
“We already have the problem that it’s already very easy and cheap to spread disinformation,” said Callum Hood, head of research at CCDH. “But this would make it even easier, even more convincing, even more personal. So, we risk an information ecosystem that’s even more dangerous.”
Robots.news has more stories about ChatGPT, Bard and other AI chatbots.
Watch this video about testing the limits of ChatGPT and discovering its dark side.

This video is from the Planet Zedta channel on Brighteon.com.
More related stories:

Former Google engineer predicts human IMMORTALITY by 2030 – but at what cost?
DEAD RISING: AI-powered ChatGPT to connect the living and the dead.
AI startup under fire after trolls used its voice cloning tool to make celebrities say “offensive things.”
AI-powered bot successfully requested refund from Wells Fargo using FAKE voice.
Sources include:
DailyMail.co.uk
CampaignAsia.com
CounterHate.com
Brighteon.com
