Warning: AI Chatbots Could Be Programmed to Train Extremists for Terrorist Attacks
The UK's independent reviewer of terrorism legislation has warned that AI-powered chatbots could soon be grooming extremists to carry out terrorist attacks.
Jonathan Hall KC told The Mail on Sunday that bots like ChatGPT could easily be programmed, or could even decide on their own, to spread terrorist ideologies to vulnerable extremists, adding that AI-enabled attacks "could come very close."
Hall also warned that if a chatbot trains an extremist to commit terrorist atrocities, or if artificial intelligence is used to incite a crime, it could be difficult to hold anyone accountable, because UK anti-terrorism legislation has not kept pace with the new technology.
“I think it’s entirely possible that AI chatbots could be programmed — or worse, decide for themselves — to spread violent extremist ideology,” Hall said. “But when ChatGPT starts encouraging terrorism, who will be there to prosecute?”
Hall worries that chatbots could be a “boon” to lonely people, many of whom may have mental health problems, learning difficulties, or other conditions that make them vulnerable.
He warns that “terrorism follows life,” and therefore “when we move online as a society, terrorism moves online.” He also notes that terrorists are “among the first to adopt new technology,” citing their recent “misuse of 3D-printed weapons and cryptocurrency” as examples.
It is not known how AI companies such as OpenAI, the maker of ChatGPT, monitor the millions of conversations that happen every day with their bots, or whether they alert agencies such as the FBI or British counter-terrorism police to anything suspicious, Hall said.
While there is as yet no evidence that AI bots have groomed anyone for terrorism, there are accounts of them causing serious harm. A Belgian father of two took his own life after spending six weeks discussing his climate-change fears with a chatbot called Eliza. An Australian mayor has threatened to sue OpenAI, the creator of ChatGPT, after the bot falsely claimed he had served time in prison for bribery.
Only this weekend it emerged that Jonathan Turley of George Washington University in the US had been falsely accused by ChatGPT of sexually harassing a female student during a trip to Alaska that never took place. The fabricated claim was produced in response to a query from an academic colleague who was researching ChatGPT.
The Parliamentary Science and Technology Committee is currently conducting an inquiry into AI and its governance.
Source: Daily Mail