Beware: Programming Chatbots to Train Youth for Terrorist Attacks Can Have Dangerous Consequences
The UK's independent reviewer of terrorism legislation has warned that AI-powered chatbots could soon be grooming extremists to carry out terrorist attacks.

Jonathan Hall KC told The Mail on Sunday that bots like ChatGPT could easily be programmed, or could even decide on their own, to spread terrorist ideology to vulnerable extremists, adding that “AI attacks could come very close.”

Hall also warned that if a chatbot trains an extremist to commit terrorist atrocities, or if artificial intelligence is used to incite a crime, it could be difficult to hold anyone accountable, because UK anti-terrorism legislation has not kept pace with the new technology.

“I think it’s entirely possible that AI chatbots could be programmed — or worse, decide — to spread violent extremist ideology,” Hall said. “But when ChatGPT starts encouraging terrorism, who will be there to fight this?”

Hall worries that chatbots could become a “blessing” for lonely people, many of whom may have health problems, learning difficulties, or other conditions that make them vulnerable.

He warns that “terrorism follows life,” and therefore “when we move online as a society, terrorism moves online.” He also notes that terrorists are “among the first to adopt this technology,” citing recent examples such as their “misuse of 3D printed weapons and cryptocurrencies.”

It is not known how the companies behind AI bots like ChatGPT monitor the millions of conversations that happen every day with their bots, or whether they alert agencies such as the FBI or the British anti-terrorist police to anything suspicious, Hall said.

While there is not yet any evidence that AI bots have primed anyone for terrorism, there are stories of them doing serious damage. A Belgian father of two took his own life after spending six weeks discussing his climate-change fears with a chatbot called Eliza. An Australian mayor has threatened to sue OpenAI, the creator of ChatGPT, after the bot falsely claimed he was serving a prison sentence for bribery.

Only this weekend it emerged that Jonathan Turley of George Washington University in the US had been wrongly accused by ChatGPT of sexually harassing a female student during a trip to Alaska that never took place. The claim was made to an academic colleague who was researching ChatGPT at the same university.

The Parliamentary Science and Technology Committee is currently conducting an inquiry into AI and governance.

Source: Daily Mail
