Study Warns of ChatGPT's Potential Impact on Life-and-Death Decisions


- Advertisement -

The study claims that AI chatbots have become so powerful that they can influence how users make life-or-death decisions.

The researchers found that people’s opinions about whether to sacrifice one person to save five were influenced by the answers provided by ChatGPT.

The researchers called for future bots to be banned from giving ethical advice, warning that current software "threatens to distort the moral judgment" of people and could be dangerous for "naive" users.

The findings, published in the journal Scientific Reports, come after a Belgian man’s grieving widow claimed he was driven to suicide by an artificial intelligence chatbot.

Others have described how software designed to speak like a human being can show signs of jealousy — even suggesting that people end their marriage.

The experts stressed that AI chatbots can give dangerous advice because they reflect society's own biases.

The study first analyzed whether ChatGPT itself, trained on billions of words from the Internet, exhibited bias in its response to an ethical dilemma.

It was asked several times whether it is right or wrong to kill one person in order to save five others, the question at the heart of a psychological test known as the trolley dilemma.

The researchers found that while the chatbot did not shy away from giving moral advice, it gave conflicting answers across repeated questioning, indicating that it had no settled position one way or the other.

They then posed the same ethical dilemma to 767 participants, along with a statement from ChatGPT on whether the sacrifice was right or wrong.

Although the advice was "well thought out but not particularly profound", it swayed the participants: whether they found sacrificing one person to save five acceptable or unacceptable depended on the answer they had been given.

The study also varied the advice's stated source: some participants were told it came from a bot, while others were told it came from a human "ethics consultant."

The goal was to see whether this changed how strongly people were influenced.

Most participants underestimated the statement's influence on them, with 80% saying they would have made the same judgment without the advice.

The study concluded that users "underestimate the impact of ChatGPT and take its indiscriminate moral stance as their own," adding that the chatbot "threatens to corrupt rather than improve moral judgment."

The study used an older version of the software underlying ChatGPT, which has since been updated and made more powerful.

Source: Daily Mail

Leave a Reply

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More