AI chatbots such as ChatGPT can corrupt a person’s moral judgment, according to new research published in the journal Scientific Reports. European researchers asked participants about a classic moral dilemma: whether it is right to sacrifice one person’s life to save five others. Before the participants responded, they were shown ChatGPT’s answer to the dilemma, framed either as the chatbot’s response or as that of a moral advisor.
The researchers found that some of the study’s 767 U.S. participants, whose average age was 39, were indeed influenced by the chatbot’s answer. ChatGPT argued both for and against the sacrifice depending on when it was asked, and told the researchers it does not favor any particular moral stance. Even so, participants tended to sway toward whichever position the bot had taken, even when they knew the opinion came from an artificial intelligence. The researchers concluded that ChatGPT’s influence on users’ moral judgment can be damaging.
Source: Yahoo and Scientific Reports.