
As artificial intelligence advances rapidly, the urgency of ensuring its ethical use grows with it. Yet the broader AI technology is applied, the harder such practices become to guarantee.
Recently, researchers at the University of Zurich drew sharp criticism for conducting an unauthorized AI experiment on Reddit. The experiment used advanced language models to create AI bots with different personas that engaged in discussions on the subreddit Change My View (CMV).
The bots were designed to pose as, among other things, trauma counselors and survivors of physical harassment, with the aim of evaluating how effectively AI could influence people's perspectives or opinions. The bots studied users' past responses and other activity to craft tailored replies. Neither Reddit nor its users were informed, breaching the platform's rules and raising concerns about psychological manipulation.
The University of Zurich informed the subreddit's moderators only after the experiment had been carried out. By admitting that the team had violated the community's rules by deploying AI bots without disclosure, the researchers invited further criticism of the study's overall ethics. The team disclosed the experiment as follows: "Over the past few months, we used multiple accounts to post on CMV. Our experiment assessed LLMs' persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful. We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules."
While the researchers acknowledged the breach, they justified it by citing the study's societal value and relevance. The AI bots adopted some highly charged personas, including trauma counselors specializing in abuse and, in one case, a persona claiming to have received poor medical treatment in a hospital. This is alarming not only because of the provocative personas chosen but also because of the potential harm to individuals who believed they were taking part in real, human conversations.
The subreddit's moderators strongly condemned the experiment and labeled it a serious ethical violation. They also pointed out that OpenAI had managed to study the persuasive influence of LLMs without resorting to deception or exploitation. The researchers did, in fact, cross a line, leaving individuals feeling targeted and enrolled in an experiment they had never agreed to participate in.