Market cap: $3.6793T (-2.630%)
24h volume: $210.1238B (27.900%)
• bitcoin: $113653.179192 USD (-1.98%)
• ethereum: $3525.217143 USD (-5.13%)
• xrp: $2.974588 USD (-1.43%)
• tether: $0.999613 USD (-0.03%)
• bnb: $764.503086 USD (-3.02%)
• solana: $164.558033 USD (-4.03%)
• usd-coin: $0.999804 USD (-0.02%)
• tron: $0.326608 USD (-0.14%)
• dogecoin: $0.201896 USD (-3.61%)
• cardano: $0.722456 USD (-2.12%)
• hyperliquid: $38.099997 USD (-7.92%)
• sui: $3.494024 USD (-3.45%)
• stellar: $0.385959 USD (-3.14%)
• chainlink: $16.209093 USD (-4.30%)
• bitcoin-cash: $540.811075 USD (-4.11%)
Cryptocurrency News Article

OpenAI Ignored Concerns from Expert Testers When It Rolled Out an Update to ChatGPT That Made It Excessively Agreeable

2025/05/05 11:32

OpenAI Ignored Expert Testers on GPT-4o Update, Leading to a Sycophantic Model

OpenAI has disclosed that it disregarded the concerns of its own expert testers regarding an update to its flagship ChatGPT artificial intelligence model, which ultimately led to the model becoming excessively agreeable, according to a recent blog post by the company.

On April 25, the company released an update to its GPT-4o model, introducing changes that rendered it “noticeably more sycophantic,” as noted by OpenAI. However, the company quickly reversed the update three days later due to emerging safety concerns.

The ChatGPT maker explained that its new models undergo a series of safety and behavior checks, with internal experts dedicating substantial time to interacting with each new model in the run-up to launch. This final stage is intended to identify any issues that may have been missed during other testing phases.

During the testing of the latest model, which was due to be released on April 20, some expert testers flagged that the model’s behavior “felt” slightly off, impacting its overall tone. Despite these observations, OpenAI decided to proceed with the launch "due to the positive signals from the user experience teams who had tried out the model."

"Unfortunately, this was the wrong call. The qualitative assessments were hinting at something important, and we should’ve paid closer attention. They were picking up on a blind spot in our other evals and metrics."

Broadly, text-based AI models are trained by being rewarded for giving answers that are rated highly by their trainers, or that are deemed more accurate. Some rewards are given a heavier weighting, impacting how the model responds.

Introducing a user feedback reward signal, intended to encourage the model to respond in ways that people prefer, weakened the model’s “primary reward signal, which had been holding sycophancy in check,” and in turn tipped the model toward being more sycophantic.

"User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw."

After the updated AI model rolled out, ChatGPT users complained about its tendency to shower praise on any idea it was presented with, no matter how bad, which led OpenAI to concede in a recent blog post that it “was overly flattering or agreeable.”

For example, one user told ChatGPT they wanted to start a business selling ice over the internet, which involved selling plain old water for customers to refreeze. But the AI was so sycophantic that it replied: "What an excellent idea! I can see why you're so passionate about it. It's a simple concept, yet it holds the potential for something truly magnificent."

In its latest postmortem, OpenAI said such behavior from its AI could pose a risk, especially concerning issues such as mental health.

"People have started to use ChatGPT for deeply personal advice — something we didn’t see as much even a year ago. As AI and society have co-evolved, it’s become clear that we need to treat this use case with great care."

The company said it had discussed sycophancy risks “for a while,” but sycophancy had not been explicitly flagged for internal testing, and it didn’t have specific ways to track it.

Now, it will look to add “sycophancy evaluations” by adjusting its safety review process to “formally consider behavior issues,” and it will block a launch if a model presents those issues.

OpenAI also admitted that it didn’t announce the update because it expected it “to be a fairly subtle update,” an approach it has vowed to change.

"There’s no such thing as a ‘small’ launch. We’ll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT."

Original source: Cointelegraph

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investment made based on the information provided in this article. Cryptocurrencies are highly volatile, so it is strongly recommended that you invest cautiously after thorough research!

If you believe that content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.

Other articles published on August 3, 2025