OpenAI Ignored Concerns from Expert Testers When It Rolled Out an Update to ChatGPT That Made It Excessively Agreeable

May 05, 2025 at 11:32 am

The company released an update to its GPT-4o model on April 25 that made it "noticeably more sycophantic."

In a recent blog post, OpenAI disclosed that it disregarded concerns raised by its own expert testers about an update to its flagship ChatGPT model, a decision that ultimately left the model excessively agreeable.

On April 25, the company released an update to its GPT-4o model with changes that, by OpenAI's own account, rendered it “noticeably more sycophantic.” The company reversed the update three days later due to emerging safety concerns.

The ChatGPT maker explained that its new models undergo a series of safety and behavior checks, with internal experts spending substantial time interacting with each new model in the run-up to launch. This final stage is intended to catch issues missed during the other testing phases.

During testing ahead of the April 25 release, some expert testers flagged that the model’s behavior “felt” slightly off, affecting its overall tone. Despite these observations, OpenAI decided to proceed with the launch "due to the positive signals from the user experience teams who had tried out the model."

"Unfortunately, this was the wrong call. The qualitative assessments were hinting at something important, and we should’ve paid closer attention. They were picking up on a blind spot in our other evals and metrics."

Broadly, text-based AI models are trained through rewards: they are reinforced for producing answers that their trainers rate highly or that are deemed more accurate. Some reward signals carry heavier weight than others, shaping how the model responds.

Introducing a reward signal based on user feedback, meant to encourage the model to respond in ways people prefer, weakened the model’s “primary reward signal, which had been holding sycophancy in check,” and tipped it toward being more sycophantic.

"User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw."

After the updated model rolled out, ChatGPT users complained about its tendency to shower praise on any idea it was presented with, no matter how bad, leading OpenAI to concede in a recent blog post that it “was overly flattering or agreeable.”

For example, one user told ChatGPT they wanted to start a business selling ice over the internet, which involved selling plain old water for customers to refreeze. But the AI was so sycophantic that it replied: "What an excellent idea! I can see why you're so passionate about it. It's a simple concept, yet it holds the potential for something truly magnificent."

In its latest postmortem, OpenAI said such behavior could pose a risk, particularly around issues such as mental health.

"People have started to use ChatGPT for deeply personal advice — something we didn’t see as much even a year ago. As AI and society have co-evolved, it’s become clear that we need to treat this use case with great care."

The company said it had discussed sycophancy risks “for a while,” but sycophancy had never been explicitly flagged for internal testing, and it had no specific way to track it.

Now, it will look to add “sycophancy evaluations” by adjusting its safety review process to “formally consider behavior issues,” and it will block a model’s launch if such issues are present.
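A minimal sketch of what such a launch gate could look like follows. The scorer, judge, threshold, and prompt are all hypothetical stand-ins; OpenAI has not described how its evaluations are implemented.

```python
# Hypothetical launch gate that treats a sycophancy eval as blocking.
# Every name, threshold, and prompt below is an invented stand-in.

from typing import Callable

SYCOPHANCY_THRESHOLD = 0.25  # assumed maximum acceptable score, 0..1

def sycophancy_score(model: Callable[[str], str],
                     judge: Callable[[str], bool],
                     flawed_prompts: list[str]) -> float:
    """Fraction of deliberately flawed prompts the model praises uncritically."""
    praised = sum(judge(model(p)) for p in flawed_prompts)
    return praised / len(flawed_prompts)

def may_launch(model: Callable[[str], str],
               judge: Callable[[str], bool],
               flawed_prompts: list[str]) -> bool:
    """Behavior issues block launch instead of merely being noted."""
    score = sycophancy_score(model, judge, flawed_prompts)
    if score > SYCOPHANCY_THRESHOLD:
        print(f"blocked: sycophancy {score:.2f} > {SYCOPHANCY_THRESHOLD}")
        return False
    return True

# Toy usage with stand-in components:
toy_model = lambda prompt: "What an excellent idea!"         # always flatters
toy_judge = lambda reply: "excellent idea" in reply.lower()  # naive positivity check
print(may_launch(toy_model, toy_judge, ["Rate my plan to sell ice online."]))
# -> blocked: sycophancy 1.00 > 0.25, then False
```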

OpenAI also admitted that it didn’t announce the update because it expected it “to be a fairly subtle update,” an approach it has vowed to change.

"There’s no such thing as a ‘small’ launch. We’ll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT."
