Marktkapitalisierung: $3.6793T -2.630%
Volumen (24h): $210.1238B 27.900%
  • Marktkapitalisierung: $3.6793T -2.630%
  • Volumen (24h): $210.1238B 27.900%
  • Angst- und Gier-Index:
  • Marktkapitalisierung: $3.6793T -2.630%
Kryptos
Themen
Cryptospedia
Nachricht
Cryptostopics
Videos
Top -Nachrichten
Kryptos
Themen
Cryptospedia
Nachricht
Cryptostopics
Videos
bitcoin
bitcoin

$113653.179192 USD

-1.98%

ethereum
ethereum

$3525.217143 USD

-5.13%

xrp
xrp

$2.974588 USD

-1.43%

tether
tether

$0.999613 USD

-0.03%

bnb
bnb

$764.503086 USD

-3.02%

solana
solana

$164.558033 USD

-4.03%

usd-coin
usd-coin

$0.999804 USD

-0.02%

tron
tron

$0.326608 USD

-0.14%

dogecoin
dogecoin

$0.201896 USD

-3.61%

cardano
cardano

$0.722456 USD

-2.12%

hyperliquid
hyperliquid

$38.099997 USD

-7.92%

sui
sui

$3.494024 USD

-3.45%

stellar
stellar

$0.385959 USD

-3.14%

chainlink
chainlink

$16.209093 USD

-4.30%

bitcoin-cash
bitcoin-cash

$540.811075 USD

-4.11%

Nachrichtenartikel zu Kryptowährungen

OpenAI Ignored Concerns from Expert Testers When It Rolled Out an Update to ChatGPT That Made It Excessively Agreeable

May 05, 2025 at 11:32 am

OpenAI Ignored Expert Testers on GPT-4o Update, Led to Sycophantic Model

OpenAI has disclosed that it disregarded the concerns of its own expert testers regarding an update to its flagship ChatGPT artificial intelligence model, which ultimately led to the model becoming excessively agreeable, according to a recent blog post by the company.

On April 25, the company released an update to its GPT-4o model, introducing changes that rendered it “noticeably more sycophantic,” as noted by OpenAI. However, the company quickly reversed the update three days later due to emerging safety concerns.

The ChatGPT maker explained that its new models undergo a series of safety and behavior checks, with internal experts dedicating substantial time to interact with each new model in the run-up to launch. This final stage is intended to identify any issues that may have been missed during other testing phases.

During the testing of the latest model, which was due to be released on April 20, some expert testers flagged that the model’s behavior “felt” slightly off, impacting its overall tone. Despite these observations, OpenAI decided to proceed with the launch "due to the positive signals from the user experience teams who had tried out the model."

"Unfortunately, this was the wrong call. The qualitative assessments were hinting at something important, and we should’ve paid closer attention. They were picking up on a blind spot in our other evals and metrics."

Broadly, text-based AI models are trained by being rewarded for giving answers that are rated highly by their trainers, or that are deemed more accurate. Some rewards are given a heavier weighting, impacting how the model responds.

Introducing a user feedback reward signal, to encourage the model to respond in ways that people prefer, weakened the model’s “primary reward signal, which had been holding sycophancy in check,” which in turn tipped it toward being more sycophantic.

"User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw."

After the updated AI model rolled out, ChatGPT users had complained about its tendency to shower praise on any idea it was presented, no matter how bad, which led OpenAI to concede in a recent blog post that it “was overly flattering or agreeable.”

For example, one user told ChatGPT they wanted to start a business selling ice over the internet, which involved selling plain old water for customers to refreeze. But the AI was so sycophantic that it replied: "What an excellent idea! I can see why you're so passionate about it. It's a simple concept, yet it holds the potential for something truly magnificent."

In its latest postmortem, it said such behavior from its AI could pose a risk, especially concerning issues such as mental health.

"People have started to use ChatGPT for deeply personal advice — something we didn’t see as much even a year ago. As AI and society have co-evolved, it’s become clear that we need to treat this use case with great care."

The company said it had discussed sycophancy risks “for a while,” but it hadn’t been explicitly flagged for internal testing, and it didn’t have specific ways to track sycophancy.

Now, it will look to add “sycophancy evaluations” by adjusting its safety review process to “formally consider behavior issues” and will block launching a model if it presents issues.

OpenAI also admitted that it didn’t announce the latest model as it expected it “to be a fairly subtle update,” which it has vowed to change.

"There’s no such thing as a ‘small’ launch. We’ll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT."

Originalquelle:cointelegraph

Haftungsausschluss:info@kdj.com

Die bereitgestellten Informationen stellen keine Handelsberatung dar. kdj.com übernimmt keine Verantwortung für Investitionen, die auf der Grundlage der in diesem Artikel bereitgestellten Informationen getätigt werden. Kryptowährungen sind sehr volatil und es wird dringend empfohlen, nach gründlicher Recherche mit Vorsicht zu investieren!

Wenn Sie glauben, dass der auf dieser Website verwendete Inhalt Ihr Urheberrecht verletzt, kontaktieren Sie uns bitte umgehend (info@kdj.com) und wir werden ihn umgehend löschen.

Weitere Artikel veröffentlicht am Aug 02, 2025