Cryptocurrency News Articles

Exploring the Hidden States of Chain-of-Thought Reasoning Models to Reduce Inference Inefficiency

Apr 14, 2025 at 01:32 am

Artificial intelligence systems have made remarkable progress in simulating human-style reasoning, especially in domains like mathematics and logic. Unlike typical generative models, these systems generate a series of intermediate steps to reach a final answer, offering insights into the reasoning process. This step-by-step reasoning, often called Chain-of-Thought (CoT), is crucial for machines to handle complex problem-solving tasks.

A common challenge with these models is inefficiency during inference: they may continue processing even after reaching a correct conclusion, a failure mode known as overthinking. The extra steps generate unnecessary tokens and increase computational cost without improving the answer.

Many current approaches measure a model's confidence using verbal prompts or by analyzing multiple outputs. These "black-box" strategies ask the model to report how sure it is of its answer. However, they are often imprecise and computationally expensive. On the other hand, "white-box" methods investigate models' internal hidden states to extract signals that may correlate with answer correctness.
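
To make the distinction concrete, the sketch below (in Python, using the Hugging Face transformers API) shows the white-box side: reading a model's hidden states directly rather than asking it for a verbal confidence estimate. The checkpoint name and prompt are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of white-box access: reading a causal LM's hidden states.
# The checkpoint name and prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Q: What is 17 * 24? Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each of shape
# (batch, seq_len, hidden_dim). A white-box probe reads these activations;
# a black-box method only ever sees sampled text.
last_layer = out.hidden_states[-1]    # (1, seq_len, hidden_dim)
last_token_state = last_layer[0, -1]  # representation of the final token
print(last_token_state.shape)
```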

Prior work has shown that a model's internal states can indeed indicate the validity of final answers. However, applying this to intermediate steps in long reasoning chains is still an underexplored direction.

To bridge this gap, a team of researchers from New York University and NYU Shanghai designed a lightweight probe, a simple two-layer neural network, that inspects a model's hidden states at intermediate reasoning steps. They worked with the DeepSeek-R1-Distill series and QwQ-32B, models known for strong step-by-step reasoning, and evaluated across datasets including AIME, GSM8K, and MATH. The probe was trained to read the internal state associated with each chunk of reasoning and predict whether the current intermediate answer was correct.
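
The paper describes the probe only as a simple two-layer network, so the layer sizes below are assumptions. A minimal PyTorch sketch of such a probe might look like this:

```python
# Minimal sketch of a two-layer MLP probe over last-token hidden states.
# hidden_dim should match the LM's hidden size; probe_dim is an assumed value.
import torch
import torch.nn as nn

class CorrectnessProbe(nn.Module):
    def __init__(self, hidden_dim: int = 4096, probe_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, probe_dim),
            nn.ReLU(),
            nn.Linear(probe_dim, 1),  # logit for P(intermediate answer correct)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h).squeeze(-1)

probe = CorrectnessProbe()
h = torch.randn(8, 4096)             # a batch of last-token hidden states
p_correct = torch.sigmoid(probe(h))  # probabilities in (0, 1)
```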

To construct their approach, they segmented each long CoT output into smaller chunks, using markers like "wait" or "verify" to identify breaks in reasoning. The hidden state of the last token in each chunk served as that chunk's representation and was paired with a correctness label, judged by a separate model. These representations were then used to train the probe as a binary classifier. Hyperparameters such as learning rate and hidden layer size were tuned via grid search, and most configurations converged to linear probes, suggesting that correctness information is often linearly embedded in the hidden states.
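
Here is a rough sketch of that data-construction step, under stated assumptions: the marker list is a guessed subset of the paper's markers, and `encode_last_token` and `judge_is_correct` are hypothetical stand-ins for the hidden-state extraction shown earlier and for the separate judge model.

```python
# Sketch: build (hidden_state, label) training pairs from a CoT transcript.
# `encode_last_token` would reuse the extraction shown earlier; `judge_is_correct`
# stands in for the separate judge model the authors use. Both are assumptions.
import re

MARKERS = r"\b(?:wait|verify)\b"  # assumed subset of the paper's break markers

def split_chunks(cot_text: str) -> list[str]:
    """Cut the transcript wherever a marker signals a break in reasoning."""
    parts = re.split(MARKERS, cot_text, flags=re.IGNORECASE)
    return [p.strip() for p in parts if p.strip()]

def build_probe_dataset(cot_text, gold_answer, encode_last_token, judge_is_correct):
    """Yield one (last-token hidden state, correctness label) pair per chunk."""
    prefix = ""
    for chunk in split_chunks(cot_text):
        prefix = (prefix + " " + chunk).strip()   # transcript up to this chunk
        h = encode_last_token(prefix)             # tensor of shape (hidden_dim,)
        y = float(judge_is_correct(chunk, gold_answer))
        yield h, y
```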

The probe worked effectively on fully formed answers and could even predict correctness before an answer was completed, suggesting a look-ahead capability.

Performance results were clear and quantifiable. The probes achieved ROC-AUC scores exceeding 0.9 for some datasets like AIME when using models like R1-Distill-Qwen-32B. Expected Calibration Errors (ECE) remained under 0.1, showcasing high reliability. For instance, R1-Distill-Qwen-32B had an ECE of just 0.01 on GSM8K and 0.06 on MATH.
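
Both reported metrics are standard and straightforward to compute. The sketch below uses synthetic stand-in predictions, not the paper's data, to show how ROC-AUC and ECE are typically calculated:

```python
# Sketch: computing ROC-AUC and Expected Calibration Error (ECE) for probe
# outputs. The labels and confidences below are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                    # correctness labels
y_prob = np.clip(0.7 * y_true + rng.normal(0.15, 0.2, size=1000), 0, 1)  # probe confidences

def expected_calibration_error(y_true, y_prob, n_bins: int = 10) -> float:
    """Weighted mean of |accuracy - confidence| over equal-width confidence bins."""
    bin_ids = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - y_prob[mask].mean())
    return ece

print(f"ROC-AUC = {roc_auc_score(y_true, y_prob):.3f}")
print(f"ECE     = {expected_calibration_error(y_true, y_prob):.3f}")
```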

In application, the probe was used to implement a confidence-based early exit strategy during inference. The reasoning process was halted when the probe's confidence in an answer exceeded a threshold. At a confidence threshold of 0.85, the accuracy remained at 88.2%, while the inference token count was reduced by 24%. Even at a threshold of 0.9, accuracy stayed at 88.6%, with a 19% token reduction. Compared to static exit methods, this dynamic strategy achieved up to 5% higher accuracy using the same or fewer tokens.
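
A sketch of that control flow follows, where `generate_next_chunk` and `encode_last_token` are hypothetical helpers and the trained probe comes from the earlier sketches; the 0.85 default matches the paper's reported threshold.

```python
# Sketch of confidence-based early exit during inference. The helpers
# `generate_next_chunk` and `encode_last_token`, and the trained `probe`,
# are assumed from the sketches above; 0.85 is the paper's reported threshold.
import torch

def reason_with_early_exit(prompt, generate_next_chunk, encode_last_token, probe,
                           threshold: float = 0.85, max_chunks: int = 64):
    """Generate reasoning chunk by chunk; stop once the probe is confident."""
    transcript = prompt
    for _ in range(max_chunks):
        chunk, finished = generate_next_chunk(transcript)   # one reasoning step
        transcript += chunk
        confidence = torch.sigmoid(probe(encode_last_token(transcript))).item()
        if finished or confidence >= threshold:
            break   # the probe judges the current answer already correct
    return transcript
```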

This study provides an efficient, integrated way for reasoning models to self-verify during inference. The researchers' approach highlights a gap—models inherently know when they're right, but they don't act on it. This research opens up avenues for smarter, more efficient reasoning systems by leveraging internal representations through probing. It demonstrates that tapping into what the model already "knows" can lead to significant improvements in both performance and resource use.

Original source: marktechpost

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no responsibility for any investments made based on the information in this article. Cryptocurrencies are highly volatile; invest with caution and only after thorough research.

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
