Introducing Phi-4-Reasoning-Plus: A Compact, High-Performing Open-Weight Language Model for Reasoning Across Domains

May 01, 2025 at 11:02 pm

Microsoft Research today announced the release of Phi-4-Reasoning-Plus, a compact yet high-performing open-weight language model designed for structured reasoning across domains like math, coding, science, and logic.

This upgraded 14-billion-parameter model builds on the architecture of the original Phi-4. It is a dense, decoder-only model that prioritizes data quality over sheer size. Trained on roughly 16 billion tokens, over half of them unique, it blends synthetic and curated web data to reach performance that rivals or even surpasses much larger models.

Despite its relatively modest size, Phi-4-Reasoning-Plus outperforms 70B+ models such as DeepSeek-R1-Distill on challenging benchmarks. On the AIME 2025 math exam it achieves a higher pass@1 rate across all 30 problems than these heavyweight competitors, coming close to the performance of the full 671-billion-parameter DeepSeek-R1.
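
For context on how such scores are usually computed, the snippet below implements the standard unbiased pass@k estimator popularized by code-generation benchmarks; the per-problem sample counts are illustrative rather than Microsoft's exact evaluation protocol.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of them correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative use: per-problem correct counts over, say, 8 samples each,
# averaged into a single pass@1 score across the problem set.
correct_counts = [5, 8, 0, 7, 3]  # hypothetical counts for 5 problems
score = float(np.mean([pass_at_k(8, c, 1) for c in correct_counts]))
print(f"pass@1 = {score:.3f}")
```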

The model's training pipeline combines supervised fine-tuning with reinforcement learning:

* Supervised fine-tuning used curated chain-of-thought datasets with special tags that separate intermediate reasoning from final answers, improving transparency and coherence (the tag format is sketched just after this list).

* A second reinforcement-learning phase, using just 6,400 math problems and the Group Relative Policy Optimization (GRPO) algorithm, boosted the model's depth, accuracy, and formatting consistency (a sketch of the group-relative advantage idea also follows below).
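
To make the tagging convention concrete, here is a minimal sketch of a single reasoning-tagged training example. The <think>...</think> delimiters mirror the reasoning-trace format this model family exposes at inference time; the problem and solution text are purely illustrative.

```python
# A single supervised fine-tuning example in chat form, with the
# intermediate reasoning segregated from the final answer by tags.
# The <think> ... </think> convention is assumed from the model's
# reasoning-trace format; the content itself is made up.
example = {
    "messages": [
        {"role": "user", "content": "What is 17 * 24?"},
        {
            "role": "assistant",
            "content": (
                "<think>\n"
                "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.\n"
                "</think>\n"
                "The answer is 408."
            ),
        },
    ]
}
```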
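
For the reinforcement-learning phase, the core idea of GRPO is to score each sampled completion relative to the other completions drawn for the same problem, rather than against a learned value function. A minimal sketch of that group-relative advantage computation, not Microsoft's actual training code, follows:

```python
import numpy as np

def group_relative_advantages(rewards: list[float]) -> np.ndarray:
    """GRPO-style advantages: normalize each completion's reward against
    the mean and standard deviation of its own sampling group."""
    r = np.asarray(rewards, dtype=float)
    std = r.std()
    return (r - r.mean()) / (std + 1e-8)  # epsilon guards a zero-variance group

# Hypothetical rewards for 4 completions of one math problem
# (e.g. 1.0 for a correct, well-formatted answer, 0.0 otherwise).
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```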

Phi-4-Reasoning-Plus natively supports 32k-token context lengths (up to 64k in tests), making it well suited to text-heavy tasks such as legal reasoning, financial analysis, or technical Q&A, especially when memory or latency is critical.

It integrates seamlessly with popular inference frameworks such as Hugging Face Transformers, vLLM, llama.cpp, and Ollama, and it is released under the permissive MIT license, which allows commercial use, fine-tuning, and distillation with minimal restrictions.
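
As a quick illustration of the Hugging Face Transformers path, a minimal loading-and-generation sketch follows; the microsoft/Phi-4-reasoning-plus repository id and the generation settings are assumptions to check against the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id assumed from the announcement; confirm on the Hugging Face Hub.
model_id = "microsoft/Phi-4-reasoning-plus"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 14B fits on a single large GPU in bf16
    device_map="auto",
)

messages = [{"role": "user",
             "content": "Prove that the sum of two even numbers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```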

Designed for modular AI pipelines and interpretable outputs, Phi-4-Reasoning-Plus is a strong fit for teams managing AI deployment, orchestration, or compliance. Its structured output format aids explainability, while its performance under resource constraints enables scalable real-time reasoning.
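
Because the reasoning trace and the final answer are delimited, a pipeline can log or audit them separately. A minimal sketch, assuming the <think>...</think> convention shown earlier:

```python
import re

def split_reasoning(generation: str) -> tuple[str, str]:
    """Separate the intermediate reasoning from the final answer so each
    can be logged, audited, or surfaced independently."""
    match = re.search(r"<think>(.*?)</think>", generation, flags=re.DOTALL)
    if match is None:
        return "", generation.strip()  # no trace found; keep the answer as-is
    reasoning = match.group(1).strip()
    answer = generation[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>\n2 + 2 = 4\n</think>\nThe answer is 4."
)
print(answer)  # -> "The answer is 4."
```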

Microsoft has conducted extensive safety testing, including red teaming and evaluations via tools like Toxigen. These measures render it more suitable for enterprise use in regulated industries.

Phi-4-Reasoning-Plus marks a growing trend: small, efficient models that overachieve. For technical leaders balancing performance, cost, and control, it provides a powerful, open, and adaptable reasoning engine—capable of enterprise integration without the hefty infrastructure footprint of mega-models.
