Cryptocurrency News

Introducing Phi-4-Reasoning-Plus: A Compact, High-Performing Open-Weight Language Model for Reasoning Across Domains

2025/05/01 23:02

Microsoft Research today announced the release of Phi-4-Reasoning-Plus, a compact yet high-performing open-weight language model designed for structured reasoning across domains like math, coding, science, and logic.

This upgraded 14-billion-parameter model builds on the architecture of the original Phi-4: a dense, decoder-only transformer that prioritizes data quality over sheer size. Trained on roughly 16 billion tokens, over half of them unique, the model blends synthetic and curated web data to reach a level of performance that rivals or even surpasses much larger models.

Despite its relatively modest size, Phi-4-Reasoning-Plus outperforms 70B+ models such as DeepSeek-R1-Distill on challenging benchmarks. On the AIME 2025 math exam it achieves a higher pass@1 rate across all 30 problems than those heavyweight competitors, nearly matching the performance of DeepSeek-R1's full 671B-parameter version.
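
The pass@1 metric cited above is the probability that a single sampled solution is correct. As a rough sketch (this is the widely used unbiased pass@k estimator from code-generation evaluation, not necessarily the exact harness used for these AIME numbers), it can be computed from n samples of which c are correct:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n total (c correct), passes."""
    if n - c < k:  # too few failures to fill all k draws: success guaranteed
        return 1.0
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i
    return 1.0 - prob_all_fail

# For k=1 this reduces to the fraction of correct samples:
print(pass_at_k(8, 4, 1))  # → 0.5
```

Reporting pass@1 this way, rather than from a single sample, reduces the variance introduced by sampling temperature.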

The model's training pipeline combines supervised fine-tuning with reinforcement learning:

* Supervised fine-tuning used curated chain-of-thought datasets with special tags that separate intermediate reasoning from final answers, improving transparency and coherence.

* A second reinforcement-learning phase, using just 6,400 math problems and Microsoft's Group Relative Policy Optimization (GRPO) algorithm, improved the model's reasoning depth, accuracy, and formatting consistency.
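
Downstream, that tag convention lets applications split the model's output mechanically. A minimal sketch, assuming the reasoning block is wrapped in `<think>…</think>` delimiters (an illustrative choice; the exact tags ship with the model's chat template):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate intermediate reasoning from the final answer, assuming the
    model wraps its chain of thought in <think>...</think> tags."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()          # no reasoning block emitted
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after the closing tag
    return reasoning, answer

output = "<think>2 + 2 is 4, times 10 is 40.</think>The answer is 40."
reasoning, answer = split_reasoning(output)
print(answer)  # → The answer is 40.
```

Keeping the parse this simple is what the structured training format buys: the final answer can be shown to users or scored automatically while the reasoning trace is logged for auditing.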

Phi-4-Reasoning-Plus natively supports 32k-token context lengths (extended to 64k in tests), making it well suited to long-document tasks such as legal reasoning, financial analysis, and technical Q&A, especially when memory or latency is a constraint.

It integrates seamlessly with popular inference frameworks such as Hugging Face Transformers, vLLM, llama.cpp, and Ollama. It's released under the permissive MIT license, allowing commercial use, fine-tuning, and distillation without any restrictions.

Designed for modular AI pipelines and interpretable outputs, Phi-4-Reasoning-Plus is a strong fit for teams managing AI deployment, orchestration, or compliance. Its structured output format aids explainability, while its performance under resource constraints enables scalable real-time reasoning.

Microsoft has conducted extensive safety testing, including red teaming and evaluations via tools like Toxigen. These measures render it more suitable for enterprise use in regulated industries.

Phi-4-Reasoning-Plus reflects a growing trend: small, efficient models that punch above their weight. For technical leaders balancing performance, cost, and control, it offers a powerful, open, and adaptable reasoning engine, capable of enterprise integration without the hefty infrastructure footprint of mega-models.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no responsibility for any investment made on the basis of the information in this article. Cryptocurrencies are highly volatile; research thoroughly and invest with caution.

If you believe content used on this site infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
