Market cap: $3.6793T (-2.630%)
Trading volume (24h): $210.1238B (27.900%)
Top cryptocurrencies (price, 24h change):

  • bitcoin: $113653.179192 USD (-1.98%)
  • ethereum: $3525.217143 USD (-5.13%)
  • xrp: $2.974588 USD (-1.43%)
  • tether: $0.999613 USD (-0.03%)
  • bnb: $764.503086 USD (-3.02%)
  • solana: $164.558033 USD (-4.03%)
  • usd-coin: $0.999804 USD (-0.02%)
  • tron: $0.326608 USD (-0.14%)
  • dogecoin: $0.201896 USD (-3.61%)
  • cardano: $0.722456 USD (-2.12%)
  • hyperliquid: $38.099997 USD (-7.92%)
  • sui: $3.494024 USD (-3.45%)
  • stellar: $0.385959 USD (-3.14%)
  • chainlink: $16.209093 USD (-4.30%)
  • bitcoin-cash: $540.811075 USD (-4.11%)
Cryptocurrency News

Introducing Apriel-Nemotron-15b-Thinker: A Resource-Efficient Reasoning Model

2025/05/10 04:39

In today's technological landscape, AI models are expected to perform complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. Building such models requires integrating mathematical reasoning, scientific understanding, and advanced pattern recognition. As demand for intelligent agents in real-time applications, such as coding assistants and business automation tools, continues to increase, there is a pressing need for models that combine strong performance with efficient memory and token usage, making them viable for deployment on practical hardware.

A central challenge in AI development is the resource intensity of large-scale reasoning models. Despite their impressive capabilities, these models often demand significant memory and compute, limiting their real-world applicability. This creates a gap between what advanced models can achieve and what users can realistically deploy. Even well-resourced enterprises may find it unsustainable to run models that consume dozens of gigabytes of memory or incur high inference costs. The crux of the issue isn't simply creating smarter models; it's ensuring they are efficient and deployable on real-world platforms.

Models like QWQ‑32b, o1‑mini, and EXAONE‑Deep‑32b have demonstrated strong performance on mathematical reasoning tasks and academic benchmarks. However, that performance comes at a cost: they require high-end GPUs and consume large numbers of tokens, making them less suitable for production settings. These models highlight the ongoing trade-off in AI deployment: high accuracy at the expense of scalability and efficiency.

To address this gap, researchers at ServiceNow introduced Apriel-Nemotron-15b-Thinker. At 15 billion parameters, the model is relatively modest in size compared to its high-performing counterparts, yet it delivers performance on par with models almost twice its size. Its primary advantage lies in its memory footprint and token efficiency: it requires nearly half the memory of QWQ‑32b and EXAONE‑Deep‑32b and consumes 40% fewer tokens than QWQ‑32b, making it significantly more cost-effective for operational tasks. This difference in operational efficiency matters in enterprise environments, because it makes integrating high-performance reasoning models into real-world applications feasible without large-scale infrastructure upgrades.
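
To make the "nearly half the memory" comparison concrete, here is a back-of-envelope sketch of the weight-memory and token-cost arithmetic. The parameter counts come from the model names; the fp16 precision (2 bytes per parameter) and the normalized token budget are illustrative assumptions, not figures from the article, and the sketch ignores KV-cache and activation overhead.

```python
# Rough weight-memory and token-cost arithmetic (assumptions: fp16 weights,
# no KV-cache/activation overhead, 40% token reduction as stated above).

def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / 1e9

apriel_gb = weight_memory_gb(15)   # ~30 GB for a 15B model in fp16
qwq_gb = weight_memory_gb(32)      # ~64 GB for a 32B model in fp16

print(f"Apriel-Nemotron-15b weights: ~{apriel_gb:.0f} GB")
print(f"QWQ-32b weights:             ~{qwq_gb:.0f} GB")
print(f"Memory ratio: {apriel_gb / qwq_gb:.2f} (consistent with 'nearly half')")

# Token efficiency: 40% fewer tokens per task means ~0.6x the generation cost.
relative_token_cost = 1.0 - 0.40
print(f"Relative token cost vs. QWQ-32b: {relative_token_cost:.1f}x")
```

Real deployments would also budget for the KV cache, which grows with context length and batch size, so the absolute numbers above understate total memory; the ratio between the two models is the point.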

The development of Apriel-Nemotron-15b-Thinker followed a structured three-stage training approach, with each stage designed to enhance a specific aspect of the model's reasoning capabilities. The initial phase, Continual Pre-training (CPT), exposed the model to over 100 billion tokens. These were not generic text but carefully selected examples from domains requiring deep reasoning, such as mathematical logic, programming challenges, scientific literature, and logical deduction tasks. This exposure provided the foundational reasoning capabilities that distinguish the model. The second stage, Supervised Fine-Tuning (SFT), used 200,000 high-quality demonstrations to calibrate the model's responses to reasoning challenges, improving performance on tasks that demand accuracy and attention to detail. The final stage, GRPO (Guided Reinforcement Preference Optimization), refined the model's outputs by optimizing alignment with expected results across key tasks. This pipeline ensures the model is not only capable but also responds in a manner that is concise, structured, and scalable.
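
As a minimal sketch of how the three stages fit together, the pseudocode below walks through CPT, SFT, and GRPO-style alignment in order. Every function, method, and data source here (train_next_token, fit_example, reward_fn, the corpora) is a hypothetical stand-in; the article does not publish the actual training code.

```python
# Hypothetical outline of the three-stage pipeline described above.
# None of these methods exist in a published library; they are
# placeholders for the corresponding training steps.

def continual_pretrain(model, corpus, max_tokens=100_000_000_000):
    """Stage 1 (CPT): next-token training on >100B reasoning-heavy tokens
    (mathematical logic, code, scientific literature, logical deduction)."""
    seen = 0
    for batch in corpus:
        model.train_next_token(batch)        # hypothetical training step
        seen += batch.num_tokens
        if seen >= max_tokens:
            break
    return model

def supervised_finetune(model, demonstrations):
    """Stage 2 (SFT): fine-tune on ~200k high-quality reasoning demonstrations."""
    for prompt, target in demonstrations:
        model.fit_example(prompt, target)    # hypothetical SFT step
    return model

def grpo_align(model, prompts, reward_fn, num_candidates=4):
    """Stage 3 (GRPO): score sampled outputs and push the model toward
    the preferred ones."""
    for prompt in prompts:
        candidates = [model.generate(prompt) for _ in range(num_candidates)]
        rewards = [reward_fn(prompt, c) for c in candidates]
        model.update_from_preferences(prompt, candidates, rewards)  # hypothetical
    return model

# Intended usage (with real data and a real model implementation):
# model = continual_pretrain(model, reasoning_corpus)
# model = supervised_finetune(model, sft_demonstrations)
# model = grpo_align(model, task_prompts, reward_fn)
```

The sketch captures only the ordering of the stages and what each one consumes: raw reasoning-heavy tokens, then curated demonstrations, then preference or reward signals over sampled candidate outputs.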

On enterprise-oriented tasks such as MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval, and Multi-Challenge, the model delivered competitive or superior performance compared to larger models. It also performed well on academic benchmarks such as AIME-24, AIME-25, AMC-23, MATH-500, and GPQA, often equaling or surpassing larger models while being significantly lighter in computational demand.

Apriel-Nemotron-15b-Thinker demonstrates that high performance and efficiency can be achieved together in large language models. As demand for intelligent, deployable agents continues to rise, models like Apriel-Nemotron-15b-Thinker show how to push the boundaries of AI while keeping it relevant and applicable in real-world settings.

Several key takeaways from the research on Apriel-Nemotron-15b-Thinker:

  • The model performs on par with models almost twice its size.
  • It achieves this with a lower memory footprint and lower token consumption than QWQ-32b and EXAONE-Deep-32b.
  • Notably, it outperforms o1-mini on AIME-24, AIME-25, and AMC-23 despite being the smaller model.

  • The researchers used a structured three-stage training approach to develop the model.
  • The initial stage exposed the model to over 100 billion tokens from domains that require deep reasoning, such as mathematical logic, programming challenges, and logical deduction tasks.

Source: marktechpost

Disclaimer: info@kdj.com

The information provided does not constitute trading advice. kdj.com assumes no liability for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile; it is strongly recommended that you research thoroughly and invest with caution.

If you believe content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
