Cryptocurrency News Article

Introducing Apriel-Nemotron-15b-Thinker: A Resource-Efficient Reasoning Model

2025/05/10 04:39

In today's technological landscape, AI models are expected to perform complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. Building such models requires integrating mathematical reasoning, scientific understanding, and advanced pattern recognition. As demand grows for intelligent agents in real-time applications such as coding assistants and business automation tools, there is a pressing need for models that combine strong performance with efficient memory and token usage, making them viable for deployment on practical hardware.

A central challenge in AI development is the resource intensity of large-scale reasoning models. Despite their impressive capabilities, these models often demand significant memory and compute, limiting their real-world applicability. This creates a gap between what advanced models can achieve and what users can realistically deploy. Even well-resourced enterprises may find it unsustainable to run models that consume dozens of gigabytes of memory or incur high inference costs. The crux of the issue is not simply building smarter models; it is making them efficient and deployable on real-world platforms.

Models like QWQ‑32b, o1‑mini, and EXAONE‑Deep‑32b have demonstrated strong performance on mathematical reasoning tasks and academic benchmarks. However, that performance comes at a cost: they require high-end GPUs and consume large numbers of tokens, making them less suitable for production settings. These models highlight the ongoing trade-off in AI deployment: high accuracy at the expense of scalability and efficiency.

To address this gap, researchers at ServiceNow introduced Apriel-Nemotron-15b-Thinker. At 15 billion parameters, the model is relatively modest in size compared to its high-performing counterparts, yet it delivers performance on par with models almost twice its size. Its primary advantage lies in its memory footprint and token efficiency: it requires nearly half the memory of QWQ‑32b and EXAONE‑Deep‑32b and consumes 40% fewer tokens than QWQ‑32b, making it significantly more cost-effective for operational tasks. This difference in operational efficiency is crucial in enterprise environments, as it makes it feasible to integrate high-performance reasoning models into real-world applications without large-scale infrastructure upgrades.
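To make the memory claim concrete, here is a rough back-of-the-envelope calculation. It assumes bf16 weights (2 bytes per parameter) and ignores overhead such as activations and the KV cache, neither of which the article specifies.

```python
# Back-of-the-envelope check on the "nearly half the memory" claim.
# Assumption (not stated in the article): weights stored in bf16,
# i.e. 2 bytes per parameter, ignoring activations and KV cache.

BYTES_PER_PARAM_BF16 = 2

def weight_memory_gb(params_billions: float) -> float:
    """Approximate memory needed just to hold the model weights."""
    return params_billions * 1e9 * BYTES_PER_PARAM_BF16 / 1e9

apriel_gb = weight_memory_gb(15)  # ~30 GB
qwq_gb = weight_memory_gb(32)     # ~64 GB
print(f"Apriel-Nemotron-15b weights: ~{apriel_gb:.0f} GB")
print(f"QWQ-32b weights:             ~{qwq_gb:.0f} GB")
print(f"ratio: {apriel_gb / qwq_gb:.2f}")  # ~0.47 -> "nearly half"

# The article also reports ~40% fewer tokens per task than QWQ-32b,
# so per-task generation cost scales roughly by:
print(f"relative token cost vs QWQ-32b: {1.0 - 0.40:.2f}")
```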

The development of Apriel-Nemotron-15b-Thinker followed a structured three-stage training approach, with each stage designed to enhance a specific aspect of the model's reasoning capabilities. The first stage, Continual Pre-Training (CPT), exposed the model to over 100 billion tokens. These were not generic text but carefully selected examples from domains requiring deep reasoning, such as mathematical logic, programming challenges, scientific literature, and logical deduction tasks. This exposure provided the foundational reasoning capabilities that distinguish the model. The second stage, Supervised Fine-Tuning (SFT), used 200,000 high-quality demonstrations to further calibrate the model's responses to reasoning challenges, improving performance on tasks that require accuracy and attention to detail. The final stage, GRPO (Group Relative Policy Optimization), refined the model's outputs by optimizing alignment with expected results across key tasks. This pipeline ensures the model is not only capable but also responds in a manner that is concise, structured, and scalable.
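The sketch below illustrates what such a CPT → SFT → GRPO pipeline could look like using Hugging Face TRL. This is not ServiceNow's actual recipe: the base checkpoint, dataset files, and reward function are hypothetical stand-ins, since the real data and code are not public.

```python
# Illustrative sketch of the three-stage recipe using Hugging Face TRL.
# The base checkpoint, dataset files, and reward function below are all
# hypothetical: ServiceNow has not published the actual data or code.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, GRPOConfig, GRPOTrainer

BASE = "org/base-15b"  # hypothetical starting checkpoint

# Stage 1 -- Continual Pre-Training (CPT): ordinary causal-LM training on
# a curated, reasoning-heavy corpus (math, code, scientific text).
cpt = SFTTrainer(
    model=BASE,
    train_dataset=load_dataset("json", data_files="reasoning_corpus.jsonl",
                               split="train"),  # {"text": raw document}
    args=SFTConfig(output_dir="cpt"),
)
cpt.train()
cpt.save_model()  # saves to "cpt"

# Stage 2 -- Supervised Fine-Tuning (SFT) on ~200k demonstrations.
sft = SFTTrainer(
    model="cpt",
    train_dataset=load_dataset("json", data_files="demos.jsonl",
                               split="train"),  # {"prompt": ..., "completion": ...}
    args=SFTConfig(output_dir="sft"),
)
sft.train()
sft.save_model()  # saves to "sft"

# Stage 3 -- GRPO: sample groups of completions per prompt and push the
# policy toward the higher-reward ones.
def exact_answer_reward(completions, **kwargs):
    """Hypothetical reward: 1.0 when the reference answer appears verbatim."""
    return [1.0 if ans in out else 0.0
            for out, ans in zip(completions, kwargs["answer"])]

grpo = GRPOTrainer(
    model="sft",
    reward_funcs=exact_answer_reward,
    train_dataset=load_dataset("json", data_files="grpo_prompts.jsonl",
                               split="train"),  # {"prompt": ..., "answer": ...}
    args=GRPOConfig(output_dir="grpo"),
)
grpo.train()
```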

On enterprise-oriented tasks such as MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval, and Multi-Challenge, the model delivered competitive or superior performance compared to larger models. It also performed well on academic benchmarks such as AIME-24, AIME-25, AMC-23, MATH-500, and GPQA, often equaling or surpassing larger models while being significantly lighter in computational demand.

Apriel-Nemotron-15b-Thinker demonstrates that high performance and efficiency can coexist in large language models. As the demand for intelligent, deployable agents continues to rise, models like this one show that the boundaries of AI can be pushed while keeping it relevant and applicable in real-world settings.

Key takeaways from the research: the model performs on par with models almost twice its size, and it does so with a lower memory footprint and token consumption than QWQ-32b and EXAONE-Deep-32b. Notably, it outperforms o1-mini on AIME-24, AIME-25, and AMC-23 despite being a smaller model.

The researchers used a structured three-stage training approach to develop the model. The initial stage involved exposing the model to over 100 billion tokens from domains that require deep reasoning, such as mathematical logic, programming challenges, and logical deduction tasks.
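For readers who want to try the model, a minimal inference sketch with Hugging Face transformers is shown below. The repository id is an assumption (check the Hugging Face Hub for the official checkpoint); loading in bf16 keeps the weights near the ~30 GB footprint estimated above.

```python
# Minimal inference sketch; the repo id below is assumed, not confirmed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"  # assumed Hub id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # ~30 GB in bf16
)

prompt = "A train covers 120 km in 1.5 hours. What is its average speed?"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:],
                 skip_special_tokens=True))
```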

Original source: marktechpost

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no responsibility for any investments made based on the information in this article. Cryptocurrencies are highly volatile, so we strongly recommend investing cautiously after thorough research.

If you believe content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
