Market cap: $3.6793T -2.630%
Volume (24h): $210.1238B 27.900%
Fear & Greed Index:
bitcoin: $113653.179192 USD, -1.98%
ethereum: $3525.217143 USD, -5.13%
xrp: $2.974588 USD, -1.43%
tether: $0.999613 USD, -0.03%
bnb: $764.503086 USD, -3.02%
solana: $164.558033 USD, -4.03%
usd-coin: $0.999804 USD, -0.02%
tron: $0.326608 USD, -0.14%
dogecoin: $0.201896 USD, -3.61%
cardano: $0.722456 USD, -2.12%
hyperliquid: $38.099997 USD, -7.92%
sui: $3.494024 USD, -3.45%
stellar: $0.385959 USD, -3.14%
chainlink: $16.209093 USD, -4.30%
bitcoin-cash: $540.811075 USD, -4.11%
Cryptocurrency News Articles

Introducing Apriel-Nemotron-15b-Thinker: A Resource-Efficient Reasoning Model

2025/05/10 04:39

In today's technological landscape, AI models are expected to perform complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. Building such models requires an integration of mathematical reasoning, scientific understanding, and advanced pattern recognition. As the demand for intelligent agents in real-time applications, like coding assistants and business automation tools, continues to increase, there is a pressing need for models that combine strong performance with efficient memory and token usage, making them viable for deployment in practical hardware environments.

A central challenge in AI development is the resource intensity of large-scale reasoning models. Despite their impressive capabilities, these models often demand significant memory and computational resources, limiting their real-world applicability. This disparity creates a gap between what advanced models can achieve and what users can realistically deploy. Even well-resourced enterprises may find running models consuming dozens of gigabytes of memory or incurring high inference costs unsustainable. The crux of the issue isn't simply about creating smarter models; it's about ensuring they are efficient and deployable in real-world platforms.

Models like QWQ‑32b, o1‑mini, and EXAONE‑Deep‑32b have demonstrated strong performance on tasks involving mathematical reasoning and academic benchmarks. However, their performance comes at a cost—they require high-end GPUs and consume a high number of tokens, rendering them less suitable for production settings. These models highlight the ongoing trade-off in AI deployment: achieving high accuracy at the expense of scalability and efficiency.

To address this gap, researchers at ServiceNow introduced Apriel-Nemotron-15b-Thinker. At 15 billion parameters, the model is relatively modest in size compared to its high-performing counterparts, yet it delivers performance on par with models almost twice its size. Its primary advantage lies in its memory footprint and token efficiency: despite delivering competitive results, it requires nearly half the memory of QWQ-32b and EXAONE-Deep-32b and consumes 40% fewer tokens than QWQ-32b, making it significantly more cost-effective to operate. This difference in operational efficiency is crucial in enterprise environments, because it makes integrating high-performance reasoning models into real-world applications feasible without large-scale infrastructure upgrades.
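
As a concrete illustration of deploying a model in this size class, here is a minimal sketch using the Hugging Face transformers library. The repository id ServiceNow-AI/Apriel-Nemotron-15b-Thinker is assumed from the model's name rather than taken from this article, and the bfloat16/device_map settings are generic choices, not settings the authors prescribe.

```python
# Minimal sketch (not from the source article): loading a ~15B reasoning model
# with Hugging Face transformers. The repo id below is assumed from the model's
# name and may differ from the actual published checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # ~2 bytes per parameter, roughly 30 GB of weights
    device_map="auto",            # spread layers across available GPUs/CPU
)

# Reasoning-style prompt; a chat template is assumed to be defined in the tokenizer.
messages = [{"role": "user", "content": "If 3x + 7 = 22, what is x? Explain briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```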

The development of Apriel-Nemotron-15b-Thinker followed a structured three-stage training approach, with each stage designed to strengthen a specific aspect of the model's reasoning capabilities. The initial phase, Continual Pre-training (CPT), exposed the model to over 100 billion tokens. These tokens were not generic text but carefully selected examples from domains requiring deep reasoning, such as mathematical logic, programming challenges, scientific literature, and logical deduction tasks. This exposure provided the foundational reasoning capabilities that distinguish the model. The second stage, Supervised Fine-Tuning (SFT), used 200,000 high-quality demonstrations to further calibrate the model's responses to reasoning challenges, improving performance on tasks that demand accuracy and attention to detail. The final tuning stage, GRPO (Group Relative Policy Optimization), refined the model's outputs by optimizing alignment with expected results across key tasks. This pipeline ensures the model is not only intelligent but also responds in a concise, structured, and scalable manner.
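
For readers who want to see what the last two stages look like in code, here is a minimal, hypothetical sketch using Hugging Face TRL's SFTTrainer and GRPOTrainer. It is not the authors' actual pipeline: the datasets are toy placeholders, the reward function is a trivial exact-match check, and the model id is assumed; it only illustrates the shape of an SFT-then-GRPO workflow.

```python
# Minimal sketch (not the authors' pipeline): SFT followed by GRPO with TRL.
# Dataset contents, reward function, and model id are placeholders; a 15B model
# would in practice require multi-GPU hardware.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer, GRPOConfig, GRPOTrainer

base_model = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"  # assumed id; any causal LM id works

# Stage 2: supervised fine-tuning on curated reasoning demonstrations.
sft_data = Dataset.from_list([
    {"text": "Question: What is 17 * 6?\nAnswer: 17 * 6 = 102."},
])
sft_trainer = SFTTrainer(
    model=base_model,
    args=SFTConfig(output_dir="apriel-sft"),
    train_dataset=sft_data,
)
sft_trainer.train()

# Stage 3: GRPO scores sampled completions and reinforces the preferred ones.
def exact_match_reward(completions, **kwargs):
    # Toy reward: 1.0 if the completion contains the expected answer, else 0.0.
    return [1.0 if "102" in c else 0.0 for c in completions]

grpo_data = Dataset.from_list([{"prompt": "What is 17 * 6?"}])
grpo_trainer = GRPOTrainer(
    model="apriel-sft",                 # continue from the SFT checkpoint
    reward_funcs=exact_match_reward,
    args=GRPOConfig(output_dir="apriel-grpo"),
    train_dataset=grpo_data,
)
grpo_trainer.train()
```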

In enterprise-specific tasks such as MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval, and Multi-Challenge, the model delivered competitive or superior performance compared to larger models. It also performed admirably in academic benchmarks, such as AIME-24, AIME-25, AMC-23, MATH-500, and GPQA, often equaling or surpassing the performance of other larger models, all while being significantly lighter in computational demand.

Apriel-Nemotron-15b-Thinker demonstrates that high performance and efficiency can be achieved together in large language models. As the demand for intelligent and deployable agents continues to rise, models like Apriel-Nemotron-15b-Thinker highlight the potential for pushing the boundaries of AI while keeping it relevant and applicable in real-world settings. Key takeaways from the research on Apriel-Nemotron-15b-Thinker: the model performs on par with models almost twice its size; it does so with a lower memory footprint and token consumption than QWQ-32b and EXAONE-Deep-32b; and, notably, it outperforms o1-mini on AIME-24, AIME-25, and AMC-23 despite being the smaller model.

The researchers used a structured three-stage training approach to develop the model. The initial stage involved exposing the model to over 100 billion tokens from domains that require deep reasoning, such as mathematical logic, programming challenges, and logical deduction tasks.
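
To put the memory claim in perspective, the back-of-the-envelope estimate below converts parameter counts into approximate weight storage. The 2-bytes-per-parameter figure assumes 16-bit (bf16/fp16) weights and ignores the KV cache, activations, and serving overhead, so it illustrates the scaling argument rather than reporting a measured number from the article.

```python
# Rough weight-memory estimate: parameters * bytes per parameter.
# Assumes 16-bit (bf16/fp16) weights; KV cache, activations, and optimizer
# state are excluded, so real deployments need additional headroom.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

for name, params in [("Apriel-Nemotron-15b-Thinker", 15e9),
                     ("QWQ-32b", 32e9),
                     ("EXAONE-Deep-32b", 32e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights in bf16")
# -> ~30 GB vs ~64 GB, consistent with the "nearly half the memory" claim.
```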

Original source: marktechpost

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no responsibility for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile, so it is strongly recommended that you invest cautiously after thorough research!

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
