Market cap: $3.8586T -0.040%
Volume (24h): $223.1375B 10.660%
  • bitcoin: $117535.466428 USD (0.86%)
  • ethereum: $3743.904248 USD (3.27%)
  • xrp: $3.150293 USD (1.92%)
  • tether: $1.000398 USD (-0.01%)
  • bnb: $784.123542 USD (2.96%)
  • solana: $186.703104 USD (3.73%)
  • usd-coin: $1.000194 USD (0.03%)
  • dogecoin: $0.237077 USD (4.66%)
  • tron: $0.316954 USD (1.43%)
  • cardano: $0.825919 USD (3.16%)
  • hyperliquid: $44.329551 USD (6.60%)
  • sui: $3.974508 USD (9.23%)
  • stellar: $0.439026 USD (4.80%)
  • chainlink: $18.426031 USD (5.08%)
  • hedera: $0.267559 USD (12.80%)

Cryptocurrency news article

DeepSeek releases its new open-weight large language model (LLM)

2025/05/01 01:17

Chinese artificial intelligence development company DeepSeek has released a new large language model (LLM) on the hosting service Hugging Face.

The latest model, Prover V2, is being released under the permissive open-source MIT license. It is a continuation of the Prover V1 line, first announced in August 2024. The first version of the model was presented in a paper titled “Prover: A Large Language Model for Compressing Mathematical Knowledge and Programming Lean 4.”

Prover V1 was trained to translate math competition problems into the Lean 4 programming language, which was developed at Microsoft Research and is used for proving theorems. The model was based on DeepSeek’s seven-billion-parameter DeepSeekMath model and was fine-tuned on synthetic data, that is, training data that was itself generated by AI models; human-generated data is usually seen as the higher-quality but increasingly scarce resource.
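For a sense of what translating a problem into Lean 4 looks like, here is a minimal, hypothetical example (not taken from the Prover papers or training data): a formally stated proposition together with a proof that the Lean 4 checker can verify mechanically.

```lean
-- Illustrative only: the kind of formal statement and machine-checkable proof
-- a theorem-proving model is asked to produce from a natural-language problem.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```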

Prover V1.5, in turn, improved on the previous version by optimizing both training and execution and achieving higher accuracy in several common benchmarks.

The new Prover V2 model has 671 billion parameters, and its weights weigh in at approximately 650 GB, which is roughly how much RAM or VRAM is needed to run it. To get the weights down to this size, they have been quantized to eight-bit floating point precision, meaning each parameter is approximated in half the space of the usual 16 bits (a bit being a single digit in a binary number). This effectively halves the model’s bulk.
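To illustrate why the drop from 16-bit to 8-bit precision halves the storage, here is a small Python sketch. It assumes a recent PyTorch build (2.1 or later) with the experimental float8 dtypes, and the single global scale is a simplification of what production quantization pipelines actually do.

```python
# Minimal sketch: storing weights in 8-bit floating point takes half the
# memory of 16-bit. Assumes PyTorch >= 2.1 (experimental float8 dtypes).
import torch

# A stand-in weight matrix in the usual 16-bit precision.
w16 = torch.randn(4096, 4096, dtype=torch.bfloat16)

# Scale into the representable range of float8 e4m3 (max ~448), then cast.
# Real pipelines keep per-channel scales to limit accuracy loss; this single
# global scale is only for illustration.
scale = w16.abs().max().float() / 448.0
w8 = (w16.float() / scale).to(torch.float8_e4m3fn)

bytes16 = w16.numel() * w16.element_size()  # 2 bytes per parameter
bytes8 = w8.numel() * w8.element_size()     # 1 byte per parameter
print(f"bf16: {bytes16 / 2**20:.1f} MiB, fp8: {bytes8 / 2**20:.1f} MiB")
# At 671B parameters the same 2:1 ratio takes the weights from ~1.3 TB
# (16-bit) down to the reported ~650 GB (8-bit).
```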

So far, the improvements introduced by Prover V2 are unclear, as no research paper or other documentation had been published at the time of writing. The parameter count of the Prover V2 weights suggests that it is likely based on the company’s previous R1 model, which made waves in the AI space on its release with performance comparable to OpenAI’s then state-of-the-art o1 model.

The importance of open weights

Publicly releasing the weights of LLMs is a controversial topic. On one side, it is a democratizing force that allows the public to access AI on their own terms without relying on private company infrastructure.

On the other side, it means that the company cannot step in and prevent abuse of the model by enforcing certain limitations on dangerous user queries. The release of R1 in this manner also raised security concerns, and some described it as China’s “Sputnik moment.”

Open-source proponents rejoiced that DeepSeek continued where Meta left off with the release of its LLaMA series of open-source AI models, proving that open AI is a serious contender to OpenAI’s closed AI. The accessibility of those models is also constantly improving.

Now, even users without access to a supercomputer that costs more than the average home in much of the world can run LLMs locally. This is primarily thanks to two AI development techniques: model distillation and quantization.

Distillation refers to training a compact “student” network to replicate the behavior of a larger “teacher” model, keeping most of the performance while cutting the parameter count so that the result runs on less powerful hardware. Quantization consists of reducing the numeric precision of a model’s weights and activations to shrink its size and speed up inference, with only minor accuracy loss.

An example is Prover V2’s reduction from 16-bit to eight-bit floating point numbers, and further reductions are possible by halving the bit width again, down to four bits. Both of those techniques have consequences for model performance, but usually leave the model largely functional.
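For the distillation side, the sketch below shows the core training loop: a small “student” network is trained to reproduce the softened output distribution of a larger, frozen “teacher”. The layer sizes, temperature, and random inputs are illustrative assumptions, not DeepSeek’s actual recipe.

```python
# Minimal sketch of knowledge distillation: a compact student learns to mimic
# the output distribution of a larger, frozen teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softening keeps useful signal in the teacher's small logits

for _ in range(100):                          # toy loop over random inputs
    x = torch.randn(32, 128)
    with torch.no_grad():                     # the teacher is not updated
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # KL divergence pulls the student's predictions toward the teacher's.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```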

DeepSeek’s R1 was distilled into versions with retrained LLaMA and Qwen models ranging from 70 billion parameters to as low as 1.5 billion parameters. The smallest of those models can even reliably be run on some mobile devices.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile, so it is strongly recommended that you invest with caution after thorough research!

If you believe that any content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
