Yann LeCun, Chief AI Scientist at Meta and one of the pioneers of modern AI, recently shared his thoughts on a fundamental limitation he sees in autoregressive Large Language Models (LLMs). According to LeCun, the probability of generating a correct response decreases exponentially with each token, making them impractical for long-form, reliable AI interactions.
While I deeply respect LeCun’s work and approach to AI development and resonate with many of his insights, I believe this particular claim overlooks some key aspects of how LLMs function in practice. In this post, I’ll explain why autoregressive models are not inherently divergent and doomed, and how techniques like Chain-of-Thought (CoT) and Attentive Reasoning Queries (ARQs)—a method we’ve developed to achieve high-accuracy customer interactions with Parlant—effectively prove otherwise.
What is Autoregression?
At its core, an LLM is a probabilistic model trained to generate text one token at a time. Given an input context, the model predicts the most likely next token, feeds it back into the original sequence, and repeats the process iteratively until a stop condition is met. This allows the model to generate anything from short responses to entire articles.
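To make the loop concrete, here is a minimal sketch of greedy autoregressive decoding with the Hugging Face transformers library. The model choice, prompt, and 40-token budget are arbitrary assumptions for illustration, not a recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Input context: the sequence the model will extend one token at a time.
input_ids = tokenizer("Autoregression means", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):  # stop condition: a fixed token budget, or EOS below
        logits = model(input_ids).logits   # scores over the vocabulary
        next_id = logits[0, -1].argmax()   # most likely next token (greedy)
        # Feed the prediction back into the sequence and repeat.
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(input_ids[0]))
```

In practice, sampling strategies (temperature, top-p) usually replace the argmax, but the feed-each-prediction-back loop is the same.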
For a deeper dive into autoregression, check out our recent technical blog post.
Do Generation Errors Compound Exponentially?
LeCun’s argument can be unpacked as follows:
Let E be the probability of making a generation error at each token.
For an output of length n, the probability of maintaining coherence is (1-E)^n.
This leads to LeCun’s conclusion that for sufficiently long responses, the likelihood of maintaining coherence approaches zero exponentially fast, suggesting that autoregressive LLMs are inherently flawed.
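To see how quickly this compounds under the constant-error assumption, take an illustrative per-token error rate of 1%:

```python
# Illustrative only: E is an assumed, constant per-token error rate,
# which is exactly the assumption this post goes on to challenge.
E = 0.01          # assumed probability of an error at each token
for n in (10, 100, 1000):
    coherence = (1 - E) ** n   # probability of zero errors across n tokens
    print(f"n={n:>4}: P(coherent) = {coherence:.5f}")

# n=  10: P(coherent) = 0.90438
# n= 100: P(coherent) = 0.36603
# n=1000: P(coherent) = 0.00004
```

At a thousand tokens, roughly the length of one long answer, a coherent completion would be about a 1-in-23,000 event under this assumption.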
But here’s the problem: E is not constant.
To put it simply, LeCun’s argument assumes that the probability of making a mistake in each new token is independent of everything that came before. However, LLMs don’t work that way: each new token is conditioned on the entire sequence generated so far, which is exactly what lets the model notice and compensate for earlier slips.
As an analogy to what allows LLMs to overcome this problem, imagine you’re telling a story: if you make a mistake in one sentence, you can still correct it in the next one to keep the narrative coherent. The same applies to LLMs, especially when techniques like Chain-of-Thought (CoT) prompting guide them toward better reasoning by helping them reassess their own outputs along the way.
Why This Assumption is Flawed
LLMs exhibit self-correction properties that prevent them from spiraling into incoherence.
Take Chain-of-Thought (CoT) prompting, which encourages the model to generate intermediate reasoning steps. CoT allows the model to consider multiple perspectives, improving its ability to converge to an acceptable answer. Similarly, Chain-of-Verification (CoV) and structured feedback mechanisms like ARQs guide the model in reinforcing valid outputs and discarding erroneous ones.
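As a minimal illustration of what CoT prompting looks like in practice (the question and wording are assumptions, not a prescribed recipe):

```python
# A minimal Chain-of-Thought prompt. The explicit "step by step" framing
# gives the model room to lay out intermediate reasoning it can then
# check and revise before committing to an answer.
question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

direct_prompt = f"Q: {question}\nA:"

cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step.\n"
    "First, work out how many groups of 3 pens are in 12 pens.\n"
    "Then, multiply the number of groups by the price per group.\n"
    "Finally, state the total cost."
)
```

The extra intermediate tokens are what give the model room to reassess its own partial output before producing a final answer.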
A small mistake early on in the generation process doesn’t necessarily doom the final answer. Figuratively speaking, an LLM can double-check its work, backtrack, and correct errors on the go.
Attentive Reasoning Queries (ARQs) are a Game-Changer
At Parlant, we’ve taken this principle further in our work on Attentive Reasoning Queries (a research paper describing our results is currently in the works, but the implementation pattern can be explored in our open-source codebase). ARQs introduce reasoning blueprints that help the model maintain coherence throughout long completions by dynamically refocusing attention on key instructions at strategic points in the generation process, continually steering it away from divergence. Using them, we’ve been able to maintain a large test suite that exhibits close to 100% consistency in generating correct completions for complex tasks.
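Since the paper is still in the works, the following is only a schematic of the general pattern, not Parlant’s actual schema; every field name and query below is a hypothetical illustration:

```python
# Schematic of the ARQ pattern (illustrative; not Parlant's actual schema).
# Instead of free-form reasoning, the model fills in targeted queries that
# re-surface the key instructions at fixed points before it commits to a reply.
arq_blueprint = {
    "which_guidelines_apply": "<model restates the instructions relevant to this turn>",
    "what_did_the_customer_ask": "<model restates the request in its own words>",
    "does_draft_violate_any_guideline": "<model checks its draft against each instruction>",
    "final_response": "<model writes the reply only after the checks above>",
}

# Each query refocuses attention at a strategic point, so a drift in one
# step can be caught by the next rather than compounding silently.
```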
This technique allows us to achieve much higher accuracy in AI-driven reasoning and instruction-following, which has been critical for us in enabling reliable and aligned customer-facing applications.
Autoregressive Models Are Here to Stay
We think autoregressive LLMs are far from doomed. While long-form coherence is a challenge, assuming an exponentially compounding error rate ignores key mechanisms that mitigate divergence—from Chain-of-Thought reasoning to structured reasoning like ARQs.
If you’re interested in AI alignment and increasing the accuracy of chat agents using LLMs, feel free to explore Parlant’s open-source effort. Let’s continue refining how LLMs generate and structure knowledge.