Since 2024, AI has been a dominant narrative, yet users and companies still cannot fully trust it, whether the stakes are finances, personal data, or healthcare decisions.
As the year 2024 came to a close, the narrative surrounding artificial intelligence (AI) reached a fever pitch. Yet users and companies alike expressed a lack of complete trust in the technology.
Despite the significant investments made by startups and multinational companies in expanding AI to domains such as finances, health, and various other aspects of daily life, a substantial deficit in trust in AI persists. This deficit poses one of the most pressing barriers to widespread adoption.
Enter decentralized and privacy-preserving technologies, which are being rapidly recognized as promising solutions for providing verifiability, transparency, and enhanced data protection without hindering AI's growth.
The pervasive AI trust deficit
AI emerged as the second most popular category among cryptocurrency investors in early 2024, capturing over 16% of investor interest.
Startups and multinational companies poured considerable resources into expanding the technology to domains such as finances, health, and every other aspect of life.
For instance, the emerging DeFi x AI (DeFAI) sector witnessed the launch of more than 7,000 projects, which collectively reached a peak market capitalization of $7 billion early in the year before the market downturn. DeFAI showcased the transformative potential of AI to enhance decentralized finance (DeFi) with natural language commands, enabling users to perform complex multi-step operations and conduct sophisticated market research.
However, innovation alone failed to address AI's core vulnerabilities: hallucinations, manipulation, and privacy concerns.
In November, a user managed to deceive an AI agent on Base into sending $47,000, even though the agent had been programmed never to do so. While this scenario was part of a game, it raised real concerns about whether AI agents can be trusted with autonomy over financial operations.
Audits, bug bounties, and red teams helped to mitigate the risk of prompt injection, logic flaws, or unauthorized data use. However, according to KPMG in 2023, 61% of people still expressed hesitation to trust AI, and even industry professionals shared that concern.
A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as AI's biggest obstacle.
That skepticism remains strong. A poll conducted at The Wall Street Journal's CIO Network Summit found that 61% of America's top IT leaders are still only experimenting with AI agents. The rest are avoiding them altogether, citing lack of reliability, cybersecurity risks, and data privacy as their main concerns.
Industries like healthcare are particularly sensitive to these risks. Sharing electronic health records (EHR) with LLMs to improve health outcomes holds great promise but also carries legal and ethical risks without airtight privacy protections.
For example, the healthcare industry has been hit hard by data privacy breaches. The problem is compounded when hospitals share EHR data to train AI algorithms without protecting patient privacy.
Centralized platforms also face challenges in reconciling user privacy with data interoperability. In a recent report by the Center for Data Innovation, researchers highlighted the need for a decentralized approach to data interoperability.
Decentralized, privacy-preserving infrastructure
"All the world is made of faith, and trust, and pixie dust," wrote J.M. Barrie in Peter Pan. Trust isn't just a nice to have in AI — it's foundational.
Economists at Moody's Analytics estimate that AI could add $15.7 trillion to the global economy by 2030. But that projected boon may never materialize without trust.
Enter decentralized cryptographic systems like zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). These technologies offer a new path: allowing users to verify AI decisions without revealing personal data or the model's inner workings.
This opens the door to a new generation of trustworthy and transparent AI systems.
By applying privacy-preserving cryptography to machine learning infrastructure, AI can be auditable, trustworthy, and privacy-respecting, especially in sectors like finance and healthcare.
ZK-SNARKs rely on advanced cryptographic proof systems that let one party prove a statement is true without revealing the underlying information. For AI, this enables models to be verified for correctness without disclosing their training data, input values, or proprietary logic.
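To make that idea concrete, here is a minimal, toy-sized sketch of the "prove without revealing" principle, using a classic non-interactive Schnorr proof with the Fiat-Shamir heuristic. It is not a SNARK, and the parameters are deliberately tiny for readability; production systems use cryptographically large groups and succinct proof systems. The shape of the protocol, however, is the same: commit, derive a challenge by hashing public values, respond, and verify against public values only.

```python
# Minimal sketch of the "prove without revealing" idea behind ZK proofs,
# using a non-interactive Schnorr proof (Fiat-Shamir heuristic).
# This is NOT a SNARK; the parameters below are toy-sized for readability
# and insecure. Real systems use much larger groups and succinct,
# general-purpose proof systems.
import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1, g generates the order-q subgroup.
P = 2879          # toy modulus (illustration only)
Q = (P - 1) // 2  # prime subgroup order
G = 4             # generator of the quadratic-residue subgroup

def _challenge(*values: int) -> int:
    """Fiat-Shamir challenge: hash the public transcript into Z_q."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(secret_x: int) -> tuple[int, int, int]:
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(G, secret_x, P)     # public value derived from the secret
    r = secrets.randbelow(Q)    # one-time nonce
    t = pow(G, r, P)            # commitment
    c = _challenge(G, y, t)     # non-interactive challenge
    s = (r + c * secret_x) % Q  # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check the proof using only public values: g^s == t * y^c (mod p)."""
    c = _challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

if __name__ == "__main__":
    x = secrets.randbelow(Q)  # the prover's secret (e.g. a private input)
    y, t, s = prove(x)
    print("proof verifies:", verify(y, t, s))  # True, yet x is never shared
```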
Imagine a decentralized AI lending agent. Instead of reviewing full financial records, it checks encrypted credit score proofs to make autonomous loan decisions without accessing sensitive data. This protects both user privacy and institutional risk.
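As a rough illustration of that data flow, the sketch below wires a hypothetical lending decision so the agent only ever sees a public statement and an opaque proof object. The CreditProof type, the verify_credit_proof stub, and decide_loan are invented names for illustration, not a real protocol or library; a real deployment would run an actual ZK verifier at that step.

```python
# Hypothetical sketch of a proof-gated lending decision. The agent never
# receives raw financial records: it sees only a public claim
# ("credit_score >= 700") and an opaque proof. All names here are
# illustrative; verify_credit_proof stands in for a real ZK verifier.
from dataclasses import dataclass

@dataclass(frozen=True)
class CreditProof:
    statement: str      # public claim, e.g. "credit_score >= 700"
    proof_bytes: bytes  # opaque ZK proof produced by the borrower's wallet

def verify_credit_proof(proof: CreditProof, verifying_key: bytes) -> bool:
    """Placeholder: a real implementation would run the ZK verification
    equation against the verifying key. This stub rejects everything."""
    return False

def decide_loan(proof: CreditProof, verifying_key: bytes, amount: int) -> str:
    # The decision depends only on the verified public statement,
    # never on the underlying financial records.
    if not verify_credit_proof(proof, verifying_key):
        return "rejected: proof did not verify"
    return f"approved: {amount} units against statement '{proof.statement}'"

if __name__ == "__main__":
    request = CreditProof("credit_score >= 700", b"")
    print(decide_loan(request, b"", 1000))  # rejected by the stub verifier
```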
ZK technology also addresses the black-box nature of LLMs. By using dynamic proofs, it's possible to verify AI outputs while protecting both the underlying data and the model architecture.
That's a win for users and companies — one no longer fears data misuse, while the other safeguards its IP.
Toward decentralized AI
As we enter this new phase of AI, better models alone aren't enough.
Users demand transparency; enterprises need resilience; regulators expect accountability.
Decentralized, verifiable cryptography delivers all three.
Technologies like Z