As 2024 came to a close, the narrative surrounding artificial intelligence (AI) reached a fever pitch, yet users and companies alike remained reluctant to fully trust the technology.
Despite significant investments by startups and multinational companies in expanding AI into finance, health, and other aspects of daily life, a substantial trust deficit persists, and it remains one of the most pressing barriers to widespread adoption.
Enter decentralized and privacy-preserving technologies, which are being rapidly recognized as promising solutions for providing verifiability, transparency, and enhanced data protection without hindering AI's growth.
The pervasive AI trust deficit
AI emerged as the second most popular category among cryptocurrency investors in early 2024, capturing over 16% of investor interest.
Startups and multinational companies poured considerable resources into expanding the technology into finance, healthcare, and other areas of daily life.
For instance, the emerging DeFi x AI (DeFAI) sector witnessed the launch of more than 7,000 projects, which collectively reached a peak market capitalization of $7 billion early in the year before the market downturn. DeFAI showcased the transformative potential of AI to enhance decentralized finance (DeFi) with natural language commands, enabling users to perform complex multi-step operations and conduct sophisticated market research.
However, innovation alone failed to address AI's core vulnerabilities: hallucinations, manipulation, and privacy concerns.
In November, a user managed to deceive an AI agent on Base into sending $47,000, despite the agent being programmed never to do so. While this scenario was part of a game, it raised real concerns about whether AI agents can be trusted with autonomy over financial operations.
Audits, bug bounties, and red teams helped to mitigate the risk of prompt injection, logic flaws, or unauthorized data use. However, according to KPMG in 2023, 61% of people still expressed hesitation to trust AI, and even industry professionals shared that concern.
A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as AI's biggest obstacle.
That skepticism remains strong. A poll conducted at The Wall Street Journal's CIO Network Summit found that 61% of America's top IT leaders are still only experimenting with AI agents, while the rest aren't using them at all, citing a lack of reliability, cybersecurity risks, and data privacy as their main concerns.
Industries like healthcare are particularly sensitive to these risks. Sharing electronic health records (EHR) with large language models (LLMs) to improve health outcomes holds great promise but also carries legal and ethical risks without airtight privacy protections.
The healthcare industry, for example, is already plagued by data privacy breaches, and the problem compounds when hospitals share EHR data to train AI algorithms without protecting patient privacy.
Centralized platforms also face challenges in reconciling user privacy with data interoperability. In a recent report by the Center for Data Innovation, researchers highlighted the need for a decentralized approach to data interoperability.
Decentralized, privacy-preserving infrastructure
"All the world is made of faith, and trust, and pixie dust," wrote J.M. Barrie in Peter Pan. Trust isn't just a nice to have in AI — it's foundational.
Economists at PwC estimate that AI could add $15.7 trillion to the global economy by 2030. But that projected boon may never materialize without that trust.
Enter decentralized cryptographic systems like zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). These technologies offer a new path: allowing users to verify AI decisions without revealing personal data or the model's inner workings.
This opens the door to a new generation of trustworthy and transparent AI systems.
By applying privacy-preserving cryptography to machine learning infrastructure, AI can be auditable, trustworthy, and privacy-respecting, especially in sectors like finance and healthcare.
ZK-SNARKs rely on advanced cryptographic proof systems that let one party prove something is true without revealing how. For AI, this enables models to be verified for correctness without disclosing their training data, input values, or proprietary logic.
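To make the prove-without-revealing idea concrete, here is a minimal Python sketch of a Schnorr-style proof of knowledge made non-interactive with the Fiat-Shamir heuristic. It is not succinct like a SNARK, and the parameters are deliberately tiny toy values I chose for illustration, but it shows the core pattern: a verifier confirms the prover knows a secret x satisfying y = g^x mod p without ever seeing x.

```python
import hashlib
import secrets

# Toy parameters for illustration only: p = 2q + 1 is a safe prime and
# g = 4 is a quadratic residue mod p, so it generates the subgroup of
# prime order q. Real deployments use elliptic curves or 2048+ bit primes.
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4

def challenge(*transcript) -> int:
    """Fiat-Shamir: derive the challenge by hashing the public transcript."""
    digest = hashlib.sha256("|".join(map(str, transcript)).encode()).digest()
    return int.from_bytes(digest, "big") % q

def prove(x: int):
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)                 # the public statement
    r = secrets.randbelow(q)         # fresh random nonce
    t = pow(g, r, p)                 # commitment
    c = challenge(g, y, t)           # non-interactive challenge
    s = (r + c * x) % q              # response: x is blinded by r
    return y, (t, s)

def verify(y: int, proof) -> bool:
    """Check the proof using only public values; x is never seen."""
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)             # the secret witness stays with the prover
y, proof = prove(x)
print(verify(y, proof))              # True: verified without learning x
```

The same prove/verify split is what a SNARK gives AI systems, with the added property that the proof stays small and fast to check even when the statement being proven (say, a whole model inference) is large.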
Imagine a decentralized AI lending agent. Instead of reviewing full financial records, it checks encrypted credit score proofs to make autonomous loan decisions without accessing sensitive data. This protects both user privacy and institutional risk.
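As a sketch of what that decision step could look like, consider the Python outline below. The verify_credit_proof function is a hypothetical stand-in for a real ZK verifier; the names, proof format, and 650-point threshold are illustrative assumptions, not taken from any specific protocol.

```python
from dataclasses import dataclass

MIN_SCORE = 650            # public policy threshold; the only score-related
                           # value the agent ever handles

@dataclass
class LoanRequest:
    borrower: str          # applicant's on-chain address
    amount: float          # requested loan size
    score_proof: bytes     # ZK proof that hidden credit_score >= threshold
    public_inputs: dict    # public values the proof commits to

def verify_credit_proof(proof: bytes, public_inputs: dict) -> bool:
    """Hypothetical stand-in for a real ZK verifier call. It accepts the
    proof iff the hidden credit score meets the public threshold; the raw
    score never reaches the agent."""
    raise NotImplementedError("wire up an actual proving system here")

def decide_loan(req: LoanRequest) -> bool:
    # The agent never sees income statements or the raw credit score;
    # it only checks the proof against the public policy threshold.
    if req.public_inputs.get("threshold") != MIN_SCORE:
        return False       # proof was generated for a different policy
    return verify_credit_proof(req.score_proof, req.public_inputs)
```

The design point is the interface: everything the agent touches is public or a proof, so a compromised or manipulated agent cannot leak data it never held.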
ZK technology also addresses the black-box nature of LLMs. By using dynamic proofs, it's possible to verify AI outputs while protecting both the underlying data and the model architecture.
That's a win for users and companies — one no longer fears data misuse, while the other safeguards its IP.
Toward decentralized AI
As we enter this new phase of AI, better models alone aren't enough.
Users demand transparency; enterprises need resilience; regulators expect accountability.
Decentralized, verifiable cryptography delivers all three.
Technologies like ZK-SNARKs offer a way to build it.