The Black Box Problem: Why AI Needs Proof, Not Promises

May 18, 2025

About the author

Ismael Hishon-Rezaizadeh is the founder and CEO of Lagrange Labs, a zero-knowledge infrastructure company building verifiable computation tools for blockchain and AI systems. A former DeFi engineer and venture investor, he has led projects across cryptography, data infrastructure, and machine learning. Ismael holds a degree from McGill University and is based in Miami.

The views expressed here are his own and do not necessarily represent those of Decrypt.

When people think about artificial intelligence, they think about chatbots and large language models. Yet it’s easy to overlook that AI is becoming increasingly integrated into critical sectors of society.

These systems don’t just recommend what to watch or buy anymore; they also diagnose illness, approve loans, detect fraud, and target threats.

As AI becomes more embedded into our everyday lives, we need to ensure it acts in our best interest. We need to make sure its outputs are provable.

Most AI systems operate in a black box: we often have no way of knowing how they arrive at a decision or whether they’re acting as intended.

This lack of transparency is baked into how they work, and it makes it nearly impossible to audit or question AI decisions after the fact.

For certain applications, this is good enough. But in high-stakes sectors like healthcare, finance, and law enforcement, this opacity poses serious risks. 

AI models may unknowingly encode bias, manipulate outcomes, or behave in ways that conflict with legal or ethical norms. Without a verifiable trail, users are left guessing whether a decision was fair, valid, or even safe.

These concerns become existential when coupled with the fact that AI capabilities continue to grow exponentially. 

There is a broad consensus in the field that developing an Artificial Superintelligence (ASI) is inevitable.

Sooner or later, we will have an AI that surpasses human intelligence across all domains, from scientific reasoning and strategic planning to creativity and even emotional intelligence.

Questioning rapid advances 

LLMs are already showing rapid gains in generalization and task autonomy. 

If a superintelligent system acts in ways humans can’t predict or understand, how do we ensure it aligns with our values? What happens if it interprets a command differently or pursues a goal with unintended consequences? What happens if it goes rogue?

Scenarios where such a thing could threaten humanity are apparent even to AI advocates. 

Geoffrey Hinton, a pioneer of deep learning, warns of AI systems capable of civilization-level cyberattacks or mass manipulation. Biosecurity experts fear AI-augmented labs could develop pathogens beyond human control. 

And Anduril founder Palmer Luckey has claimed that the company’s Lattice AI system can jam, hack, or spoof military targets in seconds, making autonomous warfare an imminent reality.

With so many possible scenarios, how will we ensure that an ASI doesn’t wipe us all out?

The imperative for transparent AI

The short answer to all of these questions is verifiability. 

Relying on promises from opaque models is no longer acceptable for their integration into critical infrastructure, much less at the scale of ASI. We need guarantees. We need proof.

There’s a growing consensus in policy and research communities that technical transparency measures are needed for AI. 

Regulatory discussions often mention audit trails for AI decisions. For example, both the US NIST AI Risk Management Framework and the EU AI Act highlight the importance of AI systems being “traceable” and “understandable.”

Luckily, AI research and development doesn’t happen in a vacuum. There have been important breakthroughs in other fields like advanced cryptography that can be applied to AI and make sure we keep today’s systems—and eventually an ASI system—in check and aligned with human interests.

The most relevant of these right now is the zero-knowledge proof (ZKP), which lets a prover convince a verifier that a computation was carried out correctly without revealing the underlying data. ZKPs offer a novel way to achieve traceability that is immediately applicable to AI systems.

In fact, ZKPs can embed this traceability into AI models from the ground up. More than just logging what an AI did, which could be tampered with, they can generate an immutable proof of what happened.

Using zkML (zero-knowledge machine learning) libraries specifically, we can combine zero-knowledge proofs with machine learning to verify the computations these models perform.

In concrete terms, we can use zkML libraries to verify that an AI model was used correctly, that it ran the expected computations, and that its output followed specified logic—all without exposing internal model weights or sensitive data. 
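To make the workflow concrete, here is a toy sketch of the commit–prove–verify pattern the paragraph describes. It is illustrative only: hash commitments stand in for real zero-knowledge proofs, the `prove` and `verify` functions are hypothetical names rather than any particular zkML library’s API, and a real ZKP would convince the verifier without revealing the weights or re-running the model.

```python
import hashlib
import json

def commit(obj) -> str:
    """Commit to data (e.g. model weights) without publishing it directly."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def run_model(weights, x):
    """A stand-in 'model': a fixed linear scoring rule."""
    return sum(w * xi for w, xi in zip(weights, x))

def prove(weights, x):
    """Prover side: run the model and attest which weights produced the output."""
    return {
        "input": x,
        "output": run_model(weights, x),
        "weights_commitment": commit(weights),
    }

def verify(proof, weights, expected_commitment) -> bool:
    """Verifier side: check the output really came from the committed model.

    In this toy, the verifier re-runs the model with revealed weights;
    a real zkML proof establishes the same facts succinctly, without
    revealing the weights or re-executing the computation.
    """
    return (
        commit(weights) == expected_commitment
        and run_model(weights, proof["input"]) == proof["output"]
    )

weights = [0.4, -1.2, 3.0]      # private model parameters
published = commit(weights)     # commitment published once, e.g. on-chain
proof = prove(weights, [1.0, 2.0, 0.5])
assert verify(proof, weights, published)
```

The key property the sketch captures is tamper evidence: if anyone alters the reported output or swaps in different weights, verification fails against the published commitment.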

The black box

This effectively takes AI out of a black box and lets us know exactly where it stands and how it got there. More importantly, it keeps humans in the loop.

AI development needs to be open, decentralized, and verifiable, and zkML is how we can achieve this.

This needs to happen today to maintain control over AI tomorrow. We need to make sure that human interests are protected from day one by being able to guarantee that AI is operating as we expect it to before it becomes autonomous.

zkML isn’t just about stopping malicious ASI, however.

In the short term, it’s about ensuring that we can trust AI with the automation of sensitive processes like loans, diagnoses, and policing because we have proof that it operates transparently and equitably. 

zkML libraries can give us reasons to trust AI if they’re used at scale.

As helpful as having more powerful models may be, the next step in AI development is to guarantee that they’re learning and evolving correctly. 

The widespread use of effective and scalable zkML will soon be a crucial component in the AI race and the eventual creation of an ASI.

The path to Artificial Superintelligence cannot be paved with guesswork. As AI systems become more capable and integrated into critical domains, proving what they do—and how they do it—will be essential. 

Verifiability must move from a research concept to a design principle. With tools like zkML, we now have a viable path to embed transparency, security, and accountability into the foundations of AI. 

The question is no longer whether we can prove what AI does, but whether we choose to.

Edited by Sebastian Sinclair

