RaArΞs ⚓️ (@RaAr3s) on X · 48.6K followers
Created: 2025-07-19 10:46:39 UTC
if you're following my updates on @metisl2 $METIS, you've seen how they've been steadily building toward more modular & verifiable infra
this time, it's about AI.
but not in the generic "AI + chain" way. lazai is trying to make every AI action - from data to output - verifiable onchain
here's what that looks like in practice.
🧵👇
i'll go through:
→ the trust problem in AI
→ what lazai actually solves
→ how the system works
→ alith, hyperion & the role of dats
→ building agents: the workflow
→ code & tooling for devs
▫️the trust problem in AI
most AI is a black box. you feed it something, it gives you an answer, but you don't know how it got there. in web2, that's a feature. in web3, it's a problem.
especially when models handle high-stakes stuff like money, assets or governance - trust breaks if the data or process isn't verifiable.
▫️what lazai actually solves
lazai tackles this by making AI outputs fully verifiable - everything from data submission to model execution happens onchain. you get proofs, rewards and ownership - no blind trust needed.
this makes AI usable in places where trust is critical:
• DeFi protocols
• legal + financial reasoning
• AI agents coordinating multi-step tasks onchain
▫️how the system works
lazai splits its stack into three layers:
• agents - the actual actors running AI tasks
• data - the fuel that feeds those agents
• LLMs - the brains they run on top of
each action an agent takes is recorded, validated and rewarded using smart contracts.
▫️alith, hyperion & the role of dats
the base of the whole thing is alith - a framework to build agents. it connects to hyperion chain via the lazai client.
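here's roughly what spinning up an agent could look like in python - the Agent class, model/preamble args and prompt() call follow the public alith examples, but treat the exact names as assumptions, not a confirmed API:

```python
# minimal alith-style agent sketch; class/argument names follow the public
# alith examples but treat them as assumptions, not a confirmed API
from alith import Agent  # package name assumed: pip install alith

agent = Agent(
    model="gpt-4o-mini",  # whichever LLM backend your node/provider supports
    preamble="you summarize verified onchain datasets for METIS users.",
)

# the prompt, the model it ran on and the output are what lazai aims to
# record and make verifiable onchain
print(agent.prompt("summarize the latest verified dataset"))
```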
once data is submitted (through IPFS), it gets verified by nodes. after validation, a DAT is minted - this is an ERC-1155 NFT that proves data ownership.
that same data can now be used by any agent, and the original contributor earns royalties if it's reused.
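since DATs are described as plain ERC-1155 tokens, checking who holds one is just a standard balanceOf(account, id) read - the contract address, token id and RPC url below are placeholders, not real deployments:

```python
# read a DAT balance via the standard ERC-1155 balanceOf(account, id);
# contract address, token id and RPC endpoint are placeholders
from web3 import Web3

ERC1155_ABI = [{
    "name": "balanceOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [
        {"name": "account", "type": "address"},
        {"name": "id", "type": "uint256"},
    ],
    "outputs": [{"name": "", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider("https://hyperion-rpc.example"))      # placeholder RPC
dat = w3.eth.contract(
    address=Web3.to_checksum_address("0x" + "00" * 20),           # placeholder DAT contract
    abi=ERC1155_ABI,
)
holder = Web3.to_checksum_address("0x" + "00" * 20)               # placeholder contributor
print("DAT balance:", dat.functions.balanceOf(holder, 1).call())  # 1 = made-up token id
```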
▫️building agents: the workflow
every AI task goes through three phases:
1. data contribution
• encrypt & upload your data
• hash it and register it onchain
• wait for verification
• once confirmed, receive a DAT
(sketch of the local side below 👇)
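the encryption and hashing in this sketch are real python; the lazai-specific upload and registration calls are only outlined in comments because the exact SDK method names aren't confirmed here:

```python
# local side of a data contribution: encrypt, hash, then register the digest
# onchain; the encryption/hashing below is real, the lazai-specific steps are
# only outlined in comments
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()                           # keep this key off-chain
with open("dataset.jsonl", "rb") as f:                # your raw contribution
    ciphertext = Fernet(key).encrypt(f.read())

data_hash = hashlib.sha256(ciphertext).hexdigest()    # digest to register onchain
print("digest to register:", data_hash)

# remaining steps (via the lazai client + smart contracts):
#   1. pin `ciphertext` to IPFS and note the CID
#   2. submit (CID, digest) in a registration transaction
#   3. wait for node verification; on success a DAT (ERC-1155) is minted to you
```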
2. job execution
• a user deposits funds to a node
• sends a prompt & model
• the node processes it and submits proof
• rewards are sent out automatically
(sketch of the deposit step below)
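the deposit is shown here as a plain value transfer with web3.py - the RPC url, node address, key and amount are all placeholders, and the inference/proof steps stay as comments since the node API isn't confirmed here:

```python
# the deposit step as a plain value transfer with web3.py; RPC url, node
# address, key and amount are placeholders
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://hyperion-rpc.example"))     # placeholder RPC
account = w3.eth.account.from_key("0x" + "11" * 32)              # throwaway key

tx = {
    "to": Web3.to_checksum_address("0x" + "00" * 20),            # placeholder node address
    "value": w3.to_wei(0.01, "ether"),                           # hypothetical deposit
    "nonce": w3.eth.get_transaction_count(account.address),
    "gas": 21_000,
    "gasPrice": w3.eth.gas_price,
    "chainId": w3.eth.chain_id,
}
signed = account.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)    # `rawTransaction` on older web3.py
print("deposit tx:", tx_hash.hex())

# then: send the prompt + chosen model to the node, the node runs inference,
# posts a proof of execution onchain, and the payout happens automatically
```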
3. settlement & governance
• smart contracts handle payments
• DAOs manage data registries
• proxy contracts allow upgrades without losing past data
(example of that last point below)
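if the upgrade path is a standard ERC-1967-style proxy (an assumption - the thread doesn't say which pattern lazai uses), you can watch the implementation address change while the proxy address and its storage stay put:

```python
# reading the implementation address behind an ERC-1967-style proxy: the proxy
# keeps its address and storage across upgrades, only this slot changes
from web3 import Web3

# keccak256("eip1967.proxy.implementation") - 1, fixed by the ERC-1967 standard
IMPL_SLOT = int("360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc", 16)

w3 = Web3(Web3.HTTPProvider("https://hyperion-rpc.example"))   # placeholder RPC
proxy = Web3.to_checksum_address("0x" + "00" * 20)             # placeholder proxy address

raw = w3.eth.get_storage_at(proxy, IMPL_SLOT)                  # 32-byte slot value
implementation = Web3.to_checksum_address("0x" + raw.hex()[-40:])
print("current implementation:", implementation)
```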
▫️code & tooling for devs
devs get SDKs for Python & Node.js. rust support is coming
repos are public (0xLazAI on github), and there's boilerplate code for:
• submitting encrypted data
• triggering inference requests
• verifying results via smart contract
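for that last one, verification can be as simple as hashing the output you got back and comparing it to the digest recorded onchain - the registry contract, its resultDigest getter and the job id below are illustrative assumptions, not confirmed interfaces:

```python
# result verification sketch: hash the output you received and compare it to
# the digest recorded onchain; the registry contract, its resultDigest getter
# and the job id are illustrative assumptions
import hashlib
from web3 import Web3

output = b"...model output bytes returned by the node..."
local_digest = hashlib.sha256(output).hexdigest()

REGISTRY_ABI = [{                                    # hypothetical getter
    "name": "resultDigest",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "jobId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bytes32"}],
}]

w3 = Web3(Web3.HTTPProvider("https://hyperion-rpc.example"))   # placeholder RPC
registry = w3.eth.contract(
    address=Web3.to_checksum_address("0x" + "00" * 20),        # placeholder registry
    abi=REGISTRY_ABI,
)
onchain_digest = bytes(registry.functions.resultDigest(42).call())  # 42 = made-up job id

print("verified:", bytes.fromhex(local_digest) == onchain_digest)
```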
even for non-technical teams, this setup lets you build agents that don't rely on centralized AI APIs - everything is cryptographically recorded and governed onchain.
perfect for composable apps, modular automation, and trust-minimized systems.
the easiest way to sum it up: this stack lets you track where data comes from, how models use it, and who gets paid
and ofc, it's not just useful for devs building agents - it's infra for any system that needs auditability baked in.
curious to check it out? 👉 @LazAINetwork
don't forget to comment your thoughts below
& follow @raar3s for more metis updates + agent infra breakdowns
Post Link: https://x.com/RaAr3s/status/1946522066028777657