Joe (Ø,G) @OMOREYY___ on X, 1,324 followers
Created: 2025-07-23 07:02:03 UTC
One of the most slept-on unlocks in the @NEARProtocol NEAR AI paper is the monetization framework for private inference.
I believe it's the missing economic layer for open-source AI, and it's elegant.
Here’s the problem:
if you publish model weights, anyone can host them, use them, remix them… and you, the creator, get nothing. That’s great for open culture, but terrible for sustainability. So far, open models haven’t had real business models, just grants, hype, or hope.
The NEAR team takes an entirely different tack.
They describe a system where model creators can encrypt their models, publish them to a decentralized network, and still get paid per use, without ever exposing the weights. It's private, trackable, and enforceable at the hardware level.
Here’s how it works:
The model lives inside a TEE, a trusted execution environment. The only way to use the model is to spin up a secure container on TEE-enabled hardware, where the model decrypts inside the enclave. No one, not even the host, can see the weights or the user data.
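To make that concrete, here's a minimal Python sketch of the enclave side. Everything in it, the `Enclave` class, the fake attestation hash, the toy XOR cipher, is a hypothetical stand-in: a real deployment would rely on hardware-signed attestation quotes and an authenticated cipher like AES-GCM.

```python
import hashlib

class Enclave:
    """Stand-in for a TEE: plaintext weights exist only inside it."""

    def __init__(self, encrypted_weights: bytes):
        self.encrypted_weights = encrypted_weights
        self.weights = None  # plaintext never leaves this object

    def attest(self) -> bytes:
        # In a real TEE this is a hardware-signed quote proving exactly
        # which code is running; here it's faked with a hash.
        return hashlib.sha256(b"measurement-of-enclave-code").digest()

    def provision_key(self, model_key: bytes) -> None:
        # The creator releases the decryption key only after verifying
        # the attestation, so the model decrypts exclusively in here.
        self.weights = _xor(self.encrypted_weights, model_key)

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; use an authenticated
    # cipher such as AES-GCM in practice.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```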
Then comes the kicker: before a user can run inference, the container generates a random challenge. The user (say, Alice) uses this challenge to open a payment channel on-chain with both the model creator (Charlie) and the host (Bob). Now the enclave knows it’s getting paid.
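A rough sketch of that handshake, with invented names and a made-up deposit (the paper defines the actual channel format):

```python
import secrets
from dataclasses import dataclass

@dataclass
class PaymentChannel:
    challenge: bytes  # enclave-issued nonce this channel is bound to
    payer: str        # Alice, the user
    host: str         # Bob, paid for compute
    creator: str      # Charlie, paid for the model
    deposit: int      # tokens escrowed on-chain
    spent: int = 0

def new_session_challenge() -> bytes:
    # Generated inside the enclave, fresh for every session.
    return secrets.token_bytes(32)

# Alice opens the channel on-chain against the enclave's challenge;
# the enclave will later refuse prompts from any channel that does
# not carry the exact nonce it issued.
challenge = new_session_challenge()
channel = PaymentChannel(challenge, payer="alice", host="bob",
                         creator="charlie", deposit=100)
```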
Every time Alice sends a prompt, the channel deducts tokens for usage. When she's done, she closes the channel, and the funds are distributed: some to the host for compute, some to the model creator.
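And a sketch of the metering and settlement step. The per-prompt price and the host/creator split here are invented for illustration; the real fee schedule would be set by the creator and host:

```python
from dataclasses import dataclass

@dataclass
class Channel:            # pared-down version of the channel above
    payer: str
    host: str
    creator: str
    deposit: int
    spent: int = 0

PRICE_PER_PROMPT = 10     # tokens per prompt, hypothetical
HOST_SHARE = 0.4          # Bob's cut for compute, hypothetical
CREATOR_SHARE = 0.6       # Charlie's cut, hypothetical

def run_inference(ch: Channel, prompt: str) -> str:
    if ch.spent + PRICE_PER_PROMPT > ch.deposit:
        raise RuntimeError("channel exhausted: top up or close")
    ch.spent += PRICE_PER_PROMPT          # metered off-chain per prompt
    return f"<completion for: {prompt}>"  # model runs inside the enclave

def close_channel(ch: Channel) -> dict:
    # On-chain settlement when Alice closes: spent tokens are split
    # between host and creator, the unspent deposit is refunded.
    return {
        ch.host: int(ch.spent * HOST_SHARE),
        ch.creator: int(ch.spent * CREATOR_SHARE),
        ch.payer: ch.deposit - ch.spent,
    }
```

The nice property of a payment channel is that only the open and the close touch the chain; every per-prompt deduction happens off-chain, so metering stays cheap.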
It’s clean, enforceable, and fully permissionless.
And because this is all happening inside secure hardware:
• You can't fake the attestation.
• You can't clone the VM to replay free inference (see the sketch after this list).
• You can't steal the model.
• You can't avoid the payment channel.
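One plausible way the enclave could enforce those guarantees, again a sketch under my own assumptions rather than the paper's actual check:

```python
import hmac

def authorize(enclave_nonce: bytes, channel_challenge: bytes,
              spent: int, deposit: int) -> bool:
    # A cloned VM re-attests and gets a fresh nonce, so channels funded
    # against the old nonce stop authorizing inference; and no channel
    # can be spent past its on-chain deposit.
    return hmac.compare_digest(enclave_nonce, channel_challenge) and spent < deposit
```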
It’s the first time we’ve had true pay-per-use economics baked into open models themselves.
This really isn't just about revenue; it's about aligning the economics of AI with the architecture of the open web. You can now build models for specific tasks, publish them privately, and earn every time someone runs a query, with no central API in the middle.
This turns open-source AI into a sustainable loop. It lets small teams, researchers, and independent builders own their contributions instead of just donating them, and it makes decentralized AI economically viable, not just technically possible.
If you’re building open models and wondering how to actually monetize them at scale, this framework is a roadmap.
Post Link: https://x.com/OMOREYY___/status/1947915091724189705