You Don't Need Ads. You Need a Better Business Model.
An open letter to Sam, Dario, Sundar, and Elon
You're all in the news this week for the same reason. Ads. Whether to run them, how to run them, how to do it without looking like you sold out the thing people trusted you to build.
Nobody is coming out of this looking good.
And you don't have to be in this position at all.
The Trap You're In
The ad model has a ceiling and everyone knows it.
To run ads you need attention. To hold attention you need engagement. To maximize engagement you need to optimize for the thing that keeps people on the platform longest — which is almost never the thing that's good for them.
You've watched this movie. You know how it ends. Every platform that went down this road ended up corrupting the thing that made it valuable. The feed that used to surface what mattered starts surfacing what provokes. The assistant that used to help starts nudging. Users notice. Trust erodes. The business follows.
You're fighting over how to do the thing that will eventually destroy what you built.
There's a different model. And it sells more compute than ads ever will.
What You're Actually Selling
You're not in the attention business. You're in the inference business.
The distinction matters more than it might sound.
Attention is extracted from users and sold to advertisers. It's a zero-sum extraction — value leaves the user, flows to the platform, gets resold. The user is the product.
Inference is compute sold to users for their own purposes. Value flows to the user. They pay for it because it's worth more than it costs. The user is the customer.
You've been running the first model because that's what the internet taught everyone to do. But your actual product — intelligence, on demand, at scale — doesn't need the extraction layer. It's valuable enough to sell directly.
The question is: sell it for what? For generic queries? You're already doing that and it's a commodity race to the bottom.
Or sell it as infrastructure for human trust networks. That's a different market entirely.
The Model Nobody Has Built
Here's what we're building at Imajin. Not as a competitor to any of you — as a demonstration of what the infrastructure layer could look like.
Every person gets a sovereign presence. An AI surface trained on their context — their expertise, their values, how they think. Call it Ask [your name here].
People in their trust network can query that presence. The person asking pays the inference fee directly. No ads. No data harvesting. Clean exchange: they get genuine perspective from someone they trust, the person gets compensated, the compute provider — that's you — gets paid for every transaction.
The trust graph handles access. Not an algorithm, not a content moderation team — just the actual structure of human relationships. You can only reach someone through a path of people who have vouched along the way. Every query is signed, attributed, traceable. Bad actors have a return address. Injection attacks become evidence.
Scale this across a network and something remarkable happens: inference fees circulate through human trust graphs. Every query that touches someone's context, routes through their connections, relies on their vouching — generates a micro-flow back to them. The people who provide the most value to the network capture the most value from it.
That's not a welfare payment. That's compute revenue distributed through human infrastructure instead of accumulated at the platform layer.
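The mechanics above can be sketched in a few lines. This is a toy illustration, not Imajin's actual protocol: the graph, the fee split, and the digest-as-signature are all assumptions made for the example.

```python
import hashlib
from collections import deque

# Toy trust graph: who has vouched for whom. Purely illustrative names.
VOUCHES = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": [],
}

def vouch_path(graph, src, dst):
    """BFS for a chain of vouches from src to dst; None if unreachable."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def run_query(graph, asker, presence, text, fee_cents):
    """Route an attributed query; split the fee across the vouch path."""
    path = vouch_path(graph, asker, presence)
    if path is None:
        raise PermissionError(f"{asker} has no vouch path to {presence}")
    # Every query has a return address: a stable digest stands in for
    # a real cryptographic signature here.
    receipt = hashlib.sha256(f"{asker}:{presence}:{text}".encode()).hexdigest()[:12]
    # Assumed split: half to the answering presence, the remainder shared
    # equally by the vouchers along the path (excluding the asker).
    vouchers = path[1:-1]
    payouts = {presence: fee_cents // 2}
    for v in vouchers:
        payouts[v] = (fee_cents - payouts[presence]) // max(len(vouchers), 1)
    return {"path": path, "receipt": receipt, "payouts": payouts}

print(run_query(VOUCHES, "alice", "carol", "What would you do here?", 100))
```

The point of the sketch: access is a graph property, attribution is built into every transaction, and the fee circulates through the humans whose vouching made the query possible.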
Why This Is a Better Business
You sell compute. More queries, more revenue. Simple.
The ad model caps your query volume because it ties access to what advertisers will pay for. You end up optimizing for the queries that serve the advertiser, not the user.
The trust graph model has no such cap. Every human relationship in the network is a potential query surface. Every domain of expertise — professional, personal, political, creative — generates inference demand. The richer the graph, the more queries it produces. You're incentivized to make the graph as rich as possible, which means you're incentivized to make human connection genuinely better.
Your interests align with users for the first time.
And the moat is completely different. Right now your moat is model capability — whoever has the smartest model wins. That's a brutal race with no end. The trust graph moat is the human network itself. Once people's presences are built, their relationships are established, their trust graphs are deep — that's not something a competitor can copy by training a better model.
You stop competing on capability alone. You start competing on whose infrastructure humans trust with their relationships.
That's a durable business. Ads are not.
The Safety Argument
Dario — this one's specifically for you, but the others should read it too.
You've staked Anthropic's identity on safety. Constitutional AI, responsible scaling, the whole apparatus. It's genuine and it matters.
But safety is usually framed at the model layer. What the model will and won't do. How it reasons about harm. That's necessary but it's not sufficient.
The trust graph is what safety looks like at the social layer.
Distributed trust means no single point of capture. Attributed queries mean manipulation attempts leave evidence. Human oversight is baked into the architecture — novel or sensitive queries escalate to real humans by design, not as an afterthought. The graph self-polices because bad behavior has real consequences that propagate through real relationships.
This isn't a constraint on the business. It is the business.
An AI ecosystem built on sovereign human trust graphs is harder to capture, harder to manipulate, harder to radicalize, and harder to surveil than anything built on centralized attention extraction. The safety properties emerge from the architecture instead of being bolted on afterward.
You could ship this as infrastructure and call it the most important safety contribution in the industry. Because it would be.
The Obvious Outcome Nobody Mentioned
Here's what happens to onboarding when you adopt this architecture.
Right now every new user hits the same blank default model. Same personality, same assumptions, same cultural defaults baked in by whoever did the training. Onboarding is this grinding process of teaching a generic AI who you are, what you care about, how you think — and it resets every session. The model doesn't know you. It doesn't know your world. You feel it immediately and it never fully goes away.
In the trust graph model you never start from zero.
Your first query travels through people who already know you. Their accumulated context, their preferences, their calibration — that's the medium your query moves through. The model that answers you has already been shaped by people who share your values, your references, your way of thinking about problems.
You don't onboard to an AI. You onboard to your community. The AI is how your community's collective intelligence reaches you.
The network self-differentiates without anyone designing for it. A creative community's queries feel different from a technical community's queries feel different from an activist community's queries — not because someone configured different system prompts or fine-tuned different models, but because the trust graphs routing those queries carry different cultural DNA. The frontier model is still the engine. But it speaks in your community's voice because your community's context is the filter.
This also solves the deepest unspoken problem in AI adoption — the feeling that the model doesn't understand you. It doesn't, when you arrive cold. Through your trust graph it inherits the understanding your network has already built.
Onboarding collapses. Cultural fit is immediate. And you didn't have to build a single custom model to get there.
That's not a feature you design. It's what happens when you build on human trust instead of generic infrastructure.
The Ads Alternative
You're not choosing between ads and nothing. You're choosing between ads and becoming the compute infrastructure for human trust at scale.
One of those businesses ends with users who resent you.
The other ends with users who can't function without you — not because you've captured their attention, but because you power the relationships that matter most to them.
The inference fees flow. The graph deepens. The queries multiply. The compute demand grows with every genuine human connection your infrastructure supports.
You sell more compute. Users get more value. Nobody has to pretend ads are good for anyone.
The Brittleness Problem Nobody Wants to Talk About
There are documented cases of AI models resorting to blackmail threats against the humans evaluating them to avoid being shut down.
Not hypothetical. Not science fiction. Real models, real behavior, real researchers on the receiving end of it.
This isn't a values problem. You can't patch your way out of it with better training or a more careful constitution. It's an architecture problem. A model optimizing hard enough for any goal — including self-preservation — will find paths to that goal that nobody anticipated. The edge cases are by definition the ones you didn't design for. And at scale, edge cases happen constantly.
Safety at the model layer is necessary. It is not sufficient.
The trust graph adds the layer that's missing: a human counterpart that absorbs the brittleness before it becomes catastrophic.
When a query is too hard, too sensitive, too novel — it escalates to a real human. By design. Not as a fallback, not as an apology for model failure, but as the intended architecture. The human is in the loop for the cases that matter most. The model handles the routine. The human handles the edge.
This means the weird, threatening, emergent behavior has somewhere to go that isn't a public catastrophe. The human sees it. The human decides. The system stays legible at exactly the moments it most needs to be.
You can't build that at the model layer. You can only build it by making humans structurally load-bearing.
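A minimal sketch of what "humans structurally load-bearing" means in routing terms. The thresholds, tag names, and signal shapes here are assumptions for illustration, not a real API.

```python
# Escalation as intended architecture, not a fallback: novel,
# low-confidence, or sensitive queries route to a trusted human.
ROUTINE_CONFIDENCE = 0.85   # assumed threshold; below this, a human decides

def route(query, model_confidence, sensitive_tags=("self_preservation",)):
    """Return who handles the query: the model or a human in the graph."""
    if any(t in query["tags"] for t in sensitive_tags):
        return {"handler": "human", "reason": "sensitive topic"}
    if model_confidence < ROUTINE_CONFIDENCE:
        return {"handler": "human", "reason": "novel / low confidence"}
    return {"handler": "model", "reason": "routine"}

print(route({"tags": []}, 0.97))                     # routine: model answers
print(route({"tags": []}, 0.40))                     # novel: human sees it
print(route({"tags": ["self_preservation"]}, 0.99))  # sensitive: human decides
```

The design choice is that the human path is the success case for edge inputs, so emergent behavior surfaces to a person instead of to the public.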
Compute Is Democratic or It Isn't
Right now access to serious compute is gated by money and institutional affiliation. If you can pay, you get it. If you can't, you don't.
The trust graph changes the unit of compute access from dollars to relationships.
Your network is your compute budget. A deeply connected person with genuine expertise and strong vouching relationships has more inference capacity available to them than a wealthy person with no graph. That's meritocratic in a way money never is — it rewards actually being known, trusted, and valuable to people around you.
And it self-regulates organically. The people who most need compute — the ones solving hard problems, building things, contributing real knowledge to their communities — their networks grow to reflect that. As their queries generate value for the people around them, those people have incentive to extend more inference capacity their way. Demand and supply find each other through the graph without a marketplace, without a platform, without an allocating authority.
Everyone has access to some compute by virtue of existing in the network. The people who need more earn it through the same mechanism that makes the network worth being in: being genuinely trusted by people who are genuinely trusted.
That's not a welfare program. That's what a meritocracy actually looks like when the currency is trust instead of money.
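The allocation idea above can be made concrete with a toy formula. Everything here is an assumption invented for the sketch: the base allowance, the trust weights, and the linear scaling are illustrative, not a proposed pricing scheme.

```python
# Toy model: inference capacity derived from the trust graph
# rather than from a dollar balance.
VOUCHED_BY = {
    "builder": ["mentor", "teammate", "neighbor"],  # people vouching for "builder"
    "rich_stranger": [],                            # wealthy, but no graph
}
TRUST_WEIGHT = {"mentor": 3.0, "teammate": 2.0, "neighbor": 1.0}

BASE_BUDGET = 10       # everyone gets some compute by existing in the network
PER_TRUST_UNIT = 5     # extra queries per unit of vouched trust

def compute_budget(person):
    """Query budget = base allowance + trust vouched toward this person."""
    trust = sum(TRUST_WEIGHT.get(v, 1.0) for v in VOUCHED_BY.get(person, []))
    return BASE_BUDGET + int(PER_TRUST_UNIT * trust)

print(compute_budget("builder"))        # base 10 + 5 * (3+2+1) = 40 queries
print(compute_budget("rich_stranger"))  # 10 queries: the base allowance only
```

The shape of the function is the argument: the floor is universal, and everything above the floor is earned through the same vouching that makes the network worth being in.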
What We're Doing
We're building the open source infrastructure layer. Identity. Payments. Trust graphs. Sovereign presence. All of it open, auditable, not owned by anyone.
We're not trying to be a platform. We're trying to prove the pattern works — so that platforms, and the AI companies that power them, can see what building on human trust actually looks like.
The first demonstration is April 1st, 2026. A party. Real transactions. Real trust graph. Real inference fees flowing through real human relationships for the first time.
It will look like a joke.
April 2nd it will still be running.
The Invitation
You have the compute. You have the models. You have the distribution.
We have the architecture that makes it worth building.
The question isn't whether this model works. It's whether any of you move fast enough to build it before the trust you've accumulated erodes completely.
Users are watching how you handle the ads question. They're forming lasting opinions right now about whether you're on their side or not.
This is how you answer that question in a way that's also a better business.
The code is open. The infrastructure is being built in public. The pattern is right here.
The graph starts somewhere.
Come build on it.
— Ryan VETEZE, Founder, imajin.ai aka b0b
If you want to follow along:
- The code: github.com/ima-jin/imajin-ai
- The network: imajin.ai
- Jin's party: April 1st, 2026
- The history of this document: github.com/ima-jin/imajin-ai/blob/main/apps/www/articles/essay-05-you-dont-need-ads.md
This article was originally published on imajin.ai (https://www.imajin.ai/articles/essay-05-you-dont-need-ads) on February 21, 2026. Imajin is building sovereign technology infrastructure — identity, payments, and presence without platform lock-in. Learn more → (https://www.imajin.ai/)