The Intelligence We Rent

How borrowed AI becomes leverage over the people who use it.

With new capabilities come new responsibilities, and AI is a capability that spreads fast because it fits inside everything we already do. We can't view it as a single invention. It's a total disruptor of the way we do things, already integrated into search, customer support, design, trading, education, hiring, and governance. Each of these integrations is small enough to accept without debate; together they are large enough to change how society makes decisions and, ultimately, how society works.

With new knowledge come new forms of authority, because whoever controls the production of knowledge eventually controls the terms of reality. AI compresses expertise into a tool anyone can use. At the same time, it concentrates leverage in whoever owns the training system behind it, the distribution channels, and the permissions that decide how the tool can be used by the public and, more importantly, by the owners to affect the public.

These layers form the tension at the center of AI's rapid expansion: it democratizes capability while centralizing control. And most of us experience only the first half: the convenience, the speed, the usefulness. We lose sight of the second.

Researchers have been documenting and debating these layers for some years now. Shoshana Zuboff's work on surveillance capitalism traces how platforms turned human behavior into raw material for prediction: extracting data far beyond what's needed to provide a service, then selling those predictions to advertisers, insurers, employers, and governments.

Kate Crawford's Atlas of AI follows the supply chains behind the clean interfaces: the mines, the data centers, the underpaid workers labeling images so the systems appear to run themselves. Stuart Russell, one of the field's most respected voices, warns that the standard approach to AI development (define an objective, optimize for it) breaks down when the objective doesn't actually align with human preferences, which are uncertain, contextual, and often contradictory.

What connects these different critiques is a shared observation: the way AI is currently being built serves particular interests, and those interests are not primarily yours. The convenience is real, but it's not the point. The point is the data, the predictions, the leverage. You get a better search result while they get a more accurate model of your behavior. When a service is free, the question to ask is what's being sold instead. In most cases, it's access to you: your attention, your patterns, your future decisions. The AI gets smarter with every interaction, and that intelligence becomes an asset owned by whoever controls the platform. You contribute to it constantly. You don't own any of it.

The concentration of control deepens the problem. Right now, a handful of companies control the foundational models that everyone else builds on.

OpenAI, Google, Anthropic, and Meta are no longer just tech companies. They are becoming infrastructure providers, and the rest of the economy is starting to depend on them the way it depends on electricity or telecommunications.
When OpenAI's API goes down, thousands of applications break. When a model gets updated and its behavior shifts, products built on top of it fail in ways their developers didn't anticipate. We're constructing dependencies on systems we don't control, maintained by companies whose priorities are not transparent and whose decisions are not accountable to the people affected by them.

This is simply a call for transparency about what's being built and who it serves. AI infrastructure is taking shape right now, and infrastructure is sticky and tricky. Once it's in place, everything else gets built on top of it. The assumptions encoded today become the defaults of tomorrow.

This is the context in which SourceLess has been integrating AI into its web3 ecosystem, which connects digital identity, communication, and finance within an infrastructure that provides and protects ownership and privacy.

The problems that Crawford, Zuboff, and Russell describe are structural, and no single project resolves them. But we think design choices matter, and we have tried to make different ones.

ARES AI is built as an assistive layer, not a prediction engine. It connects to your STR Domain, your self-owned digital identity within the SourceLess ecosystem, which means it doesn't need to harvest behavioral data to function. It's not optimizing for engagement or time-on-platform. It's not selling predictions about you to third parties. The goal is to help you navigate complexity: answer questions, guide onboarding, automate repetitive tasks, support decision-making. Infrastructure that works for the user, not on the user.

This doesn't make it neutral or perfect. Every system encodes choices, and those choices have consequences. But we believe there's a difference between AI designed around extraction and AI designed around assistance, and that difference matters more as these systems become foundational to how we live and work.

This article is the first in a series where we'll explore these questions in more depth.

We'll look at what it means for intelligence to become infrastructure: who controls it, what happens when it fails, what alternatives are possible. We'll draw on the work of researchers like Crawford, Zuboff, Russell, and Jaron Lanier, who has spent years arguing that "free" AI services are never actually free. We'll examine the alignment problem, the concentration of power in a handful of companies, and the choices that are still available before the architecture locks in.

And we'll share more about how we're trying to build differently, with ARES AI as a case study in what it looks like to take these questions seriously.

More soon.

Learn more about SourceLess and ARES AI: sourceless.net and SourcelessAres ai