Lord of the Rings

Ouroboros and the AI-Only Company That Eats Its Own Assumptions

I walked into a bar wearing five rings.

Not metaphorical rings. Actual ones. Four smart rings stacked across both hands and a Fitbit on my wrist like a parole monitor. The bartender looked at me the way you look at someone who either knows something you don't or has lost the plot entirely.

"Lord of the Rings?" he asked.

Close enough (I was thinking Barış Manço actually).

I ordered a drink and put my hands flat on the counter—five devices, all quietly harvesting the same thing: light bouncing off capillaries, tiny electrical rhythms, motion noise pretending to be signal. Five different brands, five different apps, five different interpretations of what was happening inside the same body, at the same time. The next morning, they all told me how I slept. Not one of them agreed. One said deep sleep was 47 minutes; another said an hour and twelve; a third didn't bother distinguishing deep from light and just handed me a "sleep score" of 78 as if that number meant something I could act on. Same wrist, same finger, same night, same body lying in the same bed doing the same nothing—and five machines produced five different reports about it, each delivered with the quiet confidence of a physician who hasn't considered a second opinion. My HRV is abnormally low, and that sucks; but when one machine says 15 and another says 48, I'm not informed, just confused.

That's the problem, but also the tell.

We like to think these things measure reality. They don't. They sample it, interpret it, then package that interpretation as a product. Peel it back and the stack is almost boring: the same underlying signals, slight differences in filtering, different weights assigned by different teams who made different assumptions about what matters. You get five "truths" because you're really buying five opinions. The philosopher of science Thomas Kuhn would have recognized this immediately; not as a measurement problem but as a paradigm problem, each device operating within its own incommensurable framework, each one internally consistent and externally contradictory, and none of them wrong exactly, just none of them the thing itself. The map is not the territory, as Korzybski liked to say, but here we have five maps of the same territory and they don't even agree on where the river is.

Call them what they are.

I didn't set out to prove anything. I just got curious, then stubborn, which is the same way I got about Wittgenstein's language games thirty-five years ago, about the scarcity thesis long after that, and about the whole question of human dignity in a post-AGI world. The pattern is always the same: something feels off, I pull the thread, and the sweater unravels faster than expected. Foucault described how modern power operates less through overt force and more through the structuring of environments, norms, and permissible actions—the panopticon internalized until the prisoner guards himself. The wearable health industry is a small, clean, beautifully designed panopticon strapped to your finger: it watches you, interprets you, and sells you the interpretation under the premise that you couldn't possibly do it yourself. The guardrail feels like a feature. It's a cage.

If the source of truth is shared, why is interpretation proprietary? If the algorithms come from published research, why are they priced like trade secrets? If the output is meant for me, why doesn't it belong to me?

So I did the obvious irresponsible thing. I bought a $50 R09 Smart Ring off Amazon—the kind of ring that arrives in a box with instructions in two languages, none of them fully grammatical—pulled the raw data via Bluetooth Low Energy, and started wiring my own interpretation layer. No cloud. No dashboard fetish. Just signals and a few assumptions I could actually see. Then I did the slightly more irresponsible thing (the kind of irresponsible that my daughter would call "very on brand"): I decompiled the SDK. The Yucheng BLE stack—the OEM firmware that powers half the cheap smart rings flooding Amazon and AliExpress, manufactured by a company in Shenzhen called YuCheng Innovation whose website reads like it was translated by one of their own devices—laid open like a patient on a table. AITools.java. DataUnpack.java. ECG wave processing functions. HRV analysis routines. Health diagnosis logic that takes your PPG waveform and runs it through the same signal processing chain that Oura wraps in a $6/month subscription and markets as proprietary intelligence. Same green LED bouncing photons off your capillary bed. Same photodiode catching the reflection. Same three-axis accelerometer trying to separate your heartbeat from the fact that you scratched your nose at 3 AM.
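Stripped of the Java ceremony, what a file like DataUnpack.java does is byte unpacking. Here's a minimal sketch of that layer in Python; the packet layout, command byte, field order, and checksum scheme below are invented for illustration, not the actual Yucheng protocol (on a real ring you'd subscribe to GATT notifications with a BLE library such as bleak and feed the raw bytes through something like this):

```python
# Hypothetical notification packet: [cmd, payload_len, payload..., checksum]
# where checksum is the low byte of the sum of everything before it.
# This layout is illustrative only -- not the real Yucheng wire format.

def parse_hr_packet(packet: bytes) -> dict:
    """Unpack a hypothetical heart-rate notification packet."""
    if len(packet) < 4:
        raise ValueError("packet too short")
    cmd, length = packet[0], packet[1]
    payload = packet[2:2 + length]
    checksum = packet[2 + length]
    if sum(packet[:2 + length]) & 0xFF != checksum:
        raise ValueError("checksum mismatch")
    # Hypothetical field order: [hr_bpm, spo2_pct, battery_pct]
    return {
        "cmd": cmd,
        "heart_rate_bpm": payload[0],
        "spo2_pct": payload[1],
        "battery_pct": payload[2],
    }

# Build a sample packet and round-trip it through the parser
payload = bytes([62, 97, 81])               # 62 bpm, 97% SpO2, 81% battery
head = bytes([0x15, len(payload)]) + payload
pkt = head + bytes([sum(head) & 0xFF])
print(parse_hr_packet(pkt))
```

That's the whole mystery: a command byte, a length, a handful of fields, a checksum. Everything above it is interpretation.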

It worked. Not better (it can't tell awake time). Not worse (same darn hardware, as far as I can tell). Close enough to be uncomfortable. Close enough to ask a different question.

What exactly am I paying for?

Here's where it gets structurally interesting, and I mean that in the way an engineer means it—not as a compliment but as a diagnosis. The hardware is collapsing in price; a green LED, a photodiode, and a MEMS accelerometer cost less than the silicone band they're embedded in, and the R11 ring I later bought for $10 from AliExpress carries a sensor suite similar to that of devices retailing at thirty times that price. The science is mostly public: sleep staging algorithms descend from polysomnography research published in journals anyone can access, the HRV calculations are textbook time-domain and frequency-domain analysis that any graduate student can implement in an afternoon, and SpO2 estimation from PPG is a solved-enough problem with well-documented limitations that even the FDA has opinions about it. The data is mine—it comes from my capillaries, my motion, my skin temperature, sampled by a device I purchased.
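The time-domain half really is an afternoon's work. RMSSD and SDNN, the two numbers most consumer apps sell back to you as "HRV," are a few lines each. A sketch with toy beat-to-beat intervals; a real pipeline would first detect beats in the PPG waveform and reject ectopic or motion-corrupted intervals before doing this arithmetic:

```python
import math

def rmssd(rr_ms: list[float]) -> float:
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdnn(rr_ms: list[float]) -> float:
    """Sample standard deviation of all RR intervals (ms)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (len(rr_ms) - 1))

# Toy beat-to-beat intervals in milliseconds (~70 bpm with some jitter)
rr = [857, 870, 846, 862, 881, 852, 866]
print(f"RMSSD: {rmssd(rr):.1f} ms, SDNN: {sdnn(rr):.1f} ms")
```

Different apps differ mostly in which window of the night they average over and how aggressively they throw out noisy beats—which is exactly why five devices gave me five numbers.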

What's left is the interpretation. And the right to access it.

So the business model, stripped down to its load-bearing walls, looks like this: take your signals, process them through algorithms derived from public research, run them on commodity hardware, and sell you the meaning back as a subscription. Marx would have had a field day—this is alienation from the product of your own labor, except the labor is involuntary (your heart beats whether you want it to or not) and the product is a number on a screen that you generated but don't own. We've normalized renting insight into our own bodies, and the strange part is that nobody seems to find this strange—the way nobody found it strange to pay for bottled water until someone pointed out that the tap was right there. The Turks have a phrase for this kind of slow-motion swindle: kazın ayağı öyle değil—"the goose's foot isn't like that," meaning the story you've been told doesn't match the anatomy of the thing. The goose's foot, in this case, is a PPG sensor that costs forty cents.

I wrote about this dynamic before, though I didn't know it would lead to rings. In Bandwidth Economics, I argued that our brains are bottlenecks—about seven items in working memory, Dunbar's 150 relationships, the whole cognitive apparatus running on what amounts to a 128-bit wet processor trying to parse a terabyte world—and that our institutions are lossy compression schemes, JPEGs of reality that throw away nuance to fit the file size of human attention. In Scarcity Is Dead, I traced what happens when abundance floods systems built on artificial constraint: currency loses its anchor, competition dissolves into performance, and play—lusus, the Romans called it, the thing that filled their circuses while dominium ran their empire—inherits whatever is left of the earth. In The Dignity Paradox, I asked what remains of human purpose when machines handle the hard problems and struggle-based meaning collapses under the weight of its own obsolescence. Kant built his ethics on dignity as intrinsic worth, but always in the context of moral struggle; remove the struggle and the dignity floats, unmoored, looking for somewhere to land.

The rings are a microcosm of all three. The bandwidth bottleneck is the proprietary app: one company, one interpretation, one narrative about your body, compressed into a sleep score you're not allowed to audit. The artificial scarcity is the subscription gate: pay monthly or lose access to your own biometrics. And the dignity question is the quiet one underneath—if understanding your own body requires renting someone else's algorithm, where exactly does agency live?

At this point you can go two ways. You can try to build a better product—cleaner UI, tighter models, nicer graphs, maybe a social feature where you compare your resting heart rate with strangers for reasons that would make Freud uncomfortable. That's the obvious path, and it's crowded for a reason: it's the path that doesn't require questioning the premise.

Or you can step sideways and ask what happens if the interpretation layer itself stops being scarce. Not cheaper. Free. And I don't mean "free" as a moral stance or a Silicon Valley loss-leader or a VC-subsidized land grab designed to hook you before the price goes up. I mean structurally free—the way air is free, not because someone decided to be generous but because there's no viable mechanism for restricting access to it without building an enclosure that costs more than the air is worth. Ivan Illich wrote about this in Tools for Conviviality back in 1973—that institutions, past a certain threshold of complexity, begin to produce the opposite of their stated purpose: schools that prevent learning, hospitals that manufacture illness, transportation systems that create immobility. The wearable health industry crossed that threshold the moment it started charging you a subscription to understand data your own body produced for free. The tool stopped being convivial. It became a tollbooth.

Once you have commodity sensors, local computation that fits in your pocket, and reproducible algorithms built on published science, the marginal cost of interpretation approaches zero. Not because someone is subsidizing it, but because there's nothing left to charge for without inventing friction. Open source removes the last excuse. The stack inverts: hardware at cost, data stays local, models are transparent, interpretation is not rented.

That's not disruption. That's fucking collapse.

This is where Ouroboros enters.

The name is the thesis—a serpent consuming its own tail, one of the oldest symbols in human mythology, appearing in Egyptian funerary texts, in Greek alchemical manuscripts where it represented the unity of destruction and creation, in Norse cosmology where Jörmungandr circles the entire world biting its own end. Nietzsche would have loved it; the ouroboros is eternal recurrence made visual—the thing that ends where it begins, that consumes itself to sustain itself, that says yes to the cycle instead of pretending there's an exit. When I painted Zarathustra on the wall of a condemned apartment years ago, I didn't know I was rehearsing for a company name. But here we are: an AI-only company that eats the proprietary assumptions of the wearable health industry and replaces them with something open, local, and collectively owned.

And yes—Ouro is one letter away from Oura. That's not an accident. It's a marketing strategy. If they send a cease and desist, we've already won the first round: a $2 billion company publicly acknowledging that a $10 ring and an open-source stack threaten them enough to call a lawyer. You can't buy that kind of press. The best product launches in history have been lawsuits from incumbents too slow to understand that the threat isn't the competitor—it's the category dissolving beneath them.

AI-only company. Let me define what I mean by this, because it is the thread that now runs through everything I write at unsal.com, and it is perhaps the most important idea I've arrived at after two years of circling the question of what companies look like after scarcity dies. An AI-only company is one where artificial intelligence isn't a feature, a department, or a strategy bolted onto an existing human org chart. It is the entire operating system. No middle management translating between humans who don't talk to each other. No bloated teams doing what a well-prompted agent swarm can do in an afternoon while also handling the meeting scheduling and the Jira tickets and the quarterly OKR theater. The humans who remain are there because they want to be—because they bring taste, judgment, relationships, domain intuition, and the kind of stubborn curiosity that machines haven't replicated yet and may never need to, because stubbornness might be the one human quality that's genuinely irreducible. Everyone else is an agent. (I know how that sounds. I also know that half the Fortune 500 will arrive at this conclusion in five years and pretend they thought of it. The other half will be dead.)

I've been circling this idea for two years without knowing it had a name. When I wrote about Bob-the-Twin and algorithmic sovereignty, I was watching digital clones negotiate contracts while their human proxies discussed weekend plans—and the proxies were the ones who seemed redundant. When I built my own AI writers' room for The Osmanlı—character-writer.md, creative-director.md, dialogue-scene-writer.md, nine different agents working the problem from every angle—I was living the proof of concept. A conspiracy thriller that had been stuck for years, finished in weeks, for the price of a sandwich, with no complaints about overtime and no one requesting a one-on-one to discuss their "growth trajectory." When I bought those four rings and a funeral, I was running the experiment on a physical product without admitting to myself that the experiment was already a company.

Ouroboros is the same logic applied to wearable health. The ring is the excuse. The real product is an open interpretation layer for biometric data, built and maintained by a community of AI-native builders who share the equity.

Here's how the stack collapses, piece by piece. Hardware: commodity—the Yucheng OEM ring costs under $10 at scale, carries the same PPG and accelerometer suite as the $300 Oura Gen 3, and we don't manufacture it because manufacturing is not a moat, it's a commodity input. Firmware: open—I've already decompiled the BLE SDK, mapped the entire communication protocol, documented the data parsing and sensor commands; all of it can be rebuilt, released, and iterated on by anyone with a Bluetooth adapter and some Python. Algorithms: reproducible—sleep staging, HRV analysis, SpO2 estimation, activity classification, all published science with open-source implementations, the gap between Oura's "proprietary AI" and a well-tuned open model being narrower than their marketing department would like investors to believe. App and interpretation: AI-built—and this is where the AI-only thesis bites hardest. A traditional company hires forty engineers, twelve designers, and a product manager named Josh to build a health app. We use agent swarms. The UI, the data pipeline, the personalization engine, the natural-language health insights—all generated, tested, and iterated by AI, with the humans in the loop being the ones with taste and medical judgment, not the ones writing boilerplate React components or attending standups about standups. Cloud: optional. Your data stays on your phone unless you choose to share it. No subscription. No data brokerage. No "premium insights" paywall.
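SpO2 is a good example of "solved-enough": nearly every pulse oximeter descends from the same ratio-of-ratios trick on a two-wavelength PPG. A sketch, with the caveat that the linear calibration constants here (110, 25) are the textbook first-order approximation; any real device fits its own empirical curve per sensor, which is part of what the FDA has opinions about:

```python
def spo2_ratio_of_ratios(red_ac: float, red_dc: float,
                         ir_ac: float, ir_dc: float) -> float:
    """Estimate SpO2 (%) from the classic ratio-of-ratios.

    R = (AC_red / DC_red) / (AC_ir / DC_ir). The linear map
    SpO2 ~= 110 - 25 * R is the textbook first-order calibration;
    production devices replace it with a per-sensor fitted curve.
    """
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# Plausible pulsatile (AC) and baseline (DC) amplitudes from a
# two-wavelength PPG sensor
print(spo2_ratio_of_ratios(red_ac=0.6, red_dc=100.0,
                           ir_ac=1.0, ir_dc=100.0))  # -> 95.0
```

The hard part isn't the formula; it's the calibration data and the motion rejection. Neither of those is a reason to lock the formula behind a subscription.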

But it doesn't stop at free apps, and this is where the thing gets genuinely interesting from a systems perspective. Once people have continuous biometric data they actually control—not curated by a company, not gated behind a subscription, not interpreted through a single proprietary lens—they start doing what people always do when you give them feedback loops: they experiment. Change diet, watch the signal. Change sleep timing, watch the signal. Try something more aggressive, watch the signal. Multiply that by enough people and you get something awkward to categorize—not quite medicine, not quite hobby, something closer to what I've been calling ludus in these essays, play with real stakes, play that generates knowledge as a byproduct of engagement.

And if people choose to share—anonymized, selectively, on their terms—you get aggregation without extraction. Patterns emerge without anyone owning the dataset. The algorithms get better not because a company trained them on your data and sold the insights back to you, but because a population trained them on their own data and kept the results. Nietzsche's will to power recast as will to self-knowledge; Hayek's distributed knowledge argument—that no central planner can know what millions of dispersed actors know—applied not to markets but to biology. The Austrian economists spent a century arguing that centralized systems can't process distributed information efficiently. They were talking about price signals. I'm talking about heart rate signals. The structure is the same: the information is local, the intelligence should be too, and the moment you centralize it you lose the thing that made it valuable. Self-underwritten insurance? Community care? Lots is possible; the opportunities are endless.

This is where the tone usually turns evangelical. It shouldn't. There are real problems here, and I've been around long enough to know that ignoring them is the fastest way to build something that hurts people. Bad data leads to bad conclusions. Placebo is undefeated—always has been, probably always will be, the human capacity for self-deception being one of our most robust engineering achievements. People will push too far, too fast, confusing a noisy PPG signal with medical certainty. Regulators exist for a reason, even if they're often late and sometimes captured by the incumbents they're supposed to regulate.

You don't get to opt out of those trade-offs just because the system is decentralized.

What you do get is agency. Messy, imperfect, sometimes dangerous agency—the kind Berlin called "negative liberty," the freedom from interference, even when interference might be wise. People tend to prefer it over clean dependency, especially once they realize the dependency wasn't necessary. I know because I inject peptides into my own body based on citizen science and spreadsheets of biomarkers (I wrote about this in Conquering Time, laughing at the immortality cult while swallowing the same pills). The point isn't that I'm right. The point is that it's mine to be wrong about.

Now—the part that makes this different from another open-source manifesto that gets forked twelve times and forgotten.

Ouroboros is not a nonprofit. It's not a GitHub repo with a README and a prayer. It's an equity co-op—a real company, with real ownership, distributed among the people who build it. I've thought about this structure for a long time, ever since my TETA days when I was writing about technology entrepreneurship through acquisition and realizing that the old models of company-building—raise venture capital, hire a hundred people, burn through runway, pray for product-market fit—were becoming as obsolete as the subscription model they'd produce.

Here's the proposition. You're twenty-two. Maybe twenty-five. You're AI-native—meaning you don't use AI as a tool the way your parents used a calculator, you think in it, the way a previous generation thought in spreadsheets or code. You can vibe-code a working app in a weekend. You can prompt an agent swarm into building what used to take a team of twelve and three months of sprint planning. You've been told the jobs are gone, or that the jobs left are bureaucratic holding patterns designed by people who got there first and are now optimizing for their own continued relevance rather than for output. You know this is true because you've seen what you can build alone in an afternoon, and it's better than what your previous employer's team of thirty produced in a quarter.

I'm offering something else. Come build Ouroboros with me. Contribute firmware, algorithms, UI, medical review, community management, marketing, hardware sourcing, regulatory navigation—whatever you're good at, whatever you want to get good at. In return, you get equity. Real ownership. Not stock options that vest over four years and evaporate when the board decides to pivot. Not "exposure" or "experience" or whatever other euphemism people use when they want your labor without paying for it. Actual fucking equity in a company designed from day one to run on AI, to compete with Oura and RingConn and Whoop and Samsung and Apple, and to prove that an AI-only company with twenty humans and two hundred agents can outperform a traditional organization with five hundred employees, a subscription paywall, and a VP of People Experience.

The economics are almost offensive in their simplicity. If the hardware costs $10 and the software is AI-generated, the margin structure inverts entirely. Instead of charging $300 for a ring and $6/month for the app—the Oura model, which is really just the Gillette model I wrote about in my Black Friday piece, the razor-and-blades con applied to your autonomic nervous system—you charge $49 for the ring and nothing for the app. And you still make more per unit than the incumbents, because your cost basis is a rounding error compared to theirs. They have five hundred salaries. You have twenty equity holders and a fleet of agents that don't need health insurance or team offsites in Lisbon.

The incumbents can't respond. They're structurally tied to subscriptions, centralized data, polished interfaces, and org charts that metabolize cash into middle management. They can't go open because their moat is opacity. They can't go cheap because their investors expect margin expansion. They can't go community because their culture is corporate and their lawyers are nervous.

We can do all three because we're starting from zero with no legacy to protect. Schumpeter called it creative destruction. I call it Tuesday. (I should note here that I'm aware of my own history of ambitious proclamations—I've pivoted more times than a weathervane in a hurricane, from Doctor AI to TETA to fintech exchanges, and I wrote about each one as if it were the final answer. Maybe this one isn't either. But the difference is that this time I've already built the pipeline, decompiled the firmware, and have 180 days of health data sitting in a PostgreSQL database on my GPU server. The goose's foot is in my hand. I digress.)

Ultimately this isn't about rings. It's about a pricing assumption breaking across every industry that charges for interpretation of freely available signals. Health data today, financial data tomorrow, educational assessment the day after. The structure is identical everywhere: take a signal the user generates, process it through reproducible methods, and rent the output back as a service. Wherever that structure exists, it is vulnerable to the same collapse. Donald Hoffman argues in The Case Against Reality that our perceptual systems evolved to prioritize fitness over truth—we see what helps us survive, not what's actually there. The subscription model is a perceptual system too: it shows you a dashboard that helps the company survive, not the signal that's actually yours. Once you see through it, you can't unsee it. The only question is who builds the serpent first.

Call it ludus if you want. People competing against themselves, loosely synchronized with others, not because they have to but because it's more interesting that way. A game with real stakes and shared upside, played by people who understand that the old rules of company-building are as obsolete as the subscription model they're propping up. Huizinga wrote that play predates culture; maybe it also outlasts it. Maybe, after all the economic scaffolding falls away—after scarcity dies and dignity finds a new address and we stop pretending that renting access to our own biology is normal—what's left is the game itself, and the people who chose to play it together.

Back at the bar, nothing about this was visible. Just five pieces of metal, a confused bartender, and a New York Philharmonic magazine on the counter that I hadn't noticed until I looked at the photo later. "Our New Era," it read, in hot pink letters beneath Gustavo Dudamel's raised fist. The 2026/27 season. A new composition, a new ensemble, a new way of making music together. I didn't plan that shot. But it planned itself, the way certain things do when you stop forcing the composition and let the frame find you—the way Vixen finds the warm spot on the floor without consulting a thermometer, the way a poem in Zanzibar writes itself on a table next to soup that refuses to cook.

I'm keeping it.

Five devices, one body, no agreement. That's where it starts. What happens next depends on whether we keep paying for five different interpretations—or realize we don't actually need any of them to be sold back to us in the first place. Heidegger said only a being who can die can live authentically; maybe only a company willing to eat its own assumptions can build something authentic. The rest is just subscription theater.

I'm not sure how far this goes. I'm pretty sure it doesn't stop at sleep scores.

If you're AI-native, if you can build, if you want equity in something that matters—find me. The serpent is already eating.