The workflow lock-in trap
A while ago I wrote about how AI changed my workflow as a designer. I barely open Figma anymore; I spend most of my time in cmux with Claude instead. The distance between idea and execution has never been shorter, and I genuinely love it.
But something has been nagging at me about this whole “one person with 100 agents can do the work of larger teams” idea. It sounds great. One person, armed with AI, doing the work of many. Efficient. Lean. The future.
But what happens when that single person's AI bill doubles? Triples? What happens when enshittification comes knocking?

Right now we are facing hiring freezes and layoffs across the industry. Some companies, like Meta and Oracle, are planning layoffs to free up cash for infrastructure projects. Others, like Block, are cutting staff, citing a newfound increase in productivity. Both camps point to AI as the justification.
At the same time we have OpenAI burning through cash like it’s nobody’s business. They have already lost billions and are planning to lose more in 2026. Their $200/mo plan is losing them money. Anthropic is in a similar situation. HSBC analysts warn that OpenAI faces a $207 billion funding shortfall through 2030.
Both companies are expected to go public in the second half of 2026. When that happens, public investors will demand profit. We must, of course, think of the short-term profit for the shareholders! That means higher prices. May Habib, CEO of Writer, said it plainly: "These LLM companies are going to go public and they're going to raise prices because they have to."

Launch cheap, build dependency, then extract: that’s the enshittification pattern. I remember when Uber launched. I was living in SF and we could take incredibly cheap trips anywhere in the city; it was sometimes cheaper than public transport. Now it’s… not that. The same goes for Airbnb. What started out as a cheaper alternative to hotels is now sometimes more expensive, with all the fees they’ve introduced.
I think we could all see the enshittification coming, but what happens when we’re locked into these tools? When we’ve reduced headcount based on these tools? The trap now has two jaws.
The first jaw: companies cut headcount, assuming AI keeps productivity up. But the efficiency math assumed stable AI costs. People were let go; institutional knowledge walked out the door.
The second jaw: AI costs rise, and you no longer have the people. You can't easily rehire. The knowledge is gone.
Adding to that, there’s something else happening. Anthropic's own internal research found that engineers using Claude for about 60% of their work reported what they called a “creeping loss of deep craftsmanship.” A randomized trial of junior developers found AI assistants reduced mastery by 17 points. MIT researchers described it as accumulating “cognitive debt.”

The dependency deepens from both sides. Costs go up, and the ability to work without the tools goes down. You're more dependent and less capable at the same time.
Right now “anyone can code,” and a single talented person can do the work of a whole team. But with the rising costs, the energy drain, and the chip shortage, will that future even hold? And what happens if it doesn’t?