In Praise of Inefficiency
Gardeners have a practice called hardening off. Before you move a seedling that grew under glass out into a real garden, you put it through a week or two of controlled stress. You take it outside for an hour, then two, then four. You let it feel the sun directly. Some growers run a small fan over their seedling trays. Some brush the tops of the plants every day with a hand or a piece of cardboard. The point is to introduce a little wind, a little load, a little reason for the plant to develop the structure it will need outside.
There is an actual biological term for what is happening: thigmomorphogenesis. The plant senses the mechanical stress and responds by laying down more lignin in its stems. Lignin is the polymer that gives wood its stiffness. A seedling that grew in still air looks beautiful. It is tall and green and uniform. But its stems are thin and its tissue is soft, and the first real breeze will lay it flat. A seedling that has been brushed and blown on for a week looks scrappier, sometimes shorter, sometimes with a little less leaf. It also lives.
The interesting thing about lignin is that you cannot add it after the fact. You cannot rescue a leggy seedling by hardening it off the day before you plant it. The structure has to be there already, built layer by layer in response to a stress the plant could feel but survive. The wind is not damage in the abstract. It is information. It tells the plant what kind of body it needs to grow.
Humans work the same way. The most well-studied version of this is the skeleton. Bone is not a static thing. It is in constant turnover, and the rate at which it lays down new tissue is regulated by load. If you lift heavy weights, your bones get denser. If you sit in a chair for forty years, they get thinner, and at some point you fall and one of them breaks. The same is true of muscle, of tendon, of connective tissue, of the cardiovascular system. Adaptation runs on demand. You do not get the body you might need someday. You get the body your habits are asking for right now.
Going to the gym is, by any short-term measure, deeply inefficient. You drive somewhere. You pick up heavy objects. You put them down. You voluntarily damage your muscle fibers so they will rebuild slightly thicker. You produce nothing. A purely optimizing observer would say you should take the elevator, get the cart pushed to your car, and save the time and the calories for something useful. That observer would be wrong on a longer timescale. The point of the lifting is not the lifting. The point is the body you will have in twenty years, and the work that body will let you do, and the falls it will let you survive.
I have been thinking about this in the context of engineering, and specifically in the context of what AI tools are doing to the way I work and the way the engineers I work with are coming up.
AI has made a lot of things easier in ways I appreciate. I get to skip past boilerplate I have written a thousand times. I get a second set of eyes on a regex at three in the morning. I get a starting point on languages I do not use every day. I have no interest in the position that says good engineers should refuse the tool out of pride. I wrote about the centaur model a few weeks ago and I still think the partnership frame is the right one.
What I am more worried about is the version of this where the tool stops augmenting the engineer and starts replacing the work that built the engineer in the first place.
Reading a stack trace is annoying. It is also where you learn what your runtime is actually doing. Sitting with a bug for two hours before the answer comes is uncomfortable. It is also how your debugging intuition gets built, the kind of intuition that ten years later lets you glance at a system and know which log to open first. Writing a small utility from scratch instead of pulling in a dependency is slower. It is also how you build the mental model that lets you reason about what your dependencies are doing when one of them turns on you.
Every one of those frictions is a place where the seedling feels the wind. Take them away and the engineer grows fast and green and tall. Put them in a real garden, in front of a production incident at two in the morning, and they fold.
The clearest example of why this matters showed up last year. In early 2024, a maintainer who had been contributing to the xz compression library for two years under the name Jia Tan landed a backdoor in the project’s release tarballs. The backdoor targeted sshd. It was discovered, more or less by accident, by Andres Freund, an engineer at Microsoft who noticed his ssh logins were running about half a second slow and decided to find out why. If he had not chased that latency, the backdoor would have shipped into every major Linux distribution.
The vulnerability that made the whole thing possible was not technical. It was human. The original maintainer of xz, Lasse Collin, had been burning out for years, alone, with no funding, maintaining a piece of infrastructure that runs on roughly every Linux server on Earth. Jia Tan spent two years building trust, helped out, was friendly, took on more work, and eventually was given the keys. The intelligence service or whoever was behind that account did not need to break the encryption. They just needed to wait for the human in the loop to get tired enough to hand over the controls.
You can imagine how much worse this gets when an entire generation of maintainers has never built the muscles for the kind of slow, suspicious, paranoid review work that catches a thing like the xz backdoor. If your normal mode is to paste a diff into an assistant and accept the summary, you are not going to spot the version of this that does not trigger any obvious flag. The summary will say “minor build system update.” That is exactly what the attacker wanted the summary to say.
The next wave of this is already starting and it has a clinical-sounding name: data poisoning. The shape of the attack is straightforward. An adversary floods the web with plausible-looking content recommending a malicious package, much of it generated by the same models the defenders use, until the package starts showing up in answers from coding assistants. The model is not lying in any deep sense. It is reporting a statistical truth about its training data. The training data has been engineered. A junior engineer who has never had to evaluate a package on its own merits, who has never read source, who trusts the assistant because the assistant has been right about everything else, will run pip install pwned and not look back.
The defense against this is the same slow, suspicious, paranoid muscle that catches the xz backdoor. It is the engineer who reads the source of a dependency before they take it on. The engineer who notices that the GitHub stars do not match the commit history. The engineer who has, somewhere in their body, the cached experience of being burned once by trusting something they should not have trusted, and who carries that small wariness into every install command for the rest of their career.
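Some of that wariness can even be written down. As a toy sketch only, with field names and thresholds I am inventing for illustration rather than taking from any real registry or tool, here is the shape of the pre-install sanity checks an engineer might script for a candidate dependency:

```python
# Toy dependency sanity check. The metadata dict and the thresholds are
# illustrative assumptions, not a real package-index API; in practice you
# would gather these fields yourself from the index and the repository.

def suspicion_flags(meta: dict) -> list[str]:
    """Return human-readable reasons to slow down before installing."""
    flags = []
    if meta.get("age_days", 0) < 90:
        flags.append("package is less than 90 days old")
    if meta.get("maintainers", 0) <= 1:
        flags.append("single maintainer, no bus factor")
    # A popularity number wildly out of line with development activity is
    # the 'stars do not match the commit history' smell from above.
    if meta.get("stars", 0) > 50 * max(meta.get("commits", 1), 1):
        flags.append("stars do not match commit history")
    if meta.get("has_binary_blobs", False):
        flags.append("release contains opaque binary files")
    return flags

candidate = {
    "age_days": 30,
    "maintainers": 1,
    "stars": 4000,
    "commits": 12,
    "has_binary_blobs": True,
}
for reason in suspicion_flags(candidate):
    print("warning:", reason)
```

The script is not the point, and no script would have caught Jia Tan on its own. The point is that each check encodes one small piece of cached experience, and the engineer who wrote it had to earn that experience first.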
You cannot generate that muscle the day you need it. You have to have already been growing it.
This is the part where the gym analogy stops being an analogy and starts being the actual argument. Resilient systems require inefficiency built in. The inefficiency is the training load. It is the deliberate choice to do something the hard way, not because the hard way is virtuous in itself, but because the hard way is the only way to grow the tissue that holds the system up when something goes wrong.
The harder problem is that almost nothing in the way we run software companies rewards this. A quarterly report cannot see lignin. It can see velocity, story points, time to first commit, and the line on the chart that goes up. An engineer who spends an afternoon reading source instead of accepting the AI’s suggestion looks slower on every metric a manager has access to. An engineer who writes a small helper instead of adding a dependency looks like they are reinventing wheels. The fact that these are exactly the practices that produce the engineer you will desperately want on the team during the next supply chain incident does not show up anywhere on the dashboard until the incident happens, at which point it is too late to grow the muscle.
I do not think the answer is to refuse modern tools. I use them every day and I would not give them back. The answer is to be deliberate about where you put the load. The same way a powerlifter does not lift heavy every day on every movement, but does lift heavy somewhere on most days, an engineer working with AI assistance can choose specific places to keep the wind blowing. Read the source of a new dependency before you adopt it, even when the assistant has told you it is fine. Debug at least some problems all the way down to the bottom yourself, even when the assistant offered a plausible fix on the first pass. Write the occasional small utility from scratch in a language you want to stay sharp in. Pay attention to the moments when you reach for the tool out of laziness rather than leverage, and at least some of the time, do it the hard way instead.
None of this should sit entirely on the individual engineer. When I was in the Marine Corps, physical fitness was not somebody’s personal hobby. It was required, it was scheduled, and it was tested. The Corps did not run PT three mornings a week because it wanted Marines to look good. It ran PT because the institution understood that combat is the wrong moment to find out whether someone can pick up their buddy and carry them out under fire. The body for the bad day gets built before the bad day, on purpose, by the organization, because it is a rare individual who will choose to put themselves through that work voluntarily, every week, for years.
Companies that depend on software running under stress should be thinking about their engineers the same way. The two-in-the-morning incident is the firefight. The supply chain compromise is the firefight. If a team has spent three years optimizing for story points and never reading source, the incident is the wrong time to find out who can still trace a problem to the bottom. The capacity you want during the bad week has to be maintained during the good months, and the maintenance has to be valued and measured and budgeted for, the same way the Corps values PT. That means time on the calendar for the slow work. It means promotion ladders that reward the engineer who chased the weird bug for a day instead of papering over it in twenty minutes. It means leaders who understand that an engineering organization is a body, and a body that never gets loaded is a body that cannot carry anyone.
This is a commitment problem more than a technical one. The payoff is not on this sprint. It is years out. The bone density you build at forty is what keeps you walking at seventy. The lignin you put down this season is what keeps your tomato plant upright when the storm comes through in August. The intuition you build by struggling through hard problems now is what will let you, ten years from now, glance at a pull request and feel the small wrongness that turns out to be the next xz.
We will not get there by accident, and we will not get there if we let convenience be the only thing we optimize for. The systems we are building are too important and they will run for too long. They need the kind of resilience that only comes from load.
While I was working through this set of ideas, I wrote a small album about it. The tracks deal with the human cost of removing friction from the work, the loneliness of the maintainer, the slow erosion of intuition, and the supply chain attacks that are starting to follow. I am not a musician. The music was generated by Suno, with my lyrics and direction. The playlist is here if you want to listen.
Director of Infrastructure Engineering at OpenTeams. I write about infrastructure, open source, and the occasional career reflection. Based in Granada, Spain.