Thoughts on Writing Less Code
I’ve been stewing on this for months now; my brain just won’t stop spinning on it. Over the holidays, something shifted. Maybe people finally had time to mess around with the latest models, or maybe we collectively crossed some quality threshold, but suddenly engineers I respect (people writing the most popular open source projects, writing the books) started publishing their “okay, this is real” moments.
- Antirez (Salvatore Sanfilippo) - Don’t fall into the anti-AI hype - The Redis creator on how AI agents have fundamentally changed programming, including fixing transient Redis test failures and building a pure C BERT inference library in minutes.
- DHH (David Heinemeier Hansson) - Promoting AI agents - The Ruby on Rails and Basecamp creator on giving AI agents a “promotion” from helpers to production-grade contributors.
- Gergely Orosz - Inside a five-year-old startup’s rapid AI makeover - How Craft Docs went all-in on AI agents in January 2026, changing how their entire team works.
- Bill Kennedy (GoingGodotNet) on X - Well-known Go educator
- Ryan Dahl (Rough Sea) on X - Creator of Node.js and Deno
- Uncle Bob Martin on X
There are lots more out there; these are just a few I saved.
I’d been feeling this coming for a while, but watching the world around me arrive at similar conclusions has been validating and honestly a bit surreal. I kept seeing it happen. Skeptical engineers in my orbit would try an agent, watch it do something genuinely impressive, and have that light bulb moment. The floodgates opened.
So here’s where I’m at right now. These are some of the things I’m wrestling with, trying to make sense of while we’re all dealing with the uncomfortable questions in real time.
What Even Is Good Code Anymore?
I’ve never been a code craftsman type. I care about outcomes more than perfect implementations, assuming we’ve got security, performance, and basic maintainability covered. My threshold for “good enough” has always been different from the folks who agonize over every function.
That whole mentality is getting upended right now.
If agents are maintaining these codebases more than humans are, what patterns actually matter? Some things clearly still do. Patterns that help agents work more efficiently will persist. But other instincts we’ve built up? Maybe a large file doesn’t need to be split up anymore. Maybe agents parse things so quickly that our human preferences for organization are just irrelevant.
We’re moving toward spending less time in the minutiae of code. The models still make mistakes (sometimes subtle, hard-to-spot ones), so humans are in the loop for now. But for how long? Is “good code” just code that runs efficiently and securely? Is human maintainability even a priority if agents can rip through a file and explain what’s happening faster than we can read it?
I don’t have answers to these questions yet. But I think about them a lot.
The Unsolved Problems
Here’s the thing: the train has left the station. We’re all on this path whether we like it or not. But let’s not pretend everything is figured out. There are some pretty substantial problems rippling out from all this interesting new capability.
The review bottleneck. Engineers are producing way more code than before. I’m tackling large refactors I always wanted to do but could never justify the time investment. That’s great, except now my team is drowning in PR reviews. Reviewing code takes time. Real review, not just skimming. When you’re producing 5x or 10x more code, someone has to review 5x or 10x more code. PR review agents are getting better, but we’re still figuring this out.
Security surface area. These agents are autonomous. They’re running and can do things we don’t want them to do. There are question marks all over the place about how to control them and prevent nefarious behavior. Whole new attack surfaces exist in the systems we’re building and the tools we’re making that didn’t exist before. And that couples with the review problem, right? If you’re producing tons of code, it’s challenging to catch every little thing. Stuff might slip through the cracks if you’re not careful.
New guardrails needed. We’re discovering (and frankly need to discover) new patterns and ways of thinking about how we build software to put the guardrails up. Testing is a massively useful guardrail here. I’m finding myself writing more tests, partly because I can write them more quickly and easily, but also because they’re really good guardrails when these systems are building things. I probably have higher test coverage now than I would have previously said was reasonable or worth maintaining. But the cost of maintaining those tests is shrinking too, so maybe it’s just a matter of adjusting your mentality to make these agentic workflows work more effectively and safely.
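To make the guardrail idea concrete, here’s a minimal sketch in Python. The function `slugify` is a hypothetical stand-in for any routine an agent might be asked to refactor; the characterization tests below it pin down observable behavior so a rewrite that changes behavior, not just structure, fails immediately.

```python
# Sketch: tests as guardrails for agent-driven refactors.
# `slugify` is an illustrative stand-in, not from any real codebase.
import re

def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Characterization tests: cheap to write now that agents help write them,
# and they fail loudly if a refactor alters behavior.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Multiple   spaces  ") == "multiple-spaces"
assert slugify("Already-Slugged") == "already-slugged"
```

The point isn’t the function; it’s that a dense net of small assertions like these is what lets you accept a large agent-generated diff with some confidence.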
Cost. Agents are expensive. We’re seeing some indicators that costs might go down, but that’s a big open question. So we’re trying to figure out how to use them more efficiently. You’re seeing consolidation around different approaches: MCP servers, agent files, all these attempts to give tools the right context at the right moment (and not too much context) to get better results. Better results more quickly means spending fewer tokens. It’s all a stab at efficiency.
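One way to picture the “right context, not too much context” idea is a simple budgeting pass before handing files to an agent. This is a hypothetical sketch, not any real tool’s API: `token_estimate` uses a rough characters-per-token heuristic, and `pick_context` greedily selects the highest-relevance files that fit under a token budget.

```python
# Hypothetical context-budgeting helper. All names here are illustrative;
# real agent tooling does this selection in more sophisticated ways.

def token_estimate(text: str) -> int:
    # Rough heuristic: about 4 characters per token.
    return max(1, len(text) // 4)

def pick_context(files: dict[str, str], scores: dict[str, float],
                 budget: int) -> list[str]:
    """Greedily select files by relevance score until the budget is spent."""
    chosen, used = [], 0
    for name in sorted(files, key=lambda n: scores.get(n, 0.0), reverse=True):
        cost = token_estimate(files[name])
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

files = {"auth.py": "x" * 400, "util.py": "x" * 4000, "readme.md": "x" * 800}
scores = {"auth.py": 0.9, "readme.md": 0.5, "util.py": 0.2}
print(pick_context(files, scores, budget=300))  # ['auth.py', 'readme.md']
```

Fewer, better-chosen tokens in means fewer tokens burned on wrong turns, which is exactly the efficiency these consolidation efforts are chasing.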
Learning along the way. This one might be the most subtle. Software engineers have always learned as we build. You hit an issue, find a bug, figure out the new thing. You stack that knowledge into your memory and the next time it comes around, you can shortcut to solutions rather than spending time figuring them out again. Those of us who’ve been around a while have accumulated this knowledge.
I don’t know how that works now. These tools are good at helping us understand things. You can ask an agent to explain the solution, how the implementation worked. We’re getting something out of that, but is it as deep? Will it stick? And honestly, does it need to, if you can just describe the problem well and validate that the solution works correctly, efficiently, securely?
So many open questions still.
The Skills That Transfer (And The Ones That Don’t)
Despite all that, I keep hearing this anxiety: “Did I waste 15 years learning to be good at something these tools can now do instantly?”
No. The experienced engineers I see are the most effective with these tools. There’s engineering thinking that comes from years of practice. It helps you guide agents in effective ways.
But I’m also noticing something else. The skills surrounding implementation work matter more now than the implementation itself. Good communicators are thriving because working with an agent is much closer to communicating with a colleague than giving a computer instructions. People who treat it that way get better results.
Builders and problem solvers (people who care more about solving problems effectively than writing the perfect function) are thriving too. And they’ll continue to.
The skills getting more important? Higher-level thinking. Architecture. System design. We’re all becoming software architects to some degree. We’re instructing agents to build things in particular ways. If you can think at that level, you can push these tools much further. You can be more direct about how to stitch pieces together.
Agents do a decent job figuring things out, but the complicated tasks, the ones unique to your specific circumstance, are where we add value. The particular architecture might be solid in general, but maybe it doesn’t work for your use case. That’s where you come in.
This Is An Abstraction Layer
Look, software engineering has been layering on abstractions for decades. This feels like a bigger jump than past ones, but it follows a similar pattern. Maybe we’re arriving at the next abstraction, where understanding the specifics of any particular language matters less than understanding how systems fit together.
The pace is what’s throwing people. It’s forcing us to struggle and adapt faster than usual. There’s a natural hype cycle happening too (vibe coding and all that), which is interesting but different from how professional engineers are actually using these tools day to day.
You’ve got to wade through the hype, but don’t dismiss it all as hype. This is real. The improvements over the last six months alone have been wild, unlike anything I’ve seen in my career.
I’m on the excited side of this. I’m having fun experimenting, moving patterns from “let’s try this” to “this is legitimately useful, let’s use it on our projects.” That constant loop of experimenting, picking out what works, sharing it with teams.
But excited doesn’t mean everything is solved. There are real problems to figure out. Security concerns. Cost questions. New workflows to discover. The ways we learn and grow as engineers might be fundamentally changing.
You don’t have to be on the bleeding edge, but you need to get in the mix. Software engineers aren’t going away. I think it’s more important than ever to understand how everything works. But we need to change our thinking about what “everything” means and how we operate in this new world.