Are We Learning the Wrong Things?


There’s an Anthropic study making the rounds about how engineers using AI tools aren’t retaining knowledge the same way. The implication is pretty clear: we’re supposed to be worried about this.

And honestly, I have been worried. When I use AI to blast through work that used to take days, I notice something’s different. I’m not doing the reps. I’m not hitting the same bugs ten times until I develop a sixth sense for them. I’m not building that hard-won intuition that comes from grinding through problems.

But here’s where I keep getting stuck: I can’t figure out if this is actually a problem.

The Old Way of Learning

The traditional path went something like this: you hit a bug or need to learn a framework. You spend hours or days figuring it out. You try things, they break, you try more things. Eventually something clicks. You move on, but now you’ve got this knowledge, both the explicit stuff you learned and the implicit feel for how things work.

Do that enough times and you build expertise. Not just facts, but patterns. An instinct for where bugs hide. A sense of what solutions might work before you try them.

That process is absolutely being short-circuited now. Work that used to take me days takes hours. And I’m talking about doing this properly: code review, testing, the whole thing. Not just vibing code into existence.

How You Use the Tools Matters

But here’s something important: the Anthropic study also shows that how you interact with these tools changes what you retain.

I find myself treating Claude more like a pair programmer than an agent. When it proposes a solution I don’t fully understand, I dig into it. I ask follow-up questions. I push back on things that feel off. I suggest refactorings. I’m not just accepting whatever code comes back and shipping it.

This takes more time than just running with whatever gets generated. But I’m definitely learning from it. I’ve picked up new patterns, discovered language features I didn’t know existed, seen clever solutions that triggered new ways of thinking about problems.

The learning feels different though. I’m going broader, tackling things outside my comfort zone with less hesitation, exploring areas that used to feel like stretches. But I’m losing some depth. I’m not grinding through the same problem space over and over until I know every edge case by feel.

I wonder if this is a generational thing, or at least a seniority thing. More experienced engineers seem more likely to interrogate the code, maybe out of skepticism, maybe out of curiosity when something looks novel. We’ve seen enough code to know when to ask questions.

The Messy Middle

I think we’re in a weird transitional space right now: one foot in the old world where that accumulated knowledge matters, one foot in a new one where maybe it doesn’t.

It feels like we’re atrophying. Like we’re losing something important. But then I think about assembly programmers who spent years building intuition for optimization tricks that literally nobody needs to know anymore. Those skills were real and valuable, right up until they weren’t.

Are we watching the same thing happen again, just at a different level of abstraction?

Where This Might Be Going

I keep coming back to this: the value of learning through repetition is that you recognize patterns when you see them again. You can smell a problem before it happens. You know where to look.

But what if you’re not going to be in those spaces anymore? What if AI handles that level of problem faster than you ever could, whether you’ve seen it before or not?

Maybe the intuition we need isn’t at the implementation level anymore. Maybe it’s moving up, to architecture, to system design, to knowing what to build and how pieces should fit together. The strategic stuff, not the tactical.

I Don’t Have Answers

I’m genuinely uncertain here. Part of me thinks we’re losing something crucial. Part of me thinks we’re just experiencing the discomfort of a major abstraction shift.

What I do notice: the knowledge I built up the old way is making me more effective with these tools. I can review AI-generated code better because I’ve debugged similar patterns before. I can spot architectural issues because I’ve lived with bad architectures.

But is that a permanent advantage, or am I just like the last generation of engineers who needed to know assembly? Will we train engineers differently going forward, focusing on different skills entirely?

There are trade-offs everywhere here. Breadth versus depth. Speed versus understanding. Using these tools thoughtfully versus using them efficiently. And underneath it all, this nagging question: what even matters to learn anymore?

This feels important to figure out, but maybe it’s also something we won’t really understand until we’re through it. We’re writing the playbook as we go.

If you’re feeling this too, like you’re learning differently, or not learning the same things, or not sure what you should be learning, you’re not alone. I don’t think any of us really know yet.