Let's say it's early 2022. You're just doing your thing: open your code editor, spin up your dev servers, and you start fixing bugs, or making new features, or whatever that day calls for. You type away diligently. The only code you don't write yourself is the little snippets you get from Stack Overflow, Medium, GeeksForGeeks, and maybe documentation. This version of you quite likely knows every single part of the codebase you're working on.
Fast forward to early 2026. You have this magical thing that can write code for you. It can do research for you. Now you can finally try out all those ideas you had but didn't have time to prototype. Now you can debug faster than ever before. You can create different versions of a feature and keep the one you like. Perhaps the most convenient part of all this is that you are not emotionally attached to the code, so you can easily discard it. See where I'm going with this?
The traditional developer is more likely to fall in love with their codebase, which is a good thing. It means the repo won't be filled with slop. They can fine-tune every single part of it to behave exactly how they want, be it speed, UX, efficiency, etc. But then they are most definitely emotionally attached, so it's harder to let go of it, to discard code, even when it might not be serving the codebase well. And that's a huge problem.
Where AI Makes It Easier to Quit
Now, it seems pretty obvious that AI-assisted development solves this issue, right? Because you can more easily discard code ... right? Well, yes, it does. But there's a catch. I have more dead projects now than I ever did, even two years ago. It's precisely because you can let go of code so easily that you can let go of ideas just as easily. Maybe the AI agent misunderstood your idea and built it the wrong way. Or you ran into hurdles much faster and gave up. Or you exhausted your usage plan, and since you have no idea how the prototype even works, you give up.
And it's not just full-on dead projects; abandoned feature branches count too. Lately, I check out a new branch to add something to an app, and more often than not, I end up discarding it. Something more pressing comes along, and my thought process goes, "I can quickly take care of this. It seems much simpler and more urgent than what I was working on. And Codex can quite easily one-shot this." Then yet another feature or issue comes along, and another, and another. Eventually, you lose touch with the thing you were working on first.
It's a double-edged sword, really. It has as many benefits as it does issues. And as with all things, there is a way to strike the perfect balance. Knowing when to rely on AI agents and when not to helps a lot. Knowing exactly how these agents work gives you a more grounded understanding of their core flaws and strengths. You end up having intuitive knowledge of when and how to use them.
Steer the Agent, Don't Follow It
Take this example: setting up a new TanStack Start, or Next.js, or Angular project. More often than not, the agent will try to go the manual route, creating each individual file and populating it. This creates a wider surface area for errors. You would definitely get better results by just running the commands stated in the documentation yourself. It is simple, straightforward, and is the recommended path.
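To make that concrete, here's roughly what "just run the documented commands" looks like. The project name `my-app` is a placeholder, and flags and versions change over time, so treat this as a sketch and check each framework's docs for the current incantation:

```shell
# Let the official scaffolders generate the boilerplate instead of having
# an agent hand-write each config file:
npx create-next-app@latest my-app            # Next.js

npm install -g @angular/cli                  # Angular CLI, then:
ng new my-app

# TanStack Start has its own quick-start command in its docs; use that
# rather than asking the agent to reconstruct the project layout by hand.
```

One command per framework, and the generated project matches what the docs (and the agent's training data) expect, which also makes the agent's later edits more predictable.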
Then you move on to codebase structure and organisation. If you leave that entirely to Codex or Claude Code or whichever agent you use, you might find it difficult, or at least annoying, to navigate and understand the codebase. These things really like to over-engineer basic stuff, and it takes a keen eye to notice and rectify that. They also have no prior knowledge of your preferences.
Basically, there's a bunch of little quirks you'll observe when using AI agents. But that's the key right there: you need to observe. It's never safe to assume that code produced by these things is correct, or safe. While they might not make syntax errors, they'll definitely make logic mistakes with total confidence, or what you may otherwise call confident bullshit at machine speed. I find that it's generally more beneficial to review what the agents produce. Sounds pretty obvious, right? Yet many people (myself included) tend not to do that. Proper reviewing means going one tiny step at a time: one feature, one bug, one part of a problem at a time. Otherwise you'll end up with a ton of edited files, and reviewing all of that doesn't seem worth the effort. "But you could just ask another agent to review one agent's work, right? After all, these things are so good at criticism, and that would be faster."
Yeah, you definitely could. But that kind of misses the whole point of reviewing an agent's work, doesn't it? Remember, these things are just advanced autocomplete. They are more likely to double down on a stupid decision or throw duct tape on it than to go back and rectify it. This is the classic behaviour of, "You're totally right. There is a simpler way to do that." Remember, you're not just reviewing the agent's output to catch errors; you're doing it to guide and steer it. We all have different preferences when it comes to things like codebase structure, patterns, etc., so there's no one-size-fits-all template. I might prefer a simple if-else block where you might prefer a lookup table. If an agent gives me a lookup table, I'll be irritated. If you get an if-else block, you'll be irritated. You just can't make all humans happy, can you?
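Here's a toy illustration of that exact preference split (hypothetical names, not from any real codebase): two equivalent ways to turn a status code into a display label. Neither is wrong; which one you want an agent to produce is pure taste.

```typescript
// Style 1: if-else chain. Reads top-to-bottom; easy to step through.
function labelIf(status: string): string {
  if (status === "open") return "Open";
  else if (status === "closed") return "Closed";
  else return "Unknown";
}

// Style 2: lookup table. Data and logic separated; easy to extend.
const LABELS: Record<string, string> = {
  open: "Open",
  closed: "Closed",
};

function labelLookup(status: string): string {
  return LABELS[status] ?? "Unknown";
}
```

Both return the same results for every input; an agent that has seen one style in your repo will tend to replicate it, which is exactly the steering effect described below.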
Know the Codebase Anyway
So go one small bit at a time, review what the agent does, and steer it towards what you prefer. And you just might end up falling in love with your codebase again. Not so much to the point where you find it hard to discard code that is of no benefit anymore, but just enough so you know how every part of it works and you can easily trace issues when they come up. This is even more beneficial in the early stages of a project. Establish your preferred patterns and structures from the very beginning. Coz remember, agents are basically advanced autocomplete at their core. So if an agent sees your preferred way of doing things in a codebase, it is more likely to replicate that. And you'll find that you don't even have to steer it as much as the codebase grows, because your preferences are quite evident.
So, should you put in the effort to know a codebase inside out? Absolutely. Please do.
Are you going to ship slower than if you just didn't review anything? Yes, totally.
Are you going to ship better, more maintainable software? Abso-freaking-lutely, yes!