AI in Practice · Engineering · Legacy Systems

AI in the Real World: Why Legacy Codebases Are the True Test of AI Adoption

Everyone's talking about building with AI. Almost nobody's talking about what happens when your codebase is older than the AI tools themselves.

Starting from scratch is a different game

When you build something new, even without AI, it’s just easier. You’re the one deciding how the architecture works, what libraries to use, and how everything connects. There’s no history to deal with. AI fits naturally into that because the rules are clear and consistent from day one, and honestly, that’s the version of AI adoption most people are writing about online.

But that’s not the reality for a lot of teams.

The mess that comes with time

When a product has been in production for years, the code comes with baggage. The most recent code might follow solid, modern patterns. But go deeper into the same project, and you’ll find older parts built with a completely different philosophy: different dependencies, different conventions, sometimes a different mental model entirely. Nobody went back to refactor those parts because there was no need. They’ve been running fine in production, and touching them would be a risk without a reward.

People who’ve been working in that code long enough know which parts to follow and which to leave alone. But when AI reads the same codebase, it treats all of it as equally valid.

So what do you actually tell AI to follow?

And that’s the question that catches most teams off guard. You can’t just hand AI the whole repo and say “match the style” because there isn’t one style. There’s a patchwork of decisions made by different people over different years. AI will absorb all of it, the modern approach and the deprecated one sitting three folders away, and it won’t know which one you’d want going forward.

You can ask AI to review the code and flag inconsistencies, and it’ll do a decent job at that. But deciding which pattern becomes the standard? That’s an architectural decision, not something you can automate.
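Once the team has made that architectural call, though, writing it down for the AI is the easy part. Most AI coding assistants read a project-level instructions file (Claude Code looks for a CLAUDE.md, Cursor has rules files, Copilot has custom instructions). A sketch of what that might look like — the folder names and patterns here are hypothetical, purely for illustration:

```markdown
# AI coding conventions (illustrative example — folder names are made up)

## Follow these patterns
- New code lives under `src/modules/` and uses the repository pattern
  with constructor-injected dependencies.
- Errors bubble up as typed exceptions; no silent catch-and-log.

## Do not imitate
- `legacy/billing/` predates our current architecture. It runs fine in
  production and stays untouched — never copy its patterns into new code.
- Anything using the old global config singleton is deprecated.
```

The file itself is trivial. The hard part is the meeting where the team agrees on what goes in the "do not imitate" section.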

What this really comes down to

The teams that actually succeed with AI in mature products won’t be the ones who picked the best tool. They’ll be the ones who did the less exciting work of figuring out their own standards first, deciding what AI should and shouldn’t learn from, and treating AI adoption as a team effort. Because it’s not about one person deciding what’s right. It’s about the whole team learning how to train AI with their feedback, correcting it when it picks up the wrong patterns, and building that discipline together.

That’s what AI adoption looks like when the codebase has history. Not a clean demo on a fresh repo, but a real decision, on a real Monday morning, about which version of your own code gets to be the truth.