There's a pattern we keep seeing across companies right now. Leadership has bought into AI. There's budget, there's enthusiasm, maybe a few early experiments running. But somewhere between the idea and the outcome, things slow down. The results don't land the way anyone expected.
When you dig into why, it's rarely the AI itself. The models are capable. The tools exist. The problem is almost always what's sitting underneath: systems and data infrastructure that were built for a completely different era of computing. According to McKinsey, as much as 70% of software used by Fortune 500 companies was developed over two decades ago. Across industries and geographies, the picture is remarkably consistent.
That's a harder conversation to have, but it's the right one.
Most enterprise software in use today was built between 2000 and 2015. Solid, functional, purpose-built for the problems of that moment. ERP systems, CRM platforms, data warehouses, internal tools, all designed around the assumption that humans would be the primary consumers of information.
AI changes that assumption completely. Autonomous systems need clean, structured, accessible data. They need APIs that respond in milliseconds, not batch processes that run overnight. They need event-driven architectures, not systems designed around scheduled jobs and manual triggers. When you try to bolt AI onto legacy infrastructure, you're not just adding a new feature, you're asking a system to do something it was never architected for.
The cost of that mismatch shows up in the budget. Enterprises typically spend between 60 and 80% of their IT budgets maintaining legacy systems, leaving 20 to 40% at best for anything new. McKinsey found that for one large bank, 70% of total IT capacity was consumed by legacy maintenance alone. A 2026 Cognizant study of 1,000 Global 2000 executives found that organisations currently allocate 61% of their technology budget to keeping existing systems running. That's not innovation. That's remediation. And it directly starves the investment needed for AI to work properly.
We call this the integration tax. The longer you carry it, the wider the gap grows between what your competitors are building and what your current systems can support.
The word "modernisation" gets used loosely, so to be specific: it's not about replacing everything. Wholesale system replacement is expensive, slow, and disruptive and historically, close to 70% of big-bang rewrites fail to deliver, coming in over budget, behind schedule, or missing the original business logic entirely.
What it actually looks like is a layered approach, leaving the stable core intact while systematically exposing data and functionality through modern interfaces. API layers on top of legacy systems. Event streaming where there were previously static databases. Data pipelines that clean and unify records that were never designed to talk to each other.
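To make that concrete, here's a rough sketch of the API-layer pattern in Python, using a thin FastAPI facade over an untouched legacy table. The database, table, and column names are invented for illustration, not taken from any real system:

```python
# A minimal sketch of an API facade over a legacy database.
# FastAPI and the "legacy_orders" table/columns are illustrative
# choices, not a prescription; any thin HTTP layer works the same way.
import sqlite3

from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    # Read directly from the legacy schema, which stays exactly as it is...
    conn = sqlite3.connect("legacy.db")
    row = conn.execute(
        "SELECT ord_no, cust_cd, ord_dt, amt FROM legacy_orders WHERE ord_no = ?",
        (order_id,),
    ).fetchone()
    conn.close()
    if row is None:
        raise HTTPException(status_code=404, detail="order not found")
    # ...but expose it under clean, consistent names that AI systems
    # (and everything else) can consume without knowing the legacy layout.
    return {
        "order_id": row[0],
        "customer_id": row[1],
        "ordered_at": row[2],
        "amount": row[3],
    }
```

The legacy system never notices the difference; everything downstream gets a modern, millisecond-latency interface.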
The goal is to make your existing systems AI-readable. That's a narrower, more achievable target than a full rewrite, and it delivers value much faster. McKinsey's own research suggests that using AI to assist modernisation efforts can reduce timelines by 40 to 50% and cut costs derived from technical debt by around 40%, which makes programmes that once looked financially unattractive suddenly viable.
One example: a logistics company we worked with had solid operational software. They'd been running it for twelve years, it worked fine, and nobody wanted to touch it. But they wanted predictive maintenance alerts and dynamic route optimisation using AI. The answer wasn't replacing the operational system. It was building a data layer on top of it: extracting the right signals in real time, normalising them, and feeding them into models that could actually reason over them. The core system stayed. The AI capability got built. Total timeline was under six months.
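In simplified form, the pattern looked something like this. All names here are hypothetical, and the real system streamed data rather than calling a single function, but the shape is the same: absorb the legacy quirks once, then hand clean signals to the model.

```python
# A simplified sketch of the data-layer pattern (names hypothetical).
# The legacy system reports temperature in Fahrenheit and hours as
# strings; the data layer's job is to absorb those quirks in one place.
from dataclasses import dataclass

@dataclass
class EngineSignal:
    vehicle_id: str
    engine_hours: float
    avg_temp_c: float

def normalise(raw: dict) -> EngineSignal:
    # Convert the legacy record into a clean, typed signal.
    return EngineSignal(
        vehicle_id=raw["VEH_ID"].strip().upper(),
        engine_hours=float(raw["ENG_HRS"]),
        avg_temp_c=(float(raw["AVG_TMP_F"]) - 32) * 5 / 9,
    )

def maintenance_risk(signal: EngineSignal) -> float:
    # Stand-in for the actual predictive model: a toy heuristic score.
    return min(1.0, signal.engine_hours / 10_000
               + max(0.0, signal.avg_temp_c - 90) / 50)

raw_record = {"VEH_ID": " tr-0042 ", "ENG_HRS": "8450", "AVG_TMP_F": "205.3"}
signal = normalise(raw_record)
print(signal.vehicle_id, round(maintenance_risk(signal), 2))
```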
The single thing that most consistently blocks AI progress in established companies is data quality. Not data volume; most companies have plenty of data. It's the shape and reliability of it.
The numbers bear this out. According to a March 2026 report from Cloudera and Harvard Business Review Analytic Services, only 7% of enterprises say their data is completely ready for AI, and 46% cite data quality as a top obstacle. Gartner has found that poor data categorisation alone can increase AI implementation costs by up to 40%. Meanwhile, a 2025 survey of over 200 business professionals found data quality and availability ranked as the single biggest barrier to AI adoption, ahead of skills gaps and budget constraints.
AI systems are not forgiving of inconsistency the way human analysts are. A human analyst will notice that two systems use different naming conventions and adjust. An automated system won't unless you've explicitly handled that. Duplicate records, missing fields, inconsistent formats, siloed data that was never intended to be joined, all of it creates friction that slows down or breaks AI workflows entirely.
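Here's a toy illustration of that gap, with made-up records: a join a human would fix without thinking silently returns nothing until the mismatch is handled explicitly.

```python
# Toy illustration with made-up records: two systems refer to the same
# customer under different conventions. A naive join silently drops the
# match; an explicit normalisation step recovers it.
crm_records = [{"customer": "ACME Corp.", "tier": "gold"}]
billing_records = [{"cust_name": "Acme Corporation", "balance": 1250.0}]

def normalise_name(name: str) -> str:
    # One place where naming quirks are handled explicitly.
    aliases = {"acme corp.": "acme corporation"}
    key = name.strip().lower()
    return aliases.get(key, key)

# Naive join: zero matches, and no error to tell you anything went wrong.
naive = [(c, b) for c in crm_records for b in billing_records
         if c["customer"] == b["cust_name"]]

# Normalised join: the match comes back.
matched = [(c, b) for c in crm_records for b in billing_records
           if normalise_name(c["customer"]) == normalise_name(b["cust_name"])]

print(len(naive), len(matched))  # 0 1
```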
Fixing this isn't glamorous work. It's data modelling, governance frameworks, pipeline engineering. But it's foundational. Companies that get this right first tend to see dramatically faster returns on their AI investments, because they're not spending half their engineering effort cleaning up before they can do anything useful.
There are a few structural choices that separate companies positioned to scale with AI from those that will keep hitting walls.
The first is moving from monolithic data storage to event-based systems. When everything that happens in your business generates a real-time signal that AI systems can subscribe to, you unlock a fundamentally different class of applications. Reactive systems, not batch ones. This also addresses one of the core compatibility problems between legacy infrastructure and AI, the assumption that data is queried periodically rather than streamed continuously.
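In miniature, the shape looks like this. A production system would use a message broker such as Kafka; the topic name and payload here are invented, and the in-process bus just illustrates the subscribe-and-react pattern.

```python
# A minimal in-process sketch of the event-driven shape. A real system
# would use a broker such as Kafka; topic and payload are invented.
from collections import defaultdict
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    # Every business event is pushed to whoever cares, the moment it
    # happens, with no overnight batch job in between.
    for handler in subscribers[topic]:
        handler(event)

def flag_for_review(event: dict) -> None:
    # Stand-in for an AI consumer reacting in real time.
    if event["amount"] > 10_000:
        print(f"review order {event['order_id']}: unusually large amount")

subscribe("order.created", flag_for_review)
publish("order.created", {"order_id": "A-1009", "amount": 18_500})
```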
The second is investing in an internal data contract framework, essentially a formal agreement between systems about what data looks like, who owns it, and how it changes over time. This sounds procedural, but it's what allows AI systems to trust the data they're consuming. A 2025 survey found that 51% of organisations have already implemented a semantic data layer to standardise results across departments, and those organisations consistently report higher trust in their AI outputs. Without something like this in place, you're building on sand.
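A data contract can be as simple as a shared, enforced schema. Here's a minimal sketch using Pydantic, one common choice in Python (the field names are illustrative): records that honour the contract pass through, and malformed ones fail loudly at the boundary.

```python
# A minimal sketch of a data contract expressed as an enforced schema.
# Pydantic is one common choice; the field names are illustrative.
from pydantic import BaseModel, ValidationError

class CustomerRecord(BaseModel):
    """Contract: what every system agrees a customer record looks like."""
    customer_id: str
    email: str
    lifetime_value: float

# A conforming record passes...
CustomerRecord(customer_id="C-001", email="a@example.com", lifetime_value=320.5)

# ...and a malformed one fails loudly at the boundary, instead of
# silently poisoning every model downstream.
try:
    CustomerRecord(customer_id="C-002", email="b@example.com", lifetime_value="n/a")
except ValidationError as err:
    print(err)
```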
The third, and probably the most underestimated, is observability. Knowing what your systems are doing at any given moment. Traditional monitoring tells you when something breaks. Modern observability tells you why, with enough granularity that AI-driven automation can act on it. This becomes the nervous system for any serious AI operation.
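The practical difference is structured context. A minimal sketch, with invented field names: instead of a prose log line, each event is a machine-readable record that downstream automation can actually reason about.

```python
# Sketch of observability-style structured events (field names invented).
# Instead of a bare "request failed" line, each event carries enough
# context for automation to act on the why, not just the when.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("inference")

def observe(event: str, **context) -> None:
    # Emit machine-readable events rather than prose log lines.
    log.info(json.dumps({"event": event, "ts": time.time(), **context}))

observe(
    "inference.completed",
    model_version="2026-01-rc3",
    latency_ms=184,
    input_source="orders-api",
    confidence=0.42,  # low confidence is itself a signal automation can act on
)
```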
Despite growing investment in AI, a 2025 Accenture study found that more than half of large companies surveyed had yet to scale a truly transformative AI investment. The gap isn't really about access to AI tools. It's about the infrastructure underneath them. Meanwhile, 85% of senior executives in a Cognizant global study said they were concerned their existing technology estate would imperil their ability to integrate AI at all.
The companies we see making real progress are not necessarily the ones who moved fastest or spent the most. They're the ones who were honest about their starting point and built a realistic sequence.
They started with a modernisation audit: understanding which systems are stable but outdated, which data is valuable but inaccessible, and where the biggest structural gaps are. Then they prioritised ruthlessly, identifying the two or three use cases where AI could deliver meaningful business value in six to twelve months, and working backwards from those to understand what infrastructure changes were actually required.
That last part is key. Working backwards from the use case, not forward from the technology. It stops you from over-engineering and keeps the work connected to outcomes the business actually cares about. When modernisation is done well, the return is significant: Kyndryl's 2025 State of Mainframe Modernisation Survey of 500 business and IT leaders found ROI ranging from 288% to 362% depending on approach.
The Cognizant study put it plainly: organisations currently spending 61% of their budget on legacy maintenance plan to bring that down to 27% by 2030. The ones who get there first will have a significant head start on everything that comes next.
If your team is working through any of this, whether it's understanding where your current infrastructure stands or mapping out a modernisation sequence that fits your growth plans, feel free to reach out. It's a conversation worth having early.
Want to learn more about how we approach AI transformation at Itsavirus? Read our AI Transformation Framework whitepaper here.