The Recursive Revolution: How Enterprises Are Using LLMs to Demolish Their Own Monoliths

There is a quiet paradox running through the boardrooms of every major bank, insurer, and government agency right now. Executives have spent the better part of three years hearing that AI will transform their operations — and they believe it. What they keep running into is a problem that no vendor slide deck adequately prepares them for: you cannot bolt AI onto a monolith and call it transformation. The legacy architecture that underpins most enterprise organizations wasn’t designed for the event-driven, API-first, low-latency world that modern AI demands.

The conventional answer has always been a multi-year migration. Rip out the COBOL. Decompose the monolith. Rebuild on cloud-native infrastructure. Budget three to seven years and brace for cost overruns. But something unexpected is happening in the most pragmatic corners of the enterprise world: organizations are using AI to dismantle the very systems that are blocking AI adoption. It is a self-referential modernization loop — and it is rewriting what a migration project looks like.

The Paradox at the Core

Legacy systems are not just technically inconvenient. They are epistemically opaque. Decades of accumulated business logic, undocumented workarounds, and tribal knowledge are encoded in systems that no single living engineer fully understands. A typical COBOL mainframe at a Tier 1 bank may contain 30 to 50 million lines of code, much of it written by developers who retired in the 1990s. The documentation, where it exists, is out of date. The institutional memory has evaporated.

This opacity is precisely what has made legacy modernization so expensive and so risky. You cannot refactor what you cannot understand. And you cannot adopt AI at scale — with its demands for clean data pipelines, real-time inference, and modular service boundaries — until you have modernized.

Enter the recursive solution.

What AI-Assisted Legacy Analysis Actually Looks Like

A new category of tooling has emerged that applies large language models directly to the problem of legacy comprehension. In practice, this means several distinct workstreams running in parallel:

  • COBOL decomposition: LLMs trained or fine-tuned on legacy codebases can read COBOL, PL/I, and RPG programs and produce plain-language summaries of what each module does, what data it touches, and what downstream systems depend on it. What once required a specialist contractor spending six months reading code now takes days.
  • Service boundary mapping: By ingesting system logs, database schemas, and API call graphs, AI tools can identify natural seams in a monolith — places where the system already behaves like loosely coupled services, even if it wasn’t designed that way. These seams become the basis for a strangler-fig decomposition strategy.
  • Log-driven refactoring: Runtime logs are a goldmine of behavioral information that legacy documentation never captured. LLMs can parse millions of log entries to reconstruct execution paths, identify dead code, and flag undocumented edge cases before a single line of new code is written.
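To make the log-driven workstream concrete, here is a minimal sketch of one of its simplest payoffs: flagging dead-code candidates by comparing a static module inventory against what actually runs. The log format, module names, and inventory are illustrative assumptions, not drawn from any real mainframe; in practice the same idea runs over millions of entries, with an LLM layered on top to interpret the paths it surfaces.

```python
import re
from collections import Counter

# Assumed (hypothetical) log format: "timestamp PROGRAM-ID event".
SAMPLE_LOGS = """\
2024-03-01T02:10:04 ACCTPOST ENTER
2024-03-01T02:10:05 ACCTPOST EXIT
2024-03-01T02:11:17 FEESCALC ENTER
2024-03-01T02:11:18 FEESCALC EXIT
2024-03-01T02:12:01 ACCTPOST ENTER
"""

# Modules known from a static inventory of the codebase (assumed names).
KNOWN_MODULES = {"ACCTPOST", "FEESCALC", "LEGACYRPT"}

ENTER_LINE = re.compile(r"^\S+\s+(\S+)\s+ENTER$")

def observed_modules(log_text: str) -> Counter:
    """Count how often each module is entered at runtime."""
    counts = Counter()
    for line in log_text.splitlines():
        m = ENTER_LINE.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def dead_code_candidates(log_text: str, inventory: set) -> set:
    """Modules in the static inventory that never appear in the logs."""
    return inventory - set(observed_modules(log_text))

print(dead_code_candidates(SAMPLE_LOGS, KNOWN_MODULES))  # {'LEGACYRPT'}
```

A module absent from months of production logs is not proof of dead code (think year-end batch jobs), which is why these candidates feed human review and LLM analysis rather than automatic deletion.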

The output is not just documentation. It is a living map of the legacy system that accelerates every subsequent decision an architect or engineering team needs to make.

Where It Is Already Working: Banking, Insurance, and Government

The sectors with the oldest and most complex legacy estates are, perhaps counterintuitively, the earliest adopters of this recursive approach.

Banking: Several major North American and European banks have begun using LLM-assisted analysis pipelines to decompose core banking platforms. Migration timelines that were scoped at four to six years are being renegotiated. Early adopters report that automated codebase analysis is compressing the discovery and planning phase — historically the longest and most expensive — from 18 months to under three.

Insurance: Policy administration systems built in the 1980s remain the operational backbone of the industry. Insurers are using AI tools to extract the embedded actuarial and underwriting logic from aging platforms, re-expressing it as documented business rules that can be reimplemented in modern environments without losing regulatory compliance.

Government: Public sector agencies, constrained by procurement rules and political risk, have historically been the most conservative modernizers. But the pressure of digital service delivery expectations is forcing the issue. AI-assisted legacy analysis offers a lower-risk entry point: understand first, migrate second.

Across all three sectors, the pattern is consistent. The AI doesn’t do the migration. It does the comprehension — and comprehension has always been the bottleneck.

The Dual-Track Strategy

The most sophisticated enterprises are not simply using AI to accelerate legacy analysis. They are running a deliberate dual-track strategy: AI tools modernize the old stack while engineering teams build the new AI-native target state in parallel.

On one track, LLMs are extracting, documenting, and gradually decomposing legacy systems. On the other, architects are designing the greenfield platform — event-driven, API-first, cloud-native — that the business will ultimately migrate onto. The two tracks converge iteratively, with migrated modules being validated against the AI-generated behavioral documentation of their legacy predecessors.

This approach inverts the traditional migration risk profile. Instead of a single high-stakes cutover at the end of a multi-year project, value is delivered continuously. Business logic is validated continuously. Risk is distributed across many small decisions rather than concentrated in one.
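The mechanism behind those many small decisions is the strangler-fig routing layer mentioned earlier: as each module clears behavioral validation, its traffic is switched to the new implementation while everything else still hits the legacy system. A minimal sketch, with illustrative operation names and handlers standing in for real integrations:

```python
# Hypothetical handlers standing in for calls to the legacy
# mainframe and to the new cloud-native service.
def legacy_handler(operation: str, payload: dict) -> str:
    return f"legacy:{operation}"

def modern_handler(operation: str, payload: dict) -> str:
    return f"modern:{operation}"

# Operations whose migrated modules have passed behavioral validation.
MIGRATED = {"fee-calculation"}

def route(operation: str, payload: dict) -> str:
    """Send migrated operations to the new stack, the rest to legacy."""
    handler = modern_handler if operation in MIGRATED else legacy_handler
    return handler(operation, payload)

print(route("fee-calculation", {}))  # modern:fee-calculation
print(route("statement-print", {}))  # legacy:statement-print
```

Growing the migrated set one entry at a time is what turns a single big-bang cutover into a stream of small, reversible ones.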

What This Means for Enterprise Architects and CTOs

The implications for technology leadership are significant and immediate:

  • The migration toolchain has fundamentally changed. Static analysis, manual code review, and spreadsheet-based dependency mapping are being displaced by AI-native comprehension tools. Teams that build fluency in these tools now will have a structural advantage in execution speed and risk management.
  • The discovery phase is no longer the long pole. Legacy comprehension, once the irreducible bottleneck, can be dramatically accelerated. This changes how migration projects should be scoped, staffed, and funded.
  • AI fluency is now a prerequisite for legacy strategy. CTOs who treat AI purely as a product capability to be added after modernization will find themselves outpaced by competitors who are using AI to achieve modernization faster.

The recursive revolution will not eliminate the hard work of enterprise modernization. Migrating 50 million lines of COBOL is still a formidable engineering challenge. But for the first time, the intelligence required to understand what those lines actually do is no longer the scarcest resource in the room. That changes everything about how the work gets done — and how long it takes.
