Strangler migrations are often approached as an architectural exercise. Teams focus on carving out services, defining boundaries, and reducing coupling step by step. These are all necessary ingredients for modernisation, but they rarely tell the full story of how progress unfolds in practice.
As migrations move from design into execution, data starts to shape both the pace and the complexity of the work. Not because it is poorly designed, but because it represents accumulated operational reality. Data reflects how the organisation has actually functioned over time, including edge cases, reporting needs, and informal dependencies that were never part of an original blueprint.
When modernisation begins, that reality remains active. Systems change, but the data continues to be shared, referenced, and relied upon across processes that cannot simply be paused.
In a target-state architecture, responsibilities are clear. Each service owns its data, boundaries are explicit, and teams can evolve their domains independently. That clarity is usually the reason organisations choose a strangler approach in the first place.
During the transition, however, the legacy system still operates alongside new services. It often assumes direct database access, synchronous reads, and immediate consistency across shared tables. These assumptions are deeply embedded and difficult to remove all at once without disrupting operations.
When a new service takes ownership of part of the data, the system temporarily operates in two modes. The new service is designed around clear ownership and controlled interfaces, while the legacy environment continues to depend on patterns that predate those principles. This coexistence is where data starts to introduce structural tension.
As soon as ownership shifts, several constraints appear simultaneously. The legacy system still needs access to that data to function correctly. The new service must act as the authoritative source for its domain. At the same time, business processes must continue without interruption while both systems coexist.
These constraints create a need for synchronisation. In many cases, they also introduce some degree of eventual consistency. This is not a sign of poor design. It is a consequence of running a transitional architecture that balances continuity with change.
The challenge is not primarily technical complexity. It is ensuring that ownership, expectations, and trade-offs are clearly understood across teams.
A commonly used approach is to let the new service own its database and emit events whenever relevant data changes. A synchronisation process listens to those events and updates the legacy database asynchronously. The legacy system continues to operate, but ownership of the data has effectively shifted.
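The pattern above can be sketched in a few lines. This is a minimal, in-memory illustration, not a production implementation: the service, event, and table names are invented, and a real system would use a message broker and durable consumer in place of the simple queue.

```python
import queue
from dataclasses import dataclass


@dataclass
class CustomerUpdated:
    """Event emitted by the new service whenever its data changes."""
    customer_id: int
    email: str


class CustomerService:
    """New service: the authoritative owner of customer data."""
    def __init__(self, event_bus: queue.Queue):
        self._db = {}          # the service's own database (in-memory here)
        self._bus = event_bus  # stand-in for a real message broker

    def update_email(self, customer_id: int, email: str) -> None:
        self._db[customer_id] = email                        # write to the owned store first
        self._bus.put(CustomerUpdated(customer_id, email))   # then emit the change event


class LegacySyncWorker:
    """Asynchronous consumer that mirrors changes into the legacy database."""
    def __init__(self, event_bus: queue.Queue, legacy_db: dict):
        self._bus = event_bus
        self._legacy_db = legacy_db

    def drain(self) -> None:
        # In production this would run continuously; here we drain once.
        while not self._bus.empty():
            event = self._bus.get()
            self._legacy_db[event.customer_id] = event.email


bus = queue.Queue()
legacy_db = {}
service = CustomerService(bus)

service.update_email(42, "new@example.com")
# Until the worker runs, the legacy database lags behind: eventual consistency.
assert 42 not in legacy_db

LegacySyncWorker(bus, legacy_db).drain()
assert legacy_db[42] == "new@example.com"
```

The window between the emit and the drain is exactly the inconsistency window discussed below: the legacy system briefly reads stale data, but ownership has clearly moved to the new service.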
This pattern allows organisations to respect domain boundaries while keeping existing processes alive. It also makes consistency an explicit design choice rather than an implicit side effect of shared access. At the same time, this approach introduces new considerations that need to be addressed deliberately.
Once updates become asynchronous, there are periods where different systems reflect slightly different states. Reports may lag behind operational reality. User interfaces may briefly show outdated information. Teams may encounter discrepancies that were previously masked by synchronous access.
These effects are often described in technical terms, but their impact is organisational. If expectations are not aligned upfront, confidence in the system can erode, even when it behaves exactly as designed.
Effective teams define consistency requirements per domain. Some capabilities tolerate delay without issue. Others require stronger guarantees. Treating all data uniformly tends to create unnecessary friction and slows down decision-making during the migration.
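One way to make those per-domain requirements concrete is to write them down as an explicit staleness budget and check observed synchronisation lag against it. The domains and budgets below are invented examples; the point is that the numbers are agreed during migration planning rather than discovered in production.

```python
from datetime import timedelta

# Acceptable synchronisation lag per domain, agreed upfront (illustrative values).
STALENESS_BUDGETS = {
    "payments":  timedelta(seconds=0),   # requires strong guarantees
    "inventory": timedelta(minutes=5),   # short delay is acceptable
    "reporting": timedelta(hours=24),    # overnight lag is fine
}


def within_budget(domain: str, observed_lag: timedelta) -> bool:
    """True if the measured sync lag is acceptable for this domain."""
    return observed_lag <= STALENESS_BUDGETS[domain]


assert within_budget("reporting", timedelta(hours=2))
assert not within_budget("payments", timedelta(seconds=30))
```

A table like this also gives monitoring something to alert on: a breach of the budget is an incident, while lag inside the budget is expected behaviour.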
Synchronisation jobs, dual writes, and shared tables are common in transitional architectures. Used consciously, they provide a practical way to move forward without destabilising the business. The risk lies in allowing these mechanisms to become permanent by default. Without clear ownership, documentation, and a plan for removal, temporary solutions quietly solidify into long-term complexity.
Discipline is essential. Data ownership should be explicit from the moment a service is introduced. Transitional mechanisms should be visible and understood, not hidden in background processes. There should also be a clear path for removing synchronisation once the legacy dependency is no longer required.
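Keeping transitional mechanisms visible can be as simple as a registry that records each synchronisation job with an explicit owner, a reason, and a planned removal date, and flags anything that overstays. The entry below is a hypothetical example, assuming a single customer-sync job.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class TransitionalMechanism:
    """A temporary sync job, dual write, or shared table, recorded explicitly."""
    name: str
    owner_team: str
    reason: str
    planned_removal: date


# Illustrative registry entry; real entries would live in version control.
REGISTRY = [
    TransitionalMechanism(
        name="customer-sync-to-legacy",
        owner_team="customer-platform",
        reason="Legacy billing still reads the shared customer table",
        planned_removal=date(2025, 6, 30),
    ),
]


def overdue(registry: list, today: date) -> list:
    """Mechanisms that should already have been removed."""
    return [m for m in registry if today > m.planned_removal]


for m in overdue(REGISTRY, date(2025, 9, 1)):
    print(f"OVERDUE: {m.name} (owner: {m.owner_team})")
```

Reviewing the registry periodically turns "temporary" into a claim that has to be renewed deliberately, rather than a label that fades into permanence.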
A recurring pattern is treating data as something to address after services are in place. By that point, assumptions about access, consistency, and reporting are already embedded in multiple systems. Reversing them becomes increasingly costly.
Data strategy does not require exhaustive upfront modelling. It does require early agreement on core principles. Who owns which data, how changes propagate during the transition, and which inconsistencies are acceptable need to be discussed early, even if the answers evolve over time.
When these principles are absent, teams often continue to make progress on code while data quietly becomes the factor that limits further change.
Strangler migrations work best when they acknowledge the realities of long-lived systems. Years of shared databases and operational shortcuts cannot be undone instantly. What matters is making compromises explicit, intentional, and time-bound.
Modernisation is not about achieving architectural purity during the transition. It is about reducing risk, maintaining continuity, and creating the conditions for long-term improvement.
When data questions start to dominate the conversation, that is not a sign of failure. It is a signal that the migration has reached the layer where structural decisions matter most.