
Designing microservices that scale: starting with the database

January 8, 2026

If the Strangler Fig pattern helps you migrate code safely, the database-per-service pattern determines whether your architecture remains viable once the data starts to move.

When organisations decompose a monolith, there is a common shortcut. The code is split into services, but everything continues to talk to the same legacy database. On paper, this looks like progress.

In practice, it creates a distributed monolith: higher complexity, more failure modes, and none of the autonomy that microservices are supposed to deliver.

True decoupling starts with a simple rule: each service owns its own data. No service should depend on another service’s tables or schema.

Why sharing a database quietly undermines your system

Allowing a new service to read directly from a shared legacy database feels pragmatic. It reduces upfront work and avoids difficult migration decisions. But it also reintroduces tight coupling at the most fragile layer of your system.

Schema changes become a coordination problem.

A seemingly harmless table modification can break downstream services instantly. Performance issues propagate across boundaries, as new services compete with the monolith for the same database resources. Scaling becomes constrained, because you can no longer scale a single service’s persistence layer independently.

These issues rarely surface on day one. They emerge gradually, once teams start moving faster and the system is under real load. By then, the architectural debt is already embedded.

One service, one database

The database-per-service pattern enforces a clear boundary: data belonging to a domain can only be accessed through that domain’s service interface.

A customer service persists customer data in its own database. An order service does the same for orders. If the order service needs customer information, it requests it via the customer service API rather than querying the database directly.
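To make that concrete, here is a minimal Go sketch of the order service fetching customer data through the customer service's API instead of reading its tables. The endpoint path and the fields on Customer are illustrative assumptions, not a prescribed contract.

```go
// Sketch: the order service asks the customer service for data over HTTP.
// It never connects to the customer database. Endpoint and fields are
// illustrative.
package orders

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Customer mirrors only the fields the order service actually needs.
type Customer struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// getCustomer calls the customer service's (assumed) /customers/{id} endpoint.
func getCustomer(baseURL, customerID string) (*Customer, error) {
	resp, err := http.Get(fmt.Sprintf("%s/customers/%s", baseURL, customerID))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("customer service returned %d", resp.StatusCode)
	}

	var c Customer
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}
```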

This constraint may feel limiting at first, but it is precisely what creates autonomy. Services can evolve independently, scale independently, and make internal changes without unexpected side effects elsewhere.

Migrating data without stopping the business

Decoupling the databases raises a practical question: how do you move the data out of the monolith without downtime?

In practice, there are two common approaches.

A first step is often dual writes. The legacy system is modified so that every write to the old database is also sent to the new service or its database. This keeps data roughly in sync and allows teams to validate the new path under production load. The trade-off is complexity. Error handling becomes subtle, and inconsistencies are hard to reason about when one write succeeds and the other fails.
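The sketch below shows what that trade-off looks like in code. The store interfaces are hypothetical stand-ins for the legacy persistence layer and the new service's client; the point is the awkward middle state when the first write succeeds and the second does not.

```go
// Sketch of a dual write during migration, with hypothetical interfaces.
// The legacy database stays the source of truth; the second write is
// best-effort and drift has to be repaired out of band.
package migration

import (
	"context"
	"log"
)

type Order struct {
	ID     string
	Amount int
}

// LegacyStore and NewOrderService stand in for the existing persistence
// layer and the new service's API client.
type LegacyStore interface {
	SaveOrder(ctx context.Context, o Order) error
}

type NewOrderService interface {
	CreateOrder(ctx context.Context, o Order) error
}

func DualWriteOrder(ctx context.Context, legacy LegacyStore, next NewOrderService, o Order) error {
	// Write to the legacy database first; a failure here fails the request.
	if err := legacy.SaveOrder(ctx, o); err != nil {
		return err
	}

	// The second write must not fail the user request, but the divergence
	// has to be recorded and reconciled later (retry queue, repair job).
	if err := next.CreateOrder(ctx, o); err != nil {
		log.Printf("dual write drift: order %s not replicated: %v", o.ID, err)
	}
	return nil
}
```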

A more robust approach is change data capture. Instead of touching legacy code, you replicate changes by observing the database’s transaction log. Inserts, updates, and deletes are streamed to the new service automatically. This keeps responsibilities separated and reduces risk, at the cost of additional infrastructure and operational maturity.
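On the consuming side, change data capture usually boils down to projecting a stream of row-change events into the new service's own store. Tools such as Debezium produce these events from the transaction log; the event shape and store interface in this sketch are simplified illustrations rather than any specific tool's format.

```go
// Sketch of applying CDC events to the new service's database. The legacy
// codebase is untouched; a log-based CDC tool emits one event per row change.
package cdc

import "context"

// ChangeEvent is a simplified row-change record.
type ChangeEvent struct {
	Op    string // "insert", "update", "delete"
	Table string
	Key   string
	Row   map[string]string // new row state; nil for deletes
}

type CustomerStore interface {
	Upsert(ctx context.Context, key string, row map[string]string) error
	Delete(ctx context.Context, key string) error
}

// Apply projects one legacy change into the new service's own store.
func Apply(ctx context.Context, store CustomerStore, ev ChangeEvent) error {
	switch ev.Op {
	case "insert", "update":
		return store.Upsert(ctx, ev.Key, ev.Row)
	case "delete":
		return store.Delete(ctx, ev.Key)
	default:
		return nil // ignore unknown operations
	}
}
```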

Neither approach is trivial. What matters is choosing deliberately, with a clear plan for how long the transition phase will last.

Rethinking transactions across services

In a monolithic system, transactions are straightforward. If part of a workflow fails, the entire transaction rolls back. Once data is split across services, that guarantee disappears.

The usual answer is the Saga pattern. Instead of one global transaction, you model a sequence of local transactions. Each step commits independently. If a later step fails, compensating actions are executed to undo the earlier work.

This shifts complexity from the database into the application layer. That is not a drawback; it is an explicit design choice. Business workflows become visible, testable, and adaptable, rather than hidden inside database semantics.
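A minimal orchestration-style sketch makes the shape of a saga visible: each step is a local transaction in one service, paired with a compensating action, and a failure unwinds the completed steps in reverse order. Step names and the orchestrator itself are illustrative, not a specific framework.

```go
// Sketch of an orchestrated saga: local transactions with compensations,
// undone newest-first when a later step fails.
package saga

import "context"

type Step struct {
	Name       string
	Execute    func(ctx context.Context) error
	Compensate func(ctx context.Context) error
}

// Run executes steps in order; on failure it compensates completed steps
// in reverse order and returns the original error.
func Run(ctx context.Context, steps []Step) error {
	completed := make([]Step, 0, len(steps))
	for _, s := range steps {
		if err := s.Execute(ctx); err != nil {
			for i := len(completed) - 1; i >= 0; i-- {
				// Best effort: in a real system, compensation failures
				// need retries or manual intervention.
				_ = completed[i].Compensate(ctx)
			}
			return err
		}
		completed = append(completed, s)
	}
	return nil
}
```

An order workflow might then be expressed as steps such as "reserve inventory", "charge payment", and "confirm order", each with its own compensation, which is exactly the visible, testable business logic the pattern promises.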

Make the hard decisions first

The database-per-service pattern introduces real challenges: data duplication, eventual consistency, and more sophisticated integration logic. But avoiding it only postpones the cost, while compounding the risk.

If the goal of microservices is independent evolution, scalability, and organisational alignment, then data ownership must follow service boundaries. Anything else leads back to tightly coupled systems that are harder to change than the monolith you started with.

If you are in the middle of such a transition, it is worth stepping back and asking whether your data architecture supports the organisation you are trying to build.

That conversation is often more important than the tooling choices that follow.
