That reaction is almost Pavlovian in modern architecture circles. Even my cat would make a weird face hearing me say that!
We’ve spent a decade preaching the gospel of micro-services, anti-corruption layers, and “never couple to the past.” Yet sometimes a straight-up UPDATE from your shiny new service into that creaky old MySQL instance is exactly the right call.
Why? Because coupling is not evil; it’s a design tool. The key is to balance it. Vlad Khononov’s Balanced Coupling model (I highly recommend his book!) gives us a three-lens checklist (integration strength, distance, and volatility) to decide when “reckless” becomes “pragmatic.”
The real-world dilemma
Your team owns a brand-new payment-reconciliation service. Unfortunately, the authoritative customer ledger sits in a 15-year-old monolith. The table you must update hasn’t changed since the Sarkozy presidency, the DBA team is drowning in tickets, and time-to-market trumps architectural purity.
Classic playbook: build an HTTP façade so the monolith remains an implementation detail.
Pragmatic playbook: wrap the SQL in an adapter inside your hexagonal architecture, ship your feature this sprint, and accept the coupling debt.
Before choosing, run the decision through the three dimensions.
Integration Strength: What are we sharing?
Integration strength measures how much knowledge crosses the boundary. Khononov simplifies the classic “module-coupling” hierarchy into four levels — intrusive, functional, model, contract — ordered from most to least risky.
Intrusive: you reach into another module’s variables or private tables.
Functional: you call its functions (or stored procedures).
Model: you share a business data model.
Contract: you depend only on an integration-specific schema or DTO.
A raw SQL UPDATE on the legacy table is model coupling: both systems share the same persistent representation of a business concept.
If the table is genuinely frozen (no new columns, no semantic shifts), model coupling may be acceptable. Document that assumption loudly.
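To make that concrete, here is a minimal sketch of the shortcut itself, plain JDBC against MySQL. The legacy_ledger table, its customer_id and balance_cents columns, and the connection details are all hypothetical; substitute your monolith’s real schema.

```java
// Minimal sketch of the "model coupling" shortcut: a raw UPDATE into the monolith's table.
// Requires the MySQL Connector/J driver on the classpath; all names here are assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LedgerShortcut {
    public static void main(String[] args) throws Exception {
        String jdbcUrl = "jdbc:mysql://legacy-db:3306/monolith"; // assumed connection details
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "recon_svc", "secret");
             PreparedStatement update = conn.prepareStatement(
                     "UPDATE legacy_ledger SET balance_cents = balance_cents + ? WHERE customer_id = ?")) {
            update.setLong(1, 1_500L);   // amount reconciled, in cents
            update.setLong(2, 42L);      // customer identifier
            update.executeUpdate();      // both systems now share the same persisted model
        }
    }
}
```

Every name in that UPDATE statement is knowledge of the monolith’s internal model now living inside your service; that is exactly the coupling you are signing up for.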
Distance: How far apart are the pieces?
Distance is a sociotechnical axis: the farther two components are (different repos, different deploy pipelines, different teams) the higher the cost of coordinating change.
Direct SQL feels “close” at runtime because everything happens inside one transaction, but operationally the components are worlds apart: your service runs on its own servers, the database sits on bare metal, and release cadences diverge. Every schema change now demands cross-team choreography.
Mitigation:
Adapter pattern: hide the SQL behind a thin, explicit port so that swapping to an API later becomes a local change. Hexagonal architecture to the rescue here (sketched just after this list).
Contract tests: run smoke queries in CI to fail fast if a DBA alters the table.
Shared documentation: treat the table spec like an API spec owned jointly by both teams.
These tactics shorten effective distance, even when physical distance stays large.
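Here is a minimal port-and-adapter sketch of that first mitigation. The names (CustomerLedger, JdbcCustomerLedger, legacy_ledger) are illustrative, not prescribed by Khononov’s model:

```java
// Hexagonal-style sketch: the port is the only thing business code sees,
// and the SQL lives in exactly one adapter class. Names are hypothetical.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// The port: all business code depends on this interface only.
interface CustomerLedger {
    void credit(long customerId, long amountCents);
}

// The adapter: the single place that knows about the legacy table.
final class JdbcCustomerLedger implements CustomerLedger {
    private final DataSource dataSource;

    JdbcCustomerLedger(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void credit(long customerId, long amountCents) {
        String sql = "UPDATE legacy_ledger SET balance_cents = balance_cents + ? WHERE customer_id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setLong(1, amountCents);
            stmt.setLong(2, customerId);
            stmt.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("Legacy ledger update failed", e);
        }
    }
}
```

Business code only ever sees CustomerLedger; the UPDATE statement has one home, which is what keeps the later swap cheap.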
Volatility: How likely is change?
The final lens asks: what’s the probability the thing you’re coupling to will evolve? Volatility is the dimension that turns a risky connection into a non-issue or a ticking bomb.
If the ledger schema is as stable as COBOL, volatility is low, and the cost × risk product is tiny. Conversely, if marketing suddenly wants GDPR fields added every quarter, volatility skyrockets and your shortcut will haunt you.
Operating a “direct-SQL” integration safely
1. Own the table
Treat yourself as co-owner. Add naming conventions, write migration scripts, monitor DDL events.
2. Emit integration events
When your service changes the legacy row, publish a domain event so downstream systems aren’t surprised.
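As a sketch, reusing the CustomerLedger port from the adapter example above and assuming a hypothetical EventPublisher (Kafka, RabbitMQ, or an outbox table written in the same transaction would all do), the service can announce every legacy write:

```java
// Sketch: every write to the legacy row also publishes an integration event.
// EventPublisher and LedgerAdjusted are illustrative names, not a specific library.
interface EventPublisher {
    void publish(Object event);
}

record LedgerAdjusted(long customerId, long amountCents) {}

final class ReconciliationService {
    private final CustomerLedger ledger;      // the port from the earlier sketch
    private final EventPublisher publisher;

    ReconciliationService(CustomerLedger ledger, EventPublisher publisher) {
        this.ledger = ledger;
        this.publisher = publisher;
    }

    void reconcile(long customerId, long amountCents) {
        ledger.credit(customerId, amountCents);                          // update the legacy row
        publisher.publish(new LedgerAdjusted(customerId, amountCents));  // tell downstream systems
    }
}
```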
3. Capture contract tests
A nightly job that checks column existence, types, and nullability gives early warning.
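A sketch of such a check against MySQL’s information_schema; the legacy_ledger / balance_cents names and the expected type are assumptions to adapt to your real schema:

```java
// Minimal nightly contract check: fail loudly if a column the adapter relies on
// disappears, changes type, or becomes nullable. All names here are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LedgerContractCheck {
    public static void main(String[] args) throws Exception {
        String jdbcUrl = "jdbc:mysql://legacy-db:3306/monolith"; // assumed connection details
        String sql = "SELECT data_type, is_nullable FROM information_schema.columns "
                   + "WHERE table_schema = 'monolith' AND table_name = 'legacy_ledger' "
                   + "AND column_name = 'balance_cents'";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "recon_svc", "secret");
             PreparedStatement stmt = conn.prepareStatement(sql);
             ResultSet rs = stmt.executeQuery()) {
            if (!rs.next()) {
                fail("Column balance_cents has disappeared");
            }
            if (!"bigint".equalsIgnoreCase(rs.getString("data_type"))) {
                fail("balance_cents changed type to " + rs.getString("data_type"));
            }
            if (!"NO".equalsIgnoreCase(rs.getString("is_nullable"))) {
                fail("balance_cents became nullable");
            }
        }
    }

    private static void fail(String reason) {
        System.err.println("Contract broken: " + reason);
        System.exit(1); // fail the CI / nightly job loudly
    }
}
```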
4. Plan an escape hatch
Keep the adapter’s public interface crisp. If one day volatility spikes, swap the implementation without rewriting business code.
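If that day comes, a second adapter behind the same CustomerLedger port can talk to a new API instead of the table. The endpoint and JSON payload below are hypothetical:

```java
// Escape-hatch sketch: same port, different implementation, zero changes to business code.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

final class HttpCustomerLedger implements CustomerLedger {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI endpoint;

    HttpCustomerLedger(URI endpoint) {
        this.endpoint = endpoint;
    }

    @Override
    public void credit(long customerId, long amountCents) {
        String body = "{\"customerId\":" + customerId + ",\"amountCents\":" + amountCents + "}";
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        try {
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
            if (response.statusCode() >= 300) {
                throw new IllegalStateException("Ledger API returned " + response.statusCode());
            }
        } catch (Exception e) {
            throw new IllegalStateException("Ledger API call failed", e);
        }
    }
}
```

Swapping from JdbcCustomerLedger to HttpCustomerLedger is then a one-line wiring change; the reconciliation logic never notices.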
Why the anti-corruption layer isn’t “wrong”, just expensive
Layering an HTTP service in front of the database reduces integration strength and distance in one shot. It is objectively cleaner. But every layer carries:
build & run cost
monitoring surface
latency & eventual consistency
another team’s queue to wait in
Balanced Coupling reframes the debate: purity isn’t free; pay only when the risk justifies the cost :)
Takeaways
Coupling is a spectrum, not a sin. Use integration strength, distance, and volatility to place any decision on that spectrum.
Low-volatility + well-encapsulated access = acceptable direct SQL.
Mitigate, don’t moralize. Adapters, contract tests, and shared ownership turn a “quick hack” into a maintainable link.
Revisit periodically. If volatility increases, refactor to a boundary (API, events) before the crisis hits.
Architecture is economics. Sometimes the cheapest, fastest, simplest path is the most responsible one—provided you’ve balanced the coupling.
Happy pragmatic coding :)
Pierre.