For many years, traditional single-tiered and client/server architectures (practically speaking, a thin client talking to a beefy
server) were the dominant choices for building software applications and platforms. Frankly, for the majority of
projects this worked (and still works) quite well, but the appearance of microservice architecture suddenly put the scary monolith
label on all of that (which many read as legacy).
This is a great example of how hype around a technology can overshadow common sense. There is nothing wrong with
monoliths, and there are numerous success stories to prove that. However, there are indeed limits to how far you can push them. Let
us briefly talk about those limits and outline a couple of key reasons to look towards adopting a microservice architecture.
Like it or not, in many organizations monolith has become a synonym for big ball of mud. Maintenance costs skyrocket,
the growing number of bugs and regressions drags the quality bar down, and the business struggles to deliver new features because
they take developers too long to implement. This may look like a good opportunity to step back and analyze what went
wrong and how it could be addressed. In many cases, splitting the large codebase into a set of cohesive modules (or components)
with well-established APIs (without necessarily changing the packaging model per se) could be the simplest and cheapest solution
possible.
But often you hit scalability issues, both in scaling the software platform and in scaling the engineering organization, which
are difficult to solve within a monolithic architecture. The famous Conway's Law summarizes this pretty well.