Imagine you are running a small company. One line of products requires certain skills and a certain number of working hours per unit. The people you work with have worked on many products before and have acquired a wide range of skills, so that, as customer demand changes, responding to a shift in workload is no problem.
Now imagine you split your teams by skill and turned each team into an independent company of its own, to which you outsource all work requiring that skill.
Would you expect that to work better than your previous setup? Do you think it would be more efficient while remaining equally responsive to changing needs? Would it make good use of resources?
I certainly would not. But that is essentially what the microservices approach claims is the way to go:
> Microservices are a software development technique—a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight. The benefit of decomposing an application into different smaller services is that it improves modularity. This makes the application easier to understand, develop, test, and become more resilient to architecture erosion.
Apart from the fact that this definition is overly packed with feel-good terms, it also turns causality upside down.
Let’s read it in reverse: Yes, good modularity helps preserve architectural integrity and simplifies understanding, developing, and maintaining a solution. But while good modularization helps identify useful service interfaces, having service interfaces as such does not imply good or easily achieved modularization.
In fact, modules or software components are best defined along lines of responsibility for some aspect of the solution, or – and this is really important for this discussion – driven by non-functional requirements in the first place. The ability to identify service interfaces in this mix is mostly a result of the modularization at hand, not the other way around.
Next, the fact that you have identified service interfaces in no way means that it is even remotely useful to distribute them in a loosely coupled fashion (that is, via remote invocation interfaces). In particular, the more fine-grained services are, the harder and less meaningful it becomes to distribute them. Imagine services that rely on services that rely on yet other services.
Any remote interface you introduce comes at a tremendous cost in complexity: you lose transactionality and simple refactoring, while gaining remote invocation performance and security problems, complex deployments, and complex management and monitoring operations.
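To make that cost concrete, here is a minimal sketch (service name and URL are hypothetical) contrasting an in-process module call with the same logic once it sits behind a remote interface. The business logic is identical, but the remote caller must now own timeouts, retries, and the fact that a failed call has an unknown outcome:

```python
import json
import time
import urllib.error
import urllib.request

# In-process: a plain function call. Refactoring tools see it, it either
# returns or raises, and it can participate in a local transaction.
def price_in_process(order_total: float) -> float:
    return order_total * 0.9  # stand-in for some pricing rule

# Remote: the same logic behind a hypothetical HTTP endpoint. The caller
# now handles timeouts, retries with backoff, and ambiguity: did the
# request take effect before the timeout or not?
def price_remote(order_total: float, retries: int = 3) -> float:
    payload = json.dumps({"total": order_total}).encode()
    for attempt in range(retries):
        try:
            req = urllib.request.Request(
                "http://pricing-service.internal/price",  # hypothetical URL
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=2.0) as resp:
                return json.load(resp)["price"]
        except (urllib.error.URLError, TimeoutError):
            time.sleep(2 ** attempt)  # backoff; outcome of the failed call is unknown
    raise RuntimeError("pricing service unavailable after retries")
```

And this sketch still ignores authentication, service discovery, versioned payloads, and monitoring – all of which the in-process variant gets for free.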
The thing is: as with outsourcing of business functions, there can be very good reasons to distribute application functions. Those reasons are, however, never driven by the discovery of some API that qualifies as a service boundary, but almost exclusively by non-functional requirements on components of the solution. For example:
- You want to separate some expensive asynchronous load from the user-facing parts of your application to avoid harming the user experience.
- Your database system is separated from your application server because it requires a single point of data ownership.
- Some function requires specialized hardware, or has license or security restrictions that prevent it from being embedded directly into the application.
- Some parts of your application have much stricter robustness constraints and should be isolated from application failures. And, very prominently:
- Your system integrates with some legacy system that runs on different technology or is not to be touched at all.
- Do not use service interfaces as a driver of modularization – look from a higher level to identify responsibilities.
- Responsibilities drive good modularity, not technological artifacts.
- Avoid the complexity of distributed deployments unless clear non-functional requirements demand it.