The Linda Problem of Distributed Computing

Suppose an important function of your solution is computing the price of a trading good.

What is the more appropriate solution approach:

  a) You develop a software module that implements the pricing computation
  b) You develop a REST server that returns pricing computation results

I am convinced that more than a few developers would intuitively choose b).

Taking a step back and thinking about it some more (waking your lazy “System 2”), it should become clear that choice a) is much stronger. If you need to integrate the pricing computation into a user interface, need a single-process deployment, AND a REST interface, these are all simple adaptations of a). Having b), on the other hand, gives you little toward a). So why choose b)?
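To make this concrete, here is a minimal Python sketch (the module name, function signature, and volume-discount rule are all made up for illustration): the business function lives in a plain module, and the REST server of b) falls out as a thin adapter over it.

```python
# pricing.py - choice a): a plain module, no transport concerns.

def compute_price(good: str, quantity: int, unit_price: float) -> float:
    """Hypothetical pricing rule: flat 10% volume discount from 100 units."""
    discount = 0.10 if quantity >= 100 else 0.0
    return quantity * unit_price * (1.0 - discount)


# rest_adapter.py - choice b), recovered as a thin wrapper around a).
from flask import Flask, jsonify, request

from pricing import compute_price

app = Flask(__name__)

@app.route("/price", methods=["POST"])
def price():
    body = request.get_json()
    result = compute_price(body["good"], body["quantity"], body["unit_price"])
    return jsonify({"price": result})
```

A user interface or a single-process deployment imports compute_price directly; going the other way, extracting a) out of a running b), means untangling the transport layer first.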

This, I believe, is an instance of the “conjunction fallacy”. The fact that b) is more specific, more tangible, more representative of a complete solution to the problem makes it seem more probable to your intuition.

Back to the observation at hand: similar to the teaser example above, I have seen more than one case where business functions got added to an integration tier (e.g. an ESB) without any technological need (such as truly unmodifiable legacy systems). That is an extremely poor choice, considering that remote coupling is harder to maintain and has tremendously more complex security and consistency requirements. Still it happens, because it looks good and substantial on diagrams and fools observers into seeing more meaning than is justified.

Truth is:

Distribution is a function of load characteristics, not of functional separation

(or more generally speaking: Non-functional requirements govern distribution).

The prototypical reason to designate boxes for different purposes is that load characteristics differ significantly and some SLA has to be met (formally or informally). For many applications this does not apply at all. For most of the rest, the difference between “processing a user interaction synchronously” and “performing expensive, long-running background work asynchronously” is all that matters. All the rest is load-balancing.
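As a sketch of that one distinction (the queue, the two-second “job”, and all names here are stand-ins, not a prescribed design): the synchronous path stays cheap and answers immediately, while the expensive work goes to a background worker that can be sized and placed according to its own load profile.

```python
# Split along load characteristics, not along business functions:
# cheap synchronous handling here, expensive asynchronous work over there.
import queue
import threading
import time

work_queue: "queue.Queue[str]" = queue.Queue()

def handle_user_request(order_id: str) -> str:
    """Synchronous path: validate, enqueue, respond right away."""
    work_queue.put(order_id)
    return f"order {order_id} accepted"

def background_worker() -> None:
    """Asynchronous path: long-running work, scaled independently."""
    while True:
        order_id = work_queue.get()
        time.sleep(2)  # stand-in for expensive, long-running work
        print(f"finished background work for order {order_id}")
        work_queue.task_done()

threading.Thread(target=background_worker, daemon=True).start()
print(handle_user_request("42"))
work_queue.join()  # wait for the background work before exiting
```

Whether the worker runs in the same process, on another box, or on ten boxes is then a load-balancing decision, exactly as stated above.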

Before concluding this slightly unstructured post, here’s a similar case:

People love deployment pipelines and configuration management tools that push configuration to servers or run scripts. It definitely gives rise to impressive power-plant-mission-control-style charts. In reality, however, any logical hop (human or machine) between the actual system description (code and config) and the execution environment adds to the problem and should be avoided, as the probability of success decreases exponentially with the number of hops.
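The arithmetic behind that claim, with an assumed (and rather generous) 95% success rate per hop:

```python
# If each hop between source and execution environment succeeds
# independently with probability p, the whole update lands with p ** n.
def update_success_probability(p_per_hop: float, hops: int) -> float:
    return p_per_hop ** hops

for hops in (1, 3, 5, 10):
    print(hops, round(update_success_probability(0.95, hops), 3))
# 1 0.95
# 3 0.857
# 5 0.774
# 10 0.599
```

Ten configurable intermediate steps at 95% each, and four out of ten updates need manual attention.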

In fact:

The cost of a system update is a function of the number of configurable intermediate operations from source to execution

and as an important corollary:

The cost of debugging an execution environment is a function of the number of configurable intermediate operations from source to execution


More on that another time though.

This post was inspired by “Thinking, Fast and Slow” by Daniel Kahneman, which has a lot of eye-opening insights into how our intuitive vs. non-intuitive cognitive processes work. As the back cover says: “Buy it fast. Read it slowly.”
