Notes on Working with Transactions

Transactional state handling is nothing we think much about anymore. It’s just there – to the extent that, I suspect, many have never thought much about it in the first place.

This post is about some, say, transaction potholes that I ran into at times – fooled by my own misleading intuition. And then… it is a little database transactions 101 that you may have forgotten again.

Recap

When I left university I had not studied database theory. Many math and CS students did not. And yet I have spent most of my professional career working on software solutions that have a relational database system (RDBMS) such as Oracle, Postgres, DB2, MS SQL Server, or even MySQL, if not at their heart, then at least as their spine.

I have also worked on solutions that relied on self-implemented file system storage when an RDBMS could have done the job. I consider that a delusional phase and a waste of time.

There are of course limits, and there are problems where an RDBMS is not suitable – or not available. Where it is, however, it is the one solution to go for because:

  1. There is a rather well-defined and well-standardized storage interface (SQL, Drivers) to store, retrieve, and query your data.
  2. The relational algebra is logically sound, mostly implemented, well-documented, really flexible – and actually proven to be so.
  3. Any popular RDBMS provides an operational environment, comes with professional support options, has backup & recovery methods, etc, etc.
  4. It is transactional!

Let’s talk about the last bullet point. The key feature of transactional behavior is its all-or-nothing promise: Either all your changes will be applied, or none. You will have heard about that one.

This is so important not because it is convenient and saves you some change-compensation work. It is important because normally you have absolutely no idea what changes your code, the code you called, or the code that was called by the code you called actually made! That’s big.
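As a small refresher, here is a minimal JDBC sketch of that all-or-nothing promise – the account table and the transfer scenario are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferExample {

        // Moves an amount between two (hypothetical) accounts – either both
        // updates become visible or neither does.
        static void transfer(String url, long from, long to, long amount) throws SQLException {
            try (Connection con = DriverManager.getConnection(url)) {
                con.setAutoCommit(false); // start an explicit transaction scope
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                    debit.setLong(1, amount);
                    debit.setLong(2, from);
                    debit.executeUpdate();
                    credit.setLong(1, amount);
                    credit.setLong(2, to);
                    credit.executeUpdate();
                    con.commit();       // all changes applied
                } catch (SQLException e) {
                    con.rollback();     // none of the changes applied
                    throw e;
                }
            }
        }
    }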

But what is in that “all or nothing” scope? How do you demarcate transactions?

That depends a bit on what you are implementing. The principle of least surprise is your friend though. There are some simple cases:

  1. User interactions are always a good transaction scope
  2. Processing an event or a service call with a response is a good transaction scope

Or, to cut things short, a good scope is every control flow…

  • … that represents a complete state change in your system’s logic and
  • … that does not take long.

Why should it not take long?

There is hardly a system that executes a single control flow. There will be many concurrent control flows – otherwise nobody would want to use the system, right? And they all share the same database system. The real problem of a long-running transaction is not so much that it holds on to a database connection for a long time, which may or may not be a scarce resource, but that it may prevent other control flows from proceeding by holding on to (pessimistic) locks in your database:

In order to prevent you from creating nonsense updates, an RDBMS can and does provide exclusive access to parts of the stored data – e.g. in the form of a row-level lock when updating a record. There are extremely good reasons for that which you can read up on elsewhere. Point is: It happens.


And as you have effectively no idea what updates are caused by your code – as we learned above – you can be sure that, in the presence of a long-running transaction scheme, your system will run into blocked concurrency situations, will not be responsive, and will not scale well. You do not want that.
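To make the blocking tangible, here is a hedged JDBC sketch – database URL, table, and row are made up – of one control flow holding a row lock that another one then waits for:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LockDemo {
        public static void main(String[] args) throws Exception {
            String url = args[0]; // JDBC URL of some test database (hypothetical)
            try (Connection conA = DriverManager.getConnection(url);
                 Connection conB = DriverManager.getConnection(url)) {
                conA.setAutoCommit(false);
                conB.setAutoCommit(false);

                // Control flow A updates a row and – deliberately – does not commit yet.
                // On most RDBMS this acquires a row-level lock.
                try (Statement s = conA.createStatement()) {
                    s.executeUpdate("UPDATE account SET balance = balance - 10 WHERE id = 42");
                }

                // Control flow B now tries to update the same row. This call blocks
                // until A commits or rolls back (or a lock timeout hits) – exactly
                // what a long-running transaction inflicts on everybody else.
                try (Statement s = conB.createStatement()) {
                    s.executeUpdate("UPDATE account SET balance = balance + 10 WHERE id = 42");
                }

                conA.commit();
                conB.commit();
            }
        }
    }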

Split Transactions

Coming back to transaction demarcation – here is a classic. In Java EE, a long time ago, when it was still called J2EE, there was really no declarative, out-of-the-box transaction demarcation for Web applications. Following the logic above of what makes a good transaction scope, the normal case is however that a single Web application request, at least when representing a user interaction, is a premier candidate for a transaction scope.

In an attempt to map what was thought to be useful for a distributed application, most likely because Enterprise Java Beans (EJB) had been re-purposed from remote objects to local application components (from hell), the proposed model du jour was to capture every Web application user transaction into a method invocation of a so-called Session Bean (EJB) – because those were by default transactional. See e.g. Core J2EE Patterns – Session Facade.

Now imagine that, due to some oversight, you called two of those for a single interaction: Two transactions. If the first invocation completed, a state change would be committed that a failure of the second invocation would not roll back, and “boom!” – you would have an incomplete or even inconsistent state change. Stupid.

To avoid that, it is best to move transaction demarcation as high up as possible – as near to the entry point as your transaction management allows.
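One way to do that, sketched here very roughly and assuming a JTA UserTransaction is available under the standard JNDI name, is to demarcate around the whole request in a servlet filter:

    import java.io.IOException;
    import javax.naming.InitialContext;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.transaction.UserTransaction;

    // Demarcates one transaction per request, right at the entry point.
    // init() and destroy() omitted (they have default implementations in Servlet 4.0+).
    public class TransactionFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            UserTransaction utx;
            try {
                utx = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
                utx.begin();
            } catch (Exception e) {
                throw new ServletException("Could not begin transaction", e);
            }
            boolean done = false;
            try {
                chain.doFilter(req, res); // whatever happens further down the control flow...
                utx.commit();             // ...is committed as one unit
                done = true;
            } catch (Exception e) {
                throw new ServletException(e); // a sketch, not production error handling
            } finally {
                if (!done) {
                    try {
                        utx.rollback();   // ...or discarded as one unit
                    } catch (Exception ignored) {
                        // nothing sensible left to do here
                    }
                }
            }
        }
    }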

Nested Scopes

Sometimes however, you need more than one transaction within one control flow, even though there is no timing constraint. Sometimes you need nested transactions. That is, even before the current transaction terminates, the control flow starts another transaction.

This is needed if some deeper layer, traversed by the current control flow, needs to commit a state change regardless of the outcome of what is happening after. For example, a log needs to be written that an attempt of an interaction was performed.

Seen in code, a nested transaction typically radiates an aura of splendid isolation. But that is deceiving and dangerous. The same problem as for long-running transactions applies here: Your nested transaction may need to wait for a lock. In this case it may need to wait for a lock that was acquired by the very same control flow – a deadlock!
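Here is a hedged sketch of how that can play out, using Spring’s REQUIRES_NEW propagation as one common way to open such an inner transaction – service and table names are invented:

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class InteractionService {

        private final JdbcTemplate jdbc;
        private final AttemptLog attemptLog;

        public InteractionService(JdbcTemplate jdbc, AttemptLog attemptLog) {
            this.jdbc = jdbc;
            this.attemptLog = attemptLog;
        }

        @Transactional // outer transaction of the control flow
        public void process(long interactionId) {
            // acquires a row-level lock on the interaction row, held until commit
            jdbc.update("UPDATE interaction SET state = 'PROCESSING' WHERE id = ?", interactionId);

            // inner transaction: suspends the outer one and runs on its own connection
            attemptLog.recordAttempt(interactionId);
        }
    }

    @Service
    class AttemptLog {

        private final JdbcTemplate jdbc;

        AttemptLog(JdbcTemplate jdbc) {
            this.jdbc = jdbc;
        }

        @Transactional(propagation = Propagation.REQUIRES_NEW) // commits regardless of the outer outcome
        public void recordAttempt(long interactionId) {
            // blocks forever: this row is locked by the suspended outer transaction of
            // the very same control flow, which in turn waits for this method to return.
            jdbc.update("UPDATE interaction SET attempts = attempts + 1 WHERE id = ?", interactionId);
        }
    }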


The need is rare – but the concept is so tempting that it is definitely used much more frequently than justified.

So what about long running state changes?

Unfortunately some applications need to implement state changes that do take long. Some mass update of database objects, some long-running stateful interaction. This is a rich subject in its own right, and it cannot be excluded that there will be some posts on that in this blog.


Applications Create Programming Models

Typically, when we think about applications, we structure them by business functions that will be implemented based on given, quite generic programming models such as, broadly speaking, Web applications and background jobs. These serve as entry points to our internal architecture comprising, say, services and repositories.

Any code base that grows requires modularization along its abstraction boundaries if it wants to stay manageable. Contracts between modules are expressed as APIs. We tend to think about APIs as “access points” to methods that the “API consumer” invokes:


But that’s only the trivial direction. It is typical that some work is delegated to the API consumer to fill in some aspect – for example, streaming some data, handling some event, or consuming some computation result. That is: It is not only the subsystem that implements the API; the consumer does as well:


In the still simple cases, the APIs implemented by the consumer are passed-on callbacks (or event handlers, etc.). This is by far the most prominent method employed by your typical open source library.
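In code, such a passed-on callback is as plain as this sketch – all names are made up:

    // The extension interface the consumer implements.
    interface RowHandler {
        void onRow(String[] columns);
    }

    // The API the subsystem implements; the callback is simply passed along.
    interface ExportApi {
        void streamRows(String query, RowHandler handler);
    }

    // Consumer side: invokes the API and fills in the delegated aspect in place.
    class Consumer {
        void export(ExportApi api) {
            api.streamRows("select name, city from customer",
                    columns -> System.out.println(String.join(";", columns)));
        }
    }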

If callbacks need to be invoked asynchronously, or completely independently from some previous invocation, e.g. as a job activity on some other cluster node, this approach becomes increasingly cumbersome: In order to make sure callbacks can be found, they need to be registered with the implementation before any need to invoke them occurs.

In short: You need to start thinking seriously about how to find and manage callback implementations in your runtime environment. That is, switching to the term extension interfaces, you have a serious case of Extend me maybe….
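What “find and manage” tends to boil down to is some form of registry: implementations are registered under a stable name before – and independently of – any invocation, so that an asynchronous activity, possibly on another node, can look them up later. A rough sketch with invented names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The extension interface defined by the subsystem (hypothetical).
    interface JobCompletionHandler {
        void onCompleted(String jobId, boolean success);
    }

    // A very small registry: implementations are registered by name at startup,
    // long before any job completes – possibly in a different process entirely.
    final class ExtensionRegistry {
        private static final Map<String, JobCompletionHandler> HANDLERS = new ConcurrentHashMap<>();

        static void register(String name, JobCompletionHandler handler) {
            HANDLERS.put(name, handler);
        }

        static JobCompletionHandler lookup(String name) {
            JobCompletionHandler h = HANDLERS.get(name);
            if (h == null) {
                throw new IllegalStateException("No handler registered under " + name);
            }
            return h;
        }
    }

    // The job infrastructure only stores the handler *name* with the job and
    // resolves it when (and where) the callback actually needs to run.
    class JobRunner {
        void finish(String jobId, String handlerName, boolean success) {
            ExtensionRegistry.lookup(handlerName).onCompleted(jobId, success);
        }
    }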


For an API that may be used by a yet undefined number of consumers, other aspects may require rules and documentation, such as:

  • The transactional environment propagated to extension interface invocations.
  • Are there authorization constraints that have to be considered or can be declared by implementors?
  • Are there concurrency/threading considerations to take into account?

And that is where it starts becoming worthy of being called a programming model.

While this looks like just another way of saying callback or extension interface, it is important to consider the weight of responsibility implied by this – better early than later.

For a growing software system, managing extensions in a scalable and robust way is a life saver.

Not considering this in time may well become a deferred death blow.

Z2-environment Version 2.5 Is Out

It took a while, but it finally got done. Version 2.5 of the z2-environment is out. Documentation and samples have been updated and tested.

Here is what version 2.5 was about:

More Flexibility in Component Configuration

A major feature of z2 is to run a system cluster strictly defined by central, version-controlled configuration. As there is no rule without an exception, some configuration is just better defined by the less static, local system runtime environment, such as environment variables or scripted system properties.

To support that better and without programming, component properties may now be expressed by an extensible expression evaluation scheme with built-in JEXL support. Expressions are evaluated upon first load of component descriptors after start or after invalidation at runtime.

Some use-cases are:

  • Seamless branch or URL switching based on environment settings.
  • Dynamic evaluation of database config, in particular remote authentication based on custom evaluators or environment settings.
  • Declarative sharing of configuration across components.
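To give a rough idea of the kind of evaluation involved – this is plain Apache Commons JEXL and not z2’s actual descriptor syntax – a property value could be computed against the local environment like this:

    import org.apache.commons.jexl3.JexlBuilder;
    import org.apache.commons.jexl3.JexlEngine;
    import org.apache.commons.jexl3.JexlExpression;
    import org.apache.commons.jexl3.MapContext;

    public class PropertyEvaluation {
        public static void main(String[] args) {
            JexlEngine jexl = new JexlBuilder().create();

            // A component property whose value is an expression over the local runtime
            // environment – e.g. pick a database URL from an environment variable,
            // falling back to a default (property name and variable are made up).
            JexlExpression expr = jexl.createExpression(
                    "env.DB_URL != null ? env.DB_URL : 'jdbc:postgresql://localhost/dev'");

            MapContext ctx = new MapContext();
            ctx.set("env", System.getenv());

            System.out.println("db.url = " + expr.evaluate(ctx));
        }
    }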

Some aspects, such as dynamic evaluation of searchable properties, were not made dynamic due to the risk of unpredictable behavior. Future work may show that the concept can be extended further though.


Check it out in the documentation.

More Complete in-Container Test Support

Z2 offers a sleek, built-in way of running application-side in-container tests: z2Unit. Previously, the JUnit API had its limits in serializability over the wire – which is essential for z2Unit. JUnit improved greatly in that department, and after the corresponding adaptation of z2Unit, some previously problematic unit test runner combinations (such as z2Unit and parameterized tests) now work smoothly.


Check it out in the documentation.

Better Windows Support

Some very old problems with blanks in path or parameter names finally got fixed. There is a straightforward command line specification syntax for worker processes that is (mostly) backward compatible.

Also, and possibly more importantly, system property propagation from Home to Worker processes is now fully configurable.

Check it out in the documentation.

Better Git Support

Z2 can read directly from Git repositories. Previously, however, only a branch could be specified as content selector. Now any Git ref will do.


Check it out in the documentation.

There is some more. Please check out the version page, if you care.

What’s next:

The plans for 2.6 are still somewhat open. As the work on version 3 will not make it into any production version – namespace changes are too harmful at this time – some useful structural simplifications implemented in 3.0 are being considered, such as:

  • Worker processes as participants in Home process target states (rather than a Home Layout)
  • Introducing a “root” repository that hosts any number of remote or “inline” repositories and so streamlines local core config
  • Supporting a co-located Tomcat Web Container as an alternative to an integrated Jetty Web Container
  • Component templates that provide declarative default configurations and so remove duplication (e.g. any Java component based on Spring + Spring AOP).

Thanks, and good luck in whatever you do that needs to be done right!


A Web App Wrapper or “Why are Web Applications Less Likeable Than Native Applications – Still?”

In between, something completely different.

I use web apps in my daily work. Heavily, if not mostly – except maybe my IDE and the occasional MS Office session. But for reasons that I find not obvious, they are still not on par with native apps. This is not due to a lack of responsiveness or desktop integration. There is very little in terms of user experience that the web apps I use lack. And still – if available, I’d rather choose the native alternative. So…

Why is it that web apps are still not as “likeable” as native apps?

A few weeks ago Mozilla Thunderbird, the friendly companion of many years, finally became unusably slow for me. As MS Outlook is no option for me, I started looking for an alternative that would be fast at startup and while flipping and searching through mail, would run on Linux, AND have a well-working calendar integration. There are numerous promising candidates for the first two requirements. But, strangely enough, it seems that calendar support is a tough problem.

But then, my e-mail as well as my calendar is perfectly accessible via a Web interface. It is just that I do not use it that much – although it is fast, responsive, equally usable on all machines, and obviously OS-independent (and made by Google). Duh!

So why not use that instead of a dedicated native client?

Turns out what really turns me off is that the Web client exposes you to a through and through fleeting user experience:

  • As your desktop gets cluttered with open browser tabs, usually the sensible way out is to close them all. Your important stuff gets closed as well.
  • You use multiple user accounts, but your browser only manages one session at a time.
  • You want to have the right stuff opened at startup – not nothing, not whatever happened to be open last time – and you want to have multiple such configurations.

None of this seems unreasonable. And yet I have not found anything that does just that for me.

Ta da!

As a conclusion I looked into “how to wrap my favorite web apps into a native application”. Not for the first time – but this time with the necessary frustration to see it through. Such a “wrapper” should fix the problems above and otherwise do absolutely nothing beyond what is absolutely required. Here is the result:

https://github.com/ZFabrik/z-front-side

How does it work?

It is based on Electron – that is: It is essentially a scripted Chrome browser. It is very basic and does very little beyond showing a few site buttons, preloading some of them (always the same ones), and allowing itself to be started several times for different “partitions” – which implements the multi-session capability.

I have been using it with two different configurations (shared on all machines) and two partitions (private/work) for a few weeks now, and the five to ten Web apps I use all the time, every day, finally feel completely integrated with the overall desktop experience – just like any other native application.

Feel free to use, enhance, copy whatever.

From Here to the Asteroid Belt (I)

When I came up with the title line, I had a completely different conclusion in mind. It’s a nice line though. In contrast to the conclusion, it stayed.

Oh and by the way: Spring is finally here:

(spring at the office, so much work lately, so little time to enjoy it)

This is one of those “what’s the right tool for the problem” posts. Most, me being no different, try to use tools they know best for essentially any problem at hand. And this is good instinct. It’s what people have always done and obviously they did something right. Knowing a tool well is of great value and typically supersedes in effectiveness the use of a tool that might be more powerful – if used correctly – but that you are not an expert at.

At scale however, when building something more complex or widely distributed, tool choice becomes decisive and intrinsic qualities such as simplicity, reliability, popularity, platform independence, performance, etc. may outweigh the benefits of tool expertise.

What I want to look at specifically is the applicability of a programming and execution platform for deployment scenarios ranging from in-house or SaaS deployments to massively distributed stand-alone applications such as mobile apps or desktop applications.

The latter two form the two endpoints of the custom vs. non-custom development scale and the non-distributed to arbitrarily distributed scale.

The rules are pretty clear:

In-house/SaaS: Complete control. The system is the application is the solution. There is no customization or distribution problem because everything is (essentially) 100% custom and 0% distributed.

Mobile/Desktop: No control over the single instance that is executed somewhere in the wild. Hard to monitor what is going on, minimal to no customization, potentially infinitely many instances in use concurrently.

But what about the places in between? The customized business solutions that drive our economic backbone – from production sites to warehouse solutions, from planning to financials, from team productivity to workflow orchestration?


Let’s say you have an application that is part standard solution (to be used as is) but typically requires non-trivial customization, adaptation, extension to be effectively useful.

What are the options?

Option C: Maintain a code line per instance or customer

That is (still?) a popular method – probably because it is simple to start with and it makes sure the original developer is in complete control.

That is also its downside: It does not scale well into any sort of eco-system and licensing model that includes third parties. For a buyer it means 100% dependency on a supplier that most likely got paid dearly for a customer-specific modification and will ask to be paid again for any further adaptation and extension.

Option P: Build a plugin model on top of a binary platform API

That is the model chosen for browsers and similar applications. It works very well as long as the platform use-case is sufficiently well defined, and the market interesting enough.

It obviously requires investing significantly in feature-rich and stable APIs, as well as in an effective plug-in model, a development approach for plug-ins, and a distribution channel or bundling/installation model.

In essence you build a little operating system for some specific application case – and that is simply not an easy and cheap task to do right.

Option S: Ship (significant) source code and support extension and customization on site

This approach has several strong advantages: You can supply hot fixes and highly special customization with minimal interference. Customization is technically not limited to particular functions or API. There is no extra cost per installation on the provider side compared to Option C.

It assumes however that the ability to change, version, and deploy is built in and that the necessary tools are readily available. As the code life cycle is now managed on-site, some attention needs to be paid to handling it cleanly.

From a consumer’s point of view it reduces dependency and (leaving legal considerations aside) technically enables inclusion of third-party modifications and extensions.

Scenario Determines Tool

In part II we will look at how the three different scenarios above translate into tool approaches. Stay tuned.

 

 

A simple modularization algorithm

Lately I worked on breaking up a module that had grown too big. It had started to feel hard to maintain, and getting oriented in the module’s code felt increasingly cumbersome. As we run tests by module, automated tests triggered by check-ins started taking too long, and as several developers were working on the code of this one module, failing module tests became harder to attribute.

In other words: It was time to think about some module refactoring, some house keeping.

There was a reason, however, that the module had not been broken up already: It had some lurking dependency problems. Breaking it up would mean changing other modules’ dependencies just because – which felt arbitrary – and still there was re-use code to be made accessible to any split-off.

Judging from past experience, this is the typical situation where everybody feels that something needs to be done, but it always turns out to be a little too risky and unpredictable, so that no one really dares. And after all: You can always push things a bit further still.

As that eventually leads to a messed up, stalling code base, and as we are smart enough (or simply small enough?) to acknowledge that, we made the decision to fix it.

Now – I have done this kind of exercise on and off. It has some unpleasantly tiring parts and overall feels a little repetitive. Shouldn’t there be some kind of algorithm to follow?

That is what this post is about:

A simple modularization algorithm

Of course, as you will notice shortly: We cannot magically remove the inherent complexity of the problem. But nevertheless, we can put it into a frame that takes out some of the distracting elements:


Step 1: Group and Classify

It may sound ridiculous, but the very first thing is to understand what is actually provided by the current module’s code. This may not be as obvious as it sounds. If it were clear and easy to grasp, you most probably wouldn’t have ended up in the current mess anyway.

So the job to do is to classify contents into topics and use-cases. E.g.

  • API definitions. Possibly API definitions that can even be split into several APIs
  • Implementation of one or more APIs for independent uses
  • Utility code that exists to support implementations of some API

At this stage, we do not refactor or add abstraction. We only assess content in a way that we end up with a graph of code fragments (a class, a group of classes) and their dependencies. Note: The goal of the exercise is not to get a UML class diagram. Instead we aim for groups that can be named by what they are doing: “API for X”, “Implementation of job Y”, “Helper classes for Z”.

Most likely the result will look ugly. You might find an intermingled mess of some fifty different background service implementations that are all tied together by some shared wiring registry class that wants to know them all. You might find some class hierarchy that is deeply cluttered with business-logic-specific implementation, where extending it further is the only practical way of enhancing the application. Remember: If it was not for any of these, you would not be here.

Our eventual goal is to change and enhance the resulting structure in a way that allows us to form useful compartments and to turn a mess into a scalable module structure:


That’s what step 2 and step 3 are about.

Step 2: Abstract

The second step is the difficult piece of work. Looking at your resulting graph after step one, it should be easy to categorize sub-graphs into one of the following categories:

  1. many of the same kind (e.g. many independent job implementations),
  2. undesirably complex and/or cyclic
  3. a mix of the two

If only the first holds, you are essentially done with step 2. If there is actual unmanageable complexity left – which is why you are here – you now need to start refactoring to get rid of it.

This is the core of the exercise and where you need to apply your design skills. This comes down to applying software design patterns ([1]), using extensibility mechanisms, and API design. The details are well beyond the scope of this post.

After you have completed one such abstraction exercise, repeat step 2 until there are no more cases of categories 2 and 3.

Eventually you should be left with a set of reasonably sized, well-defined code fragments that form a directed acyclic graph of linking dependencies.


(For example, removing the cycle and breaking the one-to-many dependency was achieved by replacing A by a delegation interface A’ and some lookup or registration mechanism).
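In Java terms, such a delegation interface plus registration mechanism might look like the following sketch – A, A’, and the fragment names are placeholders:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Before: A had hard references to B, C, D, ... and they back to A – a cycle.
    // After: A only knows the delegation interface A' (here: Contributor), and
    // concrete implementations register themselves via some lookup/registration
    // mechanism.
    interface Contributor {
        void contribute(StringBuilder report);
    }

    final class A {
        private final List<Contributor> contributors = new CopyOnWriteArrayList<>();

        void register(Contributor c) {        // registration mechanism
            contributors.add(c);
        }

        String buildReport() {                // A no longer depends on any concrete user
            StringBuilder report = new StringBuilder();
            for (Contributor c : contributors) {
                c.contribute(report);
            }
            return report.toString();
        }
    }

    final class B implements Contributor {    // B depends on the interface only
        @Override
        public void contribute(StringBuilder report) {
            report.append("B was here\n");
        }
    }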

Step 3: Arrange & Extract

After completing step 2 we are still in one module. Now is the time to split the fragments up into several modules so that we can eventually reap the benefits: less to comprehend at a time, clearer locating of new implementations, a structure that has come back to manageability – provided of course that you did a good job in step 2 (bet you saw that coming). This post is not about general strategies and benefits of modularization. But there are plenty in this blog (see below) and elsewhere.

Given our graph of fragments from step 2, make sure it is topologically ordered in the direction of linking dependency (in the example from upper left to lower right).
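If the graph is too large to order by eye, the ordering itself is purely mechanical. A small sketch of Kahn’s algorithm over invented fragment names, assuming the map records, per fragment, the fragments it links against:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    final class TopologicalOrder {

        // dependencies: fragment -> fragments it links against. Returns an order in
        // which no fragment comes before something it depends on – or fails if a
        // cycle is still left (back to step 2!).
        static List<String> order(Map<String, Set<String>> dependencies) {
            Map<String, Integer> remaining = new HashMap<>();       // unresolved deps per fragment
            Map<String, List<String>> dependents = new HashMap<>();
            for (Map.Entry<String, Set<String>> e : dependencies.entrySet()) {
                remaining.put(e.getKey(), e.getValue().size());
                for (String dep : e.getValue()) {
                    dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
                    remaining.putIfAbsent(dep, 0);
                }
            }
            Deque<String> ready = new ArrayDeque<>();
            remaining.forEach((f, n) -> { if (n == 0) ready.add(f); });

            List<String> order = new ArrayList<>();
            while (!ready.isEmpty()) {
                String f = ready.remove();
                order.add(f);
                for (String d : dependents.getOrDefault(f, List.of())) {
                    if (remaining.merge(d, -1, Integer::sum) == 0) {
                        ready.add(d);
                    }
                }
            }
            if (order.size() < remaining.size()) {
                throw new IllegalStateException("Cycle left in the fragment graph");
            }
            return order; // e.g. [util, api, impl] when impl -> api -> util
        }
    }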

Now start extracting graph nodes into modules. Typically this is easy, as most of the naming and abstraction effort was done in the previous steps. Also, when starting you probably had other constraints in mind, like extensibility patterns or different life-cycle constraints – e.g. some feature not being part of one deployment while being in another. These all play into the effort of extraction.

The nice thing is: At this time, having the graph chart at hand, the grouping can be easily altered again.

Repeat this effort until done:


Enjoy!

References

  1. Software Design Patterns (Wikipedia)
  2. Directed Acyclic Graphs (Wikipedia)
  3. Dependency Management for Modular Applications
  4. Modularization is more than cutting it into pieces
  5. Extend me maybe…

This is not OO!


Once in a while I am part of a discussion, typically more of a complaint, that a certain design is not OO. OO as in object oriented.

While that is sometimes not a well-thought-through statement anyway, there is something to it. I don’t have a problem with that though, and I am not sure why anybody would – as if OO were something valuable without justification.

Of course it is not. And many have written about it.

In the field that I am mostly concerned with, data-driven applications, if not “the same old database apps”, I would go as far as saying: Of course it is not OO. The whole problem is so not OO – no wonder that the application design does not breathe OO.

Unlike a desktop application, where what you expect to interact with as a user translates, at least on a naive level, rather naturally into an object-oriented design, this is the case with data-driven applications only on a very, very abstract level.

That requires some clarification. The term data-driven application does not make sense in the singular form. There are always data-driven applications – plural. Otherwise you would hardly mention the “data” in the term. As the term suggests, the assumption is that there is a highly semantic, well-defined (not necessarily as in good) data model that makes sense in its own right. And there are many applications that make sense of it, provide ways of modification, and analyse or cross-connect it with other data and applications.

It is not far from here to a classic:

Data outlive applications

Or as a corollary:

Data is not bound to its methods, methods are bound to their data.

But that’s what OO in data-driven applications tends to do: It not only ties methods to data – it also tends to tie data to methods. As, for example, in OR-mapping – if you want to call that OO.

That is of course nonsensical and would undermine all modularization and development scale-out efforts – unless of course it’s the data of essentially a single application. Then, and only then, does it make sense to consider it the state of the objects that in turn describe its methods.

It is still perfectly meaningful to use object orientation as a structuring means offered by a programming language. Objects will typically not represent database-backed state – other than as so-called value objects. But that by no means diminishes the usefulness of object-oriented language features.

Not with the target audience anymore?

Lately, when I listen to talks on new and hot stuff for developers, be it in person or on the internet, I have the feeling that it cannot possibly be me who is being talked to.

It’s not just like having seen the latest web development framework over and over again – it’s that it’s only ever Web frameworks – sort of. It’s the unpleasant feeling that there is a hell of a lot of noise about stuff that matters very little.

This post got triggered when my long-time colleague Georgi told me about Atomist, the new project by Rod Johnson of Spring Framework fame. Atomist is code generation on steroids and will no doubt get a lot of attention in the future. It allows you to create and re-use code transformations to quickly create or alter projects rather than, say, copy-pasting project skeletons.

There may well be a strong market for tools and practices that focus on rapidly starting a project. And that is definitely good for the show. It is really far away from my daily software design and development experience though, where getting started is the least problem.

Problems we work on are the actual business process implementation, code and module structure, extensibility in projects and for specific technology adaptations, data consistency and data scalability, point-in-time recovery, and stuff like “what does it mean to roll this out into production systems that are constantly under load?”.

Not saying that there is no value in frameworks that can do some initial stuff, or even consistently alter some number of projects later (can it? – any better than a normal refactoring?) – but over the lifetime of a project or even product this seems to add comparatively little value.

So is this because it is just so much simpler to build and market a new Web framework than anything else? Is there really so much more demand? Or is this simply a case of the Streetlight effect?

I guess most of the harder problems that show up in the back-ends of growing and heavily used applications cannot be addressed by technology per se – they can only be addressed by solution patterns (see e.g. Martin Fowler’s Patterns of Enterprise Application Architecture) to be adhered to. The back-end is where long-lasting value is created though, far away from the front end. So it should be worthwhile to do something about it.

There is one example of an extremely successful technology that has a solid foundation, some very successful implementations, an impressively long history, and has become the true spine of business computing: The relational database.

Any technology that would standardize and solve problems like application life cycle management, work load management – just to name two – on the level that the relational database model and SQL have standardized data operations should have a golden future.

Links

  1. Atomist
  2. Rod Johnson on Atomist
  3. Streetlight Effect
  4. Martin Fowler’s Patterns of Enterprise Application Architecture

Some more…

* it may as well be micro-services (a technical approach that aims to become the “Web” of the backend). But then look at stuff like this

The Human Factor in Modularization

Here is yet another piece on my favorite subject: keeping big and growing systems manageable. Or, conversely: Why is that so hard, and why does it fail so often?

Why do large projects fail and why is productivity diminishing in large projects?

Admittedly, that is a big question. But here is some piece on humans in that picture.

Modularization – once more

Let’s concentrate on modularization as THE tool to scale software development successfully – and on the lack of it that drives projects into death-march mode and eventually into failure.

In this write-up, modularization is all the organization of software structure above the coding level and below the actual process design: all the organization of artefacts to obtain building blocks for the assembly of solutions that implement processes as desired – in many ways the conceptual or even practical interface between specified processes and their actual implementation. So in short: This is not about any specific modularization approach into modules, packages, bundles, namespaces, or whatever.

I hereby boldly declare that modularization is about

Isolation of declarations and artifacts from the visibility and harmful impact of other declarations, artifacts, and resources;

Sharing of declarations, artifacts, and resources with other declarations, artifacts, and resources in a controlled way;

Leading the extensibility and further development by describing the structure and interplay of declarations, artifacts, and resources in an instructive way.

Depending on the specific toolset, using these mechanisms we craft APIs and implementations and assemble systems from modular building blocks.

If this was only some well-defined engineering craft, we would be done. Obviously this is not the case as so many projects end up as some messy black hole that nobody wants to get near.

The Problem is looking back at you from the Mirror

Getting a complex software system into shape is a task that is performed by a group of people and is hence subject to all the human flaws and failures we see elsewhere – while sometimes leading to one of the greater examples of successful teamwork, assembling something much greater than the sum of its pieces.

I found it appropriate to follow the lead of the deadly sins and divine virtues.

Let’s start by talking about hubris: the lack of respect when confronted with the challenge of growing a system, and the overestimation of one’s abilities to fix structural problems on the go. “That’s not rocket science” has been heard more than once before hitting the wall.

This is followed closely, in rank and time, by symptoms of greed: the unwillingness to invest into structural maintenance. Not so much when things start off, but very much further down the timeline, when restructurings are required to preserve the ability to move on.

Different, but possibly more harmful, is astronaut architecting: creating an abundance of abstraction layers and “too-many-steps-ahead” designs. The modularization gluttony.

Taking pride in designing for unrequested capabilities while avoiding early verticals, showing off platforms and frameworks where solutions and early verticals are expected, is a sign of over-indulgence in modularization lust and of a build-up of vainglory from achievements that can be useful at best but are secondary for value creation.

Now, sticking to a working structure and carefully evolving it for upcoming tasks and challenges requires an ongoing team effort and a practiced consensus. Little is as destructive as team members who work against a commonly established practice out of wrath, resentment, ignorance, or simply sloth.

Modularization efforts fail out of ignorance and incompetence

But they do not need to. If there are human sins increasing the likelihood of failure, virtues should work in the opposite direction.

Every structure is only as good as it is adaptable. A certain blindness for personal taste in style and people may help implement justice towards future requirements and team talent, and so improve development scalability. Offering insulation from harmful influences, a modularized structure can limit the impact of changes that are still to prove their value.

At times it is necessary to restructure larger parts of the code base that are either not up to the latest requirements or have been silently rotting due to unfitness for some time already. It can take enormous courage and discipline to pass through days or weeks of work for a benefit that is not immediate.

Courage is nothing without the prudence to guide it towards the right goal, including the correction of previous errors.

The wise thing, however, is to avoid getting driven too far by the momentum of gratifying design, by subjecting yourself to a general mode of temperance and patience.

 

 

MAY YOUR PROJECTS SUCCEED!

(Pictures by Pieter Brueghel the Elder and others)

 

IT Projects vs. Product Projects

A while back, when I was discussing a project with a potential client, I was amazed at how little willingness there was to invest into analysis and design. Instead of trying to understand the underlying problem space and projecting what would have to happen in the near and mid-term future, the client wanted an immediate solution recipe – something that would simply fix it for now.

What had happened?

I acted like a product developer – the client acted like an IT organization

This made me wonder about the characteristic differences between developing a product and solving problems in an IT organization.

A Question of Attitude

Here’s a little – incomplete – table of attitudes that I find characterize the two mindsets:

IT Organization: Let’s not talk about it too much – make decisions!
Product Organization: Let’s think it through once more.

IT Organization: The next goal counts. No need to solve problems we do not experience today.
Product Organization: We want to solve the “whole” problem – once and forever.

IT Organization: Maintenance and future development is a topic for the next budget round.
Product Organization: Let’s try to build something that has defined and limited maintenance needs and can be developed further with little modification.

IT Organization: If it works for hundreds, it will certainly work perfectly well for billions.
Product Organization: Prepare for scalability challenges early.

IT Organization: We have an ESB? Great, let’s use that!
Product Organization: We do not integrate around something else. We are a “middle” to integrate with.

IT Organization: There is that SAP/ORCL consultant who claims to know how to do it? Let’s pay him to solve it!
Product Organization: We need to have the core know-how that helps us plan for the future.

I have seen these in action more than once. Both points of view are valid and justified: Either you care about keeping something up and running within budget, or you care about addressing a problem space for as long and as effectively as possible. Competing goals.

It gets a little problematic though when applying the one mindset to the other’s setting. Or, say, if you think you are solving an IT problem but actually have a product development problem at hand.

For example, a growing IT organization may discover that some initially simple job of maintaining software client installations on workstations reaches a level where the procedures to follow have effectively turned into a product – a solution to the problem domain – without anybody noticing or paying due attention.

The sad truth is that you cannot build a product without some foresight and a long-lasting mission philosophy. Without growing and cultivating an ever-refined “design story”, any product development effort will end up as the stereotypical big ball of mud.

In the case of that potential client of ours, I am not sure how things worked out. Given their attitude, I guess they simply ploughed on, making only the most obvious decisions – and probably did not get too far.

Conclusion

As an IT organization, make sure not to miss the point when a problem space starts asking for a product development approach – when it will pay off to dedicate resources and planning to beat the day-to-day plumbing effort by securing key field expertise and maintaining a solution with foresight.