The Human Factor in Modularization

Here is yet another piece on my favorite subject: keeping big and growing systems manageable. Or, conversely, why that is so hard and fails so often.

Why do large projects fail, and why does productivity diminish as projects grow?

Admittedly, that is a big question. But here are some thoughts on the humans in that picture.

Modularization – once more

Let’s concentrate on modularization as THE tool to scale software development successfully – and on the lack of it that drives projects into death march mode and eventually into failure.

In this write-up, modularization means all the organization of structure of software above the coding level and below the actual process design: all the organization of artifacts to obtain building blocks for the assembly of solutions that implement processes as desired. In many ways it is the conceptual or even practical interface between the specified processes to implement and their actual implementation. So in short: This is not about any specific modularization approach into modules, packages, bundles, namespaces, or whatever.

I hereby boldly declare that modularization is about

  • Isolation of declarations and artifacts from visibility to, and harmful impact on, other declarations, artifacts, and resources;
  • Sharing of declarations, artifacts, and resources with other declarations, artifacts, and resources in a controlled way;
  • Guiding extensibility and further development by describing the structure and interplay of declarations, artifacts, and resources in an instructive way.

Depending on the specific toolset, we use these mechanisms to craft APIs and implementations and to assemble systems from modular building blocks – one concrete rendering is sketched below.
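To make this concrete, here is a minimal sketch of how the three mechanisms surface in one specific toolset, the Java Platform Module System (the module and package names are made up for illustration):

module billing.api {
    // Sharing: only this package is exposed to other modules, in a controlled way.
    exports com.example.billing.api;

    // Instructive structure: use relationships are declared explicitly.
    requires java.sql;

    // Isolation: every package that is not exported stays private to this module.
}

Other toolsets – OSGi bundles, build-level modules, namespaces – offer the same three mechanisms with different syntax and granularity.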

If this were only some well-defined engineering craft, we would be done. Obviously that is not the case, as so many projects end up as some messy black hole that nobody wants to get near.

The Problem is looking back at you from the Mirror

The task of getting a complex software system into shape is performed by a group of people and is hence subject to all the human flaws and failures we see elsewhere – though sometimes it leads to one of the greater examples of successful teamwork, assembling something much greater than the sum of its pieces.

I found it appropriate to follow the lead of the deadly sins and divine virtues.

Let’s start by talking about hubris: the lack of respect when confronted with the challenge of growing a system, and the overestimation of one’s ability to fix structural problems on the go. “That’s not rocket science” has been heard more than once before hitting the wall.

This is followed closely in rank and time by symptoms of greed: the unwillingness to invest in structural maintenance. Not so much when things start off, but very much further down the timeline, when restructurings are required to preserve the ability to move on.

Different, but possibly more harmful, is astronaut architecting: creating an abundance of abstraction layers and “too-many-steps-ahead” designs. The modularization gluttony.

Taking pride in designing for unrequested capabilities, showing off platforms and frameworks where solutions and early verticals are expected, is a sign of over-indulgence in modularization lust and of vainglory built up from achievements that may be useful at best but are secondary for value creation.

Now, sticking to a working structure and carefully evolving it for upcoming tasks and challenges requires an ongoing team effort and a practiced consensus. Little is as destructive as team members who work against a commonly established practice out of wrath, resentment, ignorance, or simply sloth.

Modularization efforts fail out of ignorance and incompetence

But it does not have to be that way. If there are human sins that increase the likelihood of failure, virtues should work in the opposite direction.

Every structure is only as good as it is adaptable. A certain blindness to personal taste in style and people may help implement justice towards future requirements and team talent, and so improve development scalability. By offering insulation from harmful influences, a modularized structure can limit the impact of changes that have yet to prove their value.

At times it is necessary to restructure larger parts of the code base that are either not up to the latest requirements or have been silently rotting due to unfitness for some time already. It can take enormous courage and discipline to push through days or weeks of work for a benefit that is not immediate.

Courage is nothing without the prudence to guide it towards the right goal, including the correction of previous errors.

The wise thing, however, is to avoid being driven too far by the momentum of gratifying design – by subjecting yourself to a general mode of temperance and patience.


MAY YOUR PROJECTS SUCCEED!

(Pictures by Pieter Brueghel the Elder and others)


IT Projects vs. Product Projects

A while back, when I was discussing a project with a potential client, I was amazed at how little willingness there was to invest in analysis and design. Instead of trying to understand the underlying problem space and projecting what would have to happen in the near- and mid-term future, the client wanted an immediate solution recipe – something that would simply fix it for now.

What had happened?

I acted like a product developer – the client acted like an IT organization

This made me wonder about the characteristic differences between developing a product and solving problems in an IT organization.

A Question of Attitude

Here’s a little – incomplete – table of attitudes that I find characterize the two mindsets:

| IT Organization | Product Organization |
|---|---|
| Let’s not talk too much – make decisions! | Let’s think it through once more. |
| The next goal counts. No need to solve problems we do not experience today. | We want to solve the “whole” problem – once and forever. |
| Maintenance and future development is a topic for the next budget round. | Let’s try to build something that has defined and limited maintenance needs and can be developed further with little modification. |
| If it works for hundreds, it will certainly work perfectly well for billions. | Prepare for scalability challenges early. |
| We have an ESB? Great, let’s use that! | We do not integrate around something else. We are a “middle” to integrate with. |
| There is that SAP/ORCL consultant who claims to know how to do it? Pay him to solve it! | We need to have the core know-how that helps us plan for the future. |

I have seen these in action more than once. Both points of view are valid and justified: Either you care about keeping something up and running within budget, or you care about addressing a problem space for as long and as effectively as possible. Competing goals.

It gets a little problematic, though, when applying the one mindset to the other’s setting. Or, say, if you think you are solving an IT problem but actually have a product development problem at hand.

For example, a growing IT organization may discover that some initially simple job of maintaining software client installations on workstations reaches a level where the procedures to follow have effectively turned into a product – a solution to the problem domain – without anybody noticing or paying due attention.

The sad truth is that you cannot build a product without some foresight and a long-lasting mission philosophy. Without growing and cultivating an ever-refined “design story”, any product development effort will end up as the stereotypical big ball of mud.

In the case of our potential client, I am not sure how things worked out. Given their attitude, I guess they simply ploughed on, making only the most obvious decisions – and probably did not get too far.

Conclusion

As an IT organization, make sure not to miss the moment when a problem space starts asking for a product development approach – when it will pay off to dedicate resources and planning to beat the day-to-day plumbing effort by securing key field expertise and maintaining a solution with foresight.

Java Modularity – Failing once more?

Like so many others, I have pretty much ignored project Jigsaw for some time now – assuming it would stay irrelevant for my work or slowly fade away and be gone for good. The repeated shifts of its planned inclusion in the JDK seemed to confirm this course. Jigsaw started in 2009 – more than six years ago.

Jigsaw is about establishing a Java Module System deeply integrated with the Java language and core Java runtime specifications. Check out its goals on the project home page. It is important to note the fourth goal:

 

Make it easier for developers to construct and maintain libraries and large applications, for both the Java SE and EE Platforms.

 

Something Missing?

Lately I have run into this mail thread: http://permalink.gmane.org/gmane.comp.java.openjdk.jigsaw/2492

In that mail thread, Jürgen Höller (of Spring fame) notes that in order to map Spring’s module layout to Jigsaw modules, it would be required to support optional dependencies – dependencies that may or may not be satisfied, given the presence of another module at runtime.

This is how Spring supports its set of adapter and support types for numerous other frameworks that you may or may not use in your application: Making use of Java’s late-linking approach, it is possible to expose types that are not usable without the presence of some dependent type but will not create a problem unless you actually start using them. That is, optional dependencies would allow Spring to preserve its way of encapsulating the subject of “Spring support for many other libraries” into one single module (or actually one jar file).
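To illustrate the late-linking effect, here is a minimal sketch (the class name and the Jackson dependency are made-up stand-ins for any optional support type): loading the class below succeeds even when Jackson is absent from the class path, because the JVM only resolves the reference when the method is first executed.

public class OptionalJsonSupport {
    public static Object parse(String json) throws Exception {
        // Resolved lazily: without Jackson on the class path, this line throws
        // NoClassDefFoundError at call time – not when OptionalJsonSupport is loaded.
        return new com.fasterxml.jackson.databind.ObjectMapper().readTree(json);
    }
}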

In case you do not understand the technical problem, it is sufficient to note that anybody who has been anywhere near Java class loading considerations, as well as actual Java application construction in real life, should know that Spring’s approach is absolutely common for Java infrastructure frameworks.

Do Jigsaw developers actually know or care about Java applications?

Who knows, maybe they simply forgot to fix their goals. I doubt it.

Module != JAR file

There is a deeper problem: the overloaded use of the term module, and the belief of infrastructure developers in the magic of the term.

Considering the use of the module term in programming languages, it typically denotes some encapsulation of code, with some interface and rules on how to expose to or require some other module. This is what Jigsaw focuses on, and it is what OSGi focused on. It is what somebody interested in programming language design would most likely do.

In Java, this approach naturally leads to using or extending the class loading mechanism to expose or hide types between modules for re-use (or information hiding, respectively) – which in turn means inventing descriptors that describe use relationships (meaning the ability to reference types, in this case) and so on.

This is what Jigsaw does and this is what OSGi did for that matter.

It is not what application developers care about – most of the time.

There is an overlap in interest of course. Code modules are an important ingredient in application assembly. Problems of duplicate type definitions by the same name (think different library versions) and separation of API and implementation are essential to scalable, modular system design.

But knowing how to build a great wall is not the same as knowing how to build a great house.

From an application development perspective, a module is rather a generic complexity management construct. A module encapsulates a responsibility; in particular, it should absolutely not be limited to code, and it is not particularly well served by squeezing everything into the JAR form factor.

What we see here is a case of Application vs. Infrastructure culture clash in action (see for example Local vs. Distributed Complexity).

The focus on trying to find a particularly smart and technically elegant solution for the low-level modularization problem eventually hurts the usefulness of the result for the broader application development community (*).

Similarly, ignorance of runtime modularization leads to unmaintainable, growth-limited, badly deployable code bases, as I tried to describe in Modularization is more than cutting it into pieces.

The truth is somewhere in between – which is necessarily less easily described and less universal in nature.

I believe that z2 is one suitable approach for a wide class of server-side applications. Other usage scenarios might demand other approaches.

I believe that Jigsaw will not deliver anything useful for application developers.

I wish you a happy new year 2016!


Ps.:

* One way of telling that the approach will be useless for developers is when discussions conclude that “tools will fix the complexity”. What that comes down to is that you need a compiler (the tool) to make use of the feature, which in turn means you need another input language. So who is going to design that language, and why would that be easier?

* It is interesting to check out the history of SpringSource’s dm Server (later passed on to R.I.P. at Eclipse as project Virgo). See in particular the interview with Rod Johnson.

Z2 as a Functional Application Server

Intro

As promised, this is the first in a series of posts elaborating on the integration of Clojure with Z2. It probably looks like a strange mix; however, I believe it is an extremely empowering combination of two technologies that share a lot of design philosophy. Clojure has brought me lots of joy by enabling me to achieve much more in my hobby projects, and I see how the combination of z2 and Clojure further extends the horizon of what’s possible. I’d be happy if I manage to help other people give it a try and benefit in the same way I did.

The LISP universe is a very different one. It’s hard to convince someone with zero previous experience to look at the strange thing with tons of parentheses written in Polish notation, so I am going to share my personal story and hope it resonates with you. I will focus on the experience, or how I “felt” about it. There is enough theory and intellectual knowledge on the internet already, and I will link to it where appropriate.

So, given that this is clearly a personal and subjective view, let’s put it in some context.

Short Bio

I’ve been using Java professionally for 12+ years, predominantly in the backend. I’ve worked on application servers as well as on business applications in large, medium, and small organizations. Spring is also something I have been relying on heavily for the last 8 years. Same goes for Maven. I’ve used JBoss and done a bit of application server development myself, but when Spring Boot came up I fell in love with it.

Like every other engineer out there, my major struggle through the years has been to manage complexity: the inherent complexity of the business problem we have to solve, plus the accidental complexity added by our tools, our poor understanding of the problem domain, and our limited conceptual vocabulary. I have been crushed more than once under the weight of that complexity. Often my own and the team’s share would be more than 50%. I have seen firsthand how a poorly groomed code base ends up in a state where the next feature is just not possible. This has real business impact.

The scariest thing about complexity is that it grows exponentially with size. This is why I strongly subscribe to the “code is liability” worldview. Same goes for organizations: the slimmer you are, the faster and further you can go.

Ways to deal with complexity

Now that the antagonist is clearly labeled, let’s focus on my survival kit.

#1 Modularization

One powerful way to get on top of complexity is divide and conquer, by using modularization. This is where z2 comes into the game. It has other benefits as well, but I would put its modularization capabilities as feature #1. Maven and Spring have been doing that for me through the years. On a coarser level, Tomcat and JBoss provide some modularization facilities as well; however, in my experience it is extremely rare that they are deliberately exploited.

Getting modularization right is hard on both ends:

  • The framework has to strike a balance between exercising control and enabling extensibility, otherwise it becomes impractical.
  • The component developers still have to think hard and define the boundaries of “things” while using the framework idioms with mastery. I haven’t yet met a technology that removes this need. It’s all about methodology and concepts (I dislike the pattern cult).

Discussing a more precise definition of what exactly modularization is, and why the well-known methodologies are too general to be useful as recipes, is too big a discussion for here.

My claim is that z2 strikes the best balance I have seen so far while employing a very powerful concept.

#2 Abstractions

Another powerful way is to use better abstractions. While modularization puts structure into chaos, the right abstractions reduce the amount of code and other artifacts, and hence the potential for chaos. Just like any other thing, not all abstractions are made equal, and I assume they can be ordered according to their power.

My personal definition of power: if abstraction A allows you to achieve the same result with less code than abstraction B, then it’s more powerful. Of course, reality is much hairier than this: you have to account for the abstraction’s implementation, the investment in learning, long-term maintenance costs, and so on.

Alternative definition: if abstraction A allows you to get further in terms of project size and complexity (before the project collapses), it’s more powerful.

The abstractions we use on a daily basis are strongly influenced by the language. A language can encourage, discourage (due to ergonomics), or even blacklist an abstraction by having no native support for it. My claim here is that the Java language designers have made some very limiting choices, and this has a profound effect on overall productivity as well as on the potential access to new abstractions. Clojure, on the other hand, has an excellent mix right out of the box, with convenient access to a very wide range of other abstractions.

The OO vs. FP discussion deserves special attention and will get it. I won’t claim that Clojure is perfect, far from it. However, the difference in power I have experienced is significant, and a big part of that difference is due to a carefully picked set of base abstractions implemented in a very pragmatic way.

So, what’s next?

Next comes the story of how Java and DDD helped me survive, and how JavaScript made me feel like a fool for wasting so many hours slicing problems the wrong way and worrying about insignificant things. Clojure will show up as well, you can count on this.

While you wait for the next portion, here are two links that have heavily influenced my current thinking:

  • Beating the averages – the blub paradox has been an eye-opening concept for me. I read this article for the first time in 2008 and kept coming back to it. It validated my innate tendency to be constantly dissatisfied with how things are and to look for something better. Paradoxically, it never made me try out LISP 🙂
  • Simple made easy – this is the presentation that, among other side effects, made me give Clojure a chance. It probably has the best return on investment for an hour spent in front of the screen.

Microservices Nonsense

Microservice Architecture (MSA) is a software design approach in which applications are intentionally broken up into remoteable services, so that they are built from small and independently deployable building blocks – with the goal of reducing deployment operations and dependency management complexity.

(See also Fowler, Thoughtworks)

Back in control

Sounds good, right? Anybody developing applications of some size knows that increasing complexity leads to harder-to-manage updates, increased deployment and restart durations, and more painful distribution of deployables. In particular, library dependencies have a tendency to get out of control, and version graphs tend to become unmanageable.

So, why not break things up into smaller pieces and gain back control?

This post is on why that is typically the wrong conclusion and why Microservice Architecture is a misleading idea.

Conjunction Fallacy

From a positive angle, one might say that MSA is a harmless case of a conjunction fallacy: because the clear cut sounds more specific as a solution approach, it appears more plausible (see the Linda Problem of ….).

If you cannot handle it here, why do you think you can handle it there?

If you cannot organize your design to manage complexity in-process, why should things work out more smoothly if you move to a distributed setup – where aspects like security, transaction boundaries, interface compatibility, and modifiability are substantially harder to manage (see also Distributed big ball of… )?

No question, there can be good reasons for distributed architectures: Organization, load distribution, legacy systems, different expertise and technology preferences.

It’s just the platform (and a little bit of discipline)

Do size of deployables and dependency management complexity belong on that list?

No. The former simply implies that your technology choice has a poor roll-out model. In particular, Java EE implementations are notoriously bad at handling large code bases (unlike, you might have guessed, z2). Similarly, loss of control over dependencies shows a lack of dependency discipline and, more often, a brutal lack of modularization effort and capabilities (see also Modularization is….).

Use the right tool

Now, these problems might lead to an MSA approach out of desperation. But one should at least be aware that this is a platform shortcoming and not a logical implication of functional complexity.

If you were asked to move a piece of furniture you would probably use your car. If you were asked to move ten pieces of furniture, you would not look for ten cars – you would get a truck.


On Classpath Hygiene


One of the nasty problems in large JVM-based systems is that of type conflicts. These arise when more than one definition of a class is found for one and the same name – or, similarly, if there is no single version of a given class that is compatible with all using code.

This post is about how much pain you can inflict when you expose APIs in a modular environment and do not pay attention to the unwanted dependencies exposed to your users.

These situations do not occur because of ignorance or negligence in the first place – and most likely not in the code you wrote.

The actual root cause is, from another perspective, one of Java’s biggest strengths: the enormous ecosystem of frameworks and libraries to choose from. Using some third-party implementation almost always means including dependencies on other libraries – not necessarily of compatible versions.

Almost from its beginning, Java had a way of splitting “class namespaces” so that name clashes of classes with different code could be avoided and type visibility be limited – and, not least, so that code may be retrieved from elsewhere (than the classpath of the virtual machine): class loaders.

Even if they share the same name, classes loaded (defined) by one class loader are separate from classes loaded by other class loaders and may not be cast to each other. They may share some common super type, though, or use identical classes in their signatures and in their implementation. Indeed, the whole splitting concept makes little sense if it does not include an approach for sharing.
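A minimal sketch of this effect (the jar path and class name are made up): two sibling loaders each define their own runtime type for the same class name, and the JVM treats them as unrelated.

import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoadersDemo {
    public static void main(String[] args) throws Exception {
        URL[] urls = { new URL("file:/some/path/lib.jar") };   // hypothetical jar containing com.example.A
        // Two loaders without a common parent that could share the type:
        ClassLoader l1 = new URLClassLoader(urls, null);
        ClassLoader l2 = new URLClassLoader(urls, null);
        Class<?> a1 = l1.loadClass("com.example.A");
        Class<?> a2 = l2.loadClass("com.example.A");
        System.out.println(a1 == a2);                          // false: two distinct runtime types
        a1.cast(a2.getDeclaredConstructor().newInstance());    // throws ClassCastException
    }
}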

Isolation by class loaders, combined with more or less clever ways of sharing types and resources, is the underpinning of all Java runtime modularization (as in any Java EE server, OSGi, and of course Z2).

In the default setup provided by Java’s makers, class loaders are arranged in a tree structure, where each class loader has a parent class loader:

(figure: the standard class loader tree, each loader pointing to its parent)

The golden rule is: when asked to load a class, a class loader first asks its parent (parent delegation). If the parent cannot provide the class, the class loader is supposed to search for it in its own way and, if found, define the class with the VM.

This simple pattern makes sure that types available at some class loader node in the tree will be consistently shared by all descendants.
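For reference, here is a simplified sketch of that delegation logic as a custom loader would express it (real implementations, including java.lang.ClassLoader.loadClass itself, additionally handle the bootstrap loader, locking, and more):

public class MyModuleLoader extends ClassLoader {
    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);      // already defined by this loader?
        if (c == null) {
            try {
                c = getParent().loadClass(name); // 1. always ask the parent first
            } catch (ClassNotFoundException e) {
                c = findClass(name);             // 2. only then search locally
            }
        }
        if (resolve) resolveClass(c);
        return c;
    }
}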

So far so good.

Frequently, however, when developers invent the possibility of extension by plugins, modularization comes in as a kind of afterthought, and little thinking is invested in making sure that plugin code gets to see no more than what is strictly needed.

Unfortunately, if you choose to expose (e.g.) a version of Hibernate via your API, you essentially make your version the one and only that can responsibly be used. This is a direct consequence of the standard parent-delegation model.

Now let’s imagine that a plugin cannot work with the version that was “accidentally” imposed by the class loading hierarchy, so that the standard model becomes a problem. Then why not turn things around and let the plugin find its version with preference over the provided one?

This is exactly what many Java EE server developers thought as well. And it’s an incredibly bad solution to the problem.

Imagine you have a parent/child class loader setup, where the parent exposes some API with a class B (named “B”) that uses another class A (named “A”). Second, assume that the child has some class C that uses a class A’ with the same name as A, “A”. Because of a local-first configuration, C indeed uses A’. This was set up due to some problem C had with the exposed class A of the parent.

(figure: a parent/child setup with local-first class loading in the child)

Suppose that C can provide instances of A’ and you want to use that capability at some later time. At that point, an innocent

C c = new C(); 
B b = new B(); 
b.doSomethingWithA(c.getA());

will shoot you with a loader constraint violation (a LinkageError) because A and A’ are incompatible from the JVM’s perspective – which is completely invisible in the code.

At this level, you might say that’s no big deal. In practice, however, this happens somewhere deep down in some third-party lib – and at some surprising point in time.

Debug that!


Java Data API Design Revisited

When domain entities get bigger and more complex, designing a safe, usable, future-proof modification API is tricky.

 

This article is on a simple but effective approach to designing for data updates.

 

Providing an update API for a complex domain entity is more complicated than most developers initially expect. As usual, problems start showing when complexity increases.

Here’s the setup: Suppose your software system exposes a service API for some domain entity X to be used by other modules.

When using the Java Persistence API (JPA), it is not uncommon to expose the actual domain classes to API users. That greatly simplifies simple updates: just invoke domain class setters, and unless the whole transaction fails, updates will be persisted. There are a number of problems with that approach though. Here are some:

  • If modifications of the domain object instance are not performed in one go, other code invoked in between may see inconsistent states (this is one reason why using immutables is favourable).
  • Updates that require non-trivial constraint checking may not be performed on the entity in full but rather require service invocations – leading to a complex-to-use API.
  • Exposing the persistent domain types, including their “transparent persistence” behavior, very much exposes the actual database structure, which easily deviates from a logical domain model over time – leading to an API that leaks “internal” matters to its users.

The obvious alternative to exposing JPA domain classes is to expose read-only, immutable domain type interfaces and complement that by service-level modification methods whose arguments represent all or some state of the domain entity.

Only for very simple domain types is it practical to offer modification methods taking built-in types such as numbers or strings; beyond that, this approach leads to hard-to-maintain and even harder-to-use APIs.

Hence, we need some change-describing data transfer object (DTO – we use that term regardless of the remoting case) that can serve as a parameter of our update method.

As soon as updates are to be prepared either remotely or in some multi-step editing process, intermediate storage of yet-to-be-applied updates needs to be implemented, and having some help for that is great in any case. So DTOs are cool.

Given a domain type X (as read-only interface), and some service XService we assume some DTO type XDto, so that the (simplified) service interface looks like this:

 

public interface XService {
    X find(String id);
    X create(XDto xdto);
    X update(String id, XDto xdto);
}

 

If XDto is a regular Java Bean with some members describing updated attributes of X, there are a few annoying issues that take away a lot of the initial attractiveness:

  • You cannot distinguish a null value from undefined. That is, suppose X has a name attribute and XDto has a name attribute as well – describing a new value for X’s attribute. In that case, null may be a completely valid value. But then: how to describe the case that no change at all should be applied?
  • This is particularly bad if setting some attribute is meant to trigger some other activity.
  • You need to write or generate a lot of value object boilerplate code to have good equals() and hashCode() implementations.
  • As with the first issue: how do you describe the change of a single attribute only?

In contrast to that, consider an XDto that is implemented as an extension of HashMap<String,Object>:

public class XDto extends HashMap<String,Object> {
  public final static String NAME = "name";
  public XDto() { }
  // copy constructor: copies only attributes that are actually defined
  public XDto(XDto u) {
    if (u.containsKey(NAME)) { setName(u.getName()); }
  }
  // initialize from the domain entity's current state
  public XDto(X x) {
    setName(x.getName());
  }
  public String getName() {
    return (String) get(NAME);
  }
  public void setName(String name) {
    put(NAME,name);
  }
}

Apart from having decent equals, hashCode, and toString implementations – fitting, considering it is a value object – this allows for the following features:

  • We can perfectly distinguish between null and undefined using Map.containsKey.
  • This is great, as now, in the implementation of the update method for X, we can safely assume that any single attribute change was meant to be. This allows for atomic, consistent updates with very relaxed concurrency constraints.
  • Determining the difference, compared to some initial state is just an operation on the map’s entry set.

 

In short: we get a data operation programming model (see the drawing below) consisting of initializing some temporary update state as a DTO, operating on this as long as needed, extracting the actual change by comparing DTOs, and sending back the change.
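The change-extraction step might look like the following sketch (the diff helper is a made-up name; attributes removed from the edited state would need an additional convention):

import java.util.Map;
import java.util.Objects;

public class XDtoDiff {
    // Keep only entries that were added or changed relative to the initial
    // state; untouched attributes simply stay undefined in the delta.
    public static XDto diff(XDto initial, XDto edited) {
        XDto delta = new XDto();
        for (Map.Entry<String, Object> e : edited.entrySet()) {
            if (!initial.containsKey(e.getKey())
                    || !Objects.equals(initial.get(e.getKey()), e.getValue())) {
                delta.put(e.getKey(), e.getValue());
            }
        }
        return delta;
    }
}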

 

Things get a little more tricky when adding collections of related persistent value objects to the picture. Assume X has some related Ys that are nevertheless owned by X – think of a user with one or more addresses. As for X, we assume some YDto. Where X has some method getYs that returns a list of Y instances, XDto now works with YDtos.

Our goal is to use simple operations on collections to extend the difference computation from above to this case. Ideally, we support adding and removing of Ys as well as modification, where modified Ys should be represented, for update, with a “stripped” YDto as above.

Here is one way of achieving that: as Y is a persistent entity, it has an id. Now, instead of holding on to a list of YDtos, we construct XDto to hold a list of pairs (id, value).

Computing the difference between two such lists of pairs means removing all pairs that are equal and, for those with the same id, recursing into the YDto instances for difference computation. Back on the list level, a pair with no id indicates a new Y to be created, and a pair with no YDto indicates a Y that is no longer part of X. This is actually rather simple to implement generically, as the sketch following the JSON example shows.

That is, serialized as JSON, the delta between two XDto states with a modified Y collection would look like this:

{
  "y": [
    {"id": "1", "value": {"a": "new A"}},               // update "a" in Y "1"
    {"id": "2"},                                        // delete Y "2"
    {"value": {"a": "initial a", "b": "initial b"}}     // add a new Y
  ]
}
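Here is a hedged sketch of that generic computation (IdValue and the helper names are made up; YDto is assumed to extend HashMap<String,Object> analogously to XDto, and every id occurring in the edited list is assumed to occur in the initial one):

import java.util.*;

public class YListDiff {
    // An (id, value) pair; id == null marks a Y that is not persisted yet.
    record IdValue(String id, YDto value) {}

    static List<Map<String, Object>> diff(List<IdValue> before, List<IdValue> after) {
        Map<String, YDto> remaining = new LinkedHashMap<>();
        for (IdValue p : before) remaining.put(p.id(), p.value());
        List<Map<String, Object>> delta = new ArrayList<>();
        for (IdValue p : after) {
            if (p.id() == null) {                          // no id: a new Y to create
                delta.add(Map.<String, Object>of("value", p.value()));
            } else {
                YDto d = attributeDiff(remaining.remove(p.id()), p.value());
                if (!d.isEmpty())                          // equal pairs drop out entirely
                    delta.add(Map.<String, Object>of("id", p.id(), "value", d));
            }
        }
        for (String id : remaining.keySet())               // no longer referenced: delete
            delta.add(Map.<String, Object>of("id", id));
        return delta;
    }

    // attribute-level diff, analogous to the XDto sketch above
    static YDto attributeDiff(YDto before, YDto after) {
        YDto d = new YDto();
        for (Map.Entry<String, Object> e : after.entrySet())
            if (!before.containsKey(e.getKey())
                    || !Objects.equals(before.get(e.getKey()), e.getValue()))
                d.put(e.getKey(), e.getValue());
        return d;
    }
}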

All in all, we get a programming model that supports efficient and convenient data modifications, with a natural serialization for the remote case.

(figure: the data operation programming model)

The supplied DTO types serve as state types in editors (for example) and naturally extend to change computation purposes.

As a side note: between 2006 and 2008 I was a member of the very promising Service Data Objects (SDO) working group. SDO envisioned a similar programming style but went much further in terms of abstraction and implementation requirements. Unfortunately, SDO seems to be pretty much dead now – probably due to scope creep and the lack of an accessible, easy-to-use implementation (last I checked). The good thing is that we can achieve a lot of its goodness with a mix of existing technologies.


Local vs. Distributed Complexity

As a student or programming enthusiast, you will spend considerable time getting your head around data structures and algorithms. It is those elementary concepts that make up the essential tool set to make a dumb machine perform something useful and enjoyable.

When going professional, i.e. when building software to be used by others, developers typically end up building either enabling functionality, e.g. low-level frameworks and libraries (infrastructure), or applications and parts thereof, e.g. user interfaces and jobs (solutions).

There is a cultural divide between infrastructure developers and solution developers. The former have a tendency to believe the latter do somehow intellectually inferior work, while the latter believe the former have no clue about real life.

While it is definitely beneficial to develop skills in API design and system-level programming, doing so without the experience of developing and delivering an end-to-end solution is like knowing the finest details of kitchen equipment without ever cooking for friends.

The Difference

A typical characteristic of an infrastructure library is a rather well-defined problem scope that is known to imply some level of non-trivial complexity in its implementation (otherwise it would be pointless):

 

Local complexity is expected and accepted.

 

In contrast, solution development is driven by business flows, end-user requirements, and other requirements that are typically far from stable until done – and much less so over time. Complete solutions typically consist of many spread out – if not distributed – implementation pieces, so that local complexity is simply not affordable.

 

Distributed complexity is expected, local complexity is not acceptable.

 

The natural learning order is from left to right:

(figure: from local to distributed complexity)

Conclusion

Unfortunately, many careers and whole companies do not get past the infrastructure/solution line. This produces deciders who have very little idea about “the real thing” and tend to view it as a simplified extrapolation of their previous experience. Eventually we see astronaut architectures full of disrespect for the problem space, absurd assumptions about how markets adapt, and about how much time and reality exposure solutions require to become solid problem solvers.

 

Java EE is not for Standard Business Software

The “official” technology choice for enterprise software development on the Java platform is the Java Enterprise Edition, or Java EE for short. Java EE is a set of specifications and APIs defined within the Java Community Process (JCP) – it is a business software standard.

 

This post is on why it is naive to think that knowing Java EE is your ticket to creating standard business software.

I use the term standard business software for software systems that are developed by one party and used by many, and that are typically extended and customized for and by specific users (customers) to integrate them with customer-specific business processes. The use of the word “standard” does not indicate that the software is necessarily widely used or somehow agreed on by some committee – it just says that it standardizes a solution approach to a business problem for a range of possible applications, and typically requires some form of adaptation before being usable in a specific setting.

How hard can it be?

It is a myth that Java enterprise development is harder than on other platforms – per se. That is, from the point of view of the programming language and, specifically, the Java EE APIs, writing the software as such is not more complex than in other environments. Complex software is complex, regardless of the technology choice.

In order to turn your software into “standard software” however, the following needs to be addressed as well:

You need an approach to customize and extend your software

This is only partially a software architecture problem. It also means providing your customer with the ability to add code, manage upgrades, and integration-test. Java EE provides very little in terms of code extensibility, close to nothing for modularity with isolation, and obviously it says nothing about how to actually produce software.

You need an operational approach

This is the most underestimated aspect. While any developer knows that the actual Java EE implementation, the Java EE server, makes a huge difference when things get serious, the simplistic message that an API standard is good enough to make implementations truly interchangeable has led organizations to standardize on some specific Java EE product.

This situation had positive side effects for two parties: IT can extend its claim, and the Java EE vendor can sell more licenses. And it has a terrible side effect for one party: you as a developer.

It’s up to you to qualify your software for different Java EE implementations of different versions. It’s up to you to describe operation of your software in conjunction with the specific IT-mandated version. When things go bad, however, you will still get the blame.

Why is it so limited?

There is a pattern here: there is simply no point for Java EE vendors to extend the standard with anything helping you solve those problems – no point in providing standard means that help you ship customizable, extensible business solutions.

Although it is hard to tell, considering the quality of the commercial tools I know of, addressing the operational side and solving modularity questions definitely seemed to provide excellent potential for selling added value on the one side and effective vendor lock-in on the other.

This extends to the API specifications. When I was working on JCP committees in my days at SAP, it was rather common to argue that some ability should specifically be excluded from the standard, or even precluded, in order to make sure that you may well be able to develop for some Java EE server product – but not in competition with it. And that makes a lot of sense from a vendor’s perspective. This is saying that

Java EE is a customization and extension tool for Java EE vendor solution stacks.

 

Not that any vendor was particularly successful in implementing that effect – thanks to the competition stemming from open source projects that have become de-facto standards, such as the Spring Framework and Hibernate, to name only two of many more.

Summary

Outside of an established IT organization, i.e. as a party selling solutions into IT organizations, it makes very little sense to focus on supporting a wide range of Java EE implementations and to pay the price for it yourself. Instead, try to bundle as much infrastructure as possible with your solution to limit operational combinatorics.

To be fair: it is a good thing that we have Java EE. But one should not be fooled into believing that it is the answer to interoperability.

References

  1. Java EE, http://en.wikipedia.org/wiki/Java_Platform,_Enterprise_Edition
  2. JCP, http://en.wikipedia.org/wiki/Java_Community_Process

From a Human Perspective

When designing software that runs in a distributed environment, an extremely helpful tool is to look for slow-world analogies. As our brain reasons much more intuitively about human-implemented processes, finding flaws in system deployment architectures is significantly simpler in the analogy – and surprisingly accurate.

In the analogy we identify

| System element | Analogy |
|---|---|
| A thread | An activity to attend to (e.g. sorting letters) |
| An OS process | A worker, or more politely: a human |
| An OS instance (a VM) | A home |
| A remote message | A letter |
| A remote invocation | A phone call |
| A file | A file |

You can easily go more fine-grained: a big server running a big database, for example, corresponds to a big administration building with lots of workers running around, piling files into some huge archive packed with file cabinets.

In contrast, some legacy host running a lot of under-equipped virtual machines is more like a … trailer park.

Asynchronous communication clearly corresponds to the exchange of letters, while phone calls play the role of synchronous service calls – which perfectly allows modeling the scalability and reliability characteristics of both communication styles.

Some Examples

Example 1: De-coupling via asynchronous communication

It is not uncommon that crucial bottlenecks in a distributed architecture derive from some many-to-one state update that was simply not taken seriously, i.e. many places synchronously call one place to drop off some state update.

In the analogy it is perfectly obvious that having many people call in via phone is much more expensive in terms of capacity requirements, and much less reliable, than processing piles of letters – a workload that can be scaled independently, is very reliable, and makes good use of resources.

Example 2: Node-local search index

In online portals, a shared database can become a major data-reading bottleneck that, in addition, needs to process the most crucial updates as well. In the analogy, this corresponds to a blackboard (the DB) and many remote workers (the front ends) calling in to ask for some piece of information. It is much more efficient to hand out a periodically updated copy (a catalog) to the front-end workers.

Example 3: Zero-Downtime deployment

This is a particularly nice one. The problem addressed by ZDD is that in a distributed setup, a partial roll-out of a new software version introduces some not completely trivial compatibility constraints. In particular, any shared resource (a database, a shared service), when upgraded, still needs to accept interactions with some range of previous software versions running on its clients. In the analogy, this corresponds to remote offices where clerks still use an old form version in some offices and a new one in others. A central office needs to be able to process old forms as well as new revisions. Likewise, when sending out information to remote offices, it needs to be presented in a format comprehensible to clerks who have not been trained for the new version – and yet it needs to comply with the latter as well. All ZDD requirements for the IT case follow from the analogy.

I guess you get the point, and I will stop here.

A Final Note

One last piece, however – an axiom to the whole idea, if you will – is the

Underlying principle: We all are built the same – we just happen to do different things

Considering traditional labor, this is pretty much true in the real world. It should similarly be true for your solution: if your (analogy) workers are over-specialized (can only speak on the phone, will not process paper forms…) for no other reason than a deployment diagram that seemed like a good idea at some point, you are in for trouble mid-term.

That is: as a general principle (modulo well-justified exceptions), all nodes in your deployment decomposition can – in principle – do any kind of application work, from rendering a front end to computing a report.

As a corollary, this implies: not doing something, but still being able to, should not incur pain in terms of added deployment and configuration complexity (see also modularization and integratedness).