The Human Factor in Modularization

Here is yet another piece on my favorite subject: keeping big and growing systems manageable. Or, conversely, why that is so hard and fails so often.

Why do large projects fail, and why does productivity diminish as projects grow?

Admittedly, that is a big question. But here are some thoughts on the humans in that picture.

Modularization – once more

Let’s concentrate on modularization as THE tool to scale software development successfully – and on how its lack drives projects into death-march mode and eventually into failure.

In this write-up, modularization means all organization of software structure above the coding level and below the actual process design: all the organization of artifacts into building blocks for the assembly of solutions that implement processes as desired. In many ways it is the conceptual, or even practical, interface between the processes to be implemented and their actual implementation. In short: this is not about any specific modularization approach into modules, packages, bundles, namespaces, or whatever.

I hereby boldly declare that modularization is about:

  • Isolation of declarations, artifacts, and resources from the visibility and harmful impact of other declarations, artifacts, and resources;

  • Sharing of declarations, artifacts, and resources with other declarations, artifacts, and resources in a controlled way;

  • Guiding extensibility and further development by describing the structure and interplay of declarations, artifacts, and resources in an instructive way.

Depending on the specific toolset, we use these mechanisms to craft APIs and implementations and to assemble systems from modular building blocks.
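
To make these three mechanisms concrete, here is a minimal, toolset-neutral Java sketch (all names are made up for illustration): the api package is what gets shared, the implementation class is isolated behind package-private visibility, and the public factory describes the intended interplay for anyone extending the system.

// api/GreetingService.java – shared: the only type clients are meant to see
package com.example.greet.api;

public interface GreetingService {
    String greet(String name);
}

// impl/DefaultGreetingService.java – isolated: package-private, invisible outside
package com.example.greet.impl;

import com.example.greet.api.GreetingService;

class DefaultGreetingService implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}

// impl/Greetings.java – instructive: the factory is the one documented way in
package com.example.greet.impl;

import com.example.greet.api.GreetingService;

public final class Greetings {
    private Greetings() {
    }

    public static GreetingService newService() {
        return new DefaultGreetingService();
    }
}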

If this were just a well-defined engineering craft, we would be done. Obviously it is not, as so many projects end up as a messy black hole that nobody wants to get near.

The Problem is looking back at you from the Mirror

Getting a complex software system into shape is a task performed by a group of people and is hence subject to all the human flaws and failures we see elsewhere – but sometimes it leads to one of the greater examples of successful teamwork: assembling something much greater than the sum of its pieces.

I found it appropriate to follow the lead of the deadly sins and divine virtues.

Let’s start by talking about hubris: the lack of respect when confronted with the challenge of growing a system, and the overestimation of one’s ability to fix structural problems on the go. “That’s not rocket science” has been heard more than once before hitting the wall.

This is followed closely, in rank and time, by symptoms of greed: the unwillingness to invest in structural maintenance. Not so much when things start off, but very much further down the timeline, when restructurings are required to preserve the ability to move on.

Different, but possibly more harmful, is astronaut architecting: creating an abundance of abstraction layers and “too many steps ahead” designs. Modularization gluttony.

Taking pride in designing for unrequested capabilities, showing off platforms and frameworks where solutions and early verticals are expected, is a sign of over-indulgence in modularization lust and of the build-up of vainglory from achievements that may be useful at best but are secondary to value creation.

Now, sticking to a working structure and carefully evolving it for upcoming tasks and challenges requires an ongoing team effort and a practiced consensus. Little is as destructive as team members who work against a commonly established practice out of wrath, resentment, ignorance, or simply sloth.

Modularization efforts fail out of ignorance and incompetence

But it does not need to be that way. If there are human sins that increase the likelihood of failure, virtues should work in the opposite direction.

Every structure is only as good as it is adaptable. A certain blindness to personal taste in style and people may help implement justice towards future requirements and team talent, and so improve development scalability. By insulating from harmful influences, a modularized structure can limit the impact of changes that have yet to prove their value.

At times it is necessary to restructure larger parts of the code base that are either not up to the latest requirements or have been silently rotting for some time already. It can take enormous courage and discipline to push through days or weeks of work for a benefit that is not immediate.

Courage is nothing without the prudence to guide it towards the right goal, including the correction of previous errors.

The wise thing, however, is to avoid being driven too far by the momentum of gratifying design – by subjecting yourself to a general mode of temperance and patience.

 

 

MAY YOUR PROJECTS SUCCEED!

(Pictures by Pieter Brueghel the Elder and others)

 


IT Projects vs. Product Projects

A while back, when discussing a project with a potential client, I was amazed at how little willingness there was to invest in analysis and design. Instead of trying to understand the underlying problem space and projecting what would have to happen in the near and mid-term future, the client wanted an immediate solution recipe – something that would simply fix it for now.

What had happened?

I acted like a product developer – the client acted like an IT organization

This made me wonder about the characteristic differences between developing a product and solving problems in an IT organization.

A Question of Attitude

Here’s a little – incomplete – table of attitudes that I find characterize the two mindsets:

IT Organization | Product Organization
Let’s not talk too much – make decisions! | Let’s think it through once more.
The next goal counts. No need to solve problems we do not experience today. | We want to solve the “whole” problem – once and forever.
Maintenance and future development is a topic for the next budget round. | Let’s try to build something that has defined and limited maintenance needs and can be developed further with little modification.
If it works for hundreds, it will certainly work perfectly well for billions. | Prepare for scalability challenges early.
We have an ESB? Great, let’s use that! | We do not integrate around something else. We are a “middle” to integrate with.
There is that SAP/ORCL consultant who claims to know how to do it? Let’s pay him to solve it! | We need to have the core know-how that helps us plan for the future.

I have seen these in action more than once. Both points of view are valid and justified: either you care about keeping something up and running within budget, or you care about addressing a problem space for as long and as effectively as possible. Competing goals.

It gets a little problematic, though, when applying the one mindset to the other’s setting. Or, say, if you think you are solving an IT problem but actually have a product development problem at hand.

For example, a growing IT organization may discover that some initially simple job of maintaining software client installations on workstations has reached a level where the procedures to follow have effectively turned into a product – a solution to the problem domain – without anybody noticing or paying due attention.

The sad truth is that you cannot build a product without some foresight and a long-lasting mission philosophy. Without growing and cultivating an ever refined “design story”, any product development effort will end up as the stereotypical big ball of mud.

In the case of our potential client, I am not sure how things worked out. Given their attitude, I guess they simply ploughed on, making only the most obvious decisions – and probably did not get too far.

Conclusion

As an IT organization, make sure not to miss the moment when a problem space starts asking for a product development approach – when it will pay off to dedicate resources and planning to beat the day-to-day plumbing effort, by securing key field expertise and maintaining a solution with foresight.

Java Modularity – Failing once more?

Like so many others, I have pretty much ignored project Jigsaw for some time now – assuming it would stay irrelevant for my work or slowly fade away and be gone for good. The repeated shifts of its planned inclusion in the JDK seemed to confirm this course. Jigsaw started in 2009 – more than six years ago.

Jigsaw is about establishing a Java Module System deeply integrated with the Java language and core Java runtime specifications. Check out its goals on the project home page. It is important to note the fourth goal:

 

Make it easier for developers to construct and maintain libraries and large applications, for both the Java SE and EE Platforms.

 

Something Missing?

Lately I have run into this mail thread: http://permalink.gmane.org/gmane.comp.java.openjdk.jigsaw/2492

In that mail thread Jürgen Höller (of Spring fame) notes that in order to map Spring’s module layout to Jigsaw modules, it would be required to support optional dependencies – dependencies that may or may not be satisfied given the presence of another module at runtime.

This is how Spring supports its set of adapter and support types for numerous other frameworks that you may or may not use in your application: making use of Java’s late linking, it is possible to expose types that are not usable without the presence of some dependent type, but that do not create a problem unless you actually start using them. That is, optional dependencies would allow Spring to preserve its way of encapsulating “Spring support for many other libraries” into one single module (or actually one jar file).

In case you do not understand the technical problem, it is sufficient to note that anybody who has been anywhere near Java class loading considerations and actual Java application construction in real life should know that Spring’s approach is absolutely common for Java infrastructure frameworks.
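
To illustrate the late-linking trick, here is a minimal sketch in the spirit of such support classes – with made-up names, as com.acme.json stands for some hypothetical optional library that is present at compile time but possibly absent at runtime. The class loads and verifies fine without the optional jar; only calling into it can fail.

// AcmeJsonSupport.java – optional-dependency adapter (hypothetical names)
public class AcmeJsonSupport {

    // Cheap presence probe, usable as a guard before calling toJson():
    public static boolean isAcmeJsonPresent() {
        try {
            Class.forName("com.acme.json.JsonMapper");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static String toJson(Object value) {
        // First reference to the optional type: thanks to late linking, a
        // NoClassDefFoundError can only arise here, once the method is called.
        return new com.acme.json.JsonMapper().writeValueAsString(value);
    }
}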

Do Jigsaw developers actually know or care about Java applications?

Who knows, maybe they simply forgot to fix their goals. I doubt it.

 

Module != JAR file

 

There is a deeper problem: the overloaded use of the term module and the belief of infrastructure developers in the magic of the term.

Considering the use of the term module in programming languages, it typically denotes some encapsulation of code, with some interface and rules on how to expose or require other modules. This is what Jigsaw focussed on and it is what OSGi focussed on. It is what somebody interested in programming language design would most likely do.

In Java, this approach naturally leads to using or extending the class loading mechanism to expose types between modules for re-use (or to hide them, respectively), which in turn means inventing descriptors that describe use relationships (here: the ability to reference types) and so on.

This is what Jigsaw does and this is what OSGi did for that matter.
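
For illustration, this is roughly what such a descriptor looks like in Jigsaw – module and package names are made up, and the syntax follows the current early-access proposal, so details may still shift:

// module-info.java of a hypothetical orders module
module com.example.orders {
    requires com.example.inventory;  // use relationship: referenced types must come from here
    exports com.example.orders.api;  // only this package is visible to depending modules
}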

It is not what application developers care about – most of the time.

There is an overlap in interest, of course. Code modules are an important ingredient in application assembly. Avoiding duplicate definitions of types under the same name (think different library versions) and separating API from implementation are essential to scalable, modular system design.

But knowing how to build a great wall is not the same as knowing how to build a great house.

From an application development perspective, a module is rather a generic complexity-management construct. A module encapsulates a responsibility; in particular, it should absolutely not be limited to code, and it is not particularly well served by squeezing everything into the JAR form factor.

What we see here is a case of Application vs. Infrastructure culture clash in action (see for example Local vs. Distributed Complexity).

The focus on trying to find a particularly smart and technically elegant solution for the low-level modularization problem eventually hurts the usefulness of the result for the broader application development community (*).

Similarly, ignorance of runtime modularization leads to unmaintainable, growth-limited, badly deployable code bases as I tried to describe in Modularization is more than cutting it into pieces.

The truth is somewhere in between – which is necessarily less easily described and less universal in nature.

I believe that z2 is one suitable approach for a wide class of server-side applications. Other usage scenarios might demand other approaches.

I believe that Jigsaw will not deliver anything useful for application developers.

I wish you a happy new year 2016!


Ps.:

* One way of telling that the approach will be useless for developers is when discussions conclude that “tools will fix the complexity”. What that comes down to is that you need a compiler (the tool) to make use of the feature, which in turn means you need another input language. So who is going to design that language and why would that be easier?

* It is interesting to check out the history of SpringSource’s dm Server (later passed on to R.I.P. at Eclipse as project Virgo). See in particular the interview with Rod Johnson.

Z2 as a Functional Application Server

Intro

As promised, this is the first in a series of posts elaborating on the integration of Clojure with Z2. It probably looks like a strange mix; however, I believe it’s an extremely empowering combination of two technologies that share a lot of design philosophy. Clojure has brought me lots of joy by enabling me to achieve much more in my hobby projects, and I see how the combination of z2 and Clojure further extends the horizon of what’s possible. I’d be happy if I manage to help other people give it a try and benefit in the same way I did.

The LISP universe is a very different one. It’s hard to convince someone with zero previous experience to look at the strange thing with tons of parentheses written in Polish notation, so I am going to share my personal story and hope it resonates with you. I will focus on the experience, on how I “felt” about it. There is enough theory and intellectual knowledge on the internet already, and I will link to it where appropriate.

So, given that this is a clearly personal and subjective view, let’s put it in some context.

Short Bio

I’ve been using Java professionally for 12+ years, predominantly in the backend. I’ve worked on application servers as well as on business applications in large, medium, and small organizations. Spring is also something I have been heavily relying on for the last 8 years. Same goes for Maven. I’ve used JBoss and done a bit of application server development myself, but when Spring Boot came up I fell in love with it.

Like every other engineer out there, my major struggle through the years has been to manage complexity: the inherent complexity of the business problem we have to solve, plus the accidental complexity added by our tools, our poor understanding of the problem domain, and our limited conceptual vocabulary. I have been crushed more than once under the weight of that complexity. Often my own and the team’s share of it would be more than 50%. I have seen firsthand how a poorly groomed code base ends up in a state where the next feature is just not possible. This has real business impact.

The scariest thing about complexity is that it grows exponentially with size. This is why I strongly subscribe to the “code is a liability” worldview. The same goes for organizations: the slimmer you are, the faster and further you can go.

Ways to deal with complexity

Now that the antagonist is clearly labeled, let’s focus on my survival kit.

#1 Modularization

One powerful way to get on top of complexity is divide and conquer, i.e. modularization. This is where z2 comes into the game. It has other benefits as well, but I would put its modularization capabilities as feature #1. Maven and Spring have been doing that for me through the years. On a coarser level, Tomcat and JBoss provide some modularization facilities as well; however, in my experience it is extremely rare that they are deliberately exploited.

Getting modularization right is hard on both ends:

  • The framework has to strike a balance between exercising control and enabling extensibility, otherwise it becomes impractical.
  • The component developers still have to think hard and define the boundaries of “things” while using the framework idioms with mastery. I haven’t yet met a technology that removes this need. It’s all about methodology and concepts (I dislike the pattern cult).

A more precise definition of what exactly modularization is, and why the well-known methodologies are too general to be useful as recipes, is too big a discussion for here.

My claim is that z2 strikes the best balance I have seen so far, while employing a very powerful concept.

#2 Abstractions

Another powerful way is to use better abstractions. While modularization puts structure in chaos, the right abstractions reduce the amount of code and other artifacts, hence the potential for chaos. Like everything else, not all abstractions are made equal, and I assume they can be ordered according to their power.

My personal definition of power: if abstraction A allows you to achieve the same result with less code than abstraction B, then A is more powerful. Of course, reality is much hairier than this: you have to account for the abstraction’s implementation, the investment in learning it, long-term maintenance costs, and so on.

Alternative definition: if abstraction A allows you to get further in terms of project size and complexity (before the project collapses), it is more powerful.

The abstractions we use on a daily basis are strongly influenced by the language. A language can encourage, discourage (through poor ergonomics), or even blacklist an abstraction by having no native support for it. My claim here is that the Java language designers have made some very limiting choices, and this has a profound effect on overall productivity as well as on the potential access to new abstractions. Clojure, on the other hand, has an excellent mix right out of the box, with convenient access to a very wide range of other abstractions.

The OO vs. FP discussion deserves special attention and will get it. I won’t claim that Clojure is perfect, far from it. However, the difference in power I have experienced is significant, and a big part of that difference is due to a carefully picked set of base abstractions implemented in a very pragmatic way.

So, what’s next?

Next comes the story of how Java and DDD helped me survive, and how JavaScript made me feel like a fool for wasting so many hours slicing problems the wrong way and worrying about insignificant things. Clojure will show up as well, you can count on that.

While you wait for the next portion, here are two links that have heavily influenced my current thinking:

  • Beating the averages — the blub paradox has been an eye-opening concept for me. I read this article for the first time in 2008 and kept coming back to it. It validated my innate tendency to be constantly dissatisfied with how things are and to look for something better. Paradoxically, it never made me try out LISP 🙂
  • Simple Made Easy — this is the presentation that, among other side effects, made me give Clojure a chance. It probably has the best return on investment for an hour spent in front of the screen.

Continuity is King

Unfortunately there has been so much going on in my work life and my private life lately that I didn’t get around to thinking and writing much.

Here is just a short note that v2.4 of z2 is ready: v2.4

It simply upgrades z2 to Java 8 and bumps the version of Jetty we use to 9.3. The latter implies that Java 8 is a strict requirement, too.

Little change is good, as the projects that run on z2 need anything but platform disruption at this time.

As the core did not change incompatibly (at least not that I know of), using a v2.4 core with the previous z2-base version 2.3 will simply add Java 8 support to any such setup as well.

Anyway, here’s what I hope to continue with once I am back to normal operations:

  • A piece on how to use Clojure on z2, with a guest author from Vienna
  • A piece on a variation of the Stockholm Syndrome that can be observed in the relationship of developers with their toolset
  • A piece on how organizations can be classified as project- vs. product-driven

 

Microservices Nonsense

Microservice Architecture (MSA) is a software design approach in which applications are intentionally broken up into remoteable services, so that they are built from small and independently deployable building blocks, with the goal of reducing deployment operations and dependency management complexity.

(See also Fowler, Thoughtworks)

Back in control

Sounds good, right? Anybody developing applications of some size knows that increasing complexity leads to harder-to-manage updates, longer deployment and restart durations, and more painful distribution of deployables. In particular, library dependencies tend to get out of control and version graphs tend to become unmanageable.

So, why not break things up into smaller pieces and gain back control?

This post is on why that is typically the wrong conclusion and why Microservice Architecture is a misleading idea.

Conjunction Fallacy

From a positive angle, one might say that MSA is a harmless case of a conjunction fallacy: because the clear cut sounds more specific as a solution approach, it appears more plausible (see the Linda Problem of …).

If you cannot handle it here, why do you think you can handle it there?

If you cannot organize your design to manage complexity in-process, however, why should things work out more smoothly if you move to a distributed setup, where aspects like security, transaction boundaries, interface compatibility, and modifiability are substantially harder to manage (see also Distributed big ball of…)?

No question, there can be good reasons for distributed architectures: Organization, load distribution, legacy systems, different expertise and technology preferences.

It’s just the platform (and a little bit of discipline)

Do size of deployables and dependency management complexity belong on that list?

No. The former simply implies that your technology choice has a poor roll-out model. In particular, Java EE implementations are notoriously bad at handling large code bases (unlike, you might have guessed, z2). Similarly, loss of control over dependencies shows a lack of dependency discipline and, more often, a brutal lack of modularization effort and capabilities (see also Modularization is….)

Use the right tool

Now these problems might lead to an MSA approach out of desperation. But one should at least be aware that this is a platform shortcoming and not a logical implication of functional complexity.

If you were asked to move a piece of furniture you would probably use your car. If you were asked to move ten pieces of furniture, you would not look for ten cars – you would get a truck.

 

 

If you do it like all the others, what makes you believe you will do any better?

When deciding on the choice of tool, process, or methodology, developers, architects, and team leads feel a strong urge to “do exactly as is best practice” (or standard), that is, as recommended on popular web sites or in text books.

On average, following some “best practice” may protect against bad mistakes. Some reasoning about why to choose one approach over another should always be done, though. Sometimes there are glaring examples that show that a given best practice is quite obviously not so good after all.

An Example

For the sake of this post I will use the example of “Feature Branching”. The idea of feature branching is, instead of having many developers work on the same code line while implementing different features, to have each feature developed on its own branch that is integrated back into the main code line when done.

While the first part of this idea sounds fantastic and fits wonderfully with models of distributed source control, the second part becomes absurdly complex for large and “wide” code bases, and when applied with more than a handful of developers.

There is no need to discuss for what kinds of projects and management setups feature branching may work. Assuming you are about to develop an actual solution that is under constant development, feature branching obviously does not work well – companies that have products with large code bases simply do not use this approach.

This is obvious in the sense that where large teams work on a large solution code base (note: see also local vs. distributed complexity), feature branching is not used as a means of regular feature development. To my knowledge this includes Facebook, Google, and not least traditional SAP (check out the references below).

How Come?

How come something is touted as best practice that does not get embraced where it should actually matter most: At scale?

Here are some guesses: For one, peer pressure is strong – it can be a frustrating intellectual effort to argue against a best practice. Secondly, “experts” are somewhat wrong most of the time, simply because professional writing about a field as wide as software engineering leaves little time to actually practice first-hand what you are writing about. Most expert authors will only ever have experienced tools and approaches for a short time, on small problems – and actually have little incentive to do otherwise. And finally: something that solves a problem in the small and distributed (as is often the case in the OSS community) frequently does not work well in the large and interconnected.

But then: Does it make sense to do differently?

It obviously does – considering the examples above. But one does not even need to look to the likes of Facebook. The simple truth is that, if you do not stand out in some way, you are by definition mediocre – which is nothing but saying that, other than for political protection or some other non-market reason, there is no particular reason for you to win the market.

When does it make sense to do differently?

Even assuming you are completely sure you know a better approach, when does it make sense to fight for its adoption?

I think the simple answer is: For disruptive approaches, the only meaningful point in time to fight for it is in the very beginning of a project or product (and sometimes organization).

Remember the technology adoption life cycle (see “Crossing the Chasm” and others).

What this says is that even if you manage to win the enthusiasts, winning the mainstream audience is the harder part.

In our example, the market is the product organization and the disruptive tool or approach is the technology we want to see used. Luckily, initially our organization will have a significant number of visionaries and enthusiasts with respect to anything that promises to give our product a head start.

Over time, choices are made, customers acquired, visionaries become pragmatists, and the willingness to do anything but the most predictable and least harmful, as far as the specific product’s development is concerned, fades. That is, the product organization as a whole makes a transition very much in correspondence with the changes of the product’s target group.

Consequently, introducing a disruptive change from within might turn out to be a futile exercise in any but the earliest stages of a product’s life cycle.

Summary

Doing differently can be the difference between mediocrity and excellence. Foundations are laid at the beginning of product development. Non-mainstream choices must be made early on.

References

  1. Martin Fowler on Feature Branching
  2. Does-Facebook-use-feature-branching (Quora)
  3. Paul Hammant on Trunk Based Development
  4. Paul Hammant on Google’s vs Facebook’s Trunk Based Development

 

The Mechanics of Getting Known

“…a soliton is a self-reinforcing solitary wave (a wave packet or pulse) that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium.” (see Wikipedia).

Solitons occur in shallow water. Shock waves like tsunamis can be modeled as solitons. Solitons can also be observed in lattices (see the Toda lattice).

Among the many interesting properties of solitons is that solitons can pass “through” each other while overtaking – as if they move completely independently of each other:

By Kraaiennest (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

This post is my own little theory on the mechanics of how a piece of information (on a subject, a person, anything) becomes relevant over time:

The Theory

A piece of information on a subject, such as “Grails is very popular among Web developers” (of which I am not so sure anymore) or “No point in buying a Blackberry phone – they will be gone any time soon” (I bought one just last fall) or “Web development agency X is reliable and competent” (this time I really don’t know), spreads in a lattice of people (vertices) and relationships (edges) just like a soliton. It may pass others, and it may differ in velocity of traversal.

Its velocity corresponds to how “loud” it is – its amplitude. It is louder if it was generally considered more noteworthy when it entered the lattice.

As so many information snippets reach us every day, we sort out most of them as insignificant right away. So what makes a piece of information memorable and in particular recallable (e.g. when wondering “what is actually a good Web development agency?”), or even makes it trigger an action like researching something in more depth?

It is the number of times that that piece of information (and its equivalent variants) has reached us, and some (yet unknown) increasing function of the sum of the amplitudes over all those times.
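
In loose formula form (my own notation, nothing more than a restatement of the above): if the information arrives n times with amplitudes a_1, …, a_n, its memorability behaves like M = f(n, a_1 + … + a_n) for some increasing function f, and recall or action occurs once M exceeds a personal threshold.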

So what?

Now that we have this wonderful theory, let’s see where that takes us.

It fits common observations: big marketing campaigns (high amplitude) send big solitons into the lattice. They do not necessarily suffice to create action and so need to be augmented with talks, articles, and rumors to add more hits.

It also explains why that is equivalent to creating many small information solitons. There are great examples of open source tools that rose to impressive fame via repeated references in articles and books – without any big bang.

Most importantly, it explains the non-linearity of the “return” on marketing: little will lead to nothing in the short term – not just little, but actually nothing. Over time, however, hit thresholds will be exceeded and interest will lead to action. As the speed with which solitons pass through the lattice does not change, talking to many will not speed up the overall process – but it will increase the later return.

References

Surprisingly enough, some 15 years ago, as part of my dissertation work, I published some math papers on PDEs with solitons.

On Classpath Hygiene


One of the nasty problems in large JVM-based systems is that of type conflicts. These arise when more than one definition of a class is found for one and the same name – or, similarly, if there is no single version of a given class that is compatible with all using code.

This post is about how much pain you can inflict when you expose APIs in a modular environment and do not pay attention to the unwanted dependencies you expose to your users.

These situations do not occur because of ignorance or negligence in the first place – and most likely not in the code you wrote.

The actual root cause is, from another perspective, one of Java’s biggest strengths: the enormous ecosystem of frameworks and libraries to choose from. Using some third-party implementation almost always means including dependencies on other libraries – not necessarily of compatible versions.

Almost from its beginning, Java had a way of splitting “class namespaces” so that name clashes of classes with different code could be avoided and type visibility be limited – and, not least, so that code may be retrieved from elsewhere (than the classpath of the virtual machine): class loaders.

Even if they share the same name, classes loaded (defined) by one class loader are separate from classes loaded by other class loaders and cannot be cast to one another. They may share some common super type though, or use identical classes in their signatures and in their implementation. Indeed, the whole concept makes little sense if the splitting approach does not include an approach for sharing.

Isolation by class loaders combined with more or less clever ways of sharing types and resources is the underpinning of all Java runtime modularization (as in any Java EE server, OSGi, and of course Z2).

In the default setup provided by Java’s makers, class loaders are arranged in a tree structure, where each class loader has a parent class loader:

(Figure: the standard class loader tree with parent delegation)

The golden rule is: when asked to load a class, a class loader first asks its parent (parent delegation). If the parent cannot provide the class, the class loader is supposed to search for it in its own way and, if found, define the class with the VM.

This simple pattern makes sure that types available at some class loader node in the tree will be consistently shared by all descendants.
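
In code, the golden rule looks roughly like this – a simplified sketch of what java.lang.ClassLoader#loadClass does (the real implementation also synchronizes, falls back to the bootstrap loader when the parent is null, and records loader constraints):

public class ParentFirstLoader extends ClassLoader {

    public ParentFirstLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);       // already defined by this loader?
        if (c == null) {
            try {
                c = getParent().loadClass(name);  // 1. parent delegation comes first
            } catch (ClassNotFoundException e) {
                c = findClass(name);              // 2. only then search and define locally
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}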

So far so good.

Frequently, however, when developers introduce the possibility of extension by plugins, modularization comes in as a kind of afterthought, and little thinking is invested in making sure that plugin code gets to see no more than what is strictly needed.

Unfortunately, if you choose to expose (e.g.) a version of Hibernate via your API, you essentially make your version the one and only that can responsibly be used. This is a direct consequence of the standard parent-delegation model.

Now let’s imagine that a plugin cannot work with the version that was “accidentally” imposed by the class loading hierarchy, so the standard model becomes a problem. Then why not turn things around and let the plugin find its version with preference over the provided one?

This is exactly what many Java EE server developers thought as well. And it’s an incredibly bad solution to the problem.
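
The typical implementation of that idea is a “local-first” (child-first) loader along these lines – a sketch of the pattern, not a recommendation, as the rest of this post argues:

import java.net.URL;
import java.net.URLClassLoader;

public class LocalFirstLoader extends URLClassLoader {

    public LocalFirstLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            try {
                c = findClass(name);              // 1. search locally first...
            } catch (ClassNotFoundException e) {
                c = getParent().loadClass(name);  // 2. ...and only then ask the parent
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}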

Imagine you have a parent/child class loader setup, where the parent exposes some API with a class B (named “B”) that uses another class A (named “A”). Secondly, assume that the child has some class C that uses a class A' with the same name as A, “A”. Because of a local-first configuration, C indeed uses A'. This was set up due to some problem C had with the exposed class A of the parent.

(Figure: local-first class loader setup with “A” defined in both parent and child)

Suppose that C can provide instances of A' and you want to use that capability at some later time. At that later time, an innocent

C c = new C();                 // C was linked against the child's A'
B b = new B();                 // B was linked against the parent's A
b.doSomethingWithA(c.getA());  // hands an A' to code expecting the parent's A

will shoot you with a loader constraint violation (a LinkageError), because A and A' are incompatible from the JVM’s perspective – which is completely invisible in the code.

At this level you might say that’s no big deal. In practice, however, this happens somewhere deep down in some third-party lib. And it happens at some surprising point in time.

Debug that!


Working on z2env Version 3

Despite its fantastic qualities as a development and execution environment, z2’s adoption is very low. That of course does not at all stop us from improving it further (as we are actively benefiting from it anyway).

Whenever I talk about z2, the feedback is typically in one of two categories.

The first one is the “I don’t get it”-category.

There was a time when running builds was such a natural ingredient of software development to me that I would have been in that category as well. So I forgive them their ignorance.

The other category is the “Great idea – … too bad I cannot use it”-category.

Being a disruptive approach, and knowing how change-averse the development community is (contrary to common belief), it is natural that z2 has to fight for adoption. Specifically, the more profound critique of z2 is that it is too big, too proprietary, too non-standard.

So this is what version 3 is all about:

Less and more focussed

The one thing z2 is about is removing obstacles between code and execution. You should only think about code, modules, software structure. In order to enhance the “do one thing and do it well” qualities of z2, we will strip off capabilities whose value may not be totally obvious (like, for example, z2’s JTA implementation or the support for worker processes) and either drop them completely or move them into add-ons.

Better and Friendlier Open Source

Z2 has always been open source. In version 3 all package names will be “org.z2env” and, possibly more interesting than that cosmetic change, we will make sure there is no use of libraries with a problematic license like the GPL. Only Apache 2 or compatible licenses will be included.

Integrating with Tomcat

Previously, z2 embedded Jetty as its preferred Web Container. Jetty is a great Web container and its embeddability is (rightfully) legendary. The vast majority of Java developers use Tomcat though.

With version 3 we found a cool way of hooking z2 up with your ordinary Tomcat installation and its configuration, so that Web applications defined in z2 work next to whatever else you have deployed.

If TomEE did not make such harsh structural assumptions about application deployment – assumptions we cannot agree with, much less adhere to – we would even have EJBs in z2.

That is no big deal though – as I have the vague feeling that EJB enthusiasts are probably even less likely to adopt z2.

Getting Started

Enough talk! While there is still a lot to do (porting the Spring add-on and all the sample applications), a simple Getting Started guide can be found here:

https://redmine.z2-environment.net/projects/z2env/wiki/Getting_Started

Willing to invest 15 minutes into something cool? Here it is!

Feedback greatly welcome!