Z2-environment Version 2.5 Is Out

It took a while, but it finally got done. Version 2.5 of the z2-environment is out. Documentation and samples have been updated and tested.

Here is what version 2.5 was about:

More Flexibility in Component Configuration

A major feature of z2 is to run a system cluster strictly defined by a central, version-controlled configuration. As there is no rule without an exception, some configuration is just better defined by the less static and local system runtime environment, such as environment variables or scripted system properties.

To support that better and without programming, component properties may now be expressed by an extensible expression evaluation scheme with built-in JEXL support. Expressions are evaluated upon first load of component descriptors after start or after invalidation at runtime.

Some use-cases are:

  • Seamless branch or URL switching based on environment settings.
  • Dynamic evaluation of database config, in particular remote authentication based on custom evaluators or environment settings.
  • Declarative sharing of configuration across components.
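To illustrate the general idea – not z2's actual JEXL-based syntax; the `${env:…}`/`${sys:…}` placeholder notation and all names below are made up for this sketch – a minimal property resolver that pulls values from the environment or system properties could look like this:

```java
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: resolve ${env:NAME} and ${sys:name} placeholders in
// component property values. The real z2 expression scheme (JEXL-based)
// differs – see the z2 documentation.
public class PropertyResolver {
    private static final Pattern EXPR = Pattern.compile("\\$\\{(env|sys):([^}]+)\\}");

    // environment lookup is injectable so it can be faked in tests
    private final Function<String, String> env;

    public PropertyResolver(Function<String, String> env) {
        this.env = env;
    }

    public String resolve(String value) {
        Matcher m = EXPR.matcher(value);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String resolved = m.group(1).equals("env")
                ? env.apply(m.group(2))          // e.g. ${env:DB_HOST}
                : System.getProperty(m.group(2)); // e.g. ${sys:user.home}
            m.appendReplacement(out, Matcher.quoteReplacement(resolved == null ? "" : resolved));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

A component property such as `db.url=jdbc:mysql://${env:DB_HOST}/app` would then be resolved once when the component descriptor is loaded.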

Some aspects, such as dynamic evaluation of searchable properties, were not made dynamic due to the risk of unpredictable behavior. Future work may show that the concept can be extended further though.


Check it out in the documentation.

More Complete in-Container Test Support

Z2 offers a sleek, built-in way of running application-side in-container tests: z2Unit. Previously, the JUnit API had its limits in serializability over the wire – which is essential for z2Unit. JUnit improved greatly in that department, and after the corresponding adaptation of z2Unit some previously problematic test runner combinations (such as z2Unit with parameterized tests) now work smoothly.


Check it out in the documentation.

Better Windows Support

Some very old problems with blanks in paths or parameter names finally got fixed. There is now a straightforward command line specification syntax for worker processes that is (mostly) backward compatible.

Also, and possibly more importantly, system property propagation from the Home process to Worker processes is now fully configurable.

Check it out in the documentation.

Better Git Support

Z2 can read directly from Git repositories. Previously, however, only a branch could be specified as the content selector. Now any Git ref will do.
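A repository declaration might then look like this – the property names here are purely illustrative, not z2's actual descriptor keys (those are in the documentation):

```
# illustrative only – not z2's actual descriptor keys
gitcr.uri=https://git.example.com/system.git
# any Git ref works now: a branch head, a tag, or any other ref
gitcr.ref=refs/tags/v2.5
```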


Check it out in the documentation.

There is some more. Please check out the version page, if you care.

What’s next:

The plans for 2.6 are still somewhat open. As the work on version 3 will not make it into any production version – the namespace changes are too harmful at this time – some useful structural simplifications implemented in 3.0 are being considered, such as:

  • Worker processes as participants in Home process target states (rather than a Home Layout)
  • Introducing a “root” repository that hosts any number of remote or “inline” repositories and so streamlines local core config
  • Supporting a co-located Tomcat Web Container as an alternative to an integrated Jetty Web Container
  • Component templates that provide declarative default configurations and so remove duplication (e.g. for any Java component based on Spring+Spring AOP).

Thanks, and good luck in whatever you do that needs to be done right!


A Web App Wrapper or “Why are Web Applications Less Likeable Than Native Applications – Still?”

In between, something completely different.

I use web apps in my daily work. Heavily, if not mostly – except maybe my IDE and the occasional MS Office session. But for reasons that I find not obvious, they are still not on par with native apps. This is not due to lack of responsiveness or desktop integration. In terms of user experience, there is very little that the web apps I use lack. And still – if available, I would rather choose the native alternative. So…

Why is it that web apps are still not as “likeable” as native apps?

A few weeks ago Mozilla Thunderbird, the friendly companion of many years, finally became unusably slow for me. As MS Outlook is no option for me, I started looking for an alternative that would be fast at startup and while flipping and searching through mail, would run on Linux, AND would have well-working calendar integration. There are numerous promising candidates for the first two requirements. But, strangely enough, it seems that calendar support is a tough problem.

But then, my e-mail as well as my calendar are perfectly accessible via a web interface. It is just that I did not use it that much – although it is fast, responsive, usable the same on all machines, and obviously OS-independent (and made by Google). Duh!

So why not use that instead of a dedicated native client?

It turns out that what really turns me off is that the web client exposes you to a thoroughly fleeting user experience:

  • As your desktop gets cluttered with open browser tabs, usually the sensible way out is to close them all. Your important stuff got closed as well.
  • You use multiple accounts, but your browser manages only one session at a time.
  • You want to have the right stuff open at startup – not nothing, and not whatever happened to be open last time – and you want to have multiple such configurations.

None of this seems unreasonable. And yet I have not found anything that does just that for me.

Ta da!

As a conclusion I looked into “how to wrap my favorite web apps into a native application”. Not for the first time – but this time with the necessary frustration to see it through. Such a “wrapper” should fix the problems above and otherwise do absolutely nothing beyond what is absolutely required. Here is the result:

https://github.com/ZFabrik/z-front-side

How does it work?

It is based on Electron – that is, it is essentially a scripted Chromium browser. It is very basic and does very little beyond showing a few site buttons, preloading some of them (always the same ones), and it can be started several times for different “partitions” – which implements the multi-session capability.

I have been using it with two different configurations (shared across all machines) and two partitions (private/work) for a few weeks now, and finally the five to ten web apps I use all the time, every day, feel completely integrated with the overall desktop experience – just like any other native application.

Feel free to use, enhance, copy whatever.

From Here to the Asteroid Belt (I)

When I came up with the title line, I had a completely different conclusion in mind. It’s a nice line though. In contrast to the conclusion, it stayed.

Oh and by the way: Spring is finally here:

[photo: spring at the office – so much work lately, so little time to enjoy it]

This is one of those “what’s the right tool for the problem” posts. Most people, me being no different, try to use the tools they know best for essentially any problem at hand. And that is good instinct. It is what people have always done, and obviously they did something right. Knowing a tool well is of great value and typically beats, in effectiveness, the use of a tool that might be more powerful – if used correctly – but that you are not an expert at.

At scale however, when building something more complex or widely distributed, tool choice becomes decisive and intrinsic qualities such as simplicity, reliability, popularity, platform independence, performance, etc. may outweigh the benefits of tool expertise.

What I want to look at specifically is the applicability of a programming and execution platform for deployment scenarios ranging from in-house or SaaS deployments to massively distributed stand-alone applications such as mobile apps or desktop applications.

The latter two form the endpoints of both the custom vs. non-custom development scale and the non-distributed vs. arbitrarily distributed scale.

The rules are pretty clear:

In-house/SaaS: Complete control. The system is the application is the solution. There is no customization or distribution problem because everything is (essentially) 100% custom and 0% distributed.

Mobile/Desktop: No control over the single instance that is executed somewhere in the wild. Hard to monitor what is going on, minimal to no customization, potentially infinitely many instances in use concurrently.

But what about the places in between? The customized business solutions that drive our economic backbone from production sites to warehouse solutions, from planning to financials, from team productivity to workflow orchestration?

[diagram: deployment scenarios between in-house/SaaS and mobile/desktop]

Let’s say you have an application that is part standard solution (to be used as is) but typically requires non-trivial customization, adaptation, extension to be effectively useful.

What are the options?

Option C: Maintain a code line per instance or customer

That is (still?) a popular method – probably because it is simple to start with and it makes sure the original developer is in complete control.

That is also its downside: it does not scale well into any sort of eco-system or licensing model that includes third parties. For a buyer it means 100% dependency on a supplier that most likely got paid dearly for a customer-specific modification and will want to be paid again for any further adaptation and extension.

Option P: Build a plugin model on top of a binary platform API

That is the model chosen for browsers and similar applications. It works very well as long as the platform use-case is sufficiently well defined and the market is interesting enough.

It obviously requires significant investment in feature-rich and stable APIs, in an effective plug-in model, in a development approach for plug-ins, and in a distribution channel or bundling/installation model.

In essence you build a little operating system for some specific application case – and that is simply not an easy and cheap task to do right.

Option S: Ship (significant) source code and support extension and customization on site

This approach has several strong advantages: You can supply hotfixes and highly specific customizations with minimal interference. Customization is technically not limited to particular functions or APIs. And there is no extra cost per installation on the provider side compared to Option C.

It assumes however that the ability to change, version, and deploy is built in and that the necessary tools are readily available. As the code life cycle is now managed on site, some attention needs to be paid to handling it cleanly.

From a consumer’s point of view it reduces dependency and (leaving legal considerations aside) technically enables inclusion of third-party modifications and extensions.

Scenario Determines Tool

In part II we will look at how the three different scenarios above translate into tool approaches. Stay tuned.


A simple modularization algorithm

Lately I worked on breaking up a module that had grown too big. It had started to feel hard to maintain, and getting oriented in the module’s code felt increasingly cumbersome. As we run tests by module, the automated tests triggered by check-ins started taking too long, and with several developers working on the code of this one module, failing module tests became harder to attribute.

In other words: it was time for some module refactoring, some housekeeping.

There was a reason, however, why the module had not been broken up already: it had some lurking dependency problems. Breaking it up would mean changing other modules’ dependencies just because – which felt arbitrary – and there was still re-use code to be made accessible to any split-off.

Comparing with past experience, this is the typical situation where everybody feels that something needs to be done, but it always turns out to be a little too risky and unpredictable, so that no one really dares. And after all: you can always push things a bit further still.

As that eventually leads to a messed-up, stalling code base, and we are smart enough (or simply small enough?) to acknowledge that, we made the decision to fix it.

Now – I have done this kind of exercise on and off. It has some unpleasantly tiring parts and overall feels a little repetitive. Shouldn’t there be some kind of algorithm to follow?

That is what this post is about:

A simple modularization algorithm

Of course, as you will notice shortly: we cannot magically remove the inherent complexity of the problem. But we can put it into a frame that takes out some of the distracting elements:

[diagram: overview of the three steps of the algorithm]

Step 1: Group and Classify

It may sound ridiculous, but the very first thing is to understand what is actually provided by the current module’s code. This may not be as obvious as it sounds. If it were clear and easy to grasp, you most probably would not have ended up in the current mess anyway.

So the job to do is to classify contents into topics and use-cases, e.g.:

  • API definitions. Possibly API definitions that can even be split into several APIs
  • Implementation of one or more APIs for independent uses
  • Utility code that exists to support implementations of some API

At this stage, we do not refactor or add abstraction. We only assess content so that we end up with a graph of code fragments (a class, a group of classes) and their dependencies. Note: the goal of the exercise is not to get a UML class diagram. Instead we aim for groups that can be named by what they are doing: “API for X”, “Implementation of job Y”, “Helper classes for Z”.

Most likely the result will look ugly. You might find an intermingled mess of some fifty different background service implementations that are all tied together by some shared wiring registry class that wants to know them all. You might find some class hierarchy that is deeply cluttered with business-logic-specific implementation, where extending it further is the only practical way of enhancing the application. Remember: if it were not for any of these, you would not be here.

Our eventual goal is to change and enhance the resulting structure in a way that allows us to form useful compartments and to turn a mess into a scalable module structure:

[diagram: from an intermingled mess to a scalable module structure]

That’s what step 2 and step 3 are about.

Step 2: Abstract

The second step is the difficult piece of work. Looking at the graph resulting from step 1, it should be easy to categorize sub-graphs into one of the following categories:

  1. many of the same kind (e.g. many independent job implementations),
  2. undesirably complex and/or cyclic
  3. a mix of the two

If only the first holds, you are essentially done with step 2. If there is actual unmanageable complexity left – which is why you are here – you now need to start refactoring to get rid of it.

This is the core of the exercise and where you need to apply your design skills. This comes down to applying software design patterns ([1]), using extensibility mechanisms, and API design. The details are well beyond the scope of this post.

After you have completed one such abstraction exercise, repeat step 2 until no cases of categories 2 and 3 are left.

Eventually you should be left with a set of reasonably sized, well-defined code fragments that form a directed acyclic graph of linking dependencies.

[diagram: fragment graph after abstraction – acyclic, with delegation interface A’]

(For example, removing the cycle and breaking the one-to-many dependency was achieved by replacing A with a delegation interface A’ and some lookup or registration mechanism.)
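The delegation move from the example can be sketched in Java – all type and method names below are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of breaking a cycle: instead of A referencing its many users
// directly, A's fragment owns only an interface (the "A'" of the example);
// users register themselves, so the dependency points one way only.
interface Handler {
    String handle(String event);
}

// A's side: knows only the Handler interface, not its implementations.
class HandlerRegistry {
    private final List<Handler> handlers = new ArrayList<>();

    public void register(Handler h) {
        handlers.add(h);
    }

    // dispatches an event to all registered handlers
    public List<String> dispatch(String event) {
        List<String> results = new ArrayList<>();
        for (Handler h : handlers) {
            results.add(h.handle(event));
        }
        return results;
    }
}

// B's side: depends on A's interface – no cycle back into B from A.
class AuditHandler implements Handler {
    public String handle(String event) {
        return "audit:" + event;
    }
}
```

The wiring registry class from the “ugly result” above turns into exactly such a registry – except that it no longer needs to know any concrete implementation.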

Step 3: Arrange & Extract

After completing step 2 we are still in one module. Now is the time to split the fragments up into several modules, so that we can eventually reap the benefits: less to comprehend at a time, clearer placement of new implementations, a structure that has come back to manageability – provided, of course, that you did a good job in step 2 (bet you saw that coming). This post is not about general strategies for and benefits of modularization. But there is plenty in this blog (see below) and elsewhere.

Given our graph of fragments from step 2, make sure it is topologically ordered in the direction of linking dependency (in the example, from upper left to lower right).
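Checking such an ordering can itself be automated. A minimal sketch using Kahn’s algorithm – fragment names and the map-based graph representation are invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal topological sort over fragment dependencies (Kahn's algorithm).
// An edge points from a fragment to the fragments it links against.
class FragmentGraph {
    static List<String> topoOrder(Map<String, List<String>> deps) {
        // count incoming edges per node
        Map<String, Integer> inDegree = new HashMap<>();
        deps.keySet().forEach(k -> inDegree.putIfAbsent(k, 0));
        for (List<String> targets : deps.values())
            for (String t : targets)
                inDegree.merge(t, 1, Integer::sum);

        // start with nodes nothing depends on
        Deque<String> ready = new ArrayDeque<>();
        inDegree.forEach((n, d) -> { if (d == 0) ready.add(n); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String n = ready.remove();
            order.add(n);
            for (String t : deps.getOrDefault(n, List.of()))
                if (inDegree.merge(t, -1, Integer::sum) == 0)
                    ready.add(t);
        }
        // leftover nodes mean a cycle survived step 2 – go back and abstract
        if (order.size() != inDegree.size())
            throw new IllegalStateException("cycle detected - back to step 2");
        return order;
    }
}
```

If the sort throws, you know step 2 is not actually finished.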

Now start extracting graph nodes into modules. Typically this is easy, as most of the naming and abstraction effort was done in the previous steps. When starting out, you probably also had other constraints in mind, like extensibility patterns or different life cycle constraints – e.g. some feature being part of one deployment but not another. These all play into the extraction effort.

The nice thing is: at this point, having the graph chart at hand, the grouping can easily be altered again.

Repeat this effort until done:

[diagram: fragments extracted into modules]

Enjoy!

References

  1. Software Design Patterns (Wikipedia)
  2. Directed Acyclic Graphs (Wikipedia)
  3. Dependency Management for Modular Applications
  4. Modularization is more than cutting it into pieces
  5. Extend me maybe…

This is not OO!


Once in a while I am part of a discussion – typically more of a complaint – that a certain design is not OO. OO as in object-oriented.

While that is sometimes not a well-thought-through statement anyway, there is something to it. I do not have a problem with a design not being OO though, and I am not sure why anybody would – as if OO were something valuable without justification.

Of course it is not. And many have written about it.

In the field that I am mostly concerned with – data-driven applications, if not “the same old database apps” – I would go as far as saying: of course it is not OO. The whole problem is so not OO – no wonder the application design does not breathe OO.

Unlike with a desktop application, where what you expect to interact with as a user translates kind of naturally, at least on a naive level, into an object-oriented design, this is the case with data-driven applications only on a very, very abstract level.

That requires some clarification. The term “data-driven application” hardly makes sense in the singular form. There are always several data-driven applications – otherwise you would hardly mention the “data” in the term. As the term suggests, the assumption is that there is a highly semantic, well-defined (not necessarily as in “good”) data model that makes sense in its own right. And there are many applications that make sense of it, provide ways of modification, and analyse it or cross-connect it with other data and applications.

It is not far from here to a classic:

Data outlive applications

Or as a corollary:

Data is not bound to its methods, methods are bound to their data.

But that is what OO in data-driven applications tends to do: it not only ties methods to data – it also tends to tie data to methods. As, for example, in OR mapping – if you want to call that OO.

That is of course nonsensical and would undermine all modularization and development scale-out efforts – unless, of course, it is the data of essentially a single application. Then, and only then, it makes sense to consider the data the state of the objects that in turn describe its methods.

It is still perfectly meaningful to use object orientation as a structuring means offered by a programming language. Objects will typically not represent database-backed state – other than as so-called value objects. But that by no means diminishes the usefulness of object-oriented language features.

Not with the target audience anymore?

Lately, when I listen to talks on new and hot stuff for developers, be it in person or on the internet, I have the feeling that it cannot possibly be me who is being talked to.

It is not just like having seen the latest web development framework over and over again – it is that it is only ever web frameworks, sort of. It is the unpleasant feeling that there is a hell of a lot of noise about stuff that matters very little.

This post got triggered when my long-time colleague Georgi told me about Atomist, the new project by Rod Johnson of Spring Framework fame. Atomist is code generation on steroids and will no doubt get a lot of attention in the future. It allows you to create and re-use code transformations to quickly create or alter projects rather than, say, copy-pasting project skeletons.

There may well be a strong market for tools and practices that focus on rapidly starting a project. And that is definitely good for the show. It is really far away from my daily software design and development experience though, where getting started is the least problem.

Problems we work on are the actual business process implementation, code and module structure, extensibility in projects and for specific technology adaptations, data consistency and data scalability, point-in-time recovery, and stuff like “what does it mean to roll this out into production systems that are constantly under load?”.

I am not saying that there is no value in frameworks that can do some initial stuff, or even consistently alter some number of projects later (can it? Any better than a normal refactoring?) – but over the lifetime of a project or even a product this seems to add comparatively little value.

So is this because it is just so much simpler to build and market a new web framework than anything else? Is there really so much more demand? Or is this simply a case of the streetlight effect?

I guess most of the harder problems that show up in the back ends of growing and heavily used applications cannot be addressed by technology per se – they can only be addressed by solution patterns (see e.g. Martin Fowler’s Patterns of Enterprise Application Architecture) to be adhered to. The back end is where long-lasting value is created though – far away from the front end. So it should be worthwhile to do something about it.

There is one example of an extremely successful technology that has a solid foundation, some very successful implementations, an impressively long history, and has become the true spine of business computing: The relational database.

Any technology that would standardize and solve problems like application life cycle management or workload management – just to name two – on the level that the relational model and SQL have standardized data operations should have a golden future.

Links

  1. Atomist
  2. Rod Johnson on Atomist
  3. Streetlight Effect
  4. Martin Fowler’s Patterns of Enterprise Application Architecture

Some more…

* it may as well be micro-services (a technical approach that aims to become the “Web” of the backend). But then look at stuff like this

The Human Factor in Modularization

Here is yet another piece on my favorite subject: keeping big and growing systems manageable. Or, conversely: why is that so hard and failing so often?

Why do large projects fail and why is productivity diminishing in large projects?

Admittedly, that is a big question. But here is some piece on humans in that picture.

Modularization – once more

Let’s concentrate on modularization as THE tool to scale software development successfully – and the lack of which drives projects into death march mode and eventually into failure.

In this write-up, modularization means all organization of software structure above the coding level and below the actual process design: all the organization of artefacts into building blocks for the assembly of solutions that implement processes as desired. In many ways it is the conceptual or even practical interface between the specified processes to implement and their actual implementation. So, in short: this is not about any specific modularization approach into modules, packages, bundles, namespaces, or whatever.

I hereby boldly declare that modularization is about

Isolation of declarations and artifacts – limiting their visibility to and harmful impact on other declarations, artifacts, and resources;

Sharing of declarations, artifacts, and resources with other declarations, artifacts, and resources in a controlled way;

Leading further development and extensibility by describing the structure and interplay of declarations, artifacts, and resources in an instructive way.

Depending on the specific toolset, we use these mechanisms to craft APIs and implementations and to assemble systems from modular building blocks.

If this was only some well-defined engineering craft, we would be done. Obviously this is not the case as so many projects end up as some messy black hole that nobody wants to get near.

The Problem is looking back at you from the Mirror

Getting a complex software system into shape is a task performed by a group of people and is hence subject to all the human flaws and failures we see elsewhere – but sometimes it also leads to one of the greater examples of successful teamwork, assembling something much greater than the sum of its pieces.

I found it appropriate to follow the lead of the deadly sins and divine virtues.

Let’s start by talking about hubris: the lack of respect when confronted with the challenge of growing a system, and the overestimation of one’s ability to fix structural problems on the go. “That’s not rocket science” has been heard more than once before hitting the wall.

This is followed closely, in rank and time, by symptoms of greed: the unwillingness to invest in structural maintenance. Not so much when things start off, but very much further down the timeline, when restructurings are required to preserve the ability to move on.

Different, but possibly more harmful, is astronaut architecting: creating an abundance of abstraction layers and “too-many-steps-ahead” designs. The modularization gluttony.

Taking pride in designing for unrequested capabilities while avoiding early verticals – showing off platforms and frameworks where solutions and early verticals are expected – is a sign of over-indulgence in modularization lust and a build-up of vainglory from achievements that can be useful at best but are secondary for value creation.

Now, sticking to a working structure and carefully evolving it for upcoming tasks and challenges requires an ongoing team effort and a practiced consensus. Little is as destructive as team members who work against a commonly established practice out of wrath, resentment, ignorance, or simple sloth.

Modularization efforts fail out of ignorance and incompetence

But it does not need to be that way. If human sins increase the likelihood of failure, virtues should work the opposite way.

Every structure is only as good as it is adaptable. A certain blindness for personal taste in style and people may help implement justice towards future requirements and team talent, and so improve development scalability. By offering insulation from harmful influences, a modularized structure can limit the impact of changes that are still to prove their value.

At times it is necessary to restructure larger parts of the code base that are either not up to the latest requirements or have been silently rotting due to unfitness for some time already. It can take enormous courage and discipline to pass through days or weeks of work for a benefit that is not immediate.

Courage is nothing without the prudence to guide it towards the right goal, including the correction of previous errors.

The wise thing, however, is to avoid getting driven too far by the momentum of gratifying design, by subjecting yourself to a general mode of temperance and patience.


MAY YOUR PROJECTS SUCCEED!

(Pictures by Pieter Brueghel the Elder and others)


IT Projects vs. Product Projects

A while back, when I was discussing a project with a potential client, I was amazed at how little willingness there was to invest in analysis and design. Instead of trying to understand the underlying problem space and projecting what would have to happen in the near and mid-term future, the client wanted an immediate solution recipe – something that would simply fix it for now.

What had happened?

I acted like a product developer – the client acted like an IT organization

This made me wonder about the characteristic differences between developing a product and solving problems in an IT organization.

A Question of Attitude

Here’s a little – incomplete – table of attitudes that I find characterize the two mindsets:

| IT Organization | Product Organization |
|---|---|
| Let’s not talk about it too much – make decisions! | Let’s think it through once more. |
| The next goal counts. No need to solve problems we do not experience today. | We want to solve the “whole” problem – once and forever. |
| Maintenance and future development is a topic for the next budget round. | Let’s try to build something that has defined and limited maintenance needs and can be developed further with little modification. |
| If it works for hundreds, it will certainly work perfectly well for billions. | Prepare for scalability challenges early. |
| We have an ESB? Great, let’s use that! | We do not integrate around something else. We are a “middle” to integrate with. |
| There is that SAP/ORCL consultant who claims to know how to do it? Pay him to solve it! | We need to have the core know-how that helps us plan for the future. |

I have seen these in action more than once. Both points of view are valid and justified: either you care about keeping something up and running within budget, or you care about addressing a problem space for as long and as effectively as possible. Competing goals.

It gets a little problematic though when applying one mindset to the other’s setting. Or, say, if you think you are solving an IT problem but actually have a product development problem at hand.

For example, a growing IT organization may discover that some initially simple job of maintaining software client installations on workstations reaches a level at which the procedures to follow have effectively turned into a product – a solution to the problem domain – without anybody noticing or paying due attention.

The sad truth is that you cannot build a product without some foresight and a long-lasting mission philosophy. Without growing and cultivating an ever refined “design story”, any product development effort will end up as the stereotypical big ball of mud.

In the case of the potential client of ours, I am not sure how things worked out. Given their attitude I guess they simply ploughed on making only the most obvious decisions – and probably did not get too far.

Conclusion

As an IT organization, make sure not to miss the point when a problem space starts asking for a product development approach – when it will pay off to dedicate resources and planning, to beat the day-to-day plumbing effort by securing key field expertise, and to maintain a solution with foresight.

Java Modularity – Failing once more?

Like so many others, I have pretty much ignored project Jigsaw for some time now – assuming it would stay irrelevant to my work or slowly fade away and be gone for good. The repeated shifts of its planned inclusion in the JDK seemed to confirm this course. Jigsaw started in 2009 – more than six years ago.

Jigsaw is about establishing a Java Module System deeply integrated with the Java language and core Java runtime specifications. Check out its goals on the project home page. It is important to note the fourth goal:

“Make it easier for developers to construct and maintain libraries and large applications, for both the Java SE and EE Platforms.”

Something Missing?

Lately I have run into this mail thread: http://permalink.gmane.org/gmane.comp.java.openjdk.jigsaw/2492

In that mail thread Juergen Hoeller (of Spring fame) notes that mapping Spring’s module layout to Jigsaw modules would require support for optional dependencies – dependencies that may or may not be satisfied by the presence of another module at runtime.

This is how Spring supports its set of adapter and support types for numerous other frameworks that you may or may not use in your application: making use of Java’s late linking approach, it is possible to expose types that are not usable without the presence of some dependent type but do not create a problem unless you actually use them. That is, optional dependencies would allow Spring to preserve its way of encapsulating the subject of “Spring support for many other libraries” in one single module (or actually a jar file).

In case you do not understand the technical problem, it is sufficient to note that anybody who has been anywhere near Java class loading considerations, as well as actual Java application construction in real life, should know that Spring's approach is absolutely common for Java infrastructure frameworks.

Do Jigsaw developers actually know or care about Java applications?

Who knows, maybe they simply forgot to fix their goals. I doubt it.

Module != JAR file

There is a deeper problem: the overloaded use of the term module and the belief of infrastructure developers in the magic of the term.

Considering use of the module term in programming languages, it typically denotes some encapsulation of code with some interface and rules on how to expose or require some other module. This is what Jigsaw focussed on and it is what OSGi focussed on. It is what somebody interested in programming language design would most likely do.

In Java this approach naturally leads to using or extending the class loading mechanism to expose or hide types between modules (for re-use or information hiding, respectively), which in turn means inventing descriptors that describe use relationships (meaning, in this case, the ability to reference types) and so on.

This is what Jigsaw does and this is what OSGi did for that matter.
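As a minimal illustration of what "extending the class loading mechanism" amounts to – a toy sketch, not z2's or OSGi's actual implementation – the following loader defines its own copy of one class instead of delegating to its parent. Two such loaders then hold two distinct types with the same name, which is exactly the duplicate-type situation module systems have to manage:

```java
import java.io.InputStream;

// Toy sketch: a class loader that defines its own copy of one chosen
// class instead of delegating to its parent. Each loader instance thus
// becomes the defining loader of a separate type with the same name.
class IsolatingLoader extends ClassLoader {
    private final String isolated;

    IsolatingLoader(String isolated) {
        super(IsolatingLoader.class.getClassLoader());
        this.isolated = isolated;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (!name.equals(isolated)) {
            return super.loadClass(name, resolve); // normal parent delegation
        }
        // Define our own copy from the class file bytes (.class resources
        // stay accessible even from named modules on JDK 9+).
        try (InputStream in = getResourceAsStream(name.replace('.', '/') + ".class")) {
            byte[] bytes = in.readAllBytes();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (Exception e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}

public class Isolation {
    public static void main(String[] args) throws Exception {
        String name = "javax.swing.JFrame"; // any non-java.* class works here
        Class<?> a = new IsolatingLoader(name).loadClass(name);
        Class<?> b = new IsolatingLoader(name).loadClass(name);
        System.out.println(a == b);                          // false: two loaders, two types
        System.out.println(a.getName().equals(b.getName())); // true: same name
    }
}
```

To the JVM, a type is the pair (name, defining loader) – which is why descriptors governing who may load what become necessary as soon as you modularize this way.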

It is not what application developers care about – most of the time.

There is an overlap in interest of course. Code modules are an important ingredient in application assembly. Problems of duplicate type definitions by the same name (think different library versions) and separation of API and implementation are essential to scalable, modular system design.

But knowing how to build a great wall is not the same as knowing how to build a great house.

From an application development perspective, a module is much rather a generic complexity-management construct. A module encapsulates a responsibility; in particular, it should absolutely not be limited to code, and it is not particularly well served by squeezing everything into the JAR form factor.

What we see here is a case of Application vs. Infrastructure culture clash in action (see for example Local vs. Distributed Complexity).

The focus on trying to find a particularly smart and technically elegant solution for the low-level modularization problem eventually hurts the usefulness of the result for the broader application development community (*).

Similarly, ignorance of runtime modularization leads to unmaintainable, growth-limited, badly deployable code bases as I tried to describe in Modularization is more than cutting it into pieces.

The truth is somewhere in between – which is necessarily less easily described and less universal in nature.

I believe that z2 is one suitable approach for a wide class of server-side applications. Other usage scenarios might demand other approaches.

I believe that Jigsaw will not deliver anything useful for application developers.

I wish you a happy new year 2016!

Ps.:

* One way of telling that the approach will be useless for developers is when discussions conclude that “tools will fix the complexity”. What that comes down to is that you need a compiler (the tool) to make use of the feature, which in turn means you need another input language. So who is going to design that language and why would that be easier?

* It is interesting to check out the history of SpringSource’s dm Server (later passed on to R.I.P. at Eclipse as project Virgo). See in particular the interview with Rod Johnson.

Z2 as a Functional Application Server

Intro

As promised, this is the first in a series of posts elaborating on the integration of Clojure with Z2. It probably looks like a strange mix, however I believe it's an extremely empowering combination of two technologies sharing a lot of design philosophy. Clojure has brought me lots of joy by enabling me to achieve much more in my hobby projects, and I see how the combination of z2 and Clojure further extends the horizon of what's possible. I'd be happy if I manage to help other people give it a try and benefit in the same way I did.

The LISP universe is a very different one. It's hard to convince someone with zero previous experience to look at the strange thing with tons of parentheses written in Polish notation, so I am going to share my personal story and hope it resonates with you. I will focus on the experience, or how I "felt" about it. There is enough theory and intellectual knowledge on the internet already, and I will link to it where appropriate.

So, given this is clearly a personal and subjective view, let's put it in some context.

Short Bio

I’ve been using Java professionally for 12+ years, predominantly in the backend. I’ve worked on application servers as well as on business applications in large, medium and small organizations. Spring is also something I have been heavily relying on for the last 8 years. The same goes for Maven. I’ve used JBoss and done a bit of application server development myself, but when Spring Boot came up I fell in love with it.

Like every other engineer out there, my major struggle through the years has been to manage complexity: the inherent complexity of the business problem we have to solve, plus the accidental complexity added by our tools, our poor understanding of the problem domain and our limited conceptual vocabulary. I have been crushed more than once under the weight of that complexity. Often my own and the team’s share would be more than 50%. I have seen firsthand how a poorly groomed code base ends up in a state where the next feature is just not possible. This has real business impact.

The scariest thing about complexity is that it grows exponentially with size. This is why I strongly subscribe to the “code is liability” worldview. The same goes for organizations: the slimmer you are, the faster and further you can go.

Ways to deal with complexity

Now that the antagonist is clearly labeled, let’s focus on my survival kit.

#1 Modularization

One powerful way to get on top of complexity is divide and conquer by means of modularization. This is where z2 comes into the game. It has other benefits as well, but I would put its modularization capabilities as feature #1. Maven and Spring have been doing that for me through the years. On a coarser level, Tomcat and JBoss provide some modularization facilities as well; however, in my experience it is extremely rare that these are deliberately exploited.

Getting modularization right is hard on both ends:

  • The framework has to strike a balance between exercising control and enabling extensibility, otherwise it becomes impractical.
  • The component developers still have to think hard and define the boundaries of “things” while using the framework idioms with mastery. I haven’t yet met a technology that removes this need. It’s all about methodology and concepts (I dislike the pattern cult).

A more precise definition of what exactly modularization is, and why the well-known methodologies are too general to be useful as recipes, is too big a discussion for here.

My claim is that z2 strikes the best balance I have seen so far while employing a very powerful concept.

#2 Abstractions

Another powerful way is to use better abstractions. While modularization puts structure into chaos, the right abstractions reduce the amount of code and other artifacts, and hence the potential for chaos. Like anything else, not all abstractions are made equal, and I assume they can be ordered according to their power.

My personal definition of power: if abstraction A allows you to achieve the same result with less code than abstraction B, then it’s more powerful. Of course, reality is much more hairy than this. You have to account for the abstraction’s implementation, the investment in learning, long-term maintenance costs and so on.

Alternative definition: if abstraction A allows you to get further in terms of project size and complexity (before the project collapses) it’s more powerful.

The abstractions we use on a daily basis are strongly influenced by the language. A language can encourage, discourage (due to ergonomics) or even blacklist an abstraction by having no native support for it. My claim here is that the Java language designers have made some very limiting choices, and this has a profound effect on overall productivity as well as on the potential access to new abstractions. Clojure, on the other side, has an excellent mix right out of the box, with convenient access to a very wide range of other abstractions.

The OO vs. FP discussion deserves special attention and will get it. I won’t claim that Clojure is perfect, far from it. However, the difference in power I have experienced is significant, and a big part of that difference is due to a carefully picked set of base abstractions implemented in a very pragmatic way.

So, what’s next?

Next comes the story of how Java and DDD helped me survive, and how JavaScript made me feel like a fool for wasting so many hours slicing problems the wrong way and worrying about insignificant things. Clojure will show up as well, you can count on that.

While you wait for the next portion, here are two links that have heavily influenced my current thinking:

  • Beating the averages — the blub paradox has been an eye-opening concept for me. I read this article for the first time in 2008 and have kept coming back to it. It validated my innate tendency to be constantly dissatisfied with how things are and to look for something better. Paradoxically, it never made me try out LISP 🙂
  • Simple made easy — This is the presentation that, among other side effects, made me give Clojure a chance. It probably has the best return on investment for an hour spent in front of a screen.