Software Design Part 4 – Modularization

This is a follow-up on the posts Software Design – The Big Picture (Intro) and Software-Design – Part 2: Structuring For Control Flow, as well as Software Design Part 3 – Working With Persistent Data.

One of the fundamental problems of software development (and not only of software development) is that a) humans are really bad at managing complexity and b) code becomes really complex quickly.

The number one reason behind code going bad is that we stop being able to grasp how it actually works – so much so that we become afraid to change it structurally (i.e. to refactor it). Possibly contrary to intuition, code that has reached that state is essentially a car that lost its steering and – if at all – still moves out of inertia. Not good!

Obviously there must be a way around our limited intellectual capacity – after all, we see enormously complex systems at work around us. But are they really that complex?

The trick to managing complexity is to avoid it. And the trick to avoiding complexity is to build abstractions. Finally something we are quite good at.

Building abstractions happens everywhere from science to accounting. In software, the abstraction is a means to structure code and to decouple the code relying on an abstraction (such as a mobile app wanting to take a picture) from the details of the implementation of the abstraction (such as the hardware driver for the camera).

The same is true when creating an interface or a generic type in our favorite programming language so that we can make effective use of polymorphism to better structure some code.
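
As a minimal sketch of that decoupling (the names are hypothetical, not from the original post): the application depends only on the abstraction, while the driver details live behind it.

interface Camera {
    byte[] takePicture();
}

class PhotoFeature {
    // depends on the abstraction only; any Camera implementation will do
    private final Camera camera;

    PhotoFeature(Camera camera) {
        this.camera = camera;
    }

    void shoot() {
        // no knowledge of hardware driver details required here
        byte[] image = camera.takePicture();
        // ... store or display the image
    }
}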

You can look at modularization from different levels. For example, as a way of structuring the code of a library or application that is developed, packaged, and distributed as a whole from an (essentially) single source code folder – say, by arranging packages and folders in ways that help explain and understand responsibilities.

While maintaining a clear and instructive code structure is really important, it only carries so far. The reason is simple: As there are many, many more ways to screw up than there are to improve, any sufficiently large and non-usage-constrained code base is prone to rot by violations of abstractions and complexity creep.

This kind of local (if you will) modularization is not what I am considering in this post. Instead, I am talking about moving whole slews of implementation (modules) away from your code, so that at any time the complexity you actually need to deal with is manageable.

The means of modularization are abstraction, encapsulation, and information hiding. In practice, these are the outcome of the three steps below (sketched in code after the list):

  1. Coming up with an API (contract)
  2. Separating implementation details from its API (hiding details)
  3. Making sure implementation is neither visible nor impacting other parts of the system (encapsulate)
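
A rough Java illustration of these three steps (package and type names are made up for the example): the contract lives in a package meant for consumers, the details in one that consumers never see.

// file mymodule/api/ReportService.java – the contract (step 1)
package mymodule.api;

public interface ReportService {
    String render(String reportId);
}

// file mymodule/impl/ReportServiceImpl.java – the details, separated and encapsulated (steps 2 and 3)
package mymodule.impl;

import mymodule.api.ReportService;

public class ReportServiceImpl implements ReportService {
    @Override
    public String render(String reportId) {
        // implementation detail; consumers only ever see ReportService
        return "report:" + reportId;
    }
}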

How to Do It

I wrote a few posts on techniques and aspects of modularization. I will just enumerate the basics:

Re-Using and Extending

The two most notable patterns in contract building between modules are providing an API to be used by another module to invoke some function and, in contrast to that, providing an API that is to be implemented by another module so that it can be invoked. The latter is in many ways how Object-Oriented Programming factors into this story.
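
In code, the two directions might look like this (hypothetical interfaces, just to contrast the two patterns):

// (1) API provided by a module to be *called* by other modules
interface MailService {
    void send(String recipient, String subject, String body);
}

// (2) API provided by a module to be *implemented* by other modules,
// so that the providing module can call back into the extension
interface MessageListener {
    void onMessage(String message);
}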

See Extend me maybe…

Sharing and Isolating

Exposing a contract means sharing capabilities to be used by others. What is needed to implement the contract, however, should not only be invisible (so that it cannot be accidentally used), it should not affect other parts of the system by its presence either.

See Modularization is more than cutting it into pieces.

Refactoring

Looking at a large non-modularized code base can easily be overwhelming. I tried to come up with an algorithm to reduce complexity iteratively:

See A simple modularization algorithm.

Conclusion

This post looked at modularization as if it only applied to code. A good modular system should provide modularization capabilities for essentially all aspects it is used for though. If managing configuration is an important aspect, it simply means that configuration can be part of modules as well. The same goes for Web application resources or whatever else your platform of choice is typically used for. That is a core feature of the z2-environment.

Modularization is a rich topic. Doing it right – keeping a sustainable complexity level over time, regardless of solution size, by finding appropriate contracts and managing those contracts – is skillful craftsmanship.

Not paying attention and a lack of willingness to invest in structural maintenance easily lead to frustrating, endless "too little, too late" activities. A growing code base built on weak isolation requires a development discipline that is unrealistic to expect in most commercial circumstances.

Getting it right and seeing it work out, however, is a great experience of collaborative creation that I am fortunate to have been part of!

Z2-environment Version 2.9 is Available

Finally, Version 2.9 is available for download and use. Version 2.9 comes with some useful improvements.

Please check out the wiki and online documentation.

Support for Java 15

Version 2.9 requires Java 11, runs with Java versions up to 16, and supports a language level up to Java 15, based on the Eclipse Java Compiler ECJ 4.19 (#2088).

With Java 15, we finally have multi-line text blocks, saving us some painful reformatting when we need markup, code blocks, or long messages as string literals.

// imports and wrapper class added so the snippet compiles (JUnit 5 assumed)
import static java.lang.System.err;

import org.junit.jupiter.api.Test;

public class MultilineStringsTest {
	@Test
	public void multilineStrings() {
		// Text blocks are kind of
		// nice for mark-up, messages and code
		err.println("""
		create extension pg_stat_statements;
		select
		pd.datname,
		substring(pss.query,1,100) as query,
		calls,
		pss.rows as totalRowCount,
		(pss.total_time / 1000) AS duration,
		((pss.total_time / 1000)/calls) as "avg"
		from pg_stat_statements as pss
		join pg_database as pd on pss.dbid=pd.oid
		order by duration desc limit 20;
		""");
	}
}

Check out the JDK 15 Documentation for more on Java 15.

Upgrade to Jetty 10.0.1

This version now embeds Jetty 10.0.1 as its Web container (#2090). Jetty 10 is the last version supporting the Jakarta EE 8 namespace and the first to support the Servlet 4.0 API.

NOTE: With the next upgrade (Version 2.10) we will move on to Jakarta EE 9, which is NOT backwards compatible with previous versions of the Jakarta EE or Java EE APIs. This is mainly because package names change from "javax.*" to "jakarta.*" throughout the EE APIs.
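
To illustrate what that means for application code, using the Servlet API as an arbitrary example:

// Jakarta EE 8 / Java EE namespace, as supported by Jetty 10:
import javax.servlet.http.HttpServlet;

public class MyServlet extends HttpServlet {
    // with Jakarta EE 9, the same code needs to import
    // jakarta.servlet.http.HttpServlet instead
}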

See also Understanding Jakarta EE 9.

Supporting JUnit 5 (a.k.a. JUnit Jupiter)

This is arguably the coolest new feature in Z2. Z2 already included an extremely useful in-container testing feature, z2 Unit, that was built on JUnit 4. I described it in detail in In-system testing re-invented. It is so useful for integration testing of anything that may call itself a meaningful application that I could not imagine developing without it anymore.

Hence it was all the more painful that it took so long to support the JUnit 5 API. Compared to JUnit 4, JUnit 5 is not only completely different but also significantly more complex from the perspective of an extender. However, it is also architecturally cleaner and allows for more testing features and flexibility.

The new implementation of #2036, called z2 Jupiter, allows running remote integration tests transparently to the client (IDE, Ant, Jenkins, etc.) without compromising on JUnit 5 features in your tests – even more so than z2Unit did.

package mytest;

import org.junit.jupiter.api.Test;

import com.zfabrik.dev.z2jupiter.Z2JupiterTestable;

@Z2JupiterTestable(componentName="my_tests/java")
public class AZ2IntegratedUnitTest {

    @Test
    public void someTestMethod() {  
        System.out.println("Hello World!");
    }
}

I will describe the implementation approach in another blog post. For now, please check out How to Unit Test in Z2.

More…

Check out the version page for more details. Go to download and getting started in five minutes or check out some samples.


Z2-environment Version 2.8 is Available

Finally, version 2.8 is available for download and use. Version 2.8 comes with some useful improvements.

Please check out the wiki and online documentation.

Support for Java 13

Version 2.8 requires Java 9, runs with Java versions up to 13, and supports a language level up to Java 13, based on the Eclipse Java Compiler ECJ 4.14 (#2035).

Upgraded to Jetty 9.4.24

While that was rather a necessity to run on Java 13, it was also kind of nice to be up-to-date again (#2052).

Follow HEAD!

Previously it was kind of cumbersome to change Git Component Repository declarations when working with feature branches, or to make a connected system switch branches when implementing a system-local repo.

At this point, I probably lost you. Anyway, as z2 is self-serving its code and configuration, it would be really cool if switching branches were just that: switching branches and (of course) synchronizing z2. And that is true now.

A Git Component Repository declaration may now use "HEAD" as a reference. In that case, whatever the current branch of the repo is – Z2 will follow.

Remote Management Goodies

Z2 exposes metrics and basic operations via JMX. Via JMX or via the simple admin user interface, you can check on runtime health and trigger synchronization or a reload of worker processes, for example. Some things were – in practice – still rather user-unfriendly:

  • Synchronizing a remote installation from the command line;
  • Accessing the main log remotely.

There is now a simple-to-use command line integrated with Z2 that can be used to do just that: trigger some JMX-implemented function and stream back the log. Or simply stream the log continuously to a remote console.

Remote log streaming is also available from the admin user interface.

More…

Check out the version page for more details. Go to download and getting started in five minutes or check out some samples.


A Model for Distributed System-Centric Development

It's been a while and fall has been very busy. I am working on z2 version 2.8, which will bring some very nice remote management additions to simplify managing a distributed application setup. That was the motivation behind this post.

This post is about a deployment approach for distributed software systems that is particularly useful for maintenance and debugging.

But let's start from the beginning – let's start from the development process.

Getting Started

The basic model of any but the most trivial software development is based on checking out code and configuration from some remotely managed version control system (or Software Configuration Management system, SCM) to a local file system, updating it as needed and testing it on a local execution environment:

At least for the kind of application I care about, various versions – for development, testing, and productive use – are stored in version control. In whatever way, be it build-and-deploy or pull, the different execution environments get updated from changes in the shared SCM. Tagging and branching are used to make sure that latest changes are separated from released changes. Schematically, the real situation is more like this:

There are good reasons to have permanent deployments for testing and staging: In large and complex environments, a pre-production staging system may consist of a complex distributed setup that integrates with surrounding legacy or mocked third-party systems and has corresponding configurations. In order to collaboratively test workflows, check system configurations, and test with historic data, it is not only convenient but really natural to have a named installation to turn to. We call that a test system. But then:

How do you collaboratively debug and hotfix a distributed test system?

For compile-package-deploy technologies, you could set up a build pipeline and a distributed deployment mechanism that allows you to push changes you applied locally on your PC to the test system installation. But that would only be you. In order to share and collaborate with other developers on the test system, you need some collaborative change tracking. In other words, you should use an SCM for that.

Better yet, you should have an SCM as an integral part of the test system!

Using an SCM as Integral Part of the System

Here is one such approach. We are assuming that our test system has a mechanism to either pull changes from an SCM, or that there is a custom build-and-deploy pipeline to update the test system from a named branch. Using the z2-Environment, we strongly prefer a pull approach – due to its inherently better robustness.

From a test system's perspective we would see this:

Here "test-system" is the branch defining the current code and configuration of the test system deployment. We simply assume there is a master development branch and a release branch that is still in testing.

So, any push to "test-system" and a following "pull" by the test system leads to a consistently tracked system update.

Let's assume we are using a distributed version control system (DVCS) like Git. In that case, there is not only an SCM centrally and on the test system – your development environment has a just as capable SCM. We are going to make use of that.

Overall we are here now:

What we added in this picture is a remote reference to the test-system branch of the test system’s SCM from our local development SCM. That will be important for the workflows we discuss next.

The essence of our approach is that a DVCS like Git provides us with a common versioning graph spanning multiple repositories.
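
In Git terms, setting up that remote reference in the local development clone might look like this (the URL is, of course, hypothetical):

git remote add test-system ssh://test-host/repos/test-system.git
git fetch test-system
# the branch test-system/test-system is now part of the local versioning graph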

Example Workflows

Let's play through two main workflows:

  • Consistent and team-enabled update of the test system without polluting the main code line
  • Extracting fix commits from test commits and consolidating the test system

Assume we are in the following situation: In our initial setup, we have a main code line (master) and a release branch. Both have been pushed to our central repository (origin). The test system is supposed to run the release branch but received one extra commit (e.g. for configuration). We omitted the master branch from the test-system repository for clarity. In our development repository (local), we have the master branch, the release branch, as well as the test-system branch – the latter two from different remotes, respectively. We have the remote branches origin/master, origin/release, and test-system/test-system to reflect that. We will, however, not show those here unless that adds information:

In order to test changes on the test system, we develop locally, push to the test system repo, and have the test system update from there. None of that affects the origin repository. Let's say we need two rounds:

We are done testing our change with the test system. We want to have the same change in the release and eventually in the master branch.

The most straightforward way of getting there would be to merge the changes back into release and then into master. We did not write particularly helpful commit messages during testing, however. For the history of the release and the development branch we prefer some better commit log content. That is why we squash-merge the test commits onto the release branch and merge the resulting good commit into master.

After that we can push the release branch and master changes to origin:

While this leads to a clean history centrally, it puts our test system into an unfortunate state. The downside of a squash-merge is that there is no relationship between the resulting commit and the originating history anymore. If we now merged the "brown" commit into the test-system branch, we would most likely end up with merge conflicts. That may still be the best way forward, as it gets you a consistent relationship with the release branch and includes the testing information.

At times, however, we may want to "reset" the test system into a clean state again. In that case, we can do something that we would not allow on the origin repository: overwrite the test-system history with a new, clean history, starting at where we left off initially. That is, we reset the test-system branch, merge the release commit, and finally force-push the new history.
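
As a sketch of that sequence in Git commands (branch and remote names as in the example, the commit reference being a placeholder):

git checkout test-system
git reset --hard <commit-where-we-left-off>  # rewind the local test-system branch
git merge release                            # bring in the cleaned-up release state
git push --force test-system test-system     # overwrite the remote test-system history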

Now after this, the test system has a clean history – a history as we would have it when updating with release branch updates normally. None of what we did had any impact on the origin repository until we decided for meaningful changes.

Summary

What looked rather complicated was actually not more than equipping a runtime environment with its own change history and using some ordinary Git versioning "trickery" to walk through a code and configuration maintenance scenario. We turned an execution environment into a long-living system with a configuration history.

The crucial pre-requisite for any such scenario is the ability of the runtime environment to be updated automatically and easily from a defining configuration repository implemented over Git or a similar DVCS.

This is a capability that the z2-environment has. With version 2.8 we intend to introduce much better support for distributed update scenarios.

Z2-environment Version 2.7 is Available

I am happy to declare version 2.7 ready for download and use. Version 2.7 comes with a lot of small improvements and some notable albeit rather internal changes.

Please check out the wiki and online documentation.

Support for Java 11

Version 2.7 requires Java 9, runs with Java versions up to 12, and supports a language level up to Java 11, based on the Eclipse Java Compiler ECJ 4.10 (#2021).
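
The original post showed a code screenshot at this point; as a stand-in, here is a small made-up example of the Java 11 language level in action:

import java.util.function.BiFunction;

public class VarInLambdas {
    public static void main(String[] args) {
        // Java 11 allows 'var' for lambda parameters, e.g. to attach annotations
        BiFunction<Integer, Integer, Integer> sum = (var a, var b) -> a + b;
        System.out.println(sum.apply(1, 2)); // prints 3
    }
}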

Well… note the use of var in lambdas. The most noteworthy changes around Java 11, though, concern its support and licensing model. Please visit the Oracle Website for more details.

Updated Jetty Version

As the integrated Jetty Web container required an upgrade to run with Java 11 as well, z2 2.7 now includes Jetty 9.4.14 (#2027).

Robust Multi-Instance Operation

The one core feature of z2 is that any small installation of the core runtime is a full-blown representation of a potentially huge code base.

This is frequently used when running not only an application server environment but, from the same installation, also command line tools that can make direct use of possibly heavy-weight backend operations.

In development scenarios, however, code may have changed between executions, and z2 previously sometimes created resource conflicts between long-running tasks and freshly started executions. This has been fixed with #1491.

No more Home Layouts

The essential feature that makes it easy to set up a system whose execution nodes serve different but well-defined purposes within a greater, coherent whole is the concept of system states. System states express the grouping of features to be enabled in a given configuration and extend naturally into the component dependency chain that is a backbone of z2's modularization scheme.

Unfortunately Home Layouts, which defined what worker processes to run in a given application server configuration, duplicated parts of this logic but did not integrate with it. That has been fixed with issue #1981. Now, worker processes are simply components that are part of a home process dependency graph. In essence, while the documentation still mentions Home Layouts, a home layout is now simply a system state that serves as a home layout by convention.

More…

Check out the version page for more details. Go to download and getting started in five minutes or check out some samples.

 


Z2-environment Version 2.6 is ready – for download!

Finally. I am happy to declare version 2.6 ready for download and use.

Version 2.6 comes with a lot of small improvements and some due follow up on what changed in the Java world.

Aside from regular software maintenance, there is one bigger change: The z2-base distribution.

If you are wondering what you might be missing, read on!

Check out the updated wiki and online documentation.

Java 9 and Java 10 Support

One of the more obvious changes is that v2.6 requires Java 9 and runs with Java 9, 10, and 11 as well. Language-wise, this release is on Java 9 or Java 10, depending on the runtime used.

Java 9 introduced a new module system into the Java core model (see the Jigsaw Project). Unfortunately this system has even more flaws than OSGi had, as far as its usefulness for actual application development is concerned. Let's put it this way: You will absolutely not be bothered with Java 9 modularity when using Z2 modularity.

Z2 Distribution for Download

Beyond many useful upgrades and minor improvements, some shown below, the one important "innovation" is that z2 now has a download page and a downloadable distribution. One of Z2's defining features is to pull from local and online resources and prepare anything required for running all by itself. That philosophy was very visible in the way we promoted the use of Z2 previously: Only check out the core and pull anything else from (our) repositories. Problematic about that approach is that it made creating your own system an unnecessarily complicated procedure. Providing a distribution simplifies getting started on your own and gives you a clean set of assets to import. Last but not least, we provide you with a comprehensive overview of all included 3rd-party licenses.

More Desktop for the GUI

The Z2 GUI, which is really just a log container with buttons for the few interactions you need with Z2, is now a little more friendly to the eye by offering font scaling: press Ctrl and hit "+" or "-" or turn the mouse wheel.

Expert Features

Linked Components

Linked components work like symbolic file system links in that they move the visibility of a component definition to a different module and component name. The link may actually add additional information, such as dependencies, state participation, and more. (documentation)

Parameterized tests and suites in z2Unit

Z2Unit, the integrated JUnit-based testing kit for seamless, as-easy-as-ever in-container testing, had some gaps due to some omissions in JUnit's internal APIs. Z2Unit strives to bring the convenience of a local class test to deeply integrated, painless in-system testing. (blog)

Finer Control on Compile Order and Source-Jar filtering

Some optimizations have been added to provide finer control over the use of extension compilers (e.g. for AspectJ) for single Java component facets (API, impl, test). Previously z2 also made source JAR files visible to the application. This was neat for development but expensive and in some cases outright problematic at runtime. (documentation)

Clean Up of Third Party Libraries and Various Upgrades

Z2 now integrates Jetty 9.4.8 and JTA 1.2 for its built-in transaction support. Samples and sample dependencies have generally been upgraded to recent versions of e.g. Spring and Hibernate. Check out the version page.


Java 9 Module System – Useful or Not?

Actually… rather not.

I am currently working on preparing z2-Environment version 2.6. While it is not used by a lot of teams, where it is used we have large, modular code bases.

This blog is packed with posts on all kinds of aspects of modularization of largish software solutions. Essentially it all boils down to isolation, encapsulation, and sharing to keep complexity under control by separating inner complexities from the “public model” of a solution, in order to foster long-term maintainability and ability to change and extend.

Modularization is a means of preserving structural sanity and comprehensibility over time.

That said, modularization is a concern of developing and evolving software solutions – not libraries.

The Java 9 module system is, however, exactly that: a means to express relationships between JAR libraries within JAR libraries. I wrote about it a while back (Java Modularity – Failing once more?). Looking at it again, I found nothing that makes it look more useful.

First of all, very few libraries out there will have useful module descriptors – unless they work together trivially anyway. Inconsistent Maven dependencies are bad enough, but can usually be worked around in your own assembly. A bad or missing module descriptor essentially requires you to change an existing library.

Even if all was good, clean, and consistent with our popular third-party libraries, what problem would those module-infos and the module system actually solve for us?

The only effective means of really hiding implementation details – to the extent of keeping definitions completely out of visibility – and of exposing definitions in a very controlled way, even if other versions of identically-named definitions are present in the system, is still a class loader based modularization approach. Java 9 modularization does not preclude that. It does not add anything useful either, as far as I can tell.
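
For illustration, a minimal sketch of class loader based isolation (jar paths and class names are made up): each module gets its own loader, so identically-named types can coexist without seeing each other.

import java.net.URL;
import java.net.URLClassLoader;

public class IsolationSketch {
    public static void main(String[] args) throws Exception {
        ClassLoader parent = IsolationSketch.class.getClassLoader();

        // one class loader per "module", each over its own jars
        URLClassLoader moduleA = new URLClassLoader(new URL[] { new URL("file:modules/a/impl.jar") }, parent);
        URLClassLoader moduleB = new URLClassLoader(new URL[] { new URL("file:modules/b/impl.jar") }, parent);

        // same fully qualified name, two distinct class definitions
        Class<?> fromA = moduleA.loadClass("com.example.Service");
        Class<?> fromB = moduleB.loadClass("com.example.Service");
        System.out.println(fromA == fromB); // false – the definitions are isolated
    }
}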

Z2 v2.6 will not have explicit integration with the Java 9 module system for now – for lack of usefulness.


Z2-environment Version 2.5 Is Out

It took a while, but it finally got done. Version 2.5 of the z2-environment is out. Documentation and samples have been updated and tested.

Here is what version 2.5 was about:

More Flexibility in Component Configuration

A major feature of z2 is to run a system cluster strictly defined by central, version-controlled configuration. As there is no rule without an exception, some configuration is just better defined by the less static and local system runtime environment, such as environment variables or scripted system properties.

To support that better and without programming, component properties may now be expressed by an extensible expression evaluation scheme with built-in JEXL support. Expressions are evaluated upon first load of component descriptors after start or after invalidation at runtime.
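
To get a feeling for the flavor of such expressions, here is plain Commons JEXL at work (a sketch only – this is not necessarily z2's exact descriptor syntax, see the documentation for that):

import org.apache.commons.jexl3.JexlBuilder;
import org.apache.commons.jexl3.JexlContext;
import org.apache.commons.jexl3.JexlEngine;
import org.apache.commons.jexl3.MapContext;

public class JexlSketch {
    public static void main(String[] args) {
        JexlEngine jexl = new JexlBuilder().create();
        JexlContext context = new MapContext();
        context.set("env", System.getenv());

        // e.g. derive a repository branch from an environment variable, with a default
        Object branch = jexl.createExpression("env.BRANCH == null ? 'master' : env.BRANCH").evaluate(context);
        System.out.println(branch);
    }
}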

Some use-cases are:

  • Seamless branch or URL switching based on environment settings.
  • Dynamic evaluation of database config, in particular remote authentication based on custom evaluators or environment settings.
  • Declarative sharing of configuration across components.

Some aspects, such as dynamic evaluation of searchable properties, were not made dynamic due to the risk of unpredictable behavior. Future work may show that the concept can be extended further though.


Check it out in the documentation.

More Complete in-Container Test Support

Z2 offers a sleek, built-in way of running application-side in-container tests: z2Unit. Previously, the JUnit API had its limits in serializability over the wire – which is essential for z2Unit. JUnit improved greatly in that department, and after the corresponding adaptation of z2Unit, some previously problematic unit test runner combinations (such as z2Unit and parameterized tests) now work smoothly.


Check it out in the documentation.

Better Windows Support

Some very old problems with blanks in paths or parameter names finally got fixed. There is a straightforward command line specification syntax for worker processes that is (mostly) backward compatible.

Also, and possibly more importantly, system property propagation from Home to Worker processes is now fully configurable.

Check it out in the documentation.

Better Git Support

Z2 can read directly from Git repositories. However, previously only a branch could be specified as the content selector. Now any Git ref will do.


Check it out in the documentation.

There is some more. Please check out the version page, if you care.

What’s next:

The plans for 2.6 are still somewhat open. As the work on Version 3 will not make it into any production version – namespace changes are too harmful at this time – some useful structural simplifications implemented in 3.0 are being considered, such as:

  • Worker processes as participants in Home process target states (rather than a Home Layout)
  • Introducing a “root” repository that hosts any number of remote or “inline” repositories and so streamlines local core config
  • Supporting a co-located Tomcat Web Container as an alternative to an integrated Jetty Web Container
  • Component templates that provide declarative default configurations and so remove duplication (e.g. any Java component based on Spring+Spring AOP).

Thanks, and good luck in whatever you do that needs to be done right!


Java Modularity – Failing once more?

Like so many others, I have pretty much ignored project Jigsaw for some time now – assuming it would stay irrelevant for my work or slowly fade away and be gone for good. The repeated shifts in planned inclusion with the JDK seemed to confirm this course. Jigsaw started in 2009 – more than six years ago.

Jigsaw is about establishing a Java Module System deeply integrated with the Java language and core Java runtime specifications. Check out its goals on the project home page. It is important to note the fourth goal:

 

Make it easier for developers to construct and maintain libraries and large applications, for both the Java SE and EE Platforms.

 

Something Missing?

Lately I have run into this mail thread: http://permalink.gmane.org/gmane.comp.java.openjdk.jigsaw/2492

In that mail thread Jürgen Höller (of Spring fame) notes that in order to map Spring’s module layout to Jigsaw modules, it would be required to support optional dependencies – dependencies that may or may not be satisfied given the presence of another module at runtime.

This is how Spring supports its set of adapter and support types for numerous other frameworks that you may or may not use in your application: Making use of Java's late linking approach, it is possible to expose types that may not be usable without the presence of some dependent type, but that will not create a problem unless you actually start using them. That is, optional dependencies would allow Spring to preserve its way of encapsulating the subject of "Spring support for many other libraries" in one single module (or actually a jar file).
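
A small made-up example of that late linking at work (the metrics client type is hypothetical; compiling this requires its jar, running without it is fine as long as report() is never called):

public class OptionalMetricsSupport {
    public static void report(String name, long value) {
        // the third-party type below is only resolved when this method actually
        // runs; if its jar is absent and nobody calls report(), the rest of the
        // application is unaffected – only a call without the jar fails
        // (with a NoClassDefFoundError)
        com.example.metrics.MetricsClient.send(name, value);
    }
}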

In case you do not understand the technical problem, it is sufficient to note that anybody who has been anywhere near Java class loading considerations, as well as actual Java application construction in real life, should know that Spring's approach is absolutely common for Java infrastructure frameworks.

Do Jigsaw developers actually know or care about Java applications?

Who knows, maybe they simply forgot to fix their goals. I doubt it.

 

Module != JAR file

 

There is a deeper problem: the overloaded use of the term module and the belief of infrastructure developers in the magic of the term.

Considering the use of the module term in programming languages, it typically denotes some encapsulation of code, with some interface and rules on how to expose or require some other module. This is what Jigsaw focussed on, and it is what OSGi focussed on. It is what somebody interested in programming language design would most likely do.

In Java this approach naturally leads to using or extending the class loading mechanism to expose or hide types between modules (for re-use or information hiding, respectively), which in turn means inventing descriptors that describe use relationships (meaning the ability to reference types, in this case) and so on.

This is what Jigsaw does and this is what OSGi did for that matter.

It is not what application developers care about – most of the time.

There is an overlap in interest of course. Code modules are an important ingredient in application assembly. Dealing with duplicate type definitions of the same name (think different library versions) and separating API from implementation are essential to scalable, modular system design.

But knowing how to build a great wall is not the same as knowing how to build a great house.

From an application development perspective, a module is much rather a generic complexity management construct. A module encapsulates a responsibility; in particular, it should absolutely not be limited to code and is not particularly well-served by squeezing everything into the JAR form factor.

What we see here is a case of Application vs. Infrastructure culture clash in action (see for example Local vs. Distributed Complexity).

The focus on trying to find a particularly smart and technically elegant solution for the low-level modularization problem eventually hurts the usefulness of the result for the broader application development community (*).

Similarly, ignorance of runtime modularization leads to unmaintainable, growth-limited, badly deployable code bases as I tried to describe in Modularization is more than cutting it into pieces.

The truth is somewhere in between – which is necessarily less easily described and less universal in nature.

I believe that z2 is one suitable approach for a wide class of server-side applications. Other usage scenarios might demand other approaches.

I believe that Jigsaw will not deliver anything useful for application developers.

I wish you a happy new year 2016!


Ps.:

* One way of telling that the approach will be useless for developers is when discussions conclude that “tools will fix the complexity”. What that comes down to is that you need a compiler (the tool) to make use of the feature, which in turn means you need another input language. So who is going to design that language and why would that be easier?

* It is interesting to check out the history of SpringSource’s dm Server (later passed on to R.I.P. at Eclipse as project Virgo). See in particular the interview with Rod Johnson.

Z2 as a Functional Application Server

Intro

As promised, this is the first in a series of posts elaborating on the integration of Clojure with Z2. It probably looks like a strange mix; however, I believe it's an extremely empowering combination of two technologies sharing a lot of design philosophy. Clojure has brought me lots of joy by enabling me to achieve much more in my hobby projects, and I see how the combination of z2 and Clojure further extends the horizon of what's possible. I'd be happy if I manage to help other people give it a try and benefit in the same way I did.

The LISP universe is a very different one. It's hard to convince someone with zero previous experience to look at the strange thing with tons of parentheses written using Polish notation, so I am going to share my personal story and hope it resonates with you. I will focus on the experience, or how I "felt" about it. There is enough theory and intellectual knowledge on the internet already, and I will link to it where appropriate.

So, given this is clearly a personal and subjective view, let's put in some context.

Short Bio

I've been using Java professionally for 12+ years, predominantly in the backend. I've worked on application servers as well as on business applications in large, medium, and small organizations. Spring is also something I have been heavily relying on for the last 8 years. Same goes for Maven. I've used JBoss and done a bit of application server development myself, but when Spring Boot came up I fell in love with it.

Like every other engineer out there, my major struggle through the years has been to manage complexity: the inherent complexity of the business problem we have to solve, plus the accidental complexity added by our tools, our poor understanding of the problem domain, and our limited conceptual vocabulary. I have been crushed more than once under the weight of that complexity. Often my own and the team's share would be more than 50%. I have seen firsthand how a poorly groomed code base ends up in a state where the next feature is just not possible. This has real business impact.

The scariest thing about complexity is that it grows exponentially with size. This is why I strongly subscribe to the "code is liability" worldview. The same goes for organizations: the slimmer you are, the faster and further you can go.

Ways to deal with complexity

Now that the antagonist is clearly labeled, let’s focus on my survival kit.

#1 Modularization

One powerful way to get on top of complexity is divide and conquer, by using modularization. This is where z2 comes into the game. It has other benefits as well, but I would put its modularization capabilities as feature #1. Maven and Spring have been doing that for me through the years. On a coarser level, Tomcat and JBoss provide some modularization facilities as well; however, it is extremely rare in my experience that they are deliberately exploited.

Getting modularization right is hard on both ends:

  • The framework has to strike a balance between exercising control and enabling extensibility, otherwise it becomes impractical.
  • The component developers still have to think hard and define the boundaries of "things" while using the framework idioms with mastery. I haven't yet met a technology that removes this need. It's all about methodology and concepts (I dislike the pattern cult).

Discussing a more precise definition of what exactly modularization is, and why the well-known methodologies are too general to be useful as recipes, is too big a discussion for here.

My claim is that z2 strikes the best balance I have seen so far while employing a very powerful concept.

#2 Abstractions

Another powerful way is to use better abstractions. While modularization puts structure into chaos, the right abstractions reduce the amount of code and other artifacts, and hence the potential for chaos. Just like any other thing, not all abstractions are made equal, and I assume they can be ordered according to their power.

My personal definition of power: if abstraction A allows you to achieve the same result with less code than abstraction B, then it's more powerful. Of course, reality is much more hairy than this: you have to account for the abstraction's implementation, the investment in learning, long-term maintenance costs, and so on.

Alternative definition: if abstraction A allows you to get further in terms of project size and complexity (before the project collapses), it's more powerful.
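
To make the definition a bit more tangible, here is a toy Java-only comparison (my example, not the author's): the Streams abstraction achieves the same result as the explicit loop with less code and no mutable state.

import java.util.List;

public class AbstractionPower {
    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);

        // abstraction B: explicit iteration with mutable state
        int sumB = 0;
        for (int x : xs) {
            if (x % 2 == 0) {
                sumB += x * x;
            }
        }

        // abstraction A: the same result, declarative and shorter
        int sumA = xs.stream().filter(x -> x % 2 == 0).mapToInt(x -> x * x).sum();

        System.out.println(sumB + " == " + sumA); // 20 == 20
    }
}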

The abstractions we use on a daily basis are strongly influenced by the language. A language can encourage, discourage (due to ergonomics), or even blacklist an abstraction by having no native support for it. My claim here is that the Java language designers have made some very limiting choices, and this has a profound effect on overall productivity as well as on the potential access to new abstractions. Clojure, on the other side, has an excellent mix right out of the box, with convenient access to a very wide range of other abstractions.

The OO vs. FP discussion deserves special attention and will get it. I won't claim that Clojure is perfect – far from it. However, the difference in power I have experienced is significant, and a big part of that difference is due to a carefully picked set of base abstractions implemented in a very pragmatic way.

So, what's next?

Next comes the story of how Java and DDD helped me survive, and how JavaScript made me feel like a fool for wasting so many hours slicing problems the wrong way and worrying about insignificant things. Clojure will show up as well, you can count on this.

While you wait for the next portion, here are two links that have heavily influenced my current thinking:

  • Beating the averages – the blub paradox has been an eye-opening concept for me. I read this article for the first time in 2008 and kept coming back to it. It validated my innate tendency to be constantly dissatisfied with how things are and to look for something better. Paradoxically, it never made me try out LISP 🙂
  • Simple made easy – This is the presentation that, among other side effects, made me give Clojure a chance. This presentation probably has the best return on investment for an hour spent in front of the screen.