Updates in Modular Data

In a modular system, data also tends to be modularized. In the post Modularization And Data Relationships we looked at cross-subsystem and cross-domain data dependencies and how to make those available for efficient querying.

In this second part we discuss aspects of updating, in particular deleting, data that other data may depend on. Remember, we are considering a modular domain in which some central piece of data (we will call that dependency data) is referenced by extension modules and domain data (we call that dependent data) that were not specifically considered in the conception of the shared data.

In our example we chose a domain model of an Equipment management core system (the dependency data) that is referenced by extension modules and data definitions, in our case Inspections and Schedules (the dependent data), of a health care extension to that core system.

When equipment data changes or gets deleted, dependent inspection data may easily become invalid. For example, a change of equipment type may mean that inspection types or schedules need to be updated as well. And of course, inspections for deleted equipment will be obsolete altogether.

There is also the technical aspect of foreign key relationships between the inspection database table and the equipment database table. If the foreign key has become obsolete and unresolvable, all equipment information that was previously resolved by joining database tables via the foreign key relationship is gone and can no longer be presented to users.

Simply put, there are two choices of how to handle deletions:

  • Do not actually delete records, but only mark them as unavailable.
    In that case, the original data set is still available to dependent data and follow-up actions may be offered to users. But the domain model becomes more complicated (e.g. w.r.t. uniqueness constraints).
  • Delete and simply cope with it.
    That means: The inspection application needs to be coded throughout for the case that dependency data may be gone. And whatever is needed to supply users with the information necessary to plan follow-up actions needs to be stored with the dependent data.

Similarly for updates:

  • When dependency data is updated to an extent that dependent data becomes invalid, this situation needs to be discovered when the dependent data is visited again, so that whatever corrections are required can be planned and performed.

While these approaches look theoretically sound, there is much to be improved. Firstly, developing an application to cope with any kind of inconsistency implied by changes of dependency data will be rather complex in the general case.

Secondly, from a user perspective, it will typically be highly undesirable to even be allowed to perform updates that will lead to situations that require follow-up actions and repairs without being told beforehand.

Hence:

How can we generically handle updates of dependency data in a way that is not only consistent but also user-friendly?

Here is at least one approach:

A Data Lease System.

With that approach we use a shared persistent data structure that expresses the relationships between dependent and dependency data explicitly and that is known to all subsystems.

In the simplest case, a lease from a dependent data record of a dependency data record holds:

  1. The dependent data record key and a unique domain type identifier for which that key is valid.
    No other present or future domain use of that identifier should be possible.
  2. Likewise, the dependency data record key with a corresponding domain type identifier:

( <dependent type>, <dependent key>, <dependency type>, <dependency key>)

In our example this could be something like this:

( “inspection.schedule”, “11ed12f2-8c94-41a7-9143-8d4ff6070f”, “equipment.equipment”, “c5f92006-dc40-41aa-97d0-3ae709b4aca6”)

In other words: From a database perspective, the lease is simply a shared join table structure that is annotated with additional meta-information on “what” is being talked about.
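As a minimal in-memory sketch of such a lease structure (class and method names are made up for illustration; a real system would back this with the shared database table described above):

```java
// Hypothetical sketch of the lease structure described above. Names are
// illustrative assumptions, not taken from any concrete framework.
import java.util.ArrayList;
import java.util.List;

public class DataLeaseExample {

    // A lease row: (dependent type, dependent key, dependency type, dependency key)
    record DataLease(String dependentType, String dependentKey,
                     String dependencyType, String dependencyKey) { }

    // Shared lease store; in a real system this is the shared join table.
    static final List<DataLease> LEASES = new ArrayList<>();

    // Count dependents of a given dependency record, per dependent type.
    static long countDependents(String dependencyType, String dependencyKey,
                                String dependentType) {
        return LEASES.stream()
                .filter(l -> l.dependencyType().equals(dependencyType)
                        && l.dependencyKey().equals(dependencyKey)
                        && l.dependentType().equals(dependentType))
                .count();
    }

    public static void main(String[] args) {
        LEASES.add(new DataLease("inspection.schedule", "S-1",
                "equipment.equipment", "E-1"));
        LEASES.add(new DataLease("inspection.schedule", "S-2",
                "equipment.equipment", "E-1"));
        System.out.println(countDependents("equipment.equipment", "E-1",
                "inspection.schedule")); // prints 2
    }
}
```

In a database this lookup is of course just a `count(*)` over the lease table, grouped by the dependent type identifier.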

Updating lease information is delegated to a shared service that provides additional handling for one-to-many, many-to-one, or many-to-many relationships, as well as cleanup and generic check methods. From a read-access perspective, however, the principles of Modularization And Data Relationships can be applied in full, and the table structure should indeed be used as a join structure establishing the actual relationship.

That is, in the example, the health care to equipment relationship is now expressed via the data lease.

Now, if we have this system in place, numerous improvements are readily available.

Giving Users a Choice

First of all, the equipment application can now trivially determine whether there are dependent data sets, how many, and of what kind (via the additional domain type identifier).

Based on that information, the equipment application could offer users a choice before applying an update. For example, upon a request for deletion, the application may inform the user that the piece of equipment is still referenced by dependent data and refuse the deletion.

For updates, however, this does not remove the need to understand the implications for dependent data.

The approach becomes most useful if we pair it up with an extension point approach in which the domain type identifier proposed above is used to look up implementations of callback interfaces provided by the extension modules of the subsystem owning the dependent data.

Giving Power to the Extension

Once we can generically identify the owner of the lease, we can pass on some decision support to the dependent subsystem. In particular: Upon deletion or update, the dependent subsystem may be made aware and analyze exactly what the implications would be.

We can remove all need for lazy responses to changes of dependency data by involving the dependent subsystem in the update processing, both to validate the update and, where necessary, to handle it.

In our example, a change to an equipment type would be validated by the inspection application. The type change may be unacceptable, in which case the user should be informed that the update is not possible, or the user may be warned that the update would disable an inspection schedule. Or, in other cases, the user would be told that it is OK to proceed, but that some settings of dependent inspection data would be altered automatically.

Similarly, a deletion of an equipment record may be rejected, or the user may be told that all related inspection data would also be deleted as a consequence.
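Such a callback contract might be sketched as follows. All names here are assumptions for illustration; the actual interface would be part of the lease service's extension point definition:

```java
// Hypothetical callback interface an extension module could implement to take
// part in update processing of dependency data. Names are illustrative only.
import java.util.List;

public class LeaseCallbacks {

    // Result of asking a dependent subsystem about a pending change.
    enum Verdict { OK, OK_WITH_CHANGES, REJECT }

    // Implemented by the subsystem owning the dependent data; looked up via
    // the dependent domain type identifier stored in the lease.
    interface DependentDataHandler {
        // Called before dependency data is deleted.
        Verdict validateDelete(String dependencyType, String dependencyKey);
        // Called after the deletion was actually performed.
        void onDeleted(String dependencyType, String dependencyKey);
    }

    // Example implementation for "inspection.schedule": reject deletion while
    // schedules still reference the equipment record.
    static class InspectionScheduleHandler implements DependentDataHandler {
        private final List<String> referencedEquipmentKeys;

        InspectionScheduleHandler(List<String> referencedEquipmentKeys) {
            this.referencedEquipmentKeys = referencedEquipmentKeys;
        }

        public Verdict validateDelete(String dependencyType, String dependencyKey) {
            return referencedEquipmentKeys.contains(dependencyKey)
                    ? Verdict.REJECT : Verdict.OK;
        }

        public void onDeleted(String dependencyType, String dependencyKey) {
            referencedEquipmentKeys.remove(dependencyKey);
        }
    }

    public static void main(String[] args) {
        DependentDataHandler handler = new InspectionScheduleHandler(
                new java.util.ArrayList<>(List.of("E-1")));
        System.out.println(handler.validateDelete("equipment.equipment", "E-1")); // REJECT
        System.out.println(handler.validateDelete("equipment.equipment", "E-2")); // OK
    }
}
```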

In any case: Inspection data would stay consistent and no lazy checks would be required anymore.

There is still much to be defined for the specific software system of course. Enjoy designing!

Modularization And Data Relationships

A lot of posts in this blog are on structuring largish applications via one or another modularization approach. Size is not the only reason to modularize though. Another important reason to split into independent subsystems is optionality. For example, given a core product, industry-specific extensions may exist that rely on the core system’s APIs and data models but comprise enough code and data structures to justify an independent software life cycle. While code dependencies have been treated extensively in previous posts, we have not looked much at data relationships.

This post addresses data model dependencies between software subsystems of a modularized software system.

Data Model != Code

When we talk about data model dependencies between subsystems, we are talking about data types, in particular data types representing persistent data, that are exposed by one subsystem to be used by other subsystems.

Exposing data types, i.e. making knowledge of their definition available between subsystems, is typically done using an API contract – be it in a regular programming language, or in some data description language such as XML Schema.

Making data types available between subsystems is only one side of the story however. Eventually data described by the model will need to be exchanged and combined with other data. Typically, describing data and providing access to data is part of a subsystem API, and combining data from different subsystems is done by the caller.

In many cases, however, this is not good enough: If subsystems share the same database, losing the ability to query data across domain definitions of subsystems can be a real showstopper for modularization.

An Example

Consider the following hypothetical example: An application system’s core functionality is to manage equipment records, such as computers, printers, and some machinery. An extension to the core system provides functionality specific to the health care industry. In health care, we need to adhere to more regulation than the core system covers. For example, we need to observe schedules for inspections by third-party inspectors.

The health care extension hence has its own database structures that refer to data in the core system. It uses database foreign keys for that. That is, in its database tables we find fields that identify data records in the core system’s database tables.

Now, for a single inspection schedule that refers to one or more pieces of equipment, looking up the individual equipment via the core system poses no problem. Answering a question such as which ten pieces of equipment of a given health care inspection schedule have the most service incidents is a different case. Providing efficient, sortable access to any such combined data for end users via independent data retrieval is hard, if not pointless, in the general case. This is what joins in the relational database world are for after all.

Joining Data Between Subsystems

Given the construction above, we want to extend the core system’s API to not only support single data lookups but also offer a query abstraction for more clever data retrieval, in particular so that combined data queries, i.e. SQL joins, are computed on the database rather than in memory.

Unfortunately, a query interface that would include data defined outside of the core application’s data model can typically not be expressed easily.

We have a few choices on providing meaningful access to joinable data between subsystems however.

Exposing Data

For one, the core application may simply document database table or database view structures to be used by other subsystems for querying its data.

This way, the health care extension would extend its own database model by those portions of the core system’s database model that are a) of relevance to the extension and b) part of the new API contract that includes these data access definitions. This should only be used for read-only access, as there is no natural way for the extension to know what other data update and validation logic may be implemented in the core system.

Something more integrated is possible using the Java Persistence API, and similar (if not better) support should be available in Microsoft’s .NET Framework via the LINQ and LINQ to SQL features.

Views in Modular JPA

When using the Java Persistence API (JPA), an underlying relational data model is mapped onto a set of Java class definitions and some connecting relationships. Together, the mapping information and some additional provider- and database-specific configuration are called a persistence unit. At runtime, all operations of a JPA Entity Manager, such as performing queries and updates, run within the scope of a persistence unit.

In our example, we would have a core persistence unit for equipment management and one persistence unit for the health care extension. In principle, both would be private matters of the respective subsystem. We would not want the health care extension to make use of non-public persistence definitions of the core model, as that would break any hope for a stable contract between core and extension.

As such, there is no simple sharing mechanism for persistence units that would allow exposing a subset of the core persistence unit to other subsystems as part of an API contract.

A close equivalent of a simple database view that is still within the JPA model, however, are read-only JPA entity classes that can be included in an extension’s persistence unit definition.

That is, we would have the same data types used in different persistence units. For the extending subsystem, those types appear as a natural extension to its own data model and can hence be used in joining queries, while being defined in, and hence naturally fitting, the core’s domain model.
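As a sketch, such a read-only entity in the extension’s persistence unit might look like this. Table and column names are assumptions for illustration, not the actual core model; the lack of setters (and, where the provider supports it, a provider-level read-only marker) keeps the mapping one-way:

```java
// Sketch of a read-only JPA mapping of a core "equipment" table, included in
// the extension's persistence unit. Table and column names are assumptions.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "EQUIPMENT")
// With Hibernate, @org.hibernate.annotations.Immutable would additionally
// enforce read-only semantics at the provider level.
public class EquipmentView {

    @Id
    @Column(name = "ID")
    private String id;

    @Column(name = "TYPE", insertable = false, updatable = false)
    private String type;

    public String getId() { return id; }
    public String getType() { return type; }
    // No setters: the extension treats core data as read-only.
}
```

With a mapping like this, a JPQL join such as `select s from InspectionSchedule s join s.equipment e where e.type = :type` stays entirely within the extension’s persistence unit and is computed on the database.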

As a mix of Entity-Relationship and Class Diagram this would look like this:

Highlighting the scope of persistence units:

Now we are at a point where data access and sharing between subsystems is well-defined and as efficient as if it were not split into separated domain models.

It’s time to move on to the next level.

Data Consistency – or what if data gets deleted?

When split into subsystems, responsibility for data management gets split as well. Let’s take a look at our example.

In the health care extension to the equipment management system, inspection schedules, stored in the extension’s domain model, refer to equipment data stored in the core application’s domain model. Based on the ideas above, the health care extension can efficiently integrate the core data model into its own.

But then, what happens on updates or on deletions issued by the core application’s equipment management user interface? It would be simplistic to assume that there are no restrictions imposed by the health care extension. Here are some possible considerations:

  • Deletion of equipment should only be possible, if some state has been cleared in the extension.
  • Updating equipment data might be subject to extended validation in the extension.
  • Can the extension subsystem be inactive and miss changes by the core application so that it would end up with logically inconsistent data?

We will look into these exciting topics in the next post: Updates in Modular Data

Java 9 Module System – Useful or Not?

Actually… rather not.

I am currently working on preparing z2-Environment version 2.6. While it is not used by a lot of teams, where it is used we have large, modular code bases.

This blog is packed with posts on all kinds of aspects of modularization of largish software solutions. Essentially it all boils down to isolation, encapsulation, and sharing to keep complexity under control by separating inner complexities from the “public model” of a solution, in order to foster long-term maintainability and ability to change and extend.

Modularization is a means of preserving structural sanity and comprehensibility over time.

That said, modularization is a concern of developing and evolving software solutions – not libraries.

The Java 9 module system is however exactly that: A means to express relationships between JAR libraries within JAR libraries. I wrote about it before a while back (Java Modularity – Failing once more?). Looking at it again – I found nothing that makes it look more useful.

First of all, very few libraries out there will have useful module descriptors – unless they work together trivially anyway. Inconsistent Maven dependencies are bad enough, but can usually be worked around in your own assembly. A bad or missing module descriptor essentially requires you to change an existing library.

Even if all was good, clean, and consistent with our popular third-party libraries, what problem would those module-infos and the module system actually solve for us?

The only effective means of really hiding implementation details, to the extent of keeping definitions completely out of visibility, and of exposing definitions in a very controlled way, even when other versions of identically-named definitions are present in the system, is still a class-loader-based modularization approach. Java 9 modularization does not preclude that. It does not add anything useful either, as far as I can tell.
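For illustration, this is what such a descriptor looks like (module and package names here are made up). It declares compile- and resolve-time relationships between JARs, but all modules still share one class loading scheme by default:

```java
// A typical Java 9 module descriptor (module-info.java). It expresses
// dependencies and exported packages between JARs, but provides no class
// loader isolation of identically-named definitions.
module com.example.healthcare {
    requires com.example.equipment;      // dependency on the core module
    exports com.example.healthcare.api;  // only this package is visible
    // com.example.healthcare.internal stays hidden from other modules
}
```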

Z2 v2.6 will not have explicit integration with the Java 9 module system for now – for lack of usefulness.

Some Notes On Working with Non-Transactional Resources

This is a follow up to the article Notes on Working with Transactions.

While there are constraints to keep in mind when working with transactional resources – the main point of that article – there is one thing that keeps matters in shape: If things go wrong, simply roll back! This is the all-or-nothing quality of atomic state change in transactional processing: Either the whole state change is applied or none of it.

This post is about handling cases where this assumption cannot be made.

Naturally this occurs when working with an inherently non-transactional resource like the average file system or remote web service.

Another prominent case results from breaking a long running state change, even when implemented over a transactional database, into many small transactions. Even if we are technically working with a transactional resource, due to other constraints such as long execution time, we are forced to implement an overall non-atomic state change.

Unfortunately, there is no single generic approach that fits all cases. There are, however, ways of reducing the complexity into workable pieces.

In order to get there, let’s work out some basic observations:

It is All About Handling Failure

Considering the introduction, this may sound obvious. However, what is it we do if things go wrong and the system leaves us with a partial state change?

For an automation script that is run once in a while this may not be a crucial question. For a business process running millions of times, failure is a normal and repeating aspect of execution that needs to be taken into account.

The crux is to make sure the system is never in a state that prevents either of the following two actions:

Repetition: If a previous attempt at changing the system state failed due to some external problem (unavailability of the file system, a power outage), the attempt at the state change must be repeatable. That is, the system or the user needs to understand that the attempted state change failed and how to start over.

For example: If the state change implies moving a file, a repetition would check if the file was moved and only try again if not.

Compensation: If it is clear that a state change will not be completed, or if that is not desirable, it must be possible for the system or the user to understand the impact of a partial change and possibly how to undo it.

For example: If the state change marked some database entries as deleted by setting a deletion flag, identify deletions by transaction id and unset the delete flag.
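The file-move repetition check mentioned above can be sketched like this. Method and path names are made up; the point is that the step inspects the observable state before acting, so it stays safe to call again after a failure:

```java
// A minimal sketch of a repeatable "move file" step: before retrying, check
// whether a previous attempt already completed. Names are illustrative.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RepeatableMove {

    // Returns true if the file ends up moved, whether by this attempt or a
    // previous one. Safe to call again after a failure.
    static boolean moveIfNeeded(Path source, Path target) throws IOException {
        if (Files.exists(target) && !Files.exists(source)) {
            return true; // a previous attempt already completed
        }
        if (!Files.exists(source)) {
            return false; // nothing to move and no evidence of completion
        }
        Files.move(source, target);
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("lease");
        Path src = Files.createFile(dir.resolve("in.dat"));
        Path dst = dir.resolve("done.dat");
        System.out.println(moveIfNeeded(src, dst)); // true: moved now
        System.out.println(moveIfNeeded(src, dst)); // true: already moved
    }
}
```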

For both actions there is an underlying requirement that is even more essential:

At any time during a state change the system is always within its consistency model

Technically this means that the scope of what has to be considered consistent, and hence what is acceptable precondition to a state change, has just become considerably broader.

Implement a State Chart

In reality however, processes quickly get complicated and assuring repeatability and compensability becomes a non-trivial exercise.

Consider the following still simple example: Suppose we need to

  1. Pick up a file F from a remote file system
  2. Send its content to some remote REST service – at most once. Ask for help if failing.
  3. Move it to some folder depending on whether processing completed successfully or not.

Sounds easy enough. A simple flow chart could render this process like this:

[Figure: flow chart]

However, that tells us very little about how to handle failures – or where to pick up work if an attempt at running the process failed previously. For that it is more suitable to create a state chart. The natural benefit of the state machine model is that it tells us right away where work may be interrupted and continued – hence allowing for repeated execution. A trivial state chart would complete work in one go. But as we want to send file content no more than once, we need to safeguard against duplicate attempts, and as we want to avoid getting tricked into failed operations by broken (remote) file system access, we add some extra states before moving the file:

[Figure: state chart]

Given the state chart we can now concentrate on implementing robust and repeatable state transitions that only need to worry about simple preconditions. For example, in the processing state we would check only for a previous attempt. In the errored or sent state we only need to check for whether the file has already been moved.
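A hedged sketch of such a state chart in code: state names here are assumptions based on the text, and a real implementation would persist the state alongside the file reference.

```java
// States and allowed transitions for the file processing example.
// State names are illustrative assumptions based on the text.
import java.util.Map;
import java.util.Set;

public class FileProcessState {

    enum State { NEW, PROCESSING, SENT, ERRORED, DONE }

    // Allowed transitions; anything else indicates a bug or a manual case.
    static final Map<State, Set<State>> TRANSITIONS = Map.of(
            State.NEW, Set.of(State.PROCESSING),
            State.PROCESSING, Set.of(State.SENT, State.ERRORED),
            State.SENT, Set.of(State.DONE),
            State.ERRORED, Set.of(State.DONE),
            State.DONE, Set.of());

    static State transition(State from, State to) {
        if (!TRANSITIONS.get(from).contains(to)) {
            throw new IllegalStateException(from + " -> " + to + " not allowed");
        }
        return to;
    }

    public static void main(String[] args) {
        State s = State.NEW;
        s = transition(s, State.PROCESSING); // picked up the file
        s = transition(s, State.SENT);       // content sent exactly once
        s = transition(s, State.DONE);       // file moved to success folder
        System.out.println(s); // prints DONE
    }
}
```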

Let us consider how an implementation based on the state chart above would behave under failure:

  • File access failed during read of the file: Stay in processing. Pause and retry.
  • A second sending attempt is noticed: Give up, as we do not know whether the sending actually completed before but we failed to notice. Case to resolve manually.
  • File move failed: Sending attempts must have completed. Stay in sent or errored respectively, pause and retry.

Summary

Obviously, in order to implement that state chart, you need some state persistence model. Describing that and how to provide feedback to users is out of scope of this article. Depending on your needs and scenarios a simple database table to manage a stateful process may be sufficient. Other cases may benefit from implementation tools such as Spring Batch. Others may demand a complete Business Process Management suite – but then you would most likely not read this post.

Going by this artificial example, the point of this post is that in the absence of transactional resources, non-trivial processes may still be implemented reliably and robustly, but they require significantly more care and modeling attention. Very much like real-world processes involving people and physical resources.

Notes on Working with Transactions

Transactional state handling is nothing we think much about anymore. It’s there. To the extent that, I suspect, many have not thought much about it in the first place.

This post is about some, say, transaction potholes that I ran into at times – fooled by my own misleading intuition. And then… it is a little database transactions 101 that you may have forgotten again.

Recap

When I left university I had not studied database theory. Many Math and CS students did not. And yet I spent most of my professional career working on software solutions that have a relational database system (RDBMS) such as Oracle, Postgres, DB2, MS SQL Server or even MySQL, if not at their heart, then at least as their spine.

I have also worked on solutions that relied on self-implemented file system storage, when an RDBMS could have done the job. I consider that a delusional phase and waste of time.

There are of course limits, and there are problems where an RDBMS is not suitable – or available. Where it is, however, it is the one solution because:

  1. There is a rather well-defined and well-standardized storage interface (SQL, Drivers) to store, retrieve, and query your data.
  2. The relational algebra is logically sound, mostly implemented, well-documented, and really flexible, and actually proven to be so.
  3. Any popular RDBMS system provides for an operational environment, can be extended with professional support, has backup & recovery methods, etc, etc.
  4. It is transactional!

Let’s talk about the last bullet point. The key feature of transactional behavior is that of its all or nothing promise: Either all your changes will be applied or none. You will have heard about that one.

This is so important not because it is convenient and saves you some work of change compensation. It is important because normally you have absolutely no idea what changes your code, the code you called, or the code that was called by the code you called actually made! That’s big.

But what is in that “all or nothing” scope? How do you demarcate transactions?

That depends a bit on what you are implementing. The principle of least surprise is your friend though. There are some simple cases:

  1. User interactions are always a good transaction scope
  2. Processing an event or a service call with a response is a good transaction scope

Or, to cut things short, a good scope is every control flow…

  • … that represents a complete state change in your system’s logic and
  • … that does not take long.

Why should it not take long?

There is hardly a system that executes a single control flow. There will be many concurrent control flows – otherwise nobody would want to use the system, right? And they all share the same database system. The real problem of a long-running transaction is not so much that it holds on to a database connection for a long time, which may or may not be a scarce resource, but that it may prevent other control flows from proceeding by holding on to (pessimistic) locks in your database:

In order to prevent you from creating nonsense updates, an RDBMS can and does provide exclusive access to parts of the stored data – e.g. in the form of a row-level lock when updating a record. There are extremely good reasons for that, which you can read up on elsewhere. Point is: It happens.

[Figure: notlong]

And as you effectively have no idea what updates are caused by your code – as we learned above – you can be sure that your system will run into blocked concurrency situations, will not be responsive, and will not scale well in the presence of a long-running transaction scheme. You do not want that.

Split Transactions

Coming back to transaction demarcation – here is a classic. In Java EE, a long time ago, when it was still called J2EE, there was really no declarative out-of-the-box transaction demarcation for Web applications. Following the logic of what makes a good transaction scope above, the normal case is however that a single Web application request, at least when representing a user interaction, is a prime candidate for a transaction scope.

In an attempt to map what was thought to be useful for a distributed application, most likely because Enterprise Java Beans (EJB) had been re-purposed from remote objects to local application components (from hell), the proposed model du jour was to capture every Web application user transaction into a method invocation of a so-called Session Bean (EJB) – because those were by default transactional. See e.g. Core J2EE Patterns – Session Facade.

Now imagine that, due to some oversight, you called two of those for a single interaction: Two transactions. If the first invocation completed, a state change would be committed that a failure of the second invocation would not roll back, and “boom!”, you would have an incomplete or even inconsistent state change. Stupid.

To avoid that, it is best to move transaction demarcation as high as possible within your transaction management, as near to the entry point as possible.
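The idea can be sketched with a small wrapper at the entry point. The `Transaction` interface here is a stand-in for whatever your platform provides, not a real framework API:

```java
// Sketch of keeping transaction demarcation at the entry point: one wrapper
// per request, all services below run inside the same transaction.
import java.util.concurrent.Callable;

public class EntryPointTx {

    interface Transaction {
        void commit();
        void rollback();
    }

    static class LoggingTransaction implements Transaction {
        boolean committed, rolledBack;
        public void commit() { committed = true; }
        public void rollback() { rolledBack = true; }
    }

    // Demarcation once, at the top: all work either commits or rolls back.
    static <T> T inTransaction(Transaction tx, Callable<T> work) throws Exception {
        try {
            T result = work.call();
            tx.commit();
            return result;
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        LoggingTransaction tx = new LoggingTransaction();
        String result = inTransaction(tx, () -> {
            // ... call any number of services here: one transaction scope
            return "ok";
        });
        System.out.println(result + " committed=" + tx.committed); // ok committed=true
    }
}
```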

Nested Scopes

Sometimes however, you need more than one transaction within one control flow, even though there is no timing constraint. Sometimes you need nested transactions. That is, even before the current transaction terminates, the control flow starts another transaction.

This is needed if some deeper layer, traversed by the current control flow, needs to commit a state change regardless of the outcome of what is happening after. For example, a log needs to be written that an attempt of an interaction was performed.

Seeing this in code, a nested transaction typically radiates an aura of splendid isolation. But it is deceiving and dangerous. The same problem as for long-running transactions applies here: Your nested transaction may need to wait for a lock. In this case: It may need to wait for a lock that was acquired by the very same control flow – a deadlock!

[Figure: deadlock]

The need is rare – but the concept is so tempting that it is definitely used much more frequently than justified.

So what about long running state changes?

Unfortunately some applications need to implement state changes that take long. Some mass update of database objects, some long running stateful interaction. This is a rich subject in its own right and it cannot be excluded that there will be some posts on that in this blog.

Applications Create Programming Models

Typically, when we think about applications, we structure them by business functions that will be implemented based on given, quite generic programming models such as, broadly speaking, Web applications and background jobs. These serve as entry points to our internal architecture comprising, say, services and repositories.

Any code base that grows requires modularization along its abstraction boundaries if it wants to stay manageable. Contracts between modules are expressed as APIs. We tend to think about APIs as “access points” to methods that the “API consumer” invokes:

[Figure: simple]

But that’s only the trivial direction. It is typical that some work be delegated to the API consumer to fill in some aspect. For example streaming some data, handling some event, consuming some computation result. That is: It is not only the subsystem that implements the API, but the consumer does as well:

[Figure: bidirectional]

In the still simple cases, the APIs implemented by the consumer are passed-on callbacks (or event handlers, etc.). This is by far the most prominent method employed by your typical open source library.

If callbacks need to be invoked asynchronously, or completely independently of some previous invocation, e.g. as a job activity on some other cluster node, this approach becomes increasingly cumbersome: In order to make sure callbacks can be found, they need to be registered with the implementation before any invocation need occurs.

In short: You need to start thinking seriously about how to find and manage callback implementations in your runtime environment. That is, switching to the term extension interfaces, you have a serious case of Extend me maybe….

[Figure: andfind]
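Finding callbacks at invocation time can be sketched with a simple registry keyed by a type identifier. In a real system this might be backed by `java.util.ServiceLoader` or a component framework; the `Registry`-style class and its names here are illustrative assumptions:

```java
// Sketch of a registry for finding extension implementations by a domain
// type identifier, instead of passing callbacks along with each call.
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ExtensionRegistry {

    interface EventHandler {
        String handle(String payload);
    }

    static final Map<String, EventHandler> HANDLERS = new HashMap<>();

    static void register(String typeId, EventHandler handler) {
        HANDLERS.put(typeId, handler);
    }

    // Callers look up handlers when an invocation is due, possibly long
    // after registration and on some other node.
    static Optional<EventHandler> find(String typeId) {
        return Optional.ofNullable(HANDLERS.get(typeId));
    }

    public static void main(String[] args) {
        register("inspection.schedule", payload -> "handled:" + payload);
        String out = find("inspection.schedule")
                .map(h -> h.handle("update"))
                .orElse("no handler");
        System.out.println(out); // prints handled:update
    }
}
```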

As the API may be used by a yet undefined number of consumers, other aspects may require rules and documentation, such as:

  • The transactional environment propagated to extension interface invocations.
  • Are there authorization constraints that have to be considered or that can be declared by implementors?
  • Are there concurrency/threading considerations to be taken into account?

And that is where it starts becoming worthy to be called a programming model.

While this looks like just another way of saying callback or extension interface, it is important to consider the weight of responsibility implied by this – better early than later.

For a growing software system, managing extensions in a scalable and robust way is a life saver.

Not considering this in time may well become a deferred death blow.

Z2-environment Version 2.5 Is Out

It took a while, but it finally got done. Version 2.5 of the z2-environment is out. Documentation and samples have been updated and tested.

Here is what version 2.5 was about:

More Flexibility in Component Configuration

A major feature of z2 is to run a system cluster strictly defined by central, version-controlled configuration. As there is no rule without an exception, some configuration is just better defined by the less static and local system runtime environment, such as environment variables or scripted system properties.

To support that better and without programming, component properties may now be expressed by an extensible expression evaluation scheme with built-in JEXL support. Expressions are evaluated upon first load of component descriptors after start or after invalidation at runtime.

Some use-cases are:

  • Seamless branch or URL switching based on environment settings.
  • Dynamic evaluation of database config, in particular remote authentication based on custom evaluators or environment settings.
  • Declarative sharing of configuration across components.

Some aspects, such as dynamic evaluation of searchable properties, were not made dynamic due to the risk of unpredictable behavior. Future work may show that the concept can be extended further though.


Check it out in the documentation.

More Complete in-Container Test Support

Z2 offers a sleek, built-in way of running application-side in-container tests: z2Unit. Previously, the JUnit API had its limits in serializability over the wire – which is essential for z2Unit. JUnit improved greatly in that department, and after the corresponding adaptation of z2Unit some previously problematic unit test runner combinations (such as z2Unit and parameterized tests) now work smoothly.


Check it out in the documentation.

Better Windows Support

Some very old problems with blanks in path or parameter names finally got fixed. There is now a straightforward command line specification syntax for worker processes that is (mostly) backward compatible.

Also, and possibly more importantly, system property propagation from Home to Worker processes is now fully configurable.

Check it out in the documentation.

Better Git Support

Z2 can read directly from Git repositories. Previously, however, only a branch could be specified as content selector. Now any Git ref will do.


Check it out in the documentation.

There is some more. Please check out the version page, if you care.

What’s next:

The plans for 2.6 are still somewhat open. As the work on version 3 will not make it into any production version – namespace changes are too harmful at this time – some useful structural simplifications implemented in 3.0 are being considered, such as:

  • Worker processes as participants in Home process target states (rather than a Home Layout)
  • Introducing a “root” repository that hosts any number of remote or “inline” repositories and so streamlines local core config
  • Supporting a co-located Tomcat Web Container as an alternative to an integrated Jetty Web Container
  • Component templates that provide declarative default configurations and so remove duplication (e.g. for any Java component based on Spring+Spring AOP).

Thanks and good luck in whatever you do that needs to be done right!


A Web App Wrapper or “Why are Web Applications Less Likeable Than Native Applications – Still?”

In between, something completely different.

I use web apps in my daily work. Heavily, if not mostly – except maybe my IDE and the occasional MS Office session. But for reasons I find not obvious, they are still not on par with native apps. This is not due to a lack of responsiveness or desktop integration. There is very little in terms of user experience that the web apps I use lack. And still – if available, I would rather choose the native alternative. So…

Why is it that web apps are still not as “likeable” as native apps?

A few weeks ago Mozilla Thunderbird, the friendly companion of many years, finally became unusably slow for me. As MS Outlook is no option for me, I started looking for an alternative that would be fast at startup and while flipping and searching through mail, would run on Linux, AND would have well-working calendar integration. There are numerous promising candidates for the first two requirements. But, strangely enough, it seems that calendar support is a tough problem.

But then, my e-mail as well as my calendar is perfectly accessible via a Web interface. It is just that I do not use it that much – although it is fast, responsive, usable the same on all machines, and obviously OS-independent (and was made by Google). Duh!

So why not use that instead of a dedicated native client?

Turns out what really turns me off is that the Web client exposes you to a through and through fleeting user experience:

  • As your desktop gets cluttered with open browser tabs, usually the sensible way out is to close them all. Your important stuff gets closed as well.
  • You use multiple accounts, but your browser manages only one session at a time.
  • You want to have the right stuff opened at startup – not nothing, not whatever happened to be open last time – and you want to have multiple such configurations.

None of this seems unreasonable. And yet I have not found anything that does just that for me.

Ta da!

As a conclusion I looked into “how to wrap my favorite web apps into a native application”. Not for the first time – but this time with the necessary frustration to see it through. Such a “wrapper” should fix the problems above and otherwise do nothing beyond the absolutely required. Here is the result:

https://github.com/ZFabrik/z-front-side

How does it work?

It is based on Electron – that is: It is essentially a scripted Chrome browser. It is very basic and does very little beyond showing a few site buttons, preloading some of them (always the same ones), and allowing itself to be started several times for different “partitions” – which implements the multi-session capability.

I have been using it with two different configurations (shared on all machines) and two partitions (private/work) for a few weeks now, and the five to ten web apps I use all the time, every day, finally feel completely integrated with the overall desktop experience – just like any other native application.

Feel free to use, enhance, copy whatever.

From Here to the Asteroid Belt (I)

When I came up with the title line, I had a completely different conclusion in mind. It’s a nice line though. In contrast to the conclusion, it stayed.

Oh and by the way: Spring is finally here:

(photo: spring at the office, so much work lately, so little time to enjoy it)

This is one of those “what’s the right tool for the problem” posts. Most, me being no different, try to use tools they know best for essentially any problem at hand. And this is good instinct. It’s what people have always done and obviously they did something right. Knowing a tool well is of great value and typically supersedes in effectiveness the use of a tool that might be more powerful – if used correctly – but that you are not an expert at.

At scale however, when building something more complex or widely distributed, tool choice becomes decisive and intrinsic qualities such as simplicity, reliability, popularity, platform independence, performance, etc. may outweigh the benefits of tool expertise.

What I want to look at specifically is the applicability of a programming and execution platform across deployment scenarios ranging from in-house or SaaS deployments to massively distributed stand-alone applications such as mobile apps or desktop applications.

The latter two form the two endpoints of the custom vs. non-custom development scale and the non-distributed to arbitrarily distributed scale.

The rules are pretty clear:

In-house/SaaS: Complete control. The system is the application is the solution. There is no customization or distribution problem because everything is (essentially) 100% custom and 0% distributed.

Mobile/Desktop: No control over the single instance that is executed somewhere in the wild. Hard to monitor what is going on, minimal to no customization, potentially infinitely many instances in use concurrently.

But what about the places in between? The customized business solutions that drive our economic backbone – from production sites to warehouse solutions, from planning to financials, from team productivity to workflow orchestration?


Let’s say you have an application that is part standard solution (to be used as is) but typically requires non-trivial customization, adaptation, extension to be effectively useful.

What are the options?

Option C: Maintain a code line per instance or customer

That is (still?) a popular method – probably because it is simple to start with and it makes sure the original developer is in complete control.

That is also its downside: It does not scale well into any sort of eco-system or licensing model including third parties. For a buyer it means 100% dependency on a supplier that most likely got paid dearly for a customer-specific modification and will ask to be paid again for any further adaptation and extension.

Option P: Build a plugin model on top of a binary platform API

That is the model chosen for browsers and similar applications. It works very well as long as the platform use-case is sufficiently well defined, and the market interesting enough.

It obviously requires significant investment into feature-rich and stable APIs, as well as into an effective plug-in model, a development approach for plug-ins, and a distribution channel or bundling/installation model.

In essence you build a little operating system for some specific application case – and that is neither an easy nor a cheap task to do right.

Option S: Ship (significant) source code and support extension and customization on site

This approach has several strong advantages: You can supply hot fixes and highly specific customizations with minimal interference. Customization is technically not limited to particular functions or APIs. There is no extra cost per installation on the provider side compared to Option C.

It assumes however that the ability to change, version, and deploy is built in and that the necessary tools are readily available. As code life cycle is now managed on site, some attention needs to be paid to handling it cleanly.

From a consumer’s point of view it reduces dependency and (leaving legal considerations aside) technically enables inclusion of third-party modifications and extensions.

Scenario Determines Tool

In part II we will look at how the three different scenarios above translate into tool approaches. Stay tuned.

 

 

A simple modularization algorithm

Lately I worked on breaking up a module that had grown too big. It had started to feel hard to maintain, and getting oriented in the module’s code felt increasingly cumbersome. As we run tests by module, automated tests triggered by check-ins started taking too long, and as several developers were working on code of that one module, failing module tests became harder to attribute.

In other words: It was time to think about some module refactoring, some house keeping.

There was a reason, however, that the module had not been broken up already: It had some lurking dependency problems. Breaking it up would mean changing other modules’ dependencies just because – which felt arbitrary – and still there was re-use code to be made accessible to any split-off.

Judging from past experience, this is the typical situation where everybody feels that something needs to be done, but it always turns out to be a little too risky and unpredictable, so that no one really dares. And after all: You can always push things a bit further still.

As that eventually leads to a messed up, stalling code-base and we are smart enough (or simply small enough?) to acknowledge that, we made the decision to fix it.

Now – I have done this kind of exercise on and off. It has some unpleasantly tiring parts and overall feels a little repetitive. Shouldn’t there be some kind of algorithm to follow?

That is what this post is about:

A simple modularization algorithm

Of course, as you will notice shortly: We cannot magically remove the inherent complexity of the problem. But nevertheless, we can put it into a frame that takes out some of the distracting elements.


Step 1: Group and Classify

It may sound ridiculous, but the very first thing is to understand what is actually provided by the current module’s code. This may not be as obvious as it sounds. If it were clear and easy to grasp, you most probably would not have ended up in the current mess anyway.

So the job to do is to classify contents into topics and use cases, e.g.:

  • API definitions. Possibly API definitions that can even be split into several APIs
  • Implementation of one or more APIs for independent uses
  • Utility code that exists to support implementations of some API

At this stage, we do not refactor or add abstraction. We only assess content so that we end up with a graph of code fragments (a class, a group of classes) and their dependencies. Note: The goal of the exercise is not to get a UML class diagram. Instead we aim for groups that can be named by what they are doing: “API for X”, “Implementation of job Y”, “Helper classes for Z”.
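The fragment graph from step 1 needs very little machinery to capture – a sketch in Python, with made-up fragment names:

```python
# A fragment graph from step 1: keys are named code fragment groups,
# values are the fragments they depend on. All names are made up.
fragments = {
    "api-x":      set(),                    # "API for X"
    "helpers-z":  {"api-x"},                # "Helper classes for Z"
    "impl-job-y": {"api-x", "helpers-z"},   # "Implementation of job Y"
}

def dependents_of(name, graph):
    """Which fragments would be affected if 'name' changes?"""
    return {n for n, deps in graph.items() if name in deps}
```

Even this trivial representation already answers the questions that matter in the following steps: who depends on what, and what breaks if a fragment moves.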

Most likely the result will look ugly. You might find an intermingled mess of some fifty different background service implementations that are all tied together by some shared wiring registry class that wants to know them all. You might find some class hierarchy that is deeply cluttered with business-logic-specific implementation, where extending it further is the only practical way of enhancing the application. Remember: If it were not for any of these, you would not be here.

Our eventual goal is to change and enhance the resulting structure in a way that forms useful compartments and turns the mess into a scalable module structure.


That’s what step 2 and step 3 are about.

Step 2: Abstract

The second step is the difficult piece of work. After step one, looking at your resulting graph, it should be easy to categorize sub-graphs into one of the following categories:

  1. many of the same kind (e.g. many independent job implementations),
  2. undesirably complex and/or cyclic
  3. a mix of the two

If only the first holds, you are essentially done with step 2. If there is actual unmanageable complexity left – which is why you are here – you now need to start refactoring to get rid of it.

This is the core of the exercise and where you need to apply your design skills. This comes down to applying software design patterns ([1]), using extensibility mechanisms, and API design. The details are well beyond the scope of this post.

After you have completed one such abstraction exercise, repeat step 2 until there are no more cases of the second and third kind.

Eventually you should be left with a set of reasonably sized, well-defined code fragment sets that form a directed acyclic graph of linking dependencies.


(For example, a cycle can be removed and a one-to-many dependency broken by replacing a class A with a delegation interface A’ and some lookup or registration mechanism.)
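Whether the fragment graph has indeed become acyclic – i.e. whether step 2 is finished – is easy to check mechanically. A small sketch in Python using depth-first search:

```python
def find_cycle(graph):
    """Return a node that lies on a dependency cycle, or None if acyclic.

    graph maps each fragment name to the set of fragments it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / finished
    color = {}

    def visit(node):
        color[node] = GRAY
        for dep in graph.get(node, ()):
            state = color.get(dep, WHITE)
            if state == GRAY:
                return dep          # back edge: dep is on a cycle
            if state == WHITE:
                found = visit(dep)
                if found is not None:
                    return found
        color[node] = BLACK
        return None

    for node in graph:
        if color.get(node, WHITE) == WHITE:
            found = visit(node)
            if found is not None:
                return found
    return None
```

Any node it reports is a candidate for the delegation-interface treatment described above.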

Step 3: Arrange & Extract

After completing step 2 we are still in one module. Now is the time to split fragments into several modules so that we can eventually reap the benefits: less to comprehend at a time, clearer locations for new implementations, a structure that has come back to manageability – provided, of course, that you did a good job in step 2 (bet you saw that coming). This post is not about general strategies and benefits of modularization. But there is plenty on that in this blog (see below) and elsewhere.

Given our graph of fragments from step 2, make sure it is topologically ordered in direction of linking dependency (in the example from upper left to lower right).
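The topological ordering itself can be sketched with Kahn’s algorithm, assuming the fragment graph maps each fragment to the set of fragments it depends on:

```python
from collections import deque

def extraction_order(graph):
    """Topologically order fragments so that each fragment comes after
    everything it depends on; raises if step 2 left a cycle behind."""
    remaining = {n: set(deps) for n, deps in graph.items()}
    # Fragments without dependencies can be extracted first.
    ready = deque(sorted(n for n, deps in remaining.items() if not deps))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        # Remove n as a dependency; newly dependency-free fragments become ready.
        for m, deps in remaining.items():
            if n in deps:
                deps.discard(n)
                if not deps:
                    ready.append(m)
    if len(order) != len(remaining):
        raise ValueError("cycle detected - step 2 is not finished")
    return order
```

The resulting order is exactly the order in which fragments can be extracted into modules without ever referencing a not-yet-extracted fragment.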

Now start extracting graph nodes into modules. Typically this is easy, as most of the naming and abstraction effort was done in the previous steps. Also, when starting you probably had other constraints in mind, like extensibility patterns or different life cycle constraints – e.g. some feature being part of one deployment but not another. These all play into the effort of extraction.

The nice thing is: At this time, having the graph chart at hand, the grouping can be easily altered again.

Repeat this effort until done.


Enjoy!

References

  1. Software Design Patterns (Wikipedia)
  2. Directed Acyclic Graphs (Wikipedia)
  3. Dependency Management for Modular Applications
  4. Modularization is more than cutting it into pieces
  5. Extend me maybe…