The Machine Fitters of Tomorrow Are Software Developers

Manufacturing companies need increasingly complex qualifications to keep their production running and to operate it optimally.

This is not just about saving costs, but also about having the competence to implement complex and innovative improvements – beyond what suppliers have to offer.

In the past, you bought machines for production and had the supplier maintain them. Then you understood that it is not only cheaper but also smarter to do the maintenance yourself. From the ability to carry out maintenance yourself, it is no insurmountable step to acquiring the competence to extend and augment machines and to optimize and integrate production processes in your own interest.

It is exactly the same today with business software. Having first commissioned industry solutions or bought them off the shelf and had them customized, the next step is to acquire the competence to extend, augment, or even modify the software yourself.

While this has always been explicitly possible with ERP solutions (especially SAP) – an absolutely significant factor in SAP's long-lasting success – it is still the exception with other software.

Especially for use in production and in the context of increasing automation, it is extremely relevant not only to combine software solutions with one another, but also to be able to extend existing solutions by modifying them and, at best, to master them completely.

Only with this ability will you be able to fully master operational processes in the future. And only then is it possible to adapt them as desired and to shape them for business success.

The Ability to Create Abstraction Necessarily Wins Over any Ability to Keep Track of Many Pieces

A Simple Thought.

We know that our ability to create abstractions is key to manage complexity – in life, in science, in mastering technology. Without creating abstractions we would not be able to make sense of our daily routine, what we work on, and much less of the constant sensory input we receive.

In fact, I doubt that anybody can meaningfully keep track of more than a handful of interconnected things while “thinking”. That is why PowerPoint presentations explaining a concept should never have more than three boxes with arrows between them – nobody will buy your idea otherwise. Likewise, any concept described by three connected boxes looks convincing to most people – most likely the true reason for the demise of countless companies.

Abstractions are essential to software development. Not only does the whole idea of software require some serious level of abstraction, but thankfully programming languages provide the means to stack abstractions on top of each other – leading to libraries of libraries of concepts and abstractions borrowed from those around and before us, allowing us to create software that encompasses many millions, if not billions, of lines of code – while writing only a fraction of that ourselves.

All that while being mostly ignorant of the intricacies of the lower layers of the pile of abstractions (actually the shoulders of the giants) we are standing on. So much so that something like a file system seems to us as natural a concept as, say, a horse.

And here is the catch: Because any layer of abstraction hides a number of lower-level concepts, and since that number is naturally at least two (otherwise: why bother?), the number of lower-level concepts made tangible by introducing higher-level concepts essentially grows exponentially.

Not very scientifically speaking, for code this means: every additional layer of abstraction at least doubles the number of underlying concepts a developer can wield without thinking about them.
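To make the shape of that growth concrete, here is a toy model (my own illustration, with made-up numbers): if every abstraction bundles at least two concepts of the layer below, one concept at depth d stands for at least 2^d base-level concepts.

```python
# Toy model: each abstraction bundles b >= 2 concepts of the layer below.
# A single concept at abstraction depth d then stands for b**d base-level
# concepts.

def concepts_covered(branching: int, depth: int) -> int:
    """Number of base-level concepts represented by one concept
    at the given abstraction depth."""
    return branching ** depth

# With the minimal branching factor of 2:
for depth in range(6):
    print(depth, concepts_covered(2, depth))

# A stack of just 20 such layers already covers over a million base-level
# concepts -- while we still juggle only a handful of boxes at the top.
```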

However, the same pattern applies to other realms, be it running an organization, taking care of business, or being a school teacher. Somebody good at computing does not necessarily make a good mathematician, nor is being good at computing required to be one. The ability to understand, create, and apply abstractions hands down wins over any “increased clock speed”.

In other words: As long as we are good at building abstractions, it’s OK that we cannot handle more than three boxes with arrows per slide…

If You Want to Make It, You’ve Got to Own It

Imagine you are a high volume manufacturer of vacuum cleaners. Everything runs smoothly, but you feel there may be some business potential for configurable high end vacuum cleaners that are built to spec.

You imagine a GOLD series of vacuum cleaners for which customers can configure various color schemes and decorations, various sensor add-ons, GPS tracking, and other features that a certain high-profile customer group finds exciting.

Of course, ordering a GOLD configuration from the web site comes at a premium.

Problem is: Your current production process does not accommodate ad hoc built-to-spec production. If you cannot reliably produce it, you cannot sell it!

So you come up with a pragmatic process that makes sure you can track from order to shipment and that everything stays consistent with the data recorded in your ERP – for example a sequence of tracked states from order intake through production and quality check to shipment (one that I just made up – you will get the point, I suppose).
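A process like that could be sketched as a small state machine – all states and names below are invented for illustration; a real process would be aligned with the ERP’s order life cycle:

```python
# Minimal sketch of an order-to-shipment workflow for built-to-spec
# production. States and transitions are made up for illustration.

ALLOWED = {
    "ordered":       {"scheduled"},
    "scheduled":     {"in_production"},
    "in_production": {"quality_check"},
    "quality_check": {"in_production", "packed"},  # rework loop
    "packed":        {"shipped"},
    "shipped":       set(),
}

class GoldOrder:
    def __init__(self, order_id: str, configuration: dict):
        self.order_id = order_id
        self.configuration = configuration  # e.g. color scheme, GPS add-on
        self.state = "ordered"
        self.history = [self.state]

    def advance(self, new_state: str) -> None:
        """Move the order forward, rejecting inconsistent transitions."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)
```

Each GOLD order carries its configuration and a complete state history, which is what keeps the process traceable from order to shipment and reconcilable with the ERP records.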

Obviously this requires some software support. It is not huge, but it may need to evolve and go through changes as you evolve your business. Who knows, maybe one day you will want to inform your customers about the production progress of their GOLD product.

Unfortunately you do not have much software development expertise in-house. So where do you get that software from? Do you ask your ERP supplier? Do you ask your production automation / MES supplier? Maybe not. Neither is exactly into custom development, and both will only increase your lock-in with them.

You could ask a software development agency – maybe even something really cheap with developers elsewhere but a local project manager.

Problem is: You might get a great solution, but it will be a one-off. Who is going to maintain it if the team that developed it breaks up and joins other projects right after? How do you make sure you can maintain it later?

The catch is:

You need to own it, if you want to make it!

Developing appropriate software development expertise is difficult. Developing and maintaining a custom business application that manages long-running workflows and integrates with legacy systems in a manageable way is different from developing a Web site. So you should look for a partner that provides

  • The expertise to build a solution;
  • A blueprint on how to extend and expand the solution into YOUR business platform;
  • A technology platform that you can build on; and
  • Support when you feel it is time to take over.

This is the essence of digital transformation: It is not about creating digital versions of processes you already have, it is about making use of digital capabilities to implement new business models or process optimizations that were simply not possible before.

Please check out the great article by Volker Stiehl linked below.

References

  1. https://www.volkerstiehl.de/digitalisierung-vs-digitale-transformation/ (German only)
  2. How to Contract a Software Developer

How to Contract a Software Developer

We are a small company developing custom software that typically implements some business-critical function: actual back-ends with lots of asynchronous transactional business workflows, mass-data processing, integration with other back-ends, machine data, and shop-floor user interfaces.

We do not design or implement this software from scratch. We have tools, a solid software foundation, and the experience to analyze business processes, map them into software, and eventually implement them. That’s what we bring to the party.

In general, we do not do fixed-price projects. We do not do that because – in general – it simply does not make sense – not for us, not for our clients.

This post is about why asking for a fixed-price project is more often than not the wrong thing to ask for – for us as the developer and for you as the client. It is about why you should not want to contract a developer for a fixed-price project and what you should do instead – to make life better for you as the client and for us as the developer.

Groundwork

Normally you will read that the very first step of any software project is to develop an understanding of the actual business problem, its essential data relationships and what users will need to solve it using a software system.

And indeed, while there will be an initial problem description, it does not necessarily describe the problem to solve in terms that map easily to a technical solution approach. So you need to create a more technical and fundamental formulation of the business problem so as to create a foundation on which the project can be planned and implemented.

However, that is not the whole story. When you are at that point, you are already in the project. Another indispensable step that comes first is to build the ground for mutual trust between client and developer.

Why would a client entrust a software project that potentially evolves into a multi-million-euro endeavor to a developer based on an exchange of design ideas and some vague planning?

Why would a software developer risk expensive litigation over a misunderstanding of what the solution to a million-euro software project is supposed to deliver, based on a design that turned out to be wishful thinking?

I believe there are three essential (moving) meta-milestones in any project:

Next: All the features and fixes you know are needed and which the developer knows (or believes to know) how to do right. Everything in Next can be done now.

Near: All those features that you believe could be done down the road, possibly relying on the Next – features you think would be really useful to have, but you are not sure you are willing to pay for all of them just yet, nor is your developer certain how long they will take and how well they will work.

Far: The vision of what could be done if you had the Next and some of the Near, and maybe some cool idea and the right business framework. You would not know how to plan for it now, but sharing it provides orientation as to where, eventually, we want to go.

These moving target meta-milestones define the grounds on which to repeatedly plan and commit. By agreeing on them, we build a common understanding on how we believe the project is to move forward – while committing to the next “realistic” fraction of it:

The Near defines the Next by showing you the boundary of what you feel sure about. The Far, on the other hand, guides the creation of the Near and provides the vision to communicate when justifying the effort as a whole.

While working in the Next, the Near and the Far become clearer – ideally Near flows into Next and there is constantly food for work and success in the project.

Here is the deal however:

  • While agreeing on the Near and communicating the Far, you only contract on the Next.
  • While working on the Next, you fill it up again from the Near.
  • You make sure that splitting up, while not desirable, leaves no more scorched earth than the current Next.
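As a sketch (my own illustration, not a prescribed tool), the bookkeeping behind these rules could look like this: you contract only on the Next, and refill it from the Near as work completes.

```python
# Sketch of the Next/Near/Far bookkeeping described above. Names and the
# structure are invented for illustration only.

class Project:
    def __init__(self, next_items, near_items, far_vision):
        self.next = list(next_items)  # committed, contracted work
        self.near = list(near_items)  # believed useful, not yet committed
        self.far = far_vision         # shared orientation, not planned

    def complete(self, item):
        """Finish a committed item from the Next."""
        self.next.remove(item)

    def promote(self, item):
        """Pull a Near item into the committed Next."""
        self.near.remove(item)
        self.next.append(item)
```

The point of the structure is that at any moment the contracted scope (`next`) is small and concrete, while `near` and `far` exist only to keep the planning conversation going.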

Practically Speaking

As potential project partners, developer and client should agree on a first set of a Near and Far. I tend to call them Phase 1 and Phase 2, as that is probably more expected. The first thing to do, however, is to come up with an initial high-level design or even something of a specification – and that exactly defines the Next.

And that is what should be the first commitment.

The result of the specification will be an understanding of a refined Next, Near, and possibly an updated Far as well. The goal posts will have moved, and you can move forward into the next iteration: Actually implementing the Next.

Speaking in agile development terms: An iteration here is generally not a single sprint but more likely multiple sprints, depending on the size of the project and the planning horizon. You would nevertheless align budgeting and mid-term planning with sprint boundaries so as not to interrupt work unnecessarily.

At any time, you make sure that work has been specified and documentation has been updated to the extent that work can be passed on if required.

As a developer, you know that everything is set and you do not have (unexpected) technical or documentation debts that will haunt you later on.

As a client you know that there is no unnecessary dependency that may mean that you lose control over your asset.

In particular this means:

  • Contracts make sure that anything developed belongs to the client;
  • If necessary, the client can continue development with a different team, bring in new developers, or move development in-house, if that is desired.

The latter means that project organization tools and content as well as development and testing infrastructure are either already operated by the client, come with the project, or can easily be re-created by the client.

It is naturally best if development and testing are inherently contained within the project sources and mostly independent of other external or proprietary tools.

In order to maintain trust in the project and in you as a developer, you should make sure to manage a well-stuffed backlog for the Near so that continuity of the project is preserved.

From Here to the Asteroid Belt (I)

When I came up with the title line, I had a completely different conclusion in mind. It’s a nice line though. In contrast to the conclusion, it stayed.

Oh and by the way: Spring is finally here:

[Image: spring at the office – so much work lately, so little time to enjoy it]

This is one of those “what’s the right tool for the problem” posts. Most people, me being no different, try to use the tools they know best for essentially any problem at hand. And that is good instinct. It’s what people have always done, and obviously they did something right. Knowing a tool well is of great value and typically beats using a tool that might be more powerful – if used correctly – but that you are not an expert in.

At scale however, when building something more complex or widely distributed, tool choice becomes decisive and intrinsic qualities such as simplicity, reliability, popularity, platform independence, performance, etc. may outweigh the benefits of tool expertise.

What I want to look at specifically is the applicability of a programming and execution platform for deployment scenarios ranging from an in-house, or SaaS deployments to massively distributed stand-alone applications such as mobile apps or desktop applications.

The latter two form the two endpoints of the custom vs. non-custom development scale and the non-distributed to arbitrarily distributed scale.

The rules are pretty clear:

In-house/SaaS: Complete control. The system is the application is the solution. There is no customization or distribution problem because everything is (essentially) 100% custom and 0% distributed.

Mobile/Desktop: No control over the single instance that is executed somewhere in the wild. Hard to monitor what is going on, minimal to no customization, potentially infinitely many instances in use concurrently.

But what about the places in between? The customized business solutions that drive our economic backbone, from production sites to warehouse solutions, from planning to financials, from team productivity to workflow orchestration?


Let’s say you have an application that is part standard solution (to be used as is) but typically requires non-trivial customization, adaptation, extension to be effectively useful.

What are the options?

Option C: Maintain a code line per instance or customer

That is (still?) a popular method – probably because it is simple to start with and it makes sure the original developer is in complete control.

That is also its downside: It does not scale well into any sort of ecosystem or licensing model that includes third parties. For a buyer it means 100% dependency on a supplier that most likely got paid dearly for a customer-specific modification and will ask to be paid so again for any further adaptation and extension.

Option P: Build a plugin model on top of a binary platform API

That is the model chosen for browsers and similar applications. It works very well as long as the platform use-case is sufficiently well defined, and the market interesting enough.

It obviously requires significant investment in feature-rich and stable APIs, as well as in an effective plug-in model, a development approach for plug-ins, and a distribution channel or bundling/installation model.

In essence you build a little operating system for some specific application case – and that is simply not an easy and cheap task to do right.
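In a sketch (interface and names invented for illustration), the core of such a plugin model is a stable contract that third parties implement against and a registry that the platform controls:

```python
# Minimal plugin model as in Option P: the platform defines a stable
# contract, plugins implement it, and the platform dispatches to them.
# All names here are hypothetical.

from abc import ABC, abstractmethod

class Plugin(ABC):
    """The platform's API contract that third-party plugins implement."""

    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def execute(self, payload: dict) -> dict: ...

class Platform:
    def __init__(self):
        self._plugins = {}

    def register(self, plugin: Plugin) -> None:
        """Make a plugin available under its declared name."""
        self._plugins[plugin.name()] = plugin

    def run(self, name: str, payload: dict) -> dict:
        """Dispatch a request to the named plugin."""
        return self._plugins[name].execute(payload)
```

The hard part is not this dispatch mechanism but keeping the `Plugin` contract stable across releases – which is exactly the “little operating system” cost mentioned above.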

Option S: Ship (significant) source code and support extension and customization on site

This approach has several strong advantages: You can supply hot fixes and highly specific customizations with minimal interference. Customization is technically not limited to particular functions or APIs. There is no extra cost per installation on the provider side compared to Option C.

It assumes however that the ability to change, version, and deploy is built in and that the necessary tools are readily available. As the code life cycle is now managed on site, some attention needs to be paid to handling it cleanly.

From a consumer’s point of view it reduces dependency and (leaving legal considerations aside) technically enables inclusion of third-party modifications and extensions.
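A miniature illustration of Option S (paths and names invented): if the product ships source, a customer-site customization is just another module that is versioned and loaded like the rest of the code. In Python this might look like:

```python
# Option S in miniature: a customer-site customization is a source file
# maintained on site and loaded at runtime. File names and the pricing
# rule are invented for illustration.

import importlib.util
import os
import tempfile

CUSTOMIZATION = '''
def price_order(base_price):
    """Customer-specific pricing rule, maintained on site."""
    return base_price * 1.15  # hypothetical site-specific surcharge
'''

def load_customization(path):
    """Load a site-local source module into the running product."""
    spec = importlib.util.spec_from_file_location("site_custom", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Simulate a customization file that was shipped to / edited on site:
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "site_custom.py")
    with open(path, "w") as f:
        f.write(CUSTOMIZATION)
    custom = load_customization(path)
    print(custom.price_order(100.0))
```

Because the customization lives as source next to the product, it can be diffed, versioned, and, legal questions aside, even replaced by a third party – which is exactly the reduced dependency noted above.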

Scenario Determines Tool

In part II we will look at how the three different scenarios above translate into tool approaches. Stay tuned.


Not with the target audience anymore?

Lately, when I listen to talks on new and hot stuff for developers, be it in person or on the internet, I have the feeling that it cannot possibly be me who is being talked to.

It’s not just that I have seen the latest web development framework over and over again – it’s that it’s only ever Web frameworks, sort of. It’s the unpleasant feeling that there is a hell of a lot of noise about stuff that matters very little.

This post got triggered when my long-time colleague Georgi told me about Atomist, the new project by Rod Johnson of Spring Framework fame. Atomist is code generation on steroids and will no doubt get a lot of attention in the future. It allows you to create and re-use code transformations to quickly create or alter projects rather than, say, copy-pasting project skeletons.

There may well be a strong market for tools and practices that focus on rapidly starting a project. And that is definitely good for the show. It is really far away from my daily software design and development experience though, where getting started is the least problem.

Problems we work on are the actual business process implementation, code and module structure, extensibility in projects and for specific technology adaptations, data consistency and data scalability, point-in-time recovery, and stuff like “what does it mean to roll this out into production systems that are constantly under load?”.

Not saying that there is no value in frameworks that can do some initial stuff, or even consistently alter a number of projects later (can it? Any better than a normal refactoring?) – but over the lifetime of a project or even product this seems to add comparatively little value.

So is this because it is just so much simpler to build and market a new Web framework than anything else? Is there really so much more demand? Or is this simply a case of the streetlight effect?

I guess most of the harder problems that show up in the back-ends of growing and heavily used applications cannot be addressed by technology per se – but can only be addressed by solution patterns (see e.g. Martin Fowler’s Patterns of Enterprise Application Architecture) to be adhered to. The back-end is where long-lasting value is created, though. Far away from the front end. So it should be worthwhile to do something about it.

There is one example of an extremely successful technology that has a solid foundation, some very successful implementations, an impressively long history, and has become the true spine of business computing: The relational database.

Any technology that would standardize and solve problems like application life cycle management, work load management – just to name two – on the level that the relational database model and SQL have standardized data operations should have a golden future.

Links

  1. Atomist
  2. Rod Johnson on Atomist
  3. Streetlight Effect
  4. Martin Fowler’s Patterns of Enterprise Application Architecture

Some more…

* it may as well be micro-services (a technical approach that aims to become the “Web” of the backend). But then look at stuff like this

IT Projects vs. Product Projects

A while back when I was discussing a project with a potential client, I was amazed at how little willingness to invest into analysis and design there was. Instead of trying to understand the underlying problem space and projecting what would have to happen in near and mid-term future, the client wanted an immediate solution recipe – something that simply would fix it for now.

What had happened?

I acted like a product developer – the client acted like an IT organization

This made me wonder about the characteristic differences between developing a product and solving problems in an IT organization.

A Question of Attitude

Here’s a little – incomplete – table of attitudes that I find characterize the two mindsets:

| IT Organization | Product Organization |
| --- | --- |
| Let’s not talk about it too much – make decisions! | Let’s think it through once more. |
| The next goal counts. No need to solve problems we do not experience today. | We want to solve the “whole” problem – once and forever. |
| Maintenance and future development is a topic for the next budget round. | Let’s try to build something that has defined and limited maintenance needs and can be developed further with little modification. |
| If it works for hundreds, it will certainly work perfectly well for billions. | Prepare for scalability challenges early. |
| We have an ESB? Great, let’s use that! | We do not integrate around something else. We are a “middle” to integrate with. |
| There is that SAP/ORCL consultant who claims to know how to do it? Let’s pay him to solve it! | We need to have the core know-how that helps us plan for the future. |

I have seen these in action more than once. Both points of view are valid and justified: Either you care about keeping something up and running within budget, or you care about addressing a problem space for as long and as effectively as possible. Competing goals.

It gets a little problematic though when applying one mindset to the other’s setting. Or, say, if you think you are solving an IT problem but actually have a product development problem at hand.

For example, a growing IT organization may discover that some initially simple job of maintaining software client installations on workstations reaches a level where the procedures to follow have effectively turned into a product – a solution to the problem domain – without anybody noticing or paying due attention.

The sad truth is that you cannot build a product without some foresight and a long-lasting mission philosophy. Without growing and cultivating an ever more refined “design story”, any product development effort will end up as the stereotypical big ball of mud.

In the case of the potential client of ours, I am not sure how things worked out. Given their attitude I guess they simply ploughed on making only the most obvious decisions – and probably did not get too far.

Conclusion

As an IT organization, make sure not to miss the point when a problem space starts asking for a product development approach – when it will pay off to dedicate resources and planning to beat the day-to-day plumbing effort by securing key field expertise and maintaining a solution with foresight.

Local vs. Distributed Complexity

As a student or programming enthusiast, you will spend considerable time getting your head around data structures and algorithms. It is those elementary concepts that make up the essential tool set to make a dumb machine perform something useful and enjoyable.

When going professional, i.e. when building software to be used by others, developers typically end up either building enabling functionality, e.g. low-level frameworks and libraries (infrastructure), or applications or parts thereof, e.g. user interfaces and jobs (solutions).

There is a cultural divide between infrastructure developers and solution developers. The former have a tendency to believe the latter do somehow intellectually inferior work, while the latter believe the former have no clue about real life.

While it is definitely beneficial to develop skills in API design and system-level programming, without the experience of developing and delivering an end-to-end solution it is like knowing the finest details of kitchen equipment without ever cooking for friends.

The Difference

A typical characteristic of an infrastructure library is a rather well-defined problem scope that is known to imply some level of non-trivial complexity in its implementation (otherwise it would be pointless):

Local complexity is expected and accepted.

In contrast, solution development is driven by business flows, end-user requirements, and other requirements that are typically far from stable until done, much less over time. Complete solutions typically consist of many spread-out – if not distributed – implementation pieces, so that local complexity is simply not affordable:

Distributed complexity is expected; local complexity is not acceptable.

The natural learning order is from left to right:

[Diagram: from local complexity to distributed complexity]

Conclusion

Unfortunately, many careers and whole companies do not get past the infrastructure/solution line. This produces deciders who have very little idea about “the real world” and tend to view it as a simplified extrapolation of their previous experience. Eventually we see astronaut architectures full of disrespect for the problem space, absurd assumptions on how markets adapt, and on how much time and reality exposure solutions require to become solid problem solvers.


Not much to say but…

Working on two super interesting posts: one on z2 v2.3 Maven repository support and smart property pre-processing (check out the roadmap), and – on the other end of the scale – one on how to build secondary indexes for HBase applications.

Anyway, didn’t make it in time and there are seasonal priorities after all.

Hope you have a good start into 2014!

Henning