Scrum Should Indeed Be Run Like Multiple Parallel Waterfall Projects

Normally I do not write about processes and methodologies. Not my preferred subject, really.

Recently, however, I read an article (see below) that restated that agile is not like doing small waterfalls. I think that claim is misleading.

Over time, I have worked on all kinds of projects, ranging from proof-of-concept work to large systems in 24/7 production, and from ongoing maintenance to custom extensions of existing solutions.

Each of those seemed to respond best to a different process approach.

For simple projects, it can be best to have only a rough outline, or to simply start from an existing example and get going.

For maintenance projects, a Kanban approach, essentially a work stream composed of work items of limited conceptual impact, can be best.

It gets more interesting when we consider projects that are clearly beyond a few days of work and yet have a perfectly clear objective. For example, consider a specialized front end for some user group, built on top of an existing backend service.

As a paying customer, you want to define (and understand) the specific result of the development effort, as well as how much it will cost you. Therefore, as a customer, you naturally want development to follow a Waterfall Model:

It starts with a (joint) requirement analysis (the “why”) and a specification and design phase (the “how”). Let’s just call this the Definition Phase.

After Definition, a time plan is made (implying costs), and the actual implementation can commence.

Once implementation completes, the development result is verified and put into use, ideally on time and on budget. Or, as a simplified flow: Definition → Planning → Implementation → Verification → Use.

As we all know, this approach does not work all too well for all projects.

Why is that?

Think of a project as a set of design decisions and work packages with some interdependence or, more simply, as a sequence of N work packages, where a single work package is assumed to be doable by your average developer in one day. Effectively, the plan is a prophecy, N steps deep: after step X, all prerequisites for step X+1 are fulfilled, and after step N, the specification is implemented.

For very simple tasks, or tasks that have been done many times, the probability of failure (that is, the probability that the invariant above does not hold) can be small enough that simple measures, such as adding extra time buffers, make sure things still work out overall.

In software projects, in particular those that are not highly repetitive (think non-maintenance development projects), we typically find many non-repetitive tasks mixed with new technologies and designs implemented for the first time. In such a situation, the probability of any accurate project prediction made at the start decreases rapidly with the “depth” of planning.
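
To put a number on this, here is a back-of-the-envelope sketch (the per-step success probability of 0.95 is an assumed figure for illustration, not a measurement): if every work package independently goes as planned with probability p, the whole N-step prophecy holds with probability p^N, and that product collapses quickly.

```python
# Probability that an N-step plan survives intact, assuming each work
# package independently goes as planned with probability p (0.95 here,
# an assumed number for illustration).
def plan_survival(p: float, n: int) -> float:
    return p ** n

for n in (5, 20, 60, 120):
    print(f"N = {n:3d}: {plan_survival(0.95, n):6.1%}")

# N =   5:  77.4%
# N =  20:  35.8%
# N =  60:   4.6%
# N = 120:   0.2%
```

At 120 one-day work packages, roughly half a year for a single developer, an intact plan has become a statistical near-impossibility.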

There are ways to counter this risk, most notably by continuously validating progress and adapting plans in short iterations, for example in the form of a Scrum-managed process.

While that may sound as if we were discussing opposing, alternative process approaches, each with a sweet spot at a different point on the scale of project complexity, that is not so.

Execute Parallel Waterfalls

In fact, the gist of this post is that an agile process like Scrum is best run by treating it as the parallel execution of multiple smaller waterfall projects.

Here is why: many projects use Scrum as an excuse not to plan and design ahead of time, and instead focus only on short-term feature goals, leaving design decisions as implementation details of a small increment. That is not only a great source of frustration, as it raises the risk that even small increments end up brutally mis-estimated; it also leads to superficially designed architectures that, at best, require frequent and costly re-design.

Instead, we should look for a combination of the two: one that, on the one hand, makes sure we design aspects of the overall project upfront, to an extent that we feel certain they can be implemented and estimated reliably, and that, on the other hand, preserves the flexibility to adapt to changed requirements when needed.

As a result, we run multiple parallel waterfall projects, let’s call them part-projects, that span one to several sprints, while using resources smartly when we need to adapt or, for example, work on bugs introduced by previous work.

Visualize this as parallel execution lanes, each processing a part-project that was planned ahead. At sprint n we work on some subset of the lanes’ tasks (with B denoting a bug ticket), while at sprint n+1 we have moved on and take in the next tasks.
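
To make the idea concrete, here is a minimal sketch of such sprint planning in Python (the part-project names, tasks, and ticket labels are invented for illustration): each part-project lane contributes its next planned task to the sprint, and the remaining capacity absorbs bug tickets.

```python
# Toy model: sprints pull the next task from each part-project lane,
# then fill leftover capacity with bug tickets.
from collections import deque

part_projects = {
    "frontend":  deque(["design review", "layout", "forms", "polish"]),
    "reporting": deque(["spec", "queries", "export"]),
}
bugs = deque(["B-17", "B-21"])

def plan_sprint(capacity: int) -> list[str]:
    sprint = []
    for name, lane in part_projects.items():
        if lane and len(sprint) < capacity:
            sprint.append(f"{name}: {lane.popleft()}")
    while bugs and len(sprint) < capacity:
        sprint.append(f"bug: {bugs.popleft()}")
    return sprint

print(plan_sprint(3))  # sprint n
print(plan_sprint(3))  # sprint n+1
```

The point of the sketch is only that sprint planning consumes from plans made elsewhere; it does not replace them.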

The sprint cycle forces us to re-assess frequently and enables us to predict work throughput, which in turn helps in planning resource assignments. The actual design and estimation process for part-projects is not part of sprint planning, but it serves as crucial input to sprint planning.

References