Work Stream

I’ve alluded to “THE Plan” before. I hope it’s clear at this point what level of contempt I have for this dinosaur of BDUF (big design up front) and command and control style IT. Today I want to talk about what I perceive as being better: Work Stream, or Flow.

The first major characteristic of work streams for IT delivery is the absence of a “project.” At its essence, a project is just an arbitrary collection of “stuff” that is time boxed or cost boxed (or both!). Arbitrariness increases rather than decreases complexity, forcing teams to focus not only on the contents of the box, but on the box itself. It takes focus off business value as the prime determinant of success, installing in its place slavish conformance to schedule and “completing the project.”

Instead, imagine a world where we stay laser-focused on business value and do little (if any) long-term planning. The walls are unadorned with work breakdown structures and Gantt charts. Obnoxious stoplight reports that are never any color but red are nowhere to be found. Teams spend time doing rather than planning. I can tell what you’re thinking: inside the company operating in your head, the inmates are running the asylum. That may be your preconceived notion, but let me explain why it’s wrong.

In work streams, a continuous flow of value-delivering tasks is ideated by the team, worked, tested, completed, accepted, and deployed, one at a time. Prioritization is based on feedback from real users, the ones who will use the product to deliver business value. The feature being worked at any moment is the one the team, including its embedded users, has judged to return the most value to the business relative to the cost of development. Success is easy to judge: did deploying the feature in question net out the desired business value?

Why is that the correct (and arguably only) definition of success? Because users cannot use software that isn’t deployed. It is better to build, deploy, and get feedback on a simple version, refining as necessary. Capture what business value you can as quickly as possible, and improve it with future iterations. This is true agility. The team can react quickly, not only to improve existing lines of business but to support entirely new ones arising from changes in the organization’s competitive landscape. The application’s features and capabilities are tightly aligned with the needs of today’s users, not with a snapshot of their needs from six, twelve, or eighteen months ago.

Gone is the need for “synchronization” meetings and onerous status reports. The unit of work is so small that its status is apparent without any additional scaffolding or ceremony. Value is always flowing from the team’s efforts, and that value is nearly instantly captured.

This is a vast departure from the old ways, where we were locked in a cycle of chasing buzzwords, missing the opportunity to create real value. We were so tied in knots worrying about cost containment, schedule pressures, and scope creep that we got stuck in a Fauxgile rut, beating our heads against gates and review processes that are essentially the same Waterfall hurdles our grandfathers used to lament.

Using outdated development methodologies isn’t working for us. We fall into the control trap, trying to “manage projects” and limit costs. The trap is a vicious cycle. We watch budgets to the penny, track effort to the second to balance against estimates, and fight tooth and nail against the dreaded “scope creep”. All of this leads to under-delivering value. Yet what if what the business needs most isn’t cost containment? What if even more value could be obtained by increasing rather than decreasing costs (i.e., more investment)?

Stop and think for a minute… in your project, specifications were laid down by the business, estimates were made, requirements were gathered in excruciating detail, and milestones were set. “THE Plan” is perfect. Until it isn’t.

You gather the team together because the stoplight report your executives love so much is deep “in the red”. You’re eighty-five percent through your time box and your budget. The backlog shows that the team has only “completed” (whatever that means…) sixty percent of the specified functionality. You’re on the road to time and budget overruns. What can you do? Exhort the team to “work harder” and “catch up”? Compromise (cut) scope? Forge ahead, accepting the overruns? Curl up into a ball and cry?

That last one, obviously, is a joke.

It doesn’t have to be like that because work streams have an ace up their sleeve. Not only do they have the economics on their side, but they actually offer stronger control of the delivery process than BDUF. Yes, you read that right. Agility AND control, coming from a process that looks like chaos? Yup.

In traditional projects, the only way to clear the hurdle rate and reap value from your investment is to “complete” the entire project (potentially with compromised scope). In work streams, value is captured after each feature is completed and deployed.

In traditional projects, there is heavy investment and a level of risk that grows over the life cycle. In work streams, the individual investments are smaller, and the risk never grows beyond the extent of one “dud” feature.

In traditional projects, you have to muck around to establish the team’s definition of “done” (code complete? tested? accepted?). In work streams, there’s no ambiguity: you’re done when the feature is in production and generating business value; the only metric is success, and measuring it is easy!

In traditional projects, as you come to the end of your time box or cost box, you still have to deploy in order to recoup any value. In work streams, if the next feature up won’t justify the time or cost or fit within the budget, you can move on to something else in good conscience because you’ve already delivered numerous value-driving features.

In traditional projects, “scope creep” is the enemy and calls into question the beauty and perfection of “THE Plan”. In work streams, new feature requests represent opportunities to adapt to competition or changing market conditions; that new idea might be valuable enough to jump to the front of the queue!

These economics are incredibly powerful, and they give the business what it was looking for all along: limiting risk, investing prudently, moving on from failures with minimal fallout, and capturing value quickly. Which is a more responsible use of organizational resources, the illusion of control (in the form of BDUF bullshitting) or real control?

Hopefully I’ve convinced you of the value of work streams over traditional projects.

What are your experiences with projects? Have you tried work streams or flow-based development? Did you move to work streams full time, or go back to projects? Please share in the comments section below.

Careful with those Policies and Processes, Eugene!

I sat through “one of those” meetings yesterday. The language being spoken was English, and the vocabulary sounded like words from the Agile dictionary. Yet I couldn’t parse what was going on: the words were stolen concepts, failing to match the actions and attitudes of the participants. So many attitudes from the old command and control IT mentality were surfacing that you’d swear the “Agile transformation” wasn’t more than a year underway but had only just begun.

This was a process meeting. A member of the DevOps team had asked on a number of occasions for RDP access to servers in various projects to enable deployments from Azure DevOps. Operations and Support, who are responsible for the care and feeding of virtual machines, raised the concern that they didn’t know specifically what this DevOps team member was doing or why the access in question was needed. So a meeting was called to develop a process for granting access and documenting server configuration changes.

The solution that the team created on the whiteboard was a set of paperwork to document the access and the changes: the who, what, and why; the preparation of a “run book” to document the configuration changes so they can be reproduced in downstream environments; and lastly, two layers of approvals. Future work included the possible addition of a configuration management product to monitor access and changes.

My skin bristled at this conversation. Why do you suppose? Was it because this new process creates more (tedious, manual) work than a simple touch of automation and a leaner, simpler policy one-liner? In part, yes, but what was really most disheartening was the lack of willingness to tear down the walls between project teams and Operations and Support. Participants still wanted to “own” tasks and parts of the DevOps pipeline.

How would I solve this problem? I’m so glad you asked… Step One, a new policy:

“For application servers deployed by continuous deployment pipelines, no configuration change is to be made at any time except by scripting the change, committing that change to source control, and adding execution of that script as a step to the deployment plan.”

Bonus… there is no Step Two!
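To make the policy concrete, here is a minimal sketch of what one scripted change might look like. Everything in it is hypothetical: the appsettings.json path and the timeout setting stand in for whatever the DevOps team member actually needed to reconfigure, and in practice the script might well be PowerShell rather than Python. The point is the shape of the thing: an idempotent script that lives in source control next to the application and runs as a step in the deployment plan.

```python
# Hypothetical example: the file path and setting below are stand-ins for
# whatever configuration change is actually needed on the server. The script
# is committed to source control alongside the application and executed as a
# step in the deployment pipeline, so every environment receives the same
# change, every time, with no manual RDP session required.
import json
from pathlib import Path

CONFIG_PATH = Path(r"C:\apps\myapp\appsettings.json")  # assumed config location
TIMEOUT_SECONDS = 120                                  # assumed desired value


def apply_change() -> None:
    """Idempotently set the request timeout; safe to run on every deployment."""
    config = json.loads(CONFIG_PATH.read_text())
    if config.get("requestTimeoutSeconds") == TIMEOUT_SECONDS:
        print("Configuration already correct; nothing to do.")
        return
    config["requestTimeoutSeconds"] = TIMEOUT_SECONDS
    CONFIG_PATH.write_text(json.dumps(config, indent=2))
    print(f"requestTimeoutSeconds set to {TIMEOUT_SECONDS}.")


if __name__ == "__main__":
    apply_change()
```

Because the script checks the current state before writing, running it twice is harmless, which is exactly what you want from something the pipeline executes on every deployment.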

What are the consequences of choosing a single, simply worded policy over a more complex set of policies and processes?

  • It deepens the experience and skill sets of everyone involved. The project team learns what sort of issues arise in misconfigured VMs, and Operations and Support get a chance to increase their skill in development and use of development tools.
  • It adds auditability of who made the change, what was changed, and, with proper commenting policies on check-in, why the change was made. Using a fully integrated product like Azure DevOps, the check-in can be associated with a work item for the reconfiguration effort, further increasing auditability.
  • In time, the project team can develop a trust and rapport with Operations and Support, eventually allowing the project team to self-serve for application specific configuration changes.
  • If Operations and Support wish to maintain oversight of configuration changes made to servers, they can require pull requests for scripted changes as a form of peer code review.
  • Repeat after me: “The deployment plan is the run book.” One that is instantly updated without any effort or anyone needing to remember to change it. Some documentation is better than no documentation, but self-updating documentation is better yet!
  • Every time code is committed, the deployment plan is tested. For a development team that is following Agile principles regarding code commits, the deployment is tested dozens (hundreds?) of times a day!
  • Instead of an engineer following a run book and manually making changes, the deployment process takes over, reducing the chance of error.
  • Because the deployment plan handles change management and you are leveraging Infrastructure as Code (ARM templates, or your cloud provider’s equivalent), spinning up a blank-slate VM, configuring it properly, and installing your application should be as simple as cloning the deployment plan for a new environment and updating a few variables (see the sketch after this list).
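To ground that last bullet, here is a rough sketch of the “clone the plan, change a few variables” idea. The environment names, resource groups, template path, and parameters are made up for illustration; the only real piece is the standard Azure CLI command (az deployment group create) for deploying an ARM template into a resource group.

```python
# Sketch only: environment names, resource groups, template path, and
# parameters below are hypothetical. The idea is that every environment is
# stood up from the same checked-in ARM template, with just a few variables
# changing per environment.
import subprocess

ENVIRONMENTS = {
    "qa":   {"resource_group": "myapp-qa-rg",   "vm_size": "Standard_B2s"},
    "prod": {"resource_group": "myapp-prod-rg", "vm_size": "Standard_D4s_v3"},
}


def provision(env: str) -> None:
    """Create or update the environment's VM from the shared ARM template."""
    settings = ENVIRONMENTS[env]
    subprocess.run(
        [
            "az", "deployment", "group", "create",
            "--resource-group", settings["resource_group"],
            "--template-file", "infrastructure/vm.json",
            "--parameters", f"vmSize={settings['vm_size']}", f"environmentName={env}",
        ],
        check=True,
    )


if __name__ == "__main__":
    provision("qa")
```

Run from a pipeline step (or a workstation signed in with az login), this is essentially the whole “new environment” procedure: add an entry to the table and call provision.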

The moral of the story is that you don’t have to design complicated multi-phase processes with gates and approvals to achieve a sensible and secure state. Develop a process, diagramming it on a whiteboard. Find the step where automation gives you the biggest bang for your buck, and plan to automate it instead. Keep evaluating and planning for automation until the effort to automate is no longer cost-effective. Then distill the policies down to the simplest language that enforces the processes you’ve developed.

What are your experiences with policies and processes within your organization? Where have you had success with automation, and where did you experience problems? What sort of policies and processes work best in your environment? Please share in the comments section below.