The unconventional notion of “scope” in lean-agile delivery
Followers of my work, including those who have attended my recent story slicing webinar or training, will likely know I advocate for (and practice) iterative, incremental and empirical approaches to the majority of software/product development endeavours (and, indeed, work initiatives of pretty much any kind).
Often, when I introduce folks to my story slicing techniques, they are excited about the possibilities, but see roadblocks inherent in the current ways of working in their teams or organisations. These folks consequently ask me for advice on how to integrate such an agile approach (which I teach and use) with common/traditional portfolio management processes.
In PMOs or similar structures, the convention is usually to determine, commence and deliver initiatives over a far longer cycle than agile methods advise: years or, at shortest, quarters, rather than daily or weekly delivery with a maximum feedback cycle of 2–8 weeks*. The initiatives expected to yield a sufficiently positive return on investment (ROI) are prioritised and commenced (or continued), and those which aren’t are rejected (or paused).
Scope is not pre-ordained; WE control the scope of work initiatives
ROI calculations, by definition, require an upfront idea of each initiative’s cost (or, put another way, the amount we would be happy to invest for the potential return). This notion leaves us feeling that the only way to derive the ROI’s “I” is for us to determine the “scope” of each competing initiative, such that we can estimate its size (either in absolute time, or relative to the other initiatives).
However, the key thing about “scope” is that it is something WE are in control of. In a slicing context, scope is actually determined by the level and amount of slicing we do, i.e. WE decide how “big” or “small” a capability might be (how long we will spend on it). It does not need to be a matter of prediction.
Each time we slice, we narrow the scope of the story or initiative we sliced, i.e. we explicitly define and contain a smaller number of possibilities for what we can enable for customers, and how we might do so. We become more precise about our intentions. This principle applies both at capability and implementation level.
Using an example I cite often: “Enable Acme Bank customers to do their banking online” — we could spend 1–3 months building and releasing a specific, usable and valuable banking capability (focusing on, say, bill payments, savings, or mortgages, for one or two customer segments) — or we could just as easily spend a year or more “gathering requirements”, designing, building and (maybe) releasing a fully fledged online banking system for anybody and everybody. The “scope” all depends on how deeply we choose to slice (and build) customer (and our own) capabilities, logistics, functional and technical sophistication, user experience, etc.
Of course, we should always be considering scope when prioritising, but the difference is that, in my approach, we do this by slicing potentially valuable capabilities from something vague and potentially large, and then specifying, from the almost infinite number of options, a capability MVP for implementation**.
That is, we explicitly and proactively narrow the scope through simple slicing activities around a whiteboard, rather than accepting a conventional view of “scope”, assigning numbers to ill-defined initiatives based on flimsy predictions, and then using those numbers to justify and potentially pursue initiatives which we shouldn’t, and reject initiatives which we should.
The delays, “bad estimates” and dysfunction many of us see in our workplaces come when we become fixated on a “how long will it take” approach at the portfolio/initiative level, because to answer such a question requires us to quickly and roughly determine scope for all competing initiatives, both at capability and implementation level (how can we credibly predict how long it will take to deliver something if we don’t decide exactly what we will build and how we will do it?). This is wasteful, unwise and nigh-on impossible to do with any level of certainty.
Even if it were possible, it’s not necessarily desirable because it locks us in the here and now to particular initiatives with particular implementations, making it difficult to pivot away from unprofitable endeavours, or explore new opportunities as they arise.
Thus, the lean-agile (and my preferred) way to approach portfolio prioritisation is to use a fixed delivery and investment cadence, along with an economic value model. CD3 (Cost of Delay divided by duration) and WSJF (Weighted Shortest Job First)*** are good candidate frameworks.
Instead of saying “how long will this take” for each initiative, we say “what’s the simplest/quickest thing we can feasibly deliver in 1-x weeks”, and create a cadence of delivery/prioritisation every x weeks, taking into account available teams and capacity.
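To make this concrete, here is a minimal sketch (in Python, with purely hypothetical slice names and Cost of Delay figures of my own invention) of how a fixed cadence simplifies the arithmetic: once every candidate has been sliced to fit the same delivery window, the duration denominator is a constant, and the CD3 ranking collapses to a ranking by relative value.

```python
# A minimal sketch of cadence-based prioritisation (hypothetical data).
# Assumption: every initiative has been sliced to fit the same fixed
# delivery window, so duration is a constant.

CADENCE_WEEKS = 6  # the fixed delivery/prioritisation cadence

# Hypothetical candidate slices with an estimated Cost of Delay
# (value lost per week of not having the capability in the market).
candidates = {
    "Bill payments MVP (segment A)": 40_000,
    "Savings dashboard MVP": 25_000,
    "Mortgage calculator MVP": 60_000,
}

# CD3 = Cost of Delay / duration. With a fixed cadence the denominator
# is the same for every slice, so ranking by CD3 is simply ranking by
# Cost of Delay, i.e. by relative value.
ranked = sorted(
    candidates.items(),
    key=lambda item: item[1] / CADENCE_WEEKS,
    reverse=True,
)

for name, cod in ranked:
    print(f"{name}: CD3 = {cod / CADENCE_WEEKS:,.0f} per week of delay")
```

At the next cadence point the same exercise simply repeats, with updated figures and whatever we learned from the previous cycle.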
Even if a proposed initiative is likely a multi-month or even multi-year program of work, with a greater potential “total value” than anything else, it is unwise to batch it up that way. Better to make such an initiative a strategic theme or objective rather than conflating it directly with “work”. To prioritise effectively we need to compare apples with apples, and frequently inspect and adapt the lay of the land in terms of both the work under way and the unstarted options for what we might do next.
The story slicing philosophy is to continually find the simplest, quickest paths to delivering something useful and valuable to a subset of customers, regardless of the nature and perceived “size” of our initiatives. Some MVP which tests the value proposition, and our ability to deliver on it. Test ALL ideas by saying “can we get something to market in x weeks”. If the answer is “No”, there is likely too much inherent risk in the initiative, so it should either be sliced up into smaller ones, or rejected.
When we do this, it then becomes far easier to prioritise because we don’t need to give everything an effort size — we simply use a delivery/prioritisation cadence coupled with the principle of getting to market in the simplest/quickest way. This allows us to rank initiatives against each other purely by their relative value (against our current definition of value, including time criticality). Then, after e.g. 6 weeks, we do the same prioritisation exercise again, taking into account where we landed with the prioritised initiatives, and any learnings we gained from our activities (along with the natural changes in circumstances which emerge in complex adaptive systems).
We can then continue iterating on initiatives which we have validated as deserving further investment, and switch away from the ones which aren’t delivering enough value to justify it (or, at least, are not valuable enough to keep pursuing, given our limited capacity).
How are you using the notion of “scope” in your organisation? Are you being proactive or passive with it?
*From the principles behind the Agile Manifesto: “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.”
**While I am not a huge fan of SAFe, the approach I am talking about is consistent with SAFe’s recommended use of Epics, prioritised using WSJF, continually reassessed and de-prioritised once there is greater value in other Epics on which to spend our capacity. I talk about this coherence in the webinar.
***Both CD3 and WSJF recognise the importance of risk management and encourage the creation of “smaller” initiatives by making duration/job size the denominator in the formula which determines the CD3/WSJF score, and thus the priority ranking: the larger the job size, the lower the CD3/WSJF score, and the lower the initiative is ranked. We can either make the duration/job size the same for every initiative (using a frequent fixed cadence coupled with a slicing and “what is possible in 6 weeks” approach), or we can try to be predictive, do no slicing and give a relative effort/job size to all initiatives, however they are initially described. Hopefully you are warming to the idea that the former is both a simpler and more fruitful approach 😊, and reduces the risk of us falling victim to the sunk cost fallacy.
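For readers who prefer to see the denominator effect numerically, here is an illustrative sketch (Python again, with invented figures; the function name and numbers are mine, not part of either framework) comparing an un-sliced multi-year programme against a 6-week slice of it.

```python
# Illustrative arithmetic only (hypothetical figures): how the
# duration/job-size denominator in CD3/WSJF penalises large batches.

def cd3(cost_of_delay_per_week, duration_weeks):
    """Cost of Delay divided by duration (higher score = higher priority)."""
    return cost_of_delay_per_week / duration_weeks

# An un-sliced "online banking" programme vs a sliced MVP of it.
big_programme = cd3(cost_of_delay_per_week=100_000, duration_weeks=52)
sliced_mvp = cd3(cost_of_delay_per_week=40_000, duration_weeks=6)

print(f"Un-sliced programme: {big_programme:,.0f} per week")  # ~1,923
print(f"Sliced 6-week MVP:   {sliced_mvp:,.0f} per week")     # ~6,667
# Despite a smaller total value, the sliced MVP scores higher, which is
# exactly the behaviour the denominator is designed to encourage.
```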
If you have any questions about any of the concepts in this article, or need a hand with your service design, lean/agile product development or agile transformation endeavours, please reach out directly to me or my company Hypothesis.