Hi Aram,
Many thanks for your excellent questions. I’m actually really glad the statistical topic has come up.
OK, from a pure estimation/forecasting point of view, you are spot on. There is statistical frailty, or at least risks and assumptions to be aware of, in a throughput-with-variance model such as the one I am describing. You are correct: given a good enough dataset, Monte Carlo forecasting would provide a more explicit view of probability and uncertainty, and would be a more statistically sound way of estimating a project’s outcome.
However, I would counter these points by saying that this model is not really intended as a pure forecasting model, but rather as a tool which:
- Introduces the idea of navigating and managing uncertainty to traditionally minded teams and organisations, where deterministic thinking and “being right” are the order of the day
- Is simple enough to use that people will actually use it on a frequent basis
- Promotes the tracking of actual throughput data and cycle times rather than (or in addition to) the (more abstract and deterministic) story point data (or nothing at all)
- Can help a product owner and their team make early, deliberate scope-management (or other corrective) decisions when things look like they are going (or are at risk of going) off course, so they can steer toward success.
Usually teams choose (or are forced to use) a far frailer model: a burn-up chart with a pleasing-looking trend line showing one possible outcome, or nothing at all other than “gut feel” and guesswork (or even false information, in situations where there is not enough mutual trust for people to be transparent and honest with each other).
I like the way my model shows very clearly the impact of moving the vertical time line, or the horizontal scope line, on our ability (and confidence) to deliver x amount of scope by y date. While the numbers may not be statistically “correct”, the principles are certainly sound.
The model provokes conversations about predictability, but also (more importantly, in my view) about the trade-off between agility and trying to predict increasingly distant deliverables.
If you want our team to be more agile, but you also want us to tell you when we will deliver a feature that is number 52 on our backlog (and to use that information to make a potentially irreversible decision), then I am here to tell you that those two things are in conflict.
All models are imperfect; the point of them is to be useful, to provoke conversations and to help us explore better ways. This model is better than what most teams and organisations are using right now (at least the ones I work with!), in the context of software/product development and in environments where people want (or want to use) estimates.
It is also simpler to understand than Monte Carlo forecasting. People reject ideas if they see them as over-complicated, so we have to start somewhere. That said, I should point out that my model is not supposed to be a replacement for Monte Carlo or any other model. I would never advise the use of just ONE model. When I talk to companies about their struggles with estimation and forecasting, I typically introduce Monte Carlo forecasting to them as well.
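For what it’s worth, a Monte Carlo throughput forecast does not have to be complicated either. Here is a minimal sketch in Python, assuming you have a short history of weekly completed-item counts; the throughput history, remaining scope and confidence levels are made-up numbers purely for illustration, not a prescription:

```python
# A minimal sketch of Monte Carlo throughput forecasting.
# The inputs below are hypothetical, purely for illustration.
import random

weekly_throughput_history = [3, 5, 2, 6, 4, 4, 7, 3]  # items finished in each past week
remaining_items = 40                                   # scope still to deliver
simulations = 10_000

weeks_to_finish = []
for _ in range(simulations):
    remaining = remaining_items
    weeks = 0
    while remaining > 0:
        # Assume next week's throughput looks like a randomly chosen past week.
        remaining -= random.choice(weekly_throughput_history)
        weeks += 1
    weeks_to_finish.append(weeks)

weeks_to_finish.sort()
for confidence in (50, 85, 95):
    index = int(len(weeks_to_finish) * confidence / 100) - 1
    print(f"{confidence}% confidence: done within {weeks_to_finish[index]} weeks")
```

Comparing the percentiles something like this prints against the single trend line on a burn-up chart is usually enough to start the conversations about risk and uncertainty I describe above.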
Use more than one delivery progress and forecasting model. Compare results from the different models. Start having conversations about risk, uncertainty, predictability and agility — and their relationship with each other — and start uncovering better ways of doing things in your context.