Building trust through right-sized initiatives
Is a one-month initiative a good idea? Is a year too long?
We’ve gone back and forth on how big our initiatives should be. A couple of years ago, we brought Don Reinertsen in to teach a course about his book, “The Principles of Product Development Flow”. We certainly internalized a sensitivity to batch size. Minimizing feature batch size has been one of our great tools for accelerating flow. (We use a hierarchy: initiatives break down into features, which break down into stories.)
It’s easy to jump ahead and assume that you should be striving to minimize batch size at all levels of your portfolio, and we have tried this. For a while, product owners tried to keep their initiatives small, one to two months in size. This had some benefits:
- Each initiative was less expensive, so less money was at risk
- Cards moved across our initiative kanban board regularly - progress was obvious
- It wasn’t practical to approve all of the cards at the portfolio level, so we gave product owners more discretion in deciding how to manage work in their area.
But there were downsides - largely the converse of the benefits:
- Because each initiative was small, it was harder to understand exactly how much we were spending on each one.
- Cards moved fast, so whenever stakeholders and executives looked at the board, the work was unfamiliar.
- Because the product owners had the ability to create and launch new initiatives, stakeholders were not clear on how to evaluate the results.
Reinertsen goes into some detail about how to optimize batch sizes, and I really do recommend you read his section B11 on this. But in a nutshell, we want to balance transaction cost against holding cost.
If you are shipping features as they are done, the holding cost for initiatives is very low. You can start a large initiative and be realizing value continuously. We are not holding items in a batch, the way we do with the user stories in a feature. The holding cost for initiatives only becomes higher if one of them is not shipping features regularly, or if the features it’s shipping are of lower value than a different initiative’s features would be. For the former, we have a different cadence to ensure features don’t get stuck. For the latter, since value is hypothetical, you need to be testing your value hypotheses to know.
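Reinertsen’s transaction-vs-holding-cost tradeoff is the classic economic-lot-size U-curve, and it can be sketched numerically. The numbers below are purely illustrative assumptions (not our actual costs); the point is the shape of the curve, not the values.

```python
import math

def total_cost_per_item(batch_size, transaction_cost, holding_cost_rate):
    """U-curve: per-item transaction cost falls as batches grow,
    while average holding cost rises linearly with batch size."""
    return transaction_cost / batch_size + holding_cost_rate * batch_size / 2

def optimal_batch_size(transaction_cost, holding_cost_rate):
    """Minimum of the U-curve (the economic-lot-size result)."""
    return math.sqrt(2 * transaction_cost / holding_cost_rate)

# Illustrative numbers only: a fixed cost of 200 per batch,
# and a holding cost of 4 per item per period.
best = optimal_batch_size(200, 4)   # batch size of 10 minimizes total cost
```

Raising the transaction cost of an initiative (as we describe below) pushes the optimum toward larger batches, which is exactly why bigger initiatives stopped being a contradiction of the batch-size principle for us.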
So we’ve transitioned to a model where we actually are increasing the transaction cost of initiatives. We are requiring a portfolio-level conversation about the value hypothesis, the assumptions, and the success criteria. We’re setting a planned end date as a lightweight budget. Mind you, the template is designed to be filled out in 5 minutes and be about 100 words.
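A template along these lines might look like the following. This is a hypothetical sketch of the fields described above, not our actual document:

```text
Initiative: <name>
Value hypothesis: We believe <doing X> will <produce outcome Y> for <audience Z>.
Assumptions: <what must be true for the hypothesis to hold>
Success criteria: <measurable signals that the hypothesis is validated>
Planned end date: <date — acts as the lightweight budget>
```

Keeping it to roughly 100 words keeps the transaction cost deliberate but small: enough friction to force a portfolio-level conversation, not enough to become a planning ritual.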
The target size of initiatives is 3-6 months. Our reasoning is that even the most senior product owners with the most discretion should still be checking back in with the portfolio council twice a year. The same goes for long-term investments: even if we intend to make them year after year, we still need to check in on whether their value hypothesis still holds.
Each initiative defines the parameters of trust for a particular product owner. We’re saying,
- We trust you to work with these teams for this period of time to achieve these success criteria.
- You get to choose what features you ship to meet those goals.
- If you’re successful, you’ve built trust and we might trust you with more.
- If your hypothesis or assumptions are invalidated but you come back early, you’ve built trust.
- If your hypothesis or assumptions are invalidated and you don’t come back to let us know but instead blindly continue with a now-impossible plan, you’ve weakened trust, but we’ll have a pre-arranged checkpoint where we can evaluate together what to do.
- If you’ve weakened trust, we can start rebuilding it with a new, smaller initiative with smaller goals.
How well is your portfolio process working at building trust between your stakeholders and your product owners?