I’ve been interested in the Cynefin framework for a while and have used it to frame some of my own thoughts, but had not used it in any practical sense before.
(I’m not going to go into what Cynefin is here, as there is already a lot of material that explains it much better than I can, such as this video from Dave Snowden.)
Following a day spent with Mike Burrows learning about AgendaShift facilitation techniques involving Cynefin, I tweaked one of the exercises we did that day for use in a regular sprint retrospective. Mike was also keen to credit Karl Scotland for the ideas, and of course Dave Snowden.
I tried this out for the first time recently; this is an overview of how it worked out and what I learned along the way.
Set Up
Firstly, I set out a whiteboard as per the Cynefin model, with some large sticky notes explaining the domains, leaving plenty of space in each domain for regular-sized stickies holding the data gathered from the team.
The retro then worked through the classic gather data, generate insights, and decide what to do stages.
Gather data
We split the team into small groups – threes worked well. Working individually first, team members thought of sources of frustration or anything that could be improved – not constrained by the last sprint, just anything they could think of – and wrote each item down on its own sticky note.
Then, in their groups, they discussed and prioritised the top three issues.
Generate insights
Once we had completed that, we went through the board and explained how we were going to use it.
Each section of the board represents a domain of complexity into which the issues we identified would fit. Each group decided where to place their stickies.
If the issue had an obvious solution that could be quickly agreed, it went in the simple domain. If an expert – such as a senior dev/tester or the product owner – would be best suited to own the action and come up with a solution, it went in complicated. If there were lots of possible solutions and no one right answer, it belonged in complex. And lastly, if we could not think of any possible solutions, we placed it in chaos.
Each group discussed their sticky notes and brought them up to the board to place in the domain they felt best suited each one. If a group could not decide, the sticky would go in the middle of the board, in disorder – although we didn’t have any of those.
Once all the stickies were on the board, we walked through them to discuss what they were and why they fitted the domain they were in. At this point we were not diving deep into solutions, just explaining the issues and why we put them where we did.
We explored each issue with some probing questions to facilitate understanding and to see if the team felt it was in the right place; if we’d had anything in disorder, we could have tried to re-place it at this point.
Decide what to do
Once we all had a good idea of what each item on the board was and why it was placed there, the team – back in groups of three – talked about candidate actions to take away.
However, where an issue sat on the board determined what type of action we considered.
For anything in simple, we already knew what to do.
For anything in complicated, if the appropriate expert was in the team, that person could come up with an action or take it away for analysis. Otherwise, we would have to identify an expert elsewhere in the business to help us.
Complex issues were where things got interesting: for these, we came up with candidate experiments.
Each action was written on a sticky and placed at the bottom of the board.
Depending on the number of candidate actions and experiments, we could have done some dot voting at this point, but we had a manageable number, so we just walked through them all.
Thoughts for next time
When explaining the complexity domains, thinking up some examples and where they might fit helped understanding, although next time I might think more about this upfront and have some clear and concise examples ready to go.
I really liked this retro for helping a team make decisions on how to proceed. It helped us make best use of our experts for anything in the complicated domain, which can be a big source of frustration and dysfunction in autonomous teams if people are not getting a chance to use their skills due to group dynamics.
It was great to generate lots of candidate experiments and decide what to try. Deciding on candidate experiments is a part of the retro that could be developed quite a lot next time, perhaps by introducing some A3 templates (as we did on the day with Mike) to frame each experiment as a hypothesis, think about who it affects, what the risks and issues might be, how we track that it is being implemented, and how we make it safe to fail.
More info on A3s is available on Mike's blog: https://blog.agendashift.com/2016/04/28/a3-template-for-hypothesis-driven-change/
However, an hour’s retro wasn’t really long enough to get into doing A3s; I think we would need 90 minutes minimum, probably two hours, to do that justice.
Overall I can see the power of running this exercise in a wider context to facilitate a larger organisational change, which is more in line with the AgendaShift exercise.
Lastly, Dave Snowden describes Cynefin as a sense-making framework rather than a categorisation framework. I’m unsure whether we were sense-making or categorising, but I think the exercise was valuable either way.