Fail, Learn and Repeat. Or not really?
Should we always encourage failure-as-learning, and when might this risk de-motivate individual contributors?
Learning by iterating is not the same as accepting overall failure
At this point, most team leads have been told many times why leaders should encourage failure. It is important to support a learning culture, diversify the team’s skillsets and help establish psychological safety that makes people feel safe to try new things.
The concept of not knowing how to perform a certain act, making mistakes, feeling frustrated and coming out of the process as a stronger, more competent individual is supported by endless models. The four stages of competence describe the process as a progression from unconscious incompetence, through conscious incompetence and conscious competence, all the way to unconscious competence.
The Kübler-Ross change curve depicts this as a process of shock and denial leading to acceptance, where we ‘get on’ with life post-change. Even the Kolb Model of Experiential Learning, briefly summarised, follows a pattern of having an experience, making uncertain sense of it, and finishing with a preliminary understanding that is tested in the active experimentation phase.
Gladwell pitches the idea of developing mastery through 10,000 hours of practice: starting out not knowing how to do something, putting in the hours, and eventually knowing how to perform it.
With all these models pitching, in some shape or form, an approach to learning that consists of not knowing how to do something and trying to do it until it works out, it seems like this would be the silver bullet for every situation where one does not know how to proceed. Just try it - until you know how to do it.
Rather than seeking or glorifying failure, we should be seeking iterative improvement and learning. Sometimes this is better done with small steps than with big, risky leaps where you might not even know what caused the failure; faster, smaller cycles often get to useful learning sooner. And of course there is the issue of psychological safety and shared accountability for the outcome. The risk should not be shouldered by an individual contributor alone if they have sought approval and/or advice from somebody more senior.
What about mission-critical issues?
But just trying to do something until it works out is not a universally accepted source of learning, or even a safe thing to do in many areas of business and life.
Consider a surgeon operating, a pilot landing or an architect designing a 15-story building. In many real circumstances, the cost of failing while learning on the job is too high for it to be utilised as a learning opportunity. Of course, learning still takes place in a different form - for example, students taking up these professions often learn both theoretically and by shadowing experienced people, or even by using simulations and digital twins, before they become practitioners. In these examples it is quite obvious that when lives are at stake, the risk is too high for us to glorify the learning opportunity provided by failing. But even in non-mission-critical situations, repeated “failure” for junior contributors can prove debilitating and de-motivating if they don’t get lucky and find the magic tweak needed to turn failure into success.
How can we expect leaders to know if the risk of failure-led learning is appropriate or not?
W.L. Gore famously operate the waterline principle as a basic framework for assessing the risk a proposed new initiative poses to the organisation. Imagine a boat out on a lake; anyone on the boat can decide to take risks that cause damage ‘above the waterline’, on the parts of the boat that are not constantly submerged, because these failures can be fixed and the boat can move on. But risks that could punch holes ‘below the waterline’ should be avoided or closely monitored, as a hole there is likely to sink the whole boat and everyone on board.
A binary above-or-below-the-waterline analysis is, however, simply not enough. Ultimately, we need a much more granular and grown-up approach to risk, one that looks at probability, impact and mitigation, and that weighs the risks of non-action alongside the risks of action. Still, the waterline principle is a very good starting point for assessing whether a proposed action falls into the ‘failure as learning’ bucket or the ‘failure as disaster’ bucket.
Risks also affect individual contributors. It is all very well saying we are open to failure and that it builds learning and character, but for a relatively inexperienced contributor - or even a team lead - repeated failure can really knock confidence and make them more risk-averse. As responsible leaders, we probably want to avoid that.
Overall, we should be taking some risks within the guardrails of our risk model and expected outcomes, and this can indeed help us learn. But it is not as simple as fail → learn → repeat. There is too much false certainty in the way organisations develop and approve business cases for action, pretending they know the right answer even when a smarter approach would be to start with further discovery.
The cure, therefore, is not to jump feet first into ‘try and fail’ mode as if learning will magically appear. Most of what we should be doing is iterating, and when we iterate towards a specific fitness function (e.g. product-market fit in the Lean Start-up model), the outcomes are rarely as binary as success versus failure; rather, they lie on a graduated scale of improvement until something is ‘good enough’.