In the first article of this three-part blog series, you learned why mistakes can only be permanently eliminated through structured root-cause analysis. In this article I will present the analysis process in detail.
In the first step of the process, you selected the unexpected event you want to investigate further and the team that will carry out the analysis. Now you are ready to begin the analysis.
Step 2: The team verbalizes the gap between expectations and outcomes
The first task of the team is to define the gap between the expected and the actual outcome of a surprising event, because this gap determines the focus of the learning!
In the soup example mentioned earlier, only mediocre market success was expected because of the mediocre taste-test results. But the soup became a huge success. This can be represented graphically as follows:1
Step 3: Identify the cause of the gap
Next, the team tries to identify the cause of the gap between expectation and outcome. The hypotheses generated in this step steer the search for meaningful information in the right direction.
3.1 Brainstorming for hypotheses
Eli uses a so-called “funnel” method which, for example, is also used in innovation management and is known there as the 6-3-5 method. This brainstorming method helps a group of six people come up with over 100 partly crazy ideas in just 30 minutes in order to end up with ONE plausible innovation: six participants each write down three ideas per five-minute round, so that after six rounds 108 ideas have been collected. No proposal is commented on or rated by the other team members; it is simply written down. This “opens the mind” and produces plausible “out of the box” solutions.
Eli recommends a similar approach when analyzing the unexpected event. First, a sufficient number of hypotheses / possible explanations about what happened and why should be generated, initially based only on the basic facts. Wild guesses about what could have caused the “gap” between expectation and reality are encouraged: while compiling wild guesses, the mind opens up and people often arrive at reasonable explanations. The team leader should encourage the team to find as many plausible explanations as possible for what has happened.
3.2 Validate the existence of possible causes
Each hypothesis / assumption (including the wild guesses) is checked logically, without collecting mountains of data, to see whether it provides a complete and plausible explanation. Since many explanations turn out not to be plausible, their number decreases considerably.
3.3 Conduct a detailed analysis and identify the flawed paradigm
Based on the remaining hypotheses, the team investigates what really happened by collecting data and determining the most probable cause. The hypotheses steer the investigation in the right direction, towards the required information. Focusing the search on the most likely causes of the “gap” shortens the investigation considerably.
Next, the team analyzes the hypotheses using cause-and-effect logic.
For the soup example, we can build the following cause-and-effect tree:2
Once the cause-and-effect trees have been built, they are checked for causality. In addition, it is examined whether the individual elements actually existed in the past. Sometimes it is possible to obtain direct information that validates the cause. If this is not possible, we can use the effect-cause-effect method: if X caused Y, then we should also see Z. Alternatively, we can dive even deeper into the cause-and-effect tree to find out what might have caused X. The team leader should make sure that every logical explanation is sufficiently complete and clear.
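As a loose illustration of this effect-cause-effect check, here is a minimal sketch in Python. The hypotheses, predicted effects and observations below are invented for the soup example; they are not taken from the workshop material.

```python
# Each hypothesis names a suspected cause and the additional effects we
# would expect to observe if that cause were real ("if X caused Y,
# we should also see Z").
hypotheses = {
    "The taste-test scores were miscalculated":
        ["raw score sheets disagree with the reported average"],
    "A minority loved the soup while many disliked it":
        ["score distribution is polarized: many very high and very low ratings"],
}

# Effects that were actually found when checking the data (assumed here).
observed = {
    "score distribution is polarized: many very high and very low ratings",
}

# Keep only the causes whose predicted effects were all actually observed.
plausible = [
    cause
    for cause, predicted_effects in hypotheses.items()
    if all(effect in observed for effect in predicted_effects)
]
print(plausible)  # ['A minority loved the soup while many disliked it']
```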
Once all the facts are known, the team tries to derive a lesson. When an unexpected event occurs for the first time, it is not clear whether the employees are caught in a conflict between two approaches. It is also possible that the staff are not fully aware of the cause-and-effect that produces this first-time undesirable effect (UDE). Therefore, we must first find out why the operational cause occurred, so that we can ultimately identify the underlying cause and thus the flawed paradigm.
To this end, the team must clearly differentiate between the unexpected event, the operational cause(s) and the hidden flawed paradigm. Starting from the operational causes, we ask “How come?” to encourage the employees to think and give substantive answers. This should lead us to possible causes and eventually to the flawed paradigm of the key players. We continue to use the effect-cause-effect structure to validate whether the identified causes really existed. The question “How come?” is repeated until a clear paradigm emerges that was not valid in this particular case.
If we look at the soup example, the most likely hypothesis seems to be an error in the data or in its calculation. This presumption, however, turns out to be wrong. The cause analysis showed that 20% of the testers had awarded the highest score and 25% the lowest score. On average, the soup therefore only scored a mediocre result, although 20% of the testers were impressed. If 20% of the market bought the soup over and over again, that would be a big market share. This may explain the success, provided the test group is a representative sample.
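To make the arithmetic concrete, here is a minimal sketch in Python. Only the 20% top scores and 25% bottom scores come from the example; the 1-10 scale and the middling remainder are assumptions for illustration.

```python
from statistics import mean

# Hypothetical taste-test scores on a 1-10 scale for 100 testers:
# 20 testers gave the top score, 25 the lowest, the rest middling marks.
scores = [10] * 20 + [1] * 25 + [5] * 55

average_score = mean(scores)                    # 5.0 -> looks mediocre
top_box_share = scores.count(10) / len(scores)  # 0.20 -> 20% are enthusiastic

print(f"Average score: {average_score:.1f}")        # Average score: 5.0
print(f"Share of top scores: {top_box_share:.0%}")  # Share of top scores: 20%
```

The average hides the fact that a fifth of the testers are enthusiastic, which is exactly the signal that explains the later market success.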
The analysis team identified the following old flawed paradigm in the soup example:3
“The average is an appropriate value to predict the success of a new product.”
To test this new hypothesis, the test results of older products were analyzed to find out whether their success could have been predicted more accurately if the percentage of highest ratings had been used instead of the average score. This was confirmed.
This shows that the old paradigm “The average is an appropriate value to predict the success of a product” is flawed and needs to be adjusted.
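A back-test like the one described above could be sketched as follows. The product data and all numbers are invented for illustration; the article only reports that the check confirmed the new paradigm.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical historical data: (average score, top-score share, relative sales).
products = [
    (7.2, 0.10, 1.1),
    (6.9, 0.22, 2.4),
    (6.0, 0.05, 0.7),
    (6.4, 0.18, 1.9),
    (6.8, 0.08, 0.9),
]

avg_scores = [p[0] for p in products]
top_shares = [p[1] for p in products]
sales      = [p[2] for p in products]

# The predictor whose correlation with actual sales is higher would have
# predicted success more accurately.
print("average score vs. sales:  ", round(correlation(avg_scores, sales), 2))
print("top-score share vs. sales:", round(correlation(top_shares, sales), 2))
```

With these invented numbers, the top-score share tracks sales far more closely than the average score, mirroring the confirmation the team found.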
Step 4: Update the flawed paradigm or, if needed, develop a new one
Identifying the flawed paradigm, and seeing how it is responsible for the operational cause that produced the “gap”, shows us how the paradigm needs to be adjusted. Keep in mind that the original paradigm should only be changed as far as is absolutely necessary! As already mentioned in the first blog article in this series, experience shows that people often tend to select exactly the opposite of the original flawed paradigm as the new paradigm. In most cases this is wrong! Instead, we need to keep all the cases in which the old paradigm is still valid, while specifying the cases in which it has to be adjusted. The unexpected event has demonstrated that there is at least one case in which the paradigm is certainly wrong.
Starting from the present case, we should now try to generalize. Identifying the limitations of a paradigm that is “common” within the organization should bring considerable value. How large the benefit is depends on the team’s ability to broaden the scope of the paradigm beyond the individual case! There are two ways to generalize the paradigm: one can either consider a larger number of possible cases, or extend the verbalization of the paradigm itself. The updated paradigm must reflect the proper cause-and-effect that describes reality as we perceive it after analyzing the “surprise”.
In the soup example, the new paradigm could read as follows: “The proportion of testers who awarded a high score is a good indicator of the success of a new product.”
The following cause-and-effect tree summarizes the analysis of the surprise:4
Congratulations! You have updated or replaced the old flawed paradigm with a new paradigm, thereby avoiding future mistakes. But if you now sit back and twiddle your thumbs, then you are committing two serious mistakes! Find out what these are, and how you can avoid them, in the final part of this blog series.
______________
1, 2, 3 and 4: Illustrations based on Eli Schragenheim’s expert workshop, 04.12.2011