I am going to court some controversy with this article, as there are strong advocates for Lean who do not believe in Six Sigma, but I welcome your thoughts on this topic for an open discussion.
Lean has some great tools and techniques for problem solving, and its power comes from engaging the workforce at the grassroots level, since the people closest to the work have intimate knowledge of the problems they face. The typical tools used for this purpose are data collection, basic time series charts, bar charts, Pareto charts to identify the vital few causes, root cause identification using 5 Why analysis, direct observation of the problems at the Gemba, and the Plan-Do-Check-Act cycle to solve the problems.
One problem I have found is that the people engaged in solving these problems are not taught basic statistical tools, so the impact of variation on problem solving is usually not considered. Variation is part of every process, and in order to attack a problem we need to understand whether the variation comes from the random, natural variation of the process or from some special cause. The trouble starts when the team looks at "patterns" in the data they have collected and assumes something has gone wrong with the process. Humans are good at finding non-existent patterns in data (like finding faces in the clouds). An example of this is shown in the figure below. In this figure, it looks like our sales revenue is getting worse and is in decline. We could fit a best-fit line to the data and show a downward trend. But in fact, this data was obtained from a stable random process, and if no action is taken, the process will get better by itself. Of course, you may say that the variation is too large and actions need to be taken to reduce it, and you would be right. But that is a different issue; what I want to point out is that teams will attack this problem as though the mean sales number were declining!
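To see how easily this happens, here is a minimal sketch (the numbers are simulated, not the sales data behind the figure): draw a handful of values from one stable process with a constant mean, then fit an ordinary least-squares line to them. On pure noise the fitted slope is almost never exactly zero, so a "trend" can appear where none exists.

```python
import numpy as np

# Illustrative simulation (not the article's actual sales data):
# 12 monthly "sales" values from one stable process -- same mean,
# same spread, no real decline built in.
rng = np.random.default_rng(7)
sales = rng.normal(loc=100, scale=10, size=12)

# Fit a best-fit (least-squares) line through the points.
# The true slope of the underlying process is 0, but the fitted
# slope on any finite sample of noise will rarely be 0.
slope, intercept = np.polyfit(np.arange(12), sales, 1)
print(f"fitted slope: {slope:.2f} (true slope of the process is 0)")
```

Run this with different seeds and you will see "declining" and "improving" slopes appear and disappear, even though nothing about the process ever changed.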
Along the same lines, when improvement actions are taken, data may be collected for only a short period, and a similar problem arises when the before and after situations are compared "visually." Do you think an improvement was made in the figure below? Actually, it is the same data from the same process, "without" any improvement. However, for the last six data points it looks like things are getting better. This is nothing but seeing "faces" in the clouds. Vague heuristic rules like "three points in a row make a pattern" will not work in all cases. If lean teams were taught basic statistical analysis, they would not make this mistake, because there are powerful statistical tools that will tell you whether the improvement was due to chance alone or to a real difference between the before and after conditions.
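One such tool is the two-sample t-test. The sketch below uses hypothetical numbers, not the data in the figure: both the "before" and "after" samples are drawn from the same stable process, so any visual difference between them is chance alone, and the test should usually say so.

```python
import numpy as np
from scipy import stats

# Hypothetical "before" and "after" samples drawn from the SAME
# stable process (mean 100, sd 10) -- i.e., no real improvement.
rng = np.random.default_rng(42)
before = rng.normal(loc=100, scale=10, size=12)
after = rng.normal(loc=100, scale=10, size=12)

# A two-sample t-test asks: is the difference in means larger than
# what random variation alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(before, after)
if p_value < 0.05:
    print("Evidence of a real shift in the mean")
else:
    print("Difference is consistent with chance alone")
```

The point is not this particular test (which assumes roughly normal, independent data); it is that the question "real change or chance?" has a quantitative answer, and eyeballing six points cannot provide it.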
I would love to hear your thoughts on this topic. What have you found in your experience, and how do lean teams actually handle situations like these?