In our previous post we described how Win-Win solutions and a focus on the constraint can give improvement initiatives a better chance of success. In his article “Continuous Improvement and Auditing”¹, Dr. Barnard argues that these are not the only problems change management faces.
Recognizing improvement potential
We have already mentioned that managers often don’t consider improving things until it becomes inevitable, i.e. when the organization is already in crisis. But an improvement initiative can only unfold its full potential if the organization has sufficient resources to dedicate to it. And it seems obvious that a business has improvement potential even during good times.
So how can you uncover this potential and build upon it? The answer is simple: you artificially generate this “crisis mode” by setting an ambitious target, far removed from the business’s current performance. With this target as your reference point, you can then analyze whether it is possible to reach it under the current conditions (capacities, resources, performance…). The following aspects should be considered and can help guide the decision:
- Can we exploit the system’s constraint better? Breakdowns, blockages, rework or waste are among the issues that lead to suboptimal exploitation of the constraint.
- The Critical Chain determines the organization’s throughput time. Does it suffer from delays caused by waiting for decisions, material or information?
- Every system will have variations in its performance, sometimes substantial ones. What needs to happen for the currently best possible performance to become the standard? Which conditions need to be met?
In this way, you can identify improvement potential which can be further developed. Of course, it may also turn out that the ambitious goal cannot be met under the current conditions and that investments will need to be made.
Measurable impact on the entire system
As we have previously seen, a change can have a measurable effect on a particular area without having any impact on the system as a whole. Many managers have trouble defining these global effects of local changes. This is where Throughput Accounting can be helpful, as it has two notable advantages over traditional accounting methods:
1. It recognizes the importance of the constraint for the performance of the organization. Only changes that impact the constraint will have any real effect on the system’s throughput.
2. It differentiates between Totally Variable Costs (TVC) and Operating Expenses (OE). Allocating Operating Expenses to products and services often results in poor financial decision-making, as it leads managers to treat OE as if they, too, were variable costs.
The impact of any change on the entire system can be judged by its effect on three simple measures:
- Throughput (= Sales – TVC)
- Investment & Inventory
- Operating Expenses (OE)
Using this simple calculation, it becomes much easier to understand even non-linear dependencies, and thus to make better financial decisions.
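The three measures above can be combined into a simple decision check. A minimal sketch in Python (the function names and figures below are illustrative, not from the article; the formulas Net Profit = Throughput − OE and ROI = ΔNet Profit / ΔInvestment are standard in Throughput Accounting):

```python
def throughput(sales, tvc):
    """Throughput = Sales - Totally Variable Costs (TVC)."""
    return sales - tvc

def change_impact(delta_t, delta_oe, delta_i):
    """Judge a proposed change by its effect on the whole system.

    delta_t:  change in Throughput (Sales - TVC)
    delta_oe: change in Operating Expenses
    delta_i:  change in Investment & Inventory
    """
    delta_net_profit = delta_t - delta_oe
    # If no additional investment is needed, any positive net-profit
    # change is attractive regardless of ROI.
    roi = delta_net_profit / delta_i if delta_i else float("inf")
    return delta_net_profit, roi

# Hypothetical example: a change raises Throughput by 50,000,
# adds 10,000 in OE and requires an investment of 100,000.
dnp, roi = change_impact(delta_t=50_000, delta_oe=10_000, delta_i=100_000)
print(dnp, roi)  # 40000 0.4
```

A change that looks profitable in a local cost calculation but leaves all three deltas at zero has, by this measure, no effect on the system at all.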
Avoiding errors of commission and omission
We have already mentioned these rather common mistakes in the first part of this series:
1. Doing something that should not be done (doing the wrong thing, doing the right thing incorrectly, or doing too many things at the same time). This is based on an untested hypothesis (doing something without checking that it is useful).
2. Not doing something that should be done (because “now is not the right time” or “it will never work”). This is based on not acting despite a tested hypothesis.
To avoid these types of mistakes, you must always test the underlying hypothesis before deciding to do (or not do) something. Each hypothesis is based on an assumption, e.g. “our projects are delayed because there are often parts missing at the start of a task.” Is this actually the case? You can test this assumption by using cause-and-effect logic. Check the validity of each predicted effect in order to confirm (or invalidate) the hypothesis.
Sometimes you may find that an assumption (and the hypothesis based on it) can only be tested through trial and error. In this case you will have to act and observe. Of course, a fast and rigorous feedback and monitoring system must be in place, so that you can quickly adjust or abandon the trial if necessary. At any rate, you should avoid letting an alleged “lack of information” keep you in inertia.
Do not let uncertainty paralyze you
Too often, this fear of uncertainty leads to doing nothing at all. The thought process behind this goes: Of course the organization cannot improve this way, but at least it can’t all go terribly wrong. This fear of failure is deeply ingrained in many organizations, but quantum leaps in improvement cannot be made without the courage to take decisions.
This can only happen through a fundamental paradigm shift: avoiding failure at all costs is wrong, because mistakes are what we really learn from. To integrate and cement this new approach, it is helpful to perform the following steps proposed by Dr. Barnard and R. L. Ackoff²:
1. Record every decision made in the organization – including the decisions not to do something.
2. Include in these records the trigger, expected effects, underlying assumptions and information, along with the logic used by the decision maker.
3. Monitor the real effects of the decision. Use this information to adjust course where necessary.
4. Record this corrective decision with all the data mentioned above.
Make sure to also record all decisions not to act. After reviewing these records for a while, it becomes much easier to accept a certain level of uncertainty, as it becomes obvious to everyone that inaction also has real, negative consequences.
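The steps above amount to keeping a structured decision log. A minimal sketch of one log entry (the field names and the example entry are illustrative; the article only lists the kinds of data to record):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in the decision log, covering steps 1 and 2 above."""
    decision: str            # what was decided - including "do nothing"
    trigger: str             # what prompted the decision
    expected_effects: list   # effects the decision maker predicted
    assumptions: list        # assumptions and information relied on
    logic: str               # the decision maker's reasoning
    decided_on: date = field(default_factory=date.today)
    observed_effects: list = field(default_factory=list)  # filled in later (step 3)

# Hypothetical entry, deliberately a decision NOT to act:
log = [
    DecisionRecord(
        decision="Do not hire a second setup crew",
        trigger="Rising setup queue at the bottleneck machine",
        expected_effects=["queue stays manageable"],
        assumptions=["current crew can absorb the extra setups"],
        logic="OE increase not justified by the expected Throughput gain",
    )
]
```

Comparing `expected_effects` with `observed_effects` over time is what turns the log from bookkeeping into a learning tool: it exposes both wrong actions and costly inaction.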
The next post will explain how you can adapt your decision processes to the new reality and thus open your organization to entirely new opportunities.
1: Barnard, Alan. “Continuous Improvement and Auditing.” In Cox III, James F., and Schleier Jr., John G. (eds.), Theory of Constraints Handbook. New York: The McGraw-Hill Companies Inc., 2010, pp. 403–454.
2: Barnard, A. 2001. “Using TOC to Implement SAP: African Explosives Ltd. Case Study.” South Africa: Saphila SAP User Conference Proceedings; and Ackoff, R. L. 2006. “Why Few Organizations Adopt Systems Thinking.” Systems Research and Behavioral Science 23: 705–708. Both presented in Barnard, Alan, “Continuous Improvement and Auditing,” in Cox III, James F., and Schleier Jr., John G. (eds.), Theory of Constraints Handbook. New York: The McGraw-Hill Companies Inc., 2010, p. 438.