November 7, 2008 | Mark Paradies

Defending Categorization – Why the TapRooT® Root Cause Tree® Works Better Than Unguided Root Cause Analysis

Defending Categorization

The following is copyrighted material used by permission and taken from the TapRooT® Book – Changing the Way the World Solves Problems (Copyright 2008 by System Improvements, Inc.), Chapter 8.

– – –

Some cause-and-effect gurus object to the use of the Root Cause Tree® (they call it categorization or a pick-list) because they feel that any categorization restricts the thinking of the investigator.

They maintain that the only way to ensure a complete, unbiased, unbounded root cause analysis is to attack each problem from the viewpoint of basic engineering and human performance principles and let the evidence lead where it may through the use of basic cause-and-effect deductive reasoning, testing of hypotheses, and identification of “factors.”

Our extensive investigation research and development, as well as basic psychological principles, show that this thinking is wrong. This section supplies evidence that will help anyone faced with this argument defend the good practices that TapRooT® and the Root Cause Tree® are based on.

TapRooT® and the Root Cause Tree® have undergone extensive testing and field use that prove the Root Cause Tree® does not limit the thinking of investigators. Just the opposite is true. Once investigators are trained in using TapRooT®, they find a broader range of causes and are less restricted in their thinking than before they were trained. This is true even if they were previously trained in a cause-and-effect-based root cause analysis system.

Why do TapRooT® trained analysts find causes that they would have previously overlooked? There are several reasons.

First, when using TapRooT®, investigators use tools in addition to the Root Cause Tree®. These tools, applied before the Root Cause Tree®, encourage better collection of information before the root cause analysis begins. SnapCharT® is especially helpful for organizing investigation information and spotting missing or conflicting information. Equifactor®, CHAP, Change Analysis, and Safeguards Analysis are excellent tools to help the investigator understand what happened before they start analyzing why it happened. Thus, when using TapRooT®, investigators are often better prepared to find Root Causes and less likely to jump to conclusions than they are when using systems based primarily on cause-and-effect (which don’t have these built-in information collection tools).

Second, very few investigators have the broad knowledge, training, and experience in all of the fields needed to use cause-and-effect to analyze a complex accident. What kind of knowledge and experience would be needed? A short list includes:

– equipment engineering

– maintenance

– operations research

– management theory

– human factors/ergonomics

– training theory

I often poll people receiving root cause analysis training, and I find that fewer than 4% have formal training in human factors/ergonomics. And that’s just one of the necessary disciplines.

Therefore, most people need guidance to direct them to the wide variety of Root Causes that should be considered when investigating a problem. They get this guidance when using TapRooT® and the Root Cause Tree®. We have not seen this level of high-quality guidance in any other system.

Third, even experienced gurus fall into a common trap. They develop “favorite cause-itis” – causes that they primarily look for. The concept of “finding the answer you want” has been proven by independent research. Thus, experienced investigators tend to ignore information that does not fit their hypothesis and to look for information that confirms it. This tendency is called confirmation bias. A short list (of the thousands) of research papers from the past 40 years that confirm the existence of a variety of types of confirmation bias and its effects on many fields includes:

• Peter Wason (Quarterly Journal of Experimental Psychology, 12, pages 129-140, “On the Failure to Eliminate Hypotheses in a Conceptual Task,” 1960), (Quarterly Journal of Experimental Psychology, 20, pages 273-281, “Reasoning about a Rule,” 1968)

• C.R. Mynatt, M.E. Doherty, and R.D. Tweney (Quarterly Journal of Experimental Psychology, 29, pages 85-95, “Confirmation Bias in a Simulated Research Environment: An Experimental Study of Scientific Inference,” 1977)

• R.A. Griggs and J.R. Cox (British Journal of Psychology, 73, pages 407-420, “The Elusive Thematic Materials Effect in the Wason Selection Task,” 1982).

• Anthony Greenwald, Anthony Pratkanis, Michael Leippe, and Michael Baumgardner (Psychological Review, 93-2, pages 216-229, “Under What Conditions Does Theory Obstruct Research Progress?,” 1986)

• J. Koehler (Organizational Behavior and Human Decision Processes, 56, pages 28-55, “The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality,” 1993)

• Raymond Nickerson (Review of General Psychology, 2, pages 175-220, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” 1998)

• E. Jonas, S. Schulz-Hardt, D. Frey, N. Thelen (Journal of Personality and Social Psychology, 80-4, pages 557-571, “Confirmation Bias in Sequential Information Search After Preliminary Decisions: An Expansion of Dissonance Theoretical Research on Selective Exposure to Information,” April 2001)

• Ted Kaptchuk (British Medical Journal, 326-7404, pages 1453-1455, “Effect of Interpretive Bias on Research Evidence,” June 2003)

• J. Fugelsang, C. Stein, A. Green, and K. Dunbar (Canadian Journal of Experimental Psychology, 58, pages 132-141, “Theory and Data Interactions of the Scientific Mind: Evidence from the Molecular and the Cognitive Laboratory,” 2004)

• Drew Westen, C. Kilts, P. Blagov, K. Harenski, and S. Hamann (Journal of Cognitive Neuroscience, “Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election,” 2006)

Thus, experienced investigators trying to confirm a hypothesis (the method used when building a fault tree, and implied in the deductive reasoning used in most applications of 5-Why’s and cause-and-effect) or developing lists of “factors” will not achieve the “unbiased analysis” they hope for by avoiding categorization. Like everyone else, they need a system (like the Root Cause Tree®) that focuses them on a broad spectrum of possibilities. They need to use facts to select or eliminate the conditions under which the problem occurs (and thus the best practices that can be used to eliminate those conditions – just as the Root Cause Tree® provides). They need the guidance of the 15 Questions, the Basic Cause Categories of the Root Cause Tree®, and the Root Cause Tree® Dictionary to make sure they avoid the “favorite cause” confirmation bias trap.

Fourth, almost all thinking is categorical in nature. Language, for example, is a categorization of certain sounds into standard meanings, and a dictionary is a book of categorized meanings and pronunciations. Thus, someone who opposes the use of the Root Cause Tree® because it is categorical is just replacing one well-thought-out, well-defined set of categories with another – a set they don’t realize they carry in their own mind. Often, we have observed that the set of categories in a guru problem solver’s mind is more restrictive (as measured by the variety of outcomes in their investigations) than the categorization presented by the Root Cause Tree®. You can, therefore, think of the guru approach (with no well-thought-out categorization) as trying to communicate without a standard language, without a dictionary, and without even a standard alphabet. Imagine how effective that unstructured communication would be…

Finally, the Root Cause Tree® is not just categorization. The Root Cause Tree® is not a simple checklist (pick-list). It has an expert system (the 15 Questions, the Basic Cause Categories, and the Root Cause Tree® Dictionary questions) built into it. Thus, the problems encountered when using a “pick-list” of Root Causes have been solved by the structure and expert system built into the Root Cause Tree®.

The comparison of the Root Cause Tree® to a pick-list of Root Causes is a false comparison. Those who wish to justify other, less developed root cause analysis techniques use it because their systems can’t compare to the robust, proven tools used by TapRooT®, including the Root Cause Tree® and the Root Cause Tree® Dictionary. Calling TapRooT® a simple pick-list makes their systems look superior by comparison. But the comparison is false because TapRooT® IS NOT a pick-list.

Our research and experience, in addition to independent research on confirmation bias, show that the structure and categorization used in the Root Cause Tree® need no apology. Rather, they are a vast advantage over non-structured, poorly categorized techniques that lack built-in expert systems, such as 5-Why’s, cause-and-effect, fishbones, fault trees, and other “factor” trees.

The next time you are asked to defend the Root Cause Tree® versus a system based on cause-and-effect analysis, fault trees, factors, or 5-Why’s, you will be armed with the facts that show the superior design of the TapRooT® System.
