August 14, 2025 | Mark Paradies

AI and Root Cause Analysis

AI image from Canva

Everyone is Excited About AI

Many folks are excited (or worried) about AI. Why?

  1. They think AI might take their job.
  2. They think AI might make their job easier.
  3. They think AI will produce better results.
  4. They think AI may take over the world and eliminate humans (think Terminator).

At System Improvements, we are interested in AI too. Read the discussion below for our thoughts about AI and root cause analysis.

AI and Root Cause Analysis

The folks at System Improvements have been studying the progress of AI and how it may be applied to root cause analysis. We have several thoughts about AI that we will share below.

Not a Mature Technology

Our first conclusion is that Artificial Intelligence isn’t a mature technology at this time. The AI products in use today are experimental. As such, they can produce results that may not be accurate. We will discuss this more in the sections below.

Who Trains the AI

The “goodness” of AI results depends on how the AI is trained. That training is, for now, dependent on the decisions of human programmers. I’ve seen several articles describing biased AI results that were caused by the biases of the programmers who decided what material would be used to train the AI.

This problem may be solved in the future if AI starts deciding for itself what information will be used for training. However, right now it isn’t clear how AI would make those decisions. Thus, a wait-and-see approach seems wise.

May Produce False Results

Several examples have been publicized of AI making mistakes or even making up information.

One example was federal judges using AI to write their opinions about cases they were deciding. Several of the opinions contained references to precedent cases that DIDN’T EXIST. The AI made them up. How did this happen? I don’t know, but the judges were embarrassed when the false cases they had depended upon were identified.

A second example, more personal to us, was an AI comparison of two root cause analysis techniques. One of those techniques was TapRooT® RCA. However, the data in the comparison was inaccurate. It failed to find features included in TapRooT® RCA and added features that belong to other RCA tools but are not used by TapRooT® RCA.

How did this happen? The AI had used a competitor’s article comparing their root cause analysis method to other root cause analysis systems. The article was inaccurate. Even worse, the AI misread the article and attributed features of other systems to TapRooT® RCA. Those features were mentioned in the article, but they are not part of the TapRooT® System.

The casual user who doesn’t know much about TapRooT® RCA might be misled by the AI comparison and come away believing false “facts.”

How can these kinds of mistakes be fixed? We don’t know. The “facts” that AI used were wrong (a training issue?), the AI misused the facts that were there, and/or the AI made things up.

Perhaps this goes back to AI being experimental, and it is one more reason you should not rely on AI results.

Users May Not Learn

I saw two interesting articles about AI and learning.

The overall conclusion of the first article was that when people used AI to find facts and “learn,” they actually remembered very little (if anything) of what they were supposed to learn.

A second article was from a college professor who could immediately recognize student papers written by AI. How could he tell? The AI papers didn’t use the same language and phrasing that real students used, and they often contained errors that real students would not make. He worried that, in addition to not being the students’ own work, this “instant” homework was keeping the students from learning: not only failing to learn the homework assignment, but also failing to learn how to think for themselves.

Would investigators fail to learn if they depended on AI?

Applying AI to Root Cause Analysis

Because of the problems listed above, our general opinion is that it is too early to apply AI to perform a root cause analysis. There are too many problems and too many inaccuracies to trust AI to provide business-critical intelligence from root cause analysis that management depends on for decisions that might stop fatalities and serious injuries.

We believe that life and death decisions shouldn’t be made by experimental AI.

TapRooT® RCA Has AI

What? TapRooT® RCA has AI?

Yes, it does. But it is not Artificial Intelligence. It is ACTUAL INTELLIGENCE.

Brain or AI - Actual Intelligence

What is Actual Intelligence? It is the research, testing, and tools built into the TapRooT® System, including the:

  • SnapCharT® Diagram Tool
  • Equifactor® Troubleshooting Tables
  • Root Cause Tree® Diagram and Dictionary
  • Corrective Action Helper® Guide

This intelligence has been reviewed by independent experts and tested by tens of thousands of users over 30+ years. It has been proven to be helpful and accurate, and to provide solutions that users previously would not have discovered. Plus, TapRooT® RCA helps users learn to discover what happened, ask better questions to uncover true root causes, and develop better solutions based on the latest human performance and equipment reliability research. People get smarter when they use TapRooT® RCA. And these techniques are built into the patented, award-winning TapRooT® Software.

That’s Actual Intelligence vs. Artificial Intelligence.

We are watching what computers produce. The day that AI results become dependable enough to help users produce better root cause analyses and learn more from their experience, we will test AI as a root cause analysis tool and incorporate it where it serves a function and makes sense. But for now, experimental AI technology does not produce the results that we think are needed for a quality root cause analysis.

Therefore, for now, we recommend that you attend TapRooT® Root Cause Analysis Training and build your Actual Intelligence to produce better results and improve performance at your company.

Find out what tens of thousands of TapRooT® Users know:

TapRooT® RCA is Actual Intelligence.

Equifactor® Course Graduates

Categories
Root Cause Analysis, Root Cause Analysis Tips

4 Replies to “AI and Root Cause Analysis”

  • Brent Kamenka says:

    AI Root Cause Analysis: Governance and methods:
    I was intrigued by this article as I am interested in applying root cause analysis to AI. Unlike other systems, I think that AI will require a less reductive approach, and more of a macro approach due to the manner in which capabilities “emerge” as more compute and “attention” are applied to questions.

    This will require not isolating a system, but looking at it in a macro scope.

    Is there any work at TapRooT on this?

    • Mark Paradies says:

      We are watching progress in the field of AI and have several potential ideas about where it could be applied when the AI models/engines are more mature. I’m not sure what a “less reductive” approach is. I think it is unlikely that AI will produce a complete root cause analysis on its own. At least not what we define as a root cause. However, there is a lot to discuss on this topic.

  • Nelson Suarez says:

    I have always thought that the way the TapRooT® methodology has been conceived from the beginning, as a rule-based investigative framework with a standardized causal language and encoded root cause tree logic to ensure decisions follow structured pathways, offers fertile ground for AI to make transformative contributions.

    Its emphasis on systematic analysis, causal mapping, and evidence-based decision-making aligns seamlessly with the strengths of AI—particularly in natural language processing (NLP) and machine learning.

    If, at the moment, we cannot expect AI to conduct a complete investigation on its own, several AI modules could still be incorporated into the software through a partnership between System Improvements and an AI development firm. By embedding AI into the TapRooT® software in a responsible and modular manner, organizations can improve investigative efficiency, reduce human error, and enhance the quality of corrective actions.

    Some tasks that AI could support now are: 1) automated SnapCharT® generation, 2) transcription and semantic tagging of interviews, converting them into actionable data, 3) preliminary root cause suggestions, 4) corrective action recommendations, and 5) trend analysis across multiple investigations.

    I see this integration not yet as a replacement for human expertise—but as a strategic augmentation. The AI modules should be designed with explainability in mind—each suggestion must be accompanied by a clear logic path referencing the underlying TapRooT® rules. This “human-in-the-loop” architecture preserves analytical integrity and ensures that investigators remain the ultimate arbiters of conclusions.
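
    To make this concrete, here is a minimal sketch in Python of what an explainable, human-in-the-loop suggestion record might look like. All names here are hypothetical illustrations of the idea, not part of the TapRooT® Software or any real API:

        # A hypothetical sketch of an explainable, human-in-the-loop AI suggestion.
        # Nothing here is part of the TapRooT® Software or any real API.
        from dataclasses import dataclass

        @dataclass
        class AISuggestion:
            """One AI-generated suggestion, always paired with its reasoning."""
            text: str              # e.g., a candidate causal factor or corrective action
            logic_path: list[str]  # the rules/steps the suggestion is based on
            confidence: float      # the model's own estimate, shown to the investigator

        def review(s: AISuggestion, accept: bool) -> bool:
            # Human-in-the-loop: the investigator, not the model, makes the final call.
            return accept

        suggestion = AISuggestion(
            text="Candidate causal factor: step skipped during valve lineup",
            logic_path=[
                "Event placed on SnapCharT® diagram",
                "Root Cause Tree® branch: Human Performance Difficulty",
            ],
            confidence=0.6,
        )
        accepted = review(suggestion, accept=False)  # rejected despite the AI's confidence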

    For those who own the TapRooT® System materials, exploring AI’s potential becomes a hands-on experience. By providing an AI platform with the essential elements of an incident investigation—such as causal factors, timeline details, and root cause data—and uploading the Corrective Action Helper® Guide in digital form, one can witness AI’s analytical capabilities in action. The system can process the structured input, interpret the methodology’s logic, and propose corrective actions that align with TapRooT®’s rigorous standards.

    • Alex Paradies says:

      Thanks for the suggestions, Nelson. The main problem with your suggestions is not that they couldn’t be done, but that they shouldn’t be done. Go see my article on what GPT is doing to your brain to understand more. The power of TapRooT® is in the way it guides learning. By shortcutting that process, you remove the opportunity for investigators to discover and grow. Meaning the more AI you incorporate into an investigation, the worse you become at performing investigations, and the more dependent you become on AI. When it fails, you will lack the skills needed to recognize its failure. There are places where we are looking to utilize AI, but in a support role, not as a shortcut. I would also comment that I can tell your comment was written/edited by an LLM. The em dashes (ex. AI—particularly) are the biggest giveaway. Be mindful of the ideas that come from LLMs; their goal is to tell you to use them, even when they are harming you.
