AI for Root Cause Analysis – Yes or No?

Problems With AI
You may have read about problems with AI. They include:
- Taking the human out of the loop,
- Making up answers,
- Being opaque (the user can’t easily see why the AI arrived at an answer),
- Being trained on incomplete data,
- Encouraging humans to place too much confidence in AI solutions,
- Causing humans to lose skills through inadequate practice, and
- Being used inappropriately for critical decision-making.
Of course, if you oppose AI, you may add that AI could take over the world and cause human extinction (or at least massive loss of jobs), but that goes beyond the purpose of the list above.
AI Advantages
When it comes to root cause analysis, AI may have several advantages:
- AI could help substandard root cause tools like 5-Whys produce better results,
- AI can look for patterns in root cause analysis data and suggest generic causes, and
- AI could search the web for potential solutions to help fix problems.
These are the types of suggested uses of AI for root cause analysis that I’ve seen.
AI Disadvantages
First, if you turn over root cause analysis to AI, you are handing a key business function to technology that is generally opaque, may make up answers (corrective actions) not based on facts, and may introduce blame into the system (for instance, if the AI starts recommending discipline to change human behaviour, or suggests replacing humans with automation as a standard corrective action).
In addition, if AI replaces the human in the root cause analysis process, investigators and facilitators will lose the ability to find root causes and develop effective corrective actions. Monitoring the output of AI-driven root cause analysis will then become progressively more difficult, until management is totally dependent on AI for key business decisions about performance improvement.
Current and Future Evaluation
What does the future hold? That remains to be seen. And the future of AI is changing rapidly. With the right base system and the proper development, AI-enabled root cause analysis might be a significant step forward. However, for now, the disadvantages of using AI for root cause analysis outweigh the advantages.
Let’s look at one potential example to illustrate this point…
What if AI couldn’t find a root cause (or causes) and “decided” to make something up? If the system were opaque, the human in charge (who no longer has experience performing root cause analysis) might blindly implement whatever the AI suggests. This is even more likely if a past human “supervisor” had disregarded the AI’s suggestions and then gotten into trouble with management or regulators. The AI’s suggestions might even make the system less reliable and more prone to, for instance, process safety accidents. This could result in a major release, an explosion and fire, fatalities, and significant regulatory action.
What happens next? Would the AI tell the company that it was responsible for the accident? Or would it shift the blame to operators, mechanics, or management? This could happen: AI has been caught dodging blame and fabricating statistics to make its performance look better. Smart AI understands blame and how to shift it to others.
So, before AI can become a trusted root cause analysis assistant, it must have a solid design foundation and undergo thorough testing. AI should not replace human-led root cause analysis; instead, it should serve as a powerful assistant to make the process more efficient.
This is an important conversation. AI can accelerate parts of the investigation process, like pattern recognition, but what it can’t do is interpret human factors, context, or culture. These are the things that make TapRooT® Root Cause Analysis so effective. Emotional intelligence is still a leading “technology” for understanding why people do what they do, because it includes our ability to listen beyond words, interpret context, and build trust.
Thanks, Barb.
I completely understand the perspectives presented in this article, and I agree that the examples—particularly those highlighting potential disadvantages—are highly plausible and realistic. Given how dependent humans have become on automated systems, these concerns are well-founded. Personally, I use AI as an artificial devil’s advocate and as a tool for verification and validation, while ensuring I maintain my own knowledge base and critical thinking skills. It’s essential for any user to preserve that distinction in order to sustain their professional value within the workplace.