April 29, 2026 | Mark Paradies

Can You Improve Your Investigations with AI?


Recent Article – AI Limitations

I recently saw someone describe how AI improved their investigation system. First, they said that the system before AI was:

  • Manual
  • Took hours to complete
  • Required specialized training in the technique
  • Missed subtle contributing factors
  • Produced an inconsistent investigation depth
  • Resulted in subjective root causes

That didn’t sound very good. So, they decided to power the system with AI.

After applying AI to the system, they said it is much improved…

  • The system analyzes photos and videos automatically
  • Provides a comprehensive report in minutes
  • Requires little or no training in the technique
  • Detects hidden hazards and patterns
  • Provides consistent cause analysis
  • Arrives at systematic, evidence-based conclusions

Somehow, I thought this was too good to be true. So, I read the fine print.

The CCTV Problem

It seems the AI needs CCTV footage of the incident and, perhaps, a written description of what happened. If you don’t have good CCTV footage of the complete incident, or you need to type a written description of what happened, then the AI doesn’t “provide a comprehensive written report in minutes.”

We know that investigating to determine what happened can be the most time-consuming part of an incident investigation. The claim of saving 85% of the investigation time by using AI only applies if you have comprehensive CCTV footage.

To make matters worse, you probably need good audio and video to make this system work. The odds of that are slim to none.

CCTV Coverage

The new AI-powered system relies heavily on comprehensive CCTV footage. That means all team members across all locations would need comprehensive 24-hour CCTV coverage, in case their actions were involved in an incident. What about people in the field? Do they need a body cam? How would operators and mechanics react to constant surveillance? Will they get used to it?

Surveillance – CCTV

The Investigator Problem

Now, the next problem. If you don’t have comprehensive CCTV footage, where does the evidence come from for the AI to analyze root causes? From the human investigator? And how do investigators know what evidence they should collect (with little to no training)? An AI system fails without adequate information collection. That seems like a bit of a drawback.

Plus, if the old root cause system resulted in subjective results, wouldn’t the new AI system using the same root cause system and the information provided by a human investigator have the same problems? If the AI doesn’t have CCTV footage to review, then it relies on human-provided information and expertise. AI can’t eliminate garbage in = garbage out, unless AI makes stuff up (which it has been known to do).

One more problem with the human in this automated system … If your investigator has little or no training, how will they know whether the AI investigation results are accurate and ready for management review? Are the AI-provided corrective actions going to work?

What Does AI-Enabled Incident Investigation and Root Cause Analysis Require?

Incident investigation is a business-critical tool for preventing major accidents, improving quality, reducing equipment downtime, and enhancing operational efficiency. Automating investigations and root cause analysis – a business-critical function – with AI requires careful design and testing.

My master’s thesis (A Cognitive Allocation to Improve Nuclear Power Plant Performance, Mark Paradies, University of Illinois, 1985) examined the proper role of automation in the next generation of nuclear power plants. The same type of analysis that I performed should be applied to the automation (with AI) of incident investigations and root cause analysis.

As such, AI should be assigned tasks that automate manual tasks, but automation should not be assigned tasks that involve making vital engineering or management decisions that influence the ultimate safety of the process.

Any decision made by AI should be transparent (easy for the human investigator to understand/verify) and reviewed prior to inclusion in any report or the development of corrective actions. This review requires the reviewer to have the knowledge of an expert in root cause analysis.

AI could also be used to analyze multiple incidents to identify generic causes. Once again, the human investigator should be able to easily review the reasons for AI’s conclusions.

Even with this transparency, there is a high risk that human investigators (and management) will place too much faith in AI-generated results. Human investigators (and management) are likely to believe whatever the AI says. A caution to investigators to be extremely careful when reviewing AI results has little long-term impact on their skepticism of AI suggestions.

TapRooT® RCA AI Implementation

As with every major business, System Improvements is looking for potential applications of AI to their product – the TapRooT® Root Cause Analysis and Incident Investigation System. As part of our analysis, we decided that the AI will be an optional application that can be added to a TapRooT® RCA implementation.

Brain or AI - Actual Intelligence

Because the TapRooT® RCA System already includes Actual Intelligence, applying AI to automate some tasks seems reasonable and less risky than applying AI to an unguided system.

Also, we decided that the optional AI application to TapRooT® Root Cause Analysis will be thoroughly tested (which has already started) by our team of experts and some of our expert users.

We also decided that, once implemented, the software will include human investigator review and approval of any AI recommendations.

One more thing we decided is that modifications will be made to our TapRooT® Root Cause Analysis Courses to ensure investigators understand how to use AI, including their responsibilities when using the optional AI function.

If you would like more information about our approach to AI for root cause analysis, and perhaps become one of our beta testers, just fill out the form at the link below…

https://share.hsforms.com/1OdJTTDdvToCCteEXFMxicA21qm6

Also, watch for articles on AI implementation and testing, as well as release dates for the optional TapRooT® AI Software, available as an add-on to the hosted TapRooT® Software. Look for our first release in June of 2026 (or perhaps even earlier).

TapRooT® Training / Course

To register for the latest public TapRooT® Root Cause Analysis Training at sites around the world, see THIS LINK.

Categories
Investigations, Root Cause Analysis