When AI Means Automated Incompetence

The definition of AI – today, at least – is Artificial Intelligence.

This is an unfortunate moniker that has put a shiny, science-fiction gloss on a mundane reality as old as IT itself: the database query.

My earliest encounters with what is now called AI happened 40 years ago.

IBM had recently jumped into the personal computer market. Their new chunky PCs and clicky keyboards had one very normal trait – at times, they would break down.

To fix those computers, the industry began hiring computer repair technicians. That was where my IT career started.

To help the newly minted cohort of computer repair technicians, IBM created PIC (Problem Isolation Charts) manuals that included a diagnostic diskette. The technician would boot the PC with the diagnostic diskette and let this IBM program test the PC. The program, following its AI processes, would find an anomaly and display an error code on the screen. The technician would then use the PIC guide to translate the error code into a meaningful repair activity and replace the specific part the diagnostic program identified as defective.

If the PC was too broken to run the diagnostic disk, the PIC manual had around 200 pages of troubleshooting charts designed to lead the technician to the cause of the PC failure. They had instructions like this:

  1. Does the power supply power on?

    1. Yes: Go to next step.

    2. No: Go to page 46, step 3.

It was a deeply thought-out process. Very intelligent people developed and tested these early AI tools, and the tools were often very useful.
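A chart like that is, in essence, a decision tree: each question routes you to the next step or to a repair action. A minimal sketch in Python – the questions, page references, and repair actions below are invented for illustration, not taken from the real PIC manual:

```python
# A PIC-style troubleshooting chart modeled as a decision tree.
# All questions and outcomes here are hypothetical examples.

CHART = {
    "start": {
        "question": "Does the power supply power on?",
        "yes": "video_check",          # "Go to next step."
        "no": "replace_power_supply",  # e.g. "Go to page 46, step 3."
    },
    "video_check": {
        "question": "Is there video on the display?",
        "yes": "done",
        "no": "replace_display_adapter",
    },
}

ACTIONS = {
    "done": "No fault found by this chart.",
    "replace_power_supply": "Replace the power supply.",
    "replace_display_adapter": "Replace the display adapter.",
}

def run_chart(answers):
    """Walk the chart using a dict of {question: 'yes'/'no'} answers."""
    step = "start"
    while step in CHART:
        node = CHART[step]
        step = node[answers[node["question"]]]
    return ACTIONS[step]
```

For example, `run_chart({"Does the power supply power on?": "no"})` returns "Replace the power supply." – the same lookup a technician performed by flipping to the referenced page.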

But there were also problems that could turn this well-intended AI tool into “Automated Incompetence”.

Those same problems bring the same risk to your AI projects today. Here are a few of those problems and some suggestions to mitigate them:

  1. People created the diagnostic tool. People are wonderful, intelligent, and incredibly creative. Those same positive attributes are often reflected in the tools they create. Unfortunately, none of us is flawless. Therefore, nothing we create is flawless. Everything we create will have points of failure.

    Suggestion: Avoid the tyranny of “error-free solutions”. Nothing is error-free. Mistakes will happen. Make sure every automated process has a log for documenting unexpected results. Use that log to audit the process and its output for potential errors. Oh, and please build a process to fix those errors!

  2. What’s in the box? The IBM Diagnostic Tools were written by IBM engineers using PCs containing only IBM components. They were not QA tested with third-party hardware. IBM did publish specifications to third-party manufacturers on how to make compatible components for the IBM PCs. If you worked in that industry, you will remember that there were “degrees of compatibility”. Some components worked even better than IBM parts. Other third-party components could run operating systems and applications beautifully - but they could not pass an IBM diagnostic test. Unfortunately, the diagnostic test could only detect the presence of the internal components, not their manufacturer or degree of compatibility. This produced a number of false negative and false positive results from the Diagnostic Tools.

    Suggestion: You cannot know everything that is in your AI data sources or processes. People will add non-standard data or unexpected tools that will skew your well-crafted AI solution. Knowing what you cannot know is as important as knowing what you do know. You need to build prompts into your AI solution so that a non-AI contributor can surface what your AI tool cannot know. If you don’t, your results will be skewed by blind spots you don’t know exist.

  3. Incompetent technicians. I don’t fault the technicians. Though a very few were truly incompetent, the majority of the technicians I knew could have been very competent; they simply were not sufficiently trained and mentored. “All you need is the PIC guide and anybody can fix a PC” was the mantra of the day. I have seen that same incompetence mantra propagated through projects, customer support flows, and software solutions for the 40 years of my career. It is sickening. Not only were the technicians unable to fix the PCs consistently, but the servicing costs quickly outstripped what it would have cost to educate the technicians. Customers became frustrated with “unreliable PCs” when the real problem was technician and tool incompetence. As well, the people who created the Diagnostic Tools could not receive valuable, informed feedback from the field – feedback that could have incrementally improved the tool.

    Suggestion: Nothing is as valuable to you and your company as well-trained, knowledgeable employees. Whatever you spend on employee training is a fraction of what you will lose by not investing in them. Everyone who touches your AI tool needs to see behind the “mystique” of the AI moniker. They need to know how it does what it does. Only then can they give you quality feedback to improve your solution.

  4. The GIGO Factor. GIGO stands for “Garbage In – Garbage Out”. That garbage can be caused by:

    • Data and process inputs that contain poor quality and defective information,

    • People and processes that don’t know how to filter or use the data inputs,

    • New factors that impact the process – factors that did not exist when the process was created,

    • Good data, but a flawed process that transforms the input into meaningless or harmful output.

    Suggestion: For each of those causes, there is a mitigation strategy:

  • Limit your data sources. More data does not automatically mean better data. Well-vetted data is more valuable than unlimited data.

  • Your models were built with expected outputs in mind. Log any unexpected output and audit those logs. GIGO is your enemy.

  • Regularly review the current data sources and process flows. Compare them to the design map. Is there anything new or anything that has changed since the last audit? Investigate and accommodate the changes. Then update the design map.

  • A process that outputs something unexpected – even something brilliant – is not a revolutionary process; it is a broken process. It needs to be fixed.
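The log-and-audit strategy above can be sketched in a few lines. This is a minimal illustration in Python, assuming a process whose outputs can be checked against an expected set – the classifier, categories, and in-memory log here are all hypothetical stand-ins, not a production design:

```python
import json
import time

# Hypothetical set of outputs the process was designed to produce.
EXPECTED_CATEGORIES = {"refund", "shipping", "billing"}

# In production this would be a file or database, not an in-memory list.
audit_log = []

def classify(ticket_text):
    """Stand-in for the AI step; returns a category string."""
    return "refund" if "refund" in ticket_text.lower() else "unknown"

def run_with_audit(ticket_text):
    """Run the process, logging any output outside the expected set."""
    result = classify(ticket_text)
    if result not in EXPECTED_CATEGORIES:
        # Record the unexpected result instead of silently passing it along.
        audit_log.append({
            "ts": time.time(),
            "input": ticket_text,
            "output": result,
        })
    return result

run_with_audit("Please refund my order")        # expected output, not logged
run_with_audit("My PC makes a grinding noise")  # unexpected, logged for audit
print(json.dumps(audit_log, indent=2))
```

The point is not the classifier; it is that every unexpected output leaves a trace a human can audit, compare against the design map, and use to fix the process.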

So, how is your AI solution going?

Are you aware of these potential problems?

How are you addressing them?
