The AI quality control problem

Imagine a company that has successfully integrated AI into its workflows. It will include these kinds of workers:

  • Executives. Decide on priorities. Requires strategic vision, leadership.
  • Strategists and managers. Determine where AI will be deployed and how, manage the deployment. Requires leadership skills.
  • Engineers. Create AI tools and systems. Requires excellent technical skills.
  • Implementors. Operate the AI-enhanced systems. Requires experience, dedication, work ethic.
  • Quality control. Identify problems created and propagated by AI and figure out how to fix them. Requires . . . well, read the rest of this post.

The QC challenge

All the other types of workers exist in today’s companies. They’ll need to learn new skills, but managers are managers, engineers are engineers, and implementors are the rank and file that get work done in every company.

The challenge is that AI does stuff wrong. It does stuff wrong in an increasingly unpredictable set of ways. It’s not just hallucinations. It might recommend a course of action that’s immoral. It might recommend a course of action that has a huge unidentified risk. It could do something that makes sense, but would piss off the company’s largest and most temperamental customer. I can’t list everything it will get wrong, and that is the point: there is no way to list all the ways it can screw up.

AI doesn’t just lack humanity. It lacks common sense.

That fact is going to demand a lot of people to check its work.

One problem with this is that the productivity gains everyone is hoping for will at least in part get swallowed up by the need for large numbers of QC people. But that’s not the biggest problem.

The big problem is where those QC people come from. They need to have:

  • Relentless attention to detail.
  • Passion for the success of the business.
  • Creativity to imagine all the things that could go wrong.
  • An enormous level of patience for tedious work.
  • A high tolerance for stress and working under pressure (this job has qualities in common with air traffic control).
  • A lack of ego: a willingness to work in an environment where the machines do most of the work and you spend all day serving their needs.

The people with the skills won’t have the right temperament. Highly creative people don’t like to work for machines. People with the ability to attend to details all day don’t have a high stress tolerance. Passionate people have egos.

We are creating a world that will demand a massive number of people with a combination of skills that doesn’t exist. And by definition machines can’t do this work, because it’s work that makes up for machine blind spots.

The cost of failure

Companies are going to attempt to make the transition and “fix the QC problem later.” They’ll be really efficient until things start to go wrong.

And some of the things that will go wrong are likely to be disastrous. Yes, disastrous on a corporate level, but possibly disastrous at a more terrifying, societal level.

This seems like a problem we ought to be paying attention to.


4 Comments

  1. As a Quality Professional, I concur and have experienced your assessment! “Hal” not totally ready for Prime Time. Human scrutiny is a requirement.

  2. Josh — good points that make sense in the present AI environment. But I can imagine AI creators (not me!) saying four of those points can be dealt with by the nature of AI itself, so humans are not needed. Creativity and ego are problematic, the latter since there seems to be evidence that current AIs reject being wrong in some circumstances. And it might be possible for alternative AIs to offer alternative approaches by humans and other kinds of AIs to check each other’s work — as human scholars do. And these models could be vetted by qualified humans (and there are not a lot of those). But creative AIs and cooperative AIs would have to have a different set of values somehow built into the AI. Such AIs would be fun to work with for qualified AI folks. One underlying problem may be that the extensive data which create all AIs to begin with may compromise these different AIs since unused data are unavailable — except in the future. Another problem is that the resources to do this would likely not be considered given extant competitive pressures to produce AIs that can be monetized now. We seemingly are in a product environment now where complex “finished” products (obviously including AIs) are suspected to be flawed to some degree when offered for use or sale. “Let the customers find the flaws we miss” is a hidden nostrum of companies in our Beta model world.

  3. This is what we get when we refuse to recognize a tool for what it is – a tool, designed for a specific purpose – and instead promote it as the Be-All-And-End-All to do all our work for us and save the world. Our culture consistently obsesses over the latest and greatest Shiny New Object. (I thought we were intellectually superior to crows, but maybe not.) In our society today, an accountant (for example only, not to pick on accountants in particular) who uses a hammer as a screwdriver has unsuccessful results, but blames the hammer for the failure, instead of hiring a carpenter.

    We think working with our own brains is drudgery? Try doing the QC work in the wake of AI-garbage!