By Nicholas Brigman on March 12th, 2021
In my last article, I discussed the importance of identifying anomalies. Identifying that something doesn’t fit an organization’s expectations is the first step. Figuring out what to do with that information is next. Human-Assisted AI moves the conversation from “So what?” to “Now what?”.
Align your resources with what the data is telling you. AI gives you another, low-cost weapon. When you look to AI to expand your workforce, keep a couple of things in mind: AI is good at solving SIMPLE challenges, but it may not solve them the way you intended.
There’s an experiment from the early days of neural networks in which randomly generated 3D-rendered bodies were judged by the vertical height they achieved. The intent was to teach the AI to jump.
The experimenter found that the randomness of generation made some bodies taller than others, so only the smaller bodies actually learned to jump. Knowing that, the experimenter altered the experiment: the bodies would next be judged on vertical clearance, with victory going to whichever body lifted its lowest part to the greatest height. This time the AI learned to somersault, taking the lowest part of the body and using the rest of the body to fling it into the air.
So what should you do? When decisions must be made and multiple paths can lead to success, weighting the chosen path is what teaches the AI how to decide in the future. When the necessary information is too broad or complex for AI to process effectively or consistently, the human operator becomes key. The human mind can quickly process multiple, seemingly disparate datasets.
Hybrid automation pairs the best of AI with what humans are good at. AI can quickly regress through a dataset, determine the mathematically relevant features of a problem, and surface the items that don’t fit or predict a future value. Humans are better at categorizing across differing datasets and assigning actions to complex problems, whereas AI can track and prioritize a set of actions that can be applied.
The anomaly model determines which items in the dataset are problematic. These items pass through a filter that is defined by the business and assisted by the engine, which surfaces the right anomalous data to the operator. The operator, who understands the business and the situation, selects an appropriate action from a predefined list. If no available action covers the issue identified, a new action is authored.
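The flow above can be sketched in a few lines. This is a minimal illustration, not the author’s actual system: the threshold value, the field names, and the callback signatures are all assumptions made for the example.

```python
def triage(items, anomaly_score, business_filter, choose_action, threshold=0.8):
    """Surface business-relevant anomalies to a human operator.

    anomaly_score:   callable returning a score for one item (the anomaly model)
    business_filter: callable encoding the business-defined filter
    choose_action:   callable standing in for the operator's selection
                     from the predefined action list
    """
    labeled = []
    for item in items:
        score = anomaly_score(item)
        # Only items that are both anomalous and business-relevant
        # reach the operator for an action decision.
        if score >= threshold and business_filter(item):
            labeled.append((item, score, choose_action(item, score)))
    return labeled
```

Each tuple the loop emits (item, score, chosen action) is exactly the record the next stage, the dependent model, learns from.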
The dependent model receives the original relevant data, the anomaly score, and the action identity, and uses this information to train itself so that it can predict future actions. The dependent model can be used to (1) automatically take action when appropriate confidence of that action is achieved; and/or (2) rank actions that the operator will take in the future.
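As a sketch of both uses of the dependent model, here is a toy stand-in built only on the Python standard library. A real system would train a proper classifier; the nearest-neighbour vote below, the `k` value, and the confidence threshold are illustrative assumptions that only demonstrate the confidence-gated flow.

```python
import math
from collections import Counter

class DependentModel:
    """Toy dependent model: learns (features, anomaly score) -> action."""

    def __init__(self, k=3):
        self.records = []  # (feature vector incl. anomaly score, action identity)
        self.k = k

    def train(self, features, anomaly_score, action):
        # Store the original relevant data, the anomaly score, and the
        # action the operator actually took.
        self.records.append((features + [anomaly_score], action))

    def predict(self, features, anomaly_score, auto_threshold=0.9):
        query = features + [anomaly_score]
        nearest = sorted(self.records, key=lambda r: math.dist(r[0], query))[: self.k]
        votes = Counter(action for _, action in nearest)
        ranked = votes.most_common()
        top_action, count = ranked[0]
        confidence = count / len(nearest)
        if confidence >= auto_threshold:
            # (1) Take the action automatically when confidence is high enough.
            return ("auto", top_action)
        # (2) Otherwise rank the candidate actions for the operator.
        return ("rank", [action for action, _ in ranked])
```

The more operator decisions are fed into `train`, the more often `predict` clears the confidence bar, which mirrors how the system shifts work from the operator to the machine over time.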
The more experience this model receives, the more actions can be taken with confidence. Over time, the operator input contains the most important questions of how to service the anomaly.
The unapologetic merger of man and machine. The acknowledgment that neither the machine nor the human alone can get you to the best answer. Once you have embraced the paradox that the best artificial intelligence lives and learns with human intelligence, you can design high-value hybrid systems that take advantage of the unique strengths of both. This isn’t about putting a man in a robot suit; it’s about the operator asking the machine “what do I do here?” and the machine’s response being clear, concise, and actionable.
First, I used the anomaly detection capabilities and models provided by Amazon. This fed my system and gave my human operators access to real problems.
Next, I fed a new model with a combination of my anomaly information and the actual actions my operators were taking.
I did nothing at first, other than listen and watch. I went for a Gemba Walk.
Then I started to promote the machine-derived superior answers to problems. It was unobtrusive and natural to the operators. We shifted their behaviors in inches, not feet.
Afterward, we removed high-confidence problems from their display. The machine had shown enough confidence in the answer, but we also checked the work: validation would re-raise any unsolved issue (more on this later).
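A routing function captures this last step: auto-resolve high-confidence problems, but keep auditing a sample so an unsolved issue gets re-raised to the operator. The threshold and audit rate here are illustrative assumptions, not the values used in the author’s deployment.

```python
import random

def route(problem, predicted_action, confidence,
          auto_threshold=0.95, audit_rate=0.05, rng=random.random):
    """Decide where an anomaly goes once the model has a prediction."""
    if confidence >= auto_threshold:
        if rng() < audit_rate:
            # Spot-check: a human validates the work; a wrong answer
            # is re-raised as an unsolved issue.
            return ("audit", predicted_action)
        # High confidence: removed from the operator's display entirely.
        return ("auto", predicted_action)
    # Low confidence: shown to the operator with the ranked suggestion.
    return ("operator", predicted_action)
```

Injecting `rng` keeps the sampling testable; in production it would simply default to `random.random`.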
Machines make simple decisions and can learn within fairly tight constraints. Humans can further help machines, but don’t expect too much at first. A properly managed system of humans and machines, Assisted AI, can drive real results within the enterprise. Some results might not be what you expect, which can produce eureka moments or point to new success and validation criteria.
To learn the full story, check out the replay of my webinar.