
5 key lessons I learned while chasing AI for 20 years

July 21st, 2020


Confessions of a CTO

I was inspired to share my experiences with automation and AI, especially my experiments with edgeTI's edgeCore™ platform. edgeCore is used by large enterprises, service providers, and governments to understand their business and technical operations and then take action with speed and intelligence. While my story started some 20 years ago, the connection to the present proved to be enlightening and reassuring: edgeCore turns out to be a great way to instrument AI and its impact on operations.

The Old Days

I spent over a decade writing self-tuning, tier-1 and tier-2, threshold-based solutions designed to interact with large numbers of remotely located, unattended or under-attended machines. Under-attended machines are those that do have an operator, but the operator doesn't have a good grasp of the functions their hardware and software perform for them.

Enter Edge

I came to edgeTI in November of 2018. When I arrived, Edge was already bringing API-based and web-based data and presentations through its platform. The platform allowed users to create actions that would be presented to the operator when certain conditions were met. Our most advanced customers had learned how to take those threshold-based actions and implement simple automation.
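To make the idea concrete, here is a minimal sketch of what a threshold-based action might look like in Python; the rule shape, field names, and alert behavior are illustrative assumptions, not edgeCore's actual API.

```python
# Minimal sketch of a threshold-based action; the rule shape, field names,
# and alert behavior are illustrative assumptions, not edgeCore's API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ThresholdRule:
    metric: str                       # field in the row to watch
    threshold: float                  # value that triggers the action
    action: Callable[[Dict], None]    # what to present or run when breached

def evaluate(row: Dict, rules: List[ThresholdRule]) -> None:
    """Run every rule against one row of pipeline data."""
    for rule in rules:
        if row.get(rule.metric, 0) > rule.threshold:
            rule.action(row)

# Example: surface an alert when CPU utilization crosses 90%.
rules = [ThresholdRule("cpu_utilization", 90.0,
                       lambda r: print(f"ALERT {r['host']}: cpu={r['cpu_utilization']}"))]
evaluate({"host": "web-01", "cpu_utilization": 94.2}, rules)
```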

Moving the Needle with Robots

Edge was able to drive actions through its pipeline, but those actions were limited in scope to what could be carried out through simple interaction with JavaScript-based commands. Not to understate the power this wielded: it could be powerful, but it wasn't easy and it wasn't complete. The boundaries of the interaction, more or less, ended at the browser. Many of our clients were performing repetitive actions outside of the browser. Some worked through virtual computing environments, like Citrix, and others through fat clients on Microsoft Windows. To increase the utility of our actions, we would need to grow our reach: edgeCore would need to be able to engage scripts that operated within Windows, both local and remote. In mid-2019, we launched our add-on, aptly named edgeRPA.

With edgeRPA, customers could orchestrate activity through our platform that would pass through Citrix, the browser, and Windows directly. This feature gave our clients the unprecedented ability to act upon data moving through a remote pipeline, to prompt individual operators, and to guide those operators through automated steps. The result was dramatic time savings, less rework, and fewer errors. At this point, we were still threshold-based: threshold adjustments were automated in some places but were otherwise largely manual.

AI as an Activity Sponsor

In mid-2019, we launched a feature that would shift the way our customers used their data. As data moved through our pipeline on its way to the operator, we could take pieces of that data and process them against external APIs. We used this function to let our customers take operator-bound data and begin to train AI models. Our proof of concept used an AWS SageMaker model. As rows of data progressed through our pipeline, we would take that opportunity to place observations into the model designed to handle that data.
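Here is a hedged sketch of that accumulation step, assuming the observations land in S3 as CSV where a SageMaker training job can later consume them; the bucket, prefix, and field handling are illustrative, not edgeCore internals.

```python
# Hedged sketch: batch pipeline rows into a CSV object in S3, where a
# SageMaker training job (e.g. Random Cut Forest) can later pick them up.
# The bucket, prefix, and field list are assumptions for illustration.
import csv
import io
from datetime import datetime, timezone
from typing import Dict, List

import boto3

s3 = boto3.client("s3")
BUCKET = "example-observation-bucket"    # assumed bucket name
PREFIX = "sagemaker/training-data"       # assumed key prefix

def record_observations(rows: List[Dict], fields: List[str]) -> None:
    """Serialize a batch of operator-bound rows and store them for training."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writerows(rows)               # headerless CSV, one observation per row
    key = f"{PREFIX}/{datetime.now(timezone.utc).isoformat()}.csv"
    s3.put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue().encode("utf-8"))
```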

At first, the observations just accumulated. But once the model was trained and an acceptable confidence level was established, we shifted the operators' view.

Where once there was a noisy mass of data flowing through a data pipeline, that pipeline was now attached to the SageMaker model as a data source. The data operators saw was streamlined to highlight only the established anomalies. Multi-page datasets could be reduced to just a few lines, ordered by severity.
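A rough sketch of that shifted view follows, assuming a deployed Random Cut Forest-style endpoint that returns a score per row; the endpoint name, feature fields, and cutoff are illustrative.

```python
# Rough sketch of the shifted operator view: score rows against a deployed
# SageMaker anomaly endpoint and keep only the anomalies, most severe first.
# Endpoint name, feature fields, and the cutoff are illustrative assumptions.
import json
from typing import Dict, List

import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT = "example-rcf-endpoint"        # assumed endpoint name

def anomaly_view(rows: List[Dict], feature_fields: List[str],
                 cutoff: float = 3.0) -> List[Dict]:
    """Return only the rows the model flags as anomalous, ordered by score."""
    body = "\n".join(",".join(str(r[f]) for f in feature_fields) for r in rows)
    resp = runtime.invoke_endpoint(EndpointName=ENDPOINT,
                                   ContentType="text/csv",
                                   Body=body)
    scores = [s["score"] for s in json.loads(resp["Body"].read())["scores"]]
    flagged = [(score, row) for score, row in zip(scores, rows) if score > cutoff]
    return [row for score, row in sorted(flagged, key=lambda p: p[0], reverse=True)]
```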

The actions edgeCore had already allowed could now be executed with higher confidence by the operator. But we didn’t stop there.

AI as an Action Arbiter

With streamlined datasets, we undertook the next step of the AI puzzle. We could record the observed anomaly and the follow-up action the operator used to resolve it. This combination of keys was stored in a new model. The new model, once trained, allowed us to re-order the actions offered based on historical observations across the operator user base. Eventually, we would be able to take high-confidence actions automatically and report those actions to the operator. Finally, the actions would be taken and recorded, and the anomaly would never appear to the operator at all.
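A minimal sketch of that arbiter logic, assuming a simple frequency count per anomaly signature; the data structures, names, and confidence bar are illustrative rather than the production model.

```python
# Minimal sketch of the action arbiter: count which action operators chose for
# each anomaly signature, re-order suggestions accordingly, and only nominate
# an automatic action once confidence clears a bar. Names and thresholds are
# illustrative, not the production model.
from collections import Counter, defaultdict
from typing import List, Optional, Tuple

history = defaultdict(Counter)           # anomaly signature -> Counter of actions

def record_resolution(anomaly_key: str, action: str) -> None:
    """Store the (anomaly, action) pair the operator actually used."""
    history[anomaly_key][action] += 1

def suggest_actions(anomaly_key: str, available: List[str],
                    auto_threshold: float = 0.95) -> Tuple[List[str], Optional[str]]:
    """Re-order available actions by historical use; flag an auto-run candidate."""
    counts = history[anomaly_key]
    ranked = sorted(available, key=lambda a: counts[a], reverse=True)
    if not ranked:
        return ranked, None
    total = sum(counts[a] for a in available)
    top_share = counts[ranked[0]] / total if total else 0.0
    auto_run = ranked[0] if top_share >= auto_threshold else None
    return ranked, auto_run
```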

AI is not a Silver Bullet, nor a Quick Fix

The old adage that automating a mistake just lets you make it faster is fair. With a progressive plan of action and the correct tools, the steps to automation can be taken. It's a tractable problem with a lot of solutions. The key is: don't automate what you don't understand. These are the steps to a successful AI solution:

  • Data is key, and not just in quantity; the organizational intelligence (context) around its use matters just as much.
  • Model. Test. Re-model. Test again. Taking your key data and observing it is important. Testing the model's output is even more so. Don't be afraid to start over if you aren't seeing what your data says you should.
  • Allow your users to interact with the anomalies and record the results. Not only do you need to test that your assumptions regarding action automation are correct, but you can also use this second stage of observations to train your action model or create derivative models to explain outliers.
  • Don't jump to automating the action. Either start to curtail the available actions, re-order their presentation, or allow the user to undo anything done. If you move into the automated-action step too early and the action is a one-way door, you will quickly put yourself, and everyone else who relies on that system, service, or application, in a world of hurt.
  • Continue to challenge your AI. Use your quality assurance or quality control to at least spot-check the actions being taken by your AI. Check for drift in your model, and if it drifts, make sure it is drifting appropriately (a minimal spot-check sketch follows this list).
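As referenced in the last item, here is a minimal drift spot-check sketch, assuming anomaly scores are the signal being monitored; the statistic and threshold are illustrative, not a product feature.

```python
# Minimal drift spot-check, assuming anomaly scores are the monitored signal;
# the shift statistic and threshold are illustrative, not a product feature.
from statistics import mean, stdev
from typing import Sequence, Tuple

def drift_check(baseline: Sequence[float], recent: Sequence[float],
                max_shift: float = 0.5) -> Tuple[bool, float]:
    """Flag drift when the recent mean moves more than `max_shift`
    baseline standard deviations away from the baseline mean."""
    base_mean, base_std = mean(baseline), stdev(baseline)
    shift = abs(mean(recent) - base_mean) / base_std if base_std else 0.0
    return shift > max_shift, shift

# Example: a quality-assurance job might run this over each day's scores.
drifted, shift = drift_check([1.1, 0.9, 1.0, 1.2, 0.8], [1.6, 1.7, 1.5, 1.8])
print(f"drift={drifted} (shift={shift:.2f} baseline std devs)")
```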

For quick tips and tricks, and a reminder of the most important lessons from this post, be sure to watch our video.
To learn more about new trends in the industry, follow our LinkedIn page.