Failures of Artificial Intelligence

AI Fails!

1. In a recent incident, Amazon had to scrap an AI-enabled recruiting tool because it showed bias against women candidates. The tool had learned this bias from historical hiring data in which men were statistically dominant in the industry.

2. Microsoft unveiled a chatbot named Tay to experiment with end users' conversational data, learning from users' conversations in the process. However, fed racist and illogical inputs by end users, the bot took on an unpleasant tone, leading Microsoft to withdraw it.

3. In a widely reported incident leading to loss of life, an Uber self-driving car struck and killed a pedestrian at night. As per [4], the pedestrian was detected by the car's vision system, but the advice to brake was not enacted because the automatic braking system had been turned off.

4. In a recent security analysis [5], lane markings painted in a wrong direction led the vision system of a Tesla car to wrongly steer the car into oncoming traffic.

What do all these incidents mean?

These are a few of the many real-life accidents and erroneous behaviours exhibited by AI-powered systems in the modern world. They span applications as diverse as driverless cars, HR systems, and chat applications. They highlight the real dangers that must be addressed through end-to-end software management of AI-powered systems, and more specifically the special needs of verification and validation in the context of AI-powered systems. These needs translate to thorough end-to-end testing of AI-powered systems. In addition to the usual requirements for testing functional systems, AI systems bring additional desiderata: testing ethics, testing for biases, and testing for explainability.

Thus, there is a systematic need to set an agenda for testing AI systems: a clarion call for putting in place the processes, technologies, and techniques for end-to-end testing of AI systems.

Is there a notion the other way round?

While all of the above makes the case for testing AI systems with a view to avoiding their failure or accident scenarios, is there also a case for using AI as a technology within current testing processes? The answer is a resounding YES. AI, by virtue of relying on large data repositories, aims to derive practical insights and actionable knowledge from them. The testing processes in the Software Development Life Cycle (SDLC) generate numerous data elements, ranging from test cases, requirement specs, and code files to bugs and bug fixes. These are a rich source of data for a range of AI techniques. In view of this, our attempt is to highlight a range of SDLC testing processes that can apply AI techniques to these data sources for actionable insights and/or automation.
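As a taste of what mining SDLC data can yield, here is a minimal, purely heuristic sketch of scoring files by defect risk from repository metrics. The metric names and the scoring formula are illustrative assumptions, not an established model; real defect predictors typically train a classifier on labelled bug-fix history.

```python
def defect_risk(metrics):
    """Rank files by a crude defect-risk score.

    `metrics` maps file name -> (commit_count, past_bug_fixes, loc).
    Heuristic assumption: files that change often, have needed bug
    fixes before, and are large tend to attract new defects.
    """
    scores = {}
    for name, (commits, bug_fixes, loc) in metrics.items():
        # Multiplicative score: any one high factor raises the risk.
        scores[name] = commits * (1 + bug_fixes) * (loc / 100)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical repository snapshot.
repo = {
    "parser.py": (40, 12, 800),
    "utils.py":  (25, 1, 150),
    "cli.py":    (5, 0, 90),
}
print(defect_risk(repo))  # riskiest file first
```

Even such a simple ranking can guide where to focus review and testing effort; an ML model would replace the hand-written formula with weights learned from past defect data.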

Let us list a few examples below:

  • Automated Defect Prediction
  • Automated Test Case Generation from Text/Images
  • Automated Test Evaluation
  • Test Management Optimization
  • Automated GUI Testing
  • Test Prioritization
  • Automated Test Data Generation
  • Test Oracle Emulation
  • Test Coverage Analysis
  • Automated Traceability
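To make one of these concrete, test prioritization can be bootstrapped from nothing more than past pass/fail records: run the historically flakiest or most failure-prone tests first so defects surface earlier. The sketch below is a minimal, assumed approach based on failure rate; production tools also weigh code churn, coverage, and recency.

```python
from collections import defaultdict

def prioritize_tests(history):
    """Rank test cases by historical failure rate (failures / runs).

    `history` is a list of (test_name, passed) tuples from past runs;
    tests that fail more often are scheduled first.
    """
    runs = defaultdict(int)
    fails = defaultdict(int)
    for name, passed in history:
        runs[name] += 1
        if not passed:
            fails[name] += 1
    # Higher failure rate first; alphabetical tie-break for stability.
    return sorted(runs, key=lambda t: (-fails[t] / runs[t], t))

# Hypothetical run history.
history = [
    ("test_login", True), ("test_login", False),
    ("test_search", True), ("test_search", True),
    ("test_checkout", False), ("test_checkout", False),
]
print(prioritize_tests(history))
# → ['test_checkout', 'test_login', 'test_search']
```

This ordering lets a time-boxed CI run execute the highest-risk tests within its budget, one of the simplest "actionable insights" that test-run data can yield.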

Thus, we are about to step into an El Dorado of opportunity: making test processes cheaper, better, and faster by applying state-of-the-art AI techniques to testing.

Happy testAIing!
