Why is AI required to have explainability?

In today's world of machine learning-dominated Artificial Intelligence applications, there is a renewed push for the agenda of explainability. The triggers for explainability can be manifold:
  • The comfort of knowing what you are handing control over to
  • Knowing which input components contribute to a decision, and by how much, helps us troubleshoot better when something goes wrong
  • Compliance and legislative requirements force visibility into models for traceability of decisions
  • A better understanding of the causal relationships between inputs and outputs helps in prescribing the right remedy
  • Deeper insight into the local influence of inputs on specific output cases helps us derive the right test cases for quality audits of AI systems
  • A transparent approach enables a good deal of sensitivity analysis, where we perturb the system and test its outputs under mutated conditions
  • Enhanced coverage of all kinds of scenarios, including extreme and corner cases, follows once a good base for explainability is set up
  • What-if analysis, triggered as part of explainability, helps in exploring the influence of the full range of input features
  • Today's highly successful deep learning models are extremely opaque, which makes them less amenable to explainability; suitable perturbation-based reverse engineering approaches are therefore vital for deep learning explainability (a sketch of one such approach follows this list)
  • In safety-critical domains like defence and healthcare, it becomes vital that diagnoses or decisions powered by machine learning systems be accountable
  • Explainable AI is mandatory for such safety-critical systems
  • A crucial byproduct of incomplete analysis of inputs to AI systems is bias; explainability as part of the lifecycle enables more complete coverage for AI systems
  • Last but not least, explainability will be the foundation of trust, and this trust will lead to wider applicability of AI systems across broader use cases
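A minimal sketch of one such perturbation-based approach is permutation feature importance: shuffle one input feature at a time and measure how much a black-box model's score degrades. The snippet below assumes scikit-learn and uses its bundled iris dataset purely for illustration.

```python
# Perturbation-based explainability sketch: permutation feature importance.
# Assumes scikit-learn is installed; the iris dataset is illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the fitted model as a black box
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle (perturb) each feature in turn and observe the drop in accuracy
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```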
So we can safely say explainable AI is the need of the hour, and every earnest attempt should be made to develop a broad-based agenda in the AI/ML/DL community for the people, process and technology dimensions of explainable AI.

Dr. Srinivas Padmanabhuni

Ph.D. in AI from University of Alberta, Edmonton, Canada

Why do we need to let go of our programming instinct in ML-based AI?

In the modern world of software we are used to the paradigm of software engineering. The discipline of software engineering is predicated upon following a rigid regimen of quality programming, deploying expert coders and programmers to build systems. Hence, a basic necessity often stressed is that developers need to be highly skilled in programming and coding to develop robust systems.

Is this also true for building robust AI/ML systems?

The answer happens to be NO.

The issue is that industry professionals do not sufficiently appreciate how widely the nature of AI applications differs from commonplace software applications. The difference comes out in the illustration below.

In normal software development, our endeavor is to develop a program based on the logic of the expected flow, which in turn is based on the domain knowledge and know-how of the developer who translates requirements into corresponding code.

But in the context of AI applications developed by running ML and DL algorithms on large collections of data, the intent is as outlined in the image below.

The key message here is that in AI we feed data to the algorithm and expect a running program (also known as a model) as the output, one that summarizes the hidden patterns, equations and relationships present in the data.
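As a minimal sketch of this inverted flow, consider the snippet below, which assumes scikit-learn; the features, labels and numbers are made up purely for illustration. We never author the decision logic ourselves: we hand data to the learning algorithm and it hands back an executable artifact, the model.

```python
# "Data in, program out": the algorithm, not the developer, writes the logic.
# Assumes scikit-learn; the loan-style data below is purely illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

# Data in: historical observations (features) and known outcomes (labels)
X = [[25, 40_000], [47, 90_000], [35, 60_000], [52, 120_000], [23, 30_000]]
y = [0, 1, 1, 1, 0]  # e.g. whether a loan was repaid

# The algorithm produces the "program" (the fitted model) from the data
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned logic exists, but we did not write it line by line
print(export_text(model, feature_names=["age", "income"]))
print(model.predict([[30, 55_000]]))  # the generated "program" in action
```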

In view of such an inverted mode of operation, AI poses several questions:

Q1. Can we inject our own logic into the program based on our understanding of the domain?

Q2. If we don't know the program logic, how are we expected to ascertain the correct behaviour of the program generated by AI?

Q3. Lastly, how do we observe changes to the behaviour of the program when the logic changes?

There are several similar questions that come to mind when we deal with AI based on ML.

Here are some perspectives.

  1. The only way we can generate a new model here is either by changing the data fed into the AI algorithm or by changing the conditions of the AI algorithm, such as its hyperparameters.
  2. No, we cannot inject our own logic into the generated model/program.
  3. We can perturb the data to observe changes in the generated program. This is the aspect currently being explored in the context of testing deep learning and machine learning models for explainability.
  4. We do not often have the luxury of knowing the exact logic of the model/program generated by the AI algorithm.
  5. Very often there is a critical need to change the model due to its unsuitability for a certain class of inputs.
  6. Similarly, the applicability of the model also decreases under changed data conditions, as in the case of concept drift, where the underlying truth of the data has changed considerably.
  7. Last but not least, this lack of control, of not being able to debug or modify the program per se, puts a lot of risk into the deployment of the models/programs.
  8. These risks can be mitigated through prudent coverage of the input data scenarios fed into the AI algorithm, to ensure that fair and unbiased coverage of data is manifested in the generated model.
  9. Further, techniques like cross-validation, leave-one-out validation, etc. should be put in place when generating models so as to reduce the risk of over-dependence on a certain part of the data (see the sketch after this list).
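As a minimal sketch of point 9, the snippet below runs 5-fold cross-validation with scikit-learn on its bundled iris dataset (chosen purely for illustration): each fold trains on the remaining data and evaluates on the held-out slice, so no single part of the data dominates the assessment of the generated model.

```python
# Cross-validation sketch: reduce over-dependence on any one slice of data.
# Assumes scikit-learn; the iris dataset is used purely for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print("Per-fold accuracy:", scores)
print("Mean accuracy:", scores.mean())
```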

In parting, our message is: AI is data in, program out.

Keep this phenomenon in sight and accordingly place bets on the right combination of checks, processes and validations as part of the overall process, so as to overcome this lack of visibility into, and manipulability of, the program.

Hence, follow a strict, well-laid-out end-to-end process in AI; for example, adopt the CRISP-DM process.

Happy AI-ing!

Dr. Srinivas Padmanabhuni

Ph.D. in AI from University of Alberta, Edmonton, Canada

Failures of Artificial Intelligence

AI Fails!

1. In a recent incident, Amazon had to scrap an AI-enabled recruiting tool owing to the inherent bias it was showing against recruiting women. This bias was a direct result of the statistical dominance of men in the industry data the tool learned from.

2. Microsoft unveiled a bot named Tay to experiment with end users' conversational data, learning in the process from users' conversations. However, owing to racist and illogical inputs fed in by end users, the bot took on an unpleasant tone, leading Microsoft to withdraw it.

3. In a widely reported incident leading to loss of life, an Uber self-driving car killed a pedestrian at night. As per [4], the pedestrian was detected by the car's vision system; however, the decision to brake was not enacted because the automatic braking system had been turned off.

4. In a recent security analysis [5], painting lane markings in a wrong direction led the vision system of a Tesla car to wrongly steer the car into oncoming traffic.

What do all these incidents mean?

These are a few of the many real-life accidents or erroneous behaviours exhibited by AI-powered systems in the modern world, in applications as diverse as driverless cars, HR systems and chat applications. They highlight the real dangers that must be taken care of in the end-to-end software management of AI-powered systems, and more specifically the need for verification and validation in the context of AI-powered systems. These needs translate to thorough end-to-end testing of AI-powered systems. In addition to the usual needs of testing functional systems, AI systems bring additional desiderata: primarily testing for ethics, testing for biases, and testing for explainability.

Thus, there is a systematic need to set an agenda for testing AI systems: a clarion call for putting in place processes, technologies and techniques for end-to-end testing of AI systems.

Is there a notion the other way round?

While all of the above makes the case for testing AI systems with a view to avoiding failure or accident scenarios, is there also a case for using AI as a technology within current testing processes? The answer is a resounding YES. AI, by virtue of relying on large data repositories, aims to derive practical insights and actionable knowledge from them. Testing processes in the Software Development Life Cycle (SDLC) generate a number of data elements, ranging from test cases and requirement specs to code files, bugs and bug fixes. These are a rich source of data for a range of AI processes. In view of this, our attempt is to highlight a range of SDLC testing processes which can apply AI techniques to these data sources for actionable insights and/or automation.

Let us list a few examples below (an illustrative sketch of the first item follows the list):

  • Automated Defect Prediction
  • Automated Test Case generation from Text/Images
  • Automated Test Evaluation
  • Test Management Optimization
  • Automated GUI testing
  • Test prioritization
  • Automated Test Data Generation
  • Test Oracle Emulation
  • Test coverage analysis
  • Automated Traceability
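As a minimal sketch of the first item, automated defect prediction, the snippet below trains a classifier on historical code metrics to flag defect-prone files. The metrics, data and labels are hypothetical and purely for illustration; the model choice is just one of many reasonable options.

```python
# Automated defect prediction sketch: learn from historical code metrics
# which files are likely to be defective. All data here is hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Historical data per file: [lines_of_code, cyclomatic_complexity, recent_changes]
X_history = [
    [120, 4, 1],
    [980, 27, 9],
    [310, 8, 2],
    [1500, 35, 14],
    [60, 2, 0],
]
y_history = [0, 1, 0, 1, 0]  # 1 = file had a post-release defect

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

# Estimate defect probability for files in the current release
new_files = [[870, 22, 7], [150, 3, 1]]
print(model.predict_proba(new_files)[:, 1])
```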

Thus, we are just about to step into an El Dorado of potential gold mines in making test processes cheaper, better and faster by using state-of-the-art AI techniques in testing.

Happy testAIing!