In today's world of machine-learning-dominated Artificial Intelligence applications, there is a renewed push for the agenda of explainability. The triggers for explainability are manifold:
  • The comfort of knowing what we are handing control over to
  • Knowing precisely which input components contribute to a decision helps us troubleshoot better when trouble arises
  • Compliance and legislative requirements forcing visibility into models for traceability of decisions
  • A better understanding of the causal relationships between the output and input data helps in prescribing the right remedy
  • Deeper insight into the local influence of inputs on specific output cases helps us derive the right test cases for quality audits of AI systems
  • A transparent approach supports thorough sensitivity analysis, letting us perturb the system and test its outputs under mutated conditions
  • Enhanced coverage of all kinds of scenarios, such as extreme and corner cases, follows once a good base for explainability is set up
  • What-if analysis for explainability helps in exploring the influence of the range of different input features
  • Today's highly successful deep-learning-driven machine learning models are extremely opaque, which makes them less amenable to explainability, so suitable perturbation-based and reverse-engineering approaches are vital for deep learning explainability
  • In safety-critical systems such as defence and healthcare, it is vital that diagnoses or decisions powered by machine learning systems be accountable
  • Explainable AI is mandatory for such safety-critical systems
  • A crucial byproduct of incomplete analysis of inputs to AI systems is biased systems; explainability as part of the lifecycle enables more complete coverage of AI systems
  • Last but not least, explainability will be the foundation of trust, and this trust will lead to AI systems being applied in broader use cases and more widely
So we can safely say that explainable AI is the need of the hour, and every earnest attempt should be made to develop a broad-based agenda in the AI/ML/DL community covering the people, process, and technology dimensions of explainable AI.
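The perturbation-based approach to explaining opaque models, mentioned in the list above, can be illustrated in miniature: treat the model as a black box, nudge one input feature at a time, and score each feature by how much the prediction moves. The model, data, and function names below are illustrative assumptions for this sketch, not part of the text.

```python
# A minimal sketch of perturbation-based explanation for a black-box model.
# The "model" here is a hypothetical stand-in; the explainer only calls it,
# never inspects its internals.

def model(x):
    # Stand-in black box: a weighted sum whose weights the explainer cannot see.
    weights = [0.1, 2.0, 0.0, -1.5]
    return sum(w * xi for w, xi in zip(weights, x))

def perturbation_importance(predict, x, delta=1.0):
    """Score each feature by the absolute change in the prediction when
    that feature alone is shifted by `delta` (a local, model-agnostic probe)."""
    base = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        scores.append(abs(predict(perturbed) - base))
    return scores

x = [1.0, 1.0, 1.0, 1.0]
scores = perturbation_importance(model, x)
print(scores)  # feature 1 dominates; feature 2 has no influence at this point
```

The same probe-and-compare loop underlies more elaborate perturbation methods: they differ mainly in how inputs are perturbed (occlusion, sampling, masking) and how the resulting output changes are aggregated into an explanation.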

Dr. Srinivas Padmanabhuni

Ph.D. in AI from University of Alberta, Edmonton, Canada

Why is AI required to have explainability?