IEEE P2894
This guide provides a technological framework that facilitates increasing the trustworthiness of artificial intelligence (AI) systems by using explainable artificial intelligence (XAI) technologies and methods, covering the following aspects: 1) the requirements for providing XAI systems in different application scenarios; 2) the categorization of a series of XAI tools that offer human-understandable explanations; 3) a set of measurable solutions for evaluating XAI systems in terms of performance concerning accuracy, privacy, and security.
The purpose of this guide is to provide a technological framework that facilitates the adoption and evaluation of appropriate XAI methods by analyzing these methods and showcasing typical scenarios in which XAI can bring great value.
New IEEE Standard – Active – Draft. Dramatic success in machine learning has led to a new wave of artificial intelligence applications that offer extensive benefits to our daily lives. The loss of explainability during this transition, however, brings vulnerability to malicious data, poor model structure design, and suspicion from stakeholders and the general public, all with a range of legal implications. This dilemma has motivated the study of explainable AI (XAI), an active research field that aims to make the results of AI systems more understandable to humans. The field holds great promise for improving the trust and transparency of AI-based systems and is considered a necessary route for AI to move forward. This guide provides a technological blueprint for building, deploying, and managing machine learning models while meeting the requirements of transparent and trustworthy AI by adopting a variety of XAI methodologies. It defines the architectural framework and application guidelines for explainable AI, including: 1) the description and definition of XAI; 2) the types of XAI methods and the application scenarios to which each type applies; 3) the performance evaluation of XAI.
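To make the notion of a human-understandable, post-hoc XAI method concrete, here is a minimal sketch of permutation feature importance, one widely used technique of the kind such a framework would categorize. This example is illustrative only and is not drawn from the standard; the function name, the toy model, and the data are all hypothetical.

```python
# Illustrative sketch: permutation feature importance, a simple
# post-hoc explanation method. The importance of a feature is the
# average drop in accuracy when that feature's column is shuffled,
# breaking its relationship with the target.
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=10, seed=0):
    """Average accuracy drop when feature `feature_idx` is permuted."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats

# Hypothetical toy model that depends only on feature 0.
model = lambda row: row[0] > 0.5
X = [[0.1, 0.9], [0.9, 0.2], [0.7, 0.7], [0.2, 0.1]]
y = [False, True, True, False]
```

Shuffling the unused second feature leaves the predictions unchanged, so its importance is zero, while shuffling the first feature degrades accuracy; this per-feature score is exactly the kind of human-understandable explanation the guide's categorization and evaluation criteria are meant to cover.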