September 20, 2022

Why do we need explainable artificial intelligence?


With the ever-growing market for artificial intelligence use cases across industries come new challenges. One of them is that many popular ML/AI systems are so-called 'black box models': they are not easily interpretable, which is unacceptable in some use cases. If you're looking for more than a bare prediction as output, your model will require a level of explainability. What does that mean?

What is explainable artificial intelligence?

Explainable AI (or XAI) is a collection of techniques and processes that allow human users to understand how an AI-based system works and to trust the output and results produced by machine learning algorithms. Model explainability helps establish a model's correctness, fairness, and transparency, and supports sound outcomes in AI-assisted decision-making. When an organization puts a machine learning model into production, it must establish trust and confidence. With the help of XAI, companies can make more responsible use of artificial intelligence.

Modern machine learning systems take in data as input, analyze it, and produce an output in the form of an "answer": a prediction, suggestion, decision, and so on. We get the answer to the question we asked, but we don't know how the model reached that conclusion, so the decision cannot be explained. For many machine learning use cases, that's not really an issue: the output itself constitutes a valuable insight, and the "how" is not a necessary part of the equation. Take recommender systems, for example. They analyze user data, and GDPR compliance requires businesses to inform users how that data is processed. Here it is often enough to give a general overview of the process: information such as past purchases and product views is used to recommend related products you may find interesting.

However, when human life and well-being are at stake, the decision that has been made needs to be clearly understandable. The lack of reasoning behind the system's decisions exposes organizations to significant risk. Without a human involved in the development process, machine learning models may provide biased results that could cause ethical and regulatory problems. Explainability brings many benefits to the table: it helps engineers ensure that the system is working the way it's supposed to, and, as previously mentioned, it may be necessary to meet regulatory requirements - but it's also essential if it personally affects someone.

Why does explainable AI matter?

Customers need to know that personal data is treated with the highest care and sensitivity, especially in industries like healthcare and finance, and AI is not exempt from that. Legislation, such as the EU's General Data Protection Regulation (GDPR), mandates that businesses give customers an explanation of AI-based decisions. Companies may meet these regulatory requirements and gradually increase end user trust and confidence by using explainable AI systems to show customers exactly where data is coming from and how it is utilized.

What's more, an enterprise must fully comprehend the AI decision-making processes with model monitoring and accountability - you can only trust AI when proven reliable. Humans can benefit from explainable AI by better understanding and explaining machine learning (ML), deep learning, and neural networks. As previously mentioned, ML models are frequently viewed as "black boxes" that we can't make sense of. The fact that we're expanding the use of deep learning and complex neural networks adds to the problem since these are very hard for humans to understand. 

There is also bias based on race, gender, age, and many other factors, which has long been a problem in machine learning models and AI. 

Additionally, because production data differs from training data, AI model performance may drift or deteriorate over time. Companies therefore need to maintain their models, and explainability makes it easier to diagnose such drift, assess the impact of deployed algorithms, and optimize model performance.

And that's not all! There are still legal, security, and reputational concerns that can all be mitigated with the use of explainable artificial intelligence.

With the widespread adoption of AI use cases, XAI is now gaining traction, though the field still needs more attention. According to 451 Research's report 'Voice of the Enterprise: AI/ML Use Cases 2020', 92% of enterprises believe that explainable AI is important, yet few have built or purchased 'explainability' tools for their AI systems. However, the trend is growing: the explainable artificial intelligence market was estimated at USD 4.4 billion in 2021 and is predicted to reach USD 21 billion by 2030, according to market research company NextMSC.


The benefits of explainable AI

As shown in previous paragraphs, XAI is an essential element of responsible AI and an ethical, customer-centric, regulation-compliant approach, but it brings advantages beyond that. Let's have a look at the benefits of XAI:

Reduced bias

Without supervision and explainability, AI models can cultivate bias. Bias can relate to race, gender, age, and many other factors; put briefly, it reflects the errors (misconceptions, stereotypes, etc.) carried in the data used to train models. We want AI to be bias-free, but that's impossible if we can't spot when it follows the wrong train of thought and why it makes unfair decisions. With the decision-making criteria laid out, we can see the problem sooner and fix it before it affects anyone.
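As a simple illustration, one hypothetical way to spot such unfairness is a demographic-parity check: compare the rate of positive decisions across groups. All group names and outcomes below are invented:

```python
# Hypothetical sketch: a demographic-parity check on model decisions.
# Group names and outcomes are invented for illustration.

def approval_rates(decisions):
    """Share of positive decisions per group."""
    return {group: sum(out) / len(out) for group, out in decisions.items()}

def parity_gap(decisions):
    """Largest difference in positive-decision rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# 1 = approved, 0 = rejected, grouped by a protected attribute
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}

print(approval_rates(decisions))
print(parity_gap(decisions))  # a large gap is a red flag worth explaining
```

A large gap alone doesn't prove discrimination, but it tells us where an explanation of the decision criteria is most urgently needed.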

Lower cost of errors

Wrong predictions significantly impact decision-sensitive industries like healthcare, finance, or law. Monitoring the results lessens the impact of incorrect outputs and helps discover the underlying causes, which improves the model.

Model performance

Understanding a model's potential flaws is one of the keys to achieving excellent performance, and it is simpler to improve a model when we understand what's going on 'inside' it. Explainability is a powerful tool for identifying flaws in the model and biases in the data, which makes it a helpful addition to model evaluation: it can aid in confirming predictions, enhancing models, and gaining fresh perspectives on the problem at hand.
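One common model-agnostic technique in this spirit is permutation feature importance: break the link between one feature and the target, and measure how much the model's error grows. Below is a minimal sketch with a made-up toy model; for determinism it reverses a column rather than shuffling it at random, as real implementations do:

```python
# Hypothetical sketch of permutation feature importance, a common
# model-agnostic explainability technique. The "trained" model and the
# data are made up; real implementations shuffle a column at random
# (often several times) rather than reversing it.

def model(row):
    # Toy model: leans heavily on feature 0, uses feature 1, ignores feature 2.
    return 3.0 * row[0] + 1.0 * row[1] + 0.0 * row[2]

def mse(X, y):
    """Mean squared error of the toy model on (X, y)."""
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Error increase when one feature's column is scrambled
    (reversed here, for determinism)."""
    column = [row[feature] for row in X]
    column.reverse()
    X_perm = [list(row) for row in X]
    for row, value in zip(X_perm, column):
        row[feature] = value
    return mse(X_perm, y) - mse(X, y)

X = [[1.0, 2.0, 5.0], [2.0, 0.0, 1.0], [0.0, 1.0, 4.0], [3.0, 3.0, 2.0]]
y = [model(row) for row in X]  # labels the toy model fits perfectly

for feature in range(3):
    print(feature, permutation_importance(X, y, feature))
# scrambling feature 0 hurts the most, feature 2 not at all
```

The scores confirm what the model relies on: a feature whose scrambling barely changes the error contributes little to the predictions, which is exactly the kind of insight that helps spot flaws and dataset biases.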

Building trust

This benefit has already appeared a few times throughout the article: it's easier to trust a system that explains its decisions. Of course, some use cases simply require explainability, but the added perk here is that the people handling the AI can learn how much they can rely on these decisions.

Where is explainable AI used?

Several industries find the use of XAI critical - let's have a look at what they are.


Healthcare

Explainable AI can explain how a disease was identified in medical diagnosis. It can assist doctors in explaining their diagnosis to patients and in adjusting the treatment plan to their needs. This helps doctors gain a more in-depth look into their patients' condition and provide more personalized care while avoiding potential ethical pitfalls. Artificial intelligence can now support doctors in diagnosing various types of cancer, pneumonia, allergies, and many other disorders.

Autonomous vehicles

Due to widely reported mishaps with autonomous vehicles and even some tragedies involving self-driving cars, explainable AI is becoming increasingly significant in the automotive sector. A focus has been placed on explainability strategies for AI algorithms, particularly when making safety-critical decisions. Explainable AI can be applied to autonomous vehicles to boost situational awareness and help prevent accidents.

Fraud detection

Explainable AI is crucial in the financial sector, especially in use cases tied to fraud detection. When it comes to spotting fraudulent transactions, XAI can be used to justify why a transaction was marked as suspicious. This helps reduce potential ethical problems caused by bias and discrimination in situations of suspected fraud.


Military

In the military sector, explainable AI can clarify the reasoning behind choices made by an AI system. This layer of explainability is significant because it lessens potential ethical issues and helps to understand the reasons behind possible failures.


How does XAI work?

There are many approaches to explainability in AI systems - and we're not going to dive deep into the technical details. However, let's look at the principles of explainable artificial intelligence and its general framework.

Principles of XAI

The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, defines four principles of explainable artificial intelligence:

Explanation: Systems deliver accompanying evidence or reason(s) for all outputs. 

Meaningful: Systems provide explanations that are understandable to individual users.

Accuracy: The explanation correctly reflects the system's process for generating the output. 

Limits: The system only operates under conditions for which it was designed or when the system reaches sufficient confidence in its output.

[The above excerpt comes from Draft NISTIR 8312, Four Principles of Explainable Artificial Intelligence]

This means that explainable artificial intelligence should:

  • inform recipients of why a given conclusion was reached: e.g. why the system believes that a patient has pneumonia,
  • provide an explanation that is understandable for the end user, adjusting the level of detail to the target recipient,
  • identify cases it was not designed or approved to operate in, or in which it cannot provide a reliable answer.
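The last point, often called the system's knowledge limits, can be sketched as a simple abstention rule: answer only above a confidence threshold, otherwise decline. The threshold, labels, and confidences below are invented for illustration:

```python
# Hypothetical sketch of the knowledge-limits principle: abstain rather
# than guess. The threshold, labels, and confidences are invented.

THRESHOLD = 0.8  # assumed minimum confidence for answering

def predict_with_limits(probabilities):
    """probabilities: mapping of label -> model confidence (summing to 1)."""
    label = max(probabilities, key=probabilities.get)
    if probabilities[label] < THRESHOLD:
        return None, "outside the system's knowledge limits"
    return label, "confident prediction"

print(predict_with_limits({"pneumonia": 0.93, "healthy": 0.07}))
print(predict_with_limits({"pneumonia": 0.55, "healthy": 0.45}))
```

Declining to answer, together with the stated reason, is itself a form of explanation: the user learns that the case falls outside what the system was designed for.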

Explanation categories

The same paper identifies different categories of explanations:

  • user benefit - showing the user why a given decision was made,
  • societal acceptance - with a goal to increase public trust and acceptance,
  • regulatory and compliance - to provide compliance with the legislature, safety standards, etc.,
  • system development - aiming to facilitate further development, debugging, improving, and maintaining the system,
  • owner benefit - bringing value to the operator, e.g., increasing revenue through a boost in customer satisfaction levels.

Looking at how explainable AI works, we can divide it into three groups:

  • Explainable data: What kind of data was used to train a model? Why was that data chosen? How was fairness judged? Was bias eliminated in any way?
  • Explainable predictions: What aspects of a model were turned on or utilized to produce a specific result?
  • Explainable algorithms: What are the various model layers, and how do they result in the output or prediction?
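The "explainable predictions" group can be made concrete with a linear model, where the score decomposes exactly into per-feature contributions (weight times value). The weights and feature names below are invented, loosely echoing the fraud-detection example:

```python
# Hypothetical sketch of an "explainable prediction": for a linear model
# the score decomposes exactly into per-feature contributions
# (weight * value). Weights and feature names are invented.

weights = {"transaction_amount": 0.004, "foreign_country": 1.5, "night_time": 0.8}
bias = -2.0

def explain(features):
    """Return the score and each feature's additive contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return bias + sum(contributions.values()), contributions

# A suspicious transaction: large amount, abroad, at night
score, contributions = explain(
    {"transaction_amount": 900.0, "foreign_country": 1.0, "night_time": 1.0}
)
print(score)          # roughly 3.9: high enough to flag as suspicious
print(contributions)  # the transaction amount dominates the decision
```

For deep models this decomposition no longer holds exactly, which is why attribution techniques exist to approximate per-feature contributions; the linear case shows what such an explanation is trying to recover.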


Leaders in academia, business, and government have been researching XAI, exploring various explainability techniques and creating algorithms that may solve this challenge. In healthcare, for example, explainability has been cited as a prerequisite for AI clinical decision support systems, because the capacity to decipher system outputs enables shared decision-making between medical professionals and patients and offers much-needed system transparency. In finance, explanations of AI systems give analysts the knowledge they need to audit high-risk judgments and satisfy regulatory standards.