Abstract: People learn causal relations from childhood through counterfactual reasoning. Counterfactual reasoning relies on counterfactual examples, which take the form of "what if this had happened differently?". Counterfactual examples are also the basis of counterfactual explanation in explainable artificial intelligence (XAI). However, a framework that relies solely on optimization algorithms to find and present counterfactual samples cannot help users gain a deeper understanding of the system. Without a way to verify their understanding, users can even be misled by such explanations. These limitations can be overcome through an interactive and iterative framework that allows users to explore their desired "what-if" scenarios. The purpose of our research is to develop such a framework. In this paper, we present our "what-if" XAI framework (WiXAI), which visualizes the artificial intelligence (AI) classification model from the perspective of the user's sample and guides their "what-if" exploration. We also describe how the WiXAI framework is used to generate counterfactuals and to understand feature-feature and feature-output relations in depth for a local sample. These relations help move users toward causal understanding.
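To make the notion of a counterfactual example concrete, the sketch below shows one simple way such samples can be found for a classifier: greedily perturbing a local sample's features until the predicted label flips. This is a minimal illustration only; the abstract does not specify WiXAI's actual search algorithm, and the classifier, step size, and search strategy here are assumptions chosen for clarity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A toy binary classifier standing in for the AI model being explained.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

def find_counterfactual(model, x, step=0.05, max_iter=500):
    """Greedy hill-climbing: nudge one feature at a time away from the
    original class until the predicted label flips (a counterfactual)."""
    original_label = model.predict(x.reshape(1, -1))[0]
    cf = x.copy()
    for _ in range(max_iter):
        best_cf = cf
        best_prob = model.predict_proba(cf.reshape(1, -1))[0][original_label]
        # Try a small positive and negative step on each feature.
        for i in range(len(cf)):
            for delta in (-step, step):
                cand = cf.copy()
                cand[i] += delta
                prob = model.predict_proba(cand.reshape(1, -1))[0][original_label]
                if prob < best_prob:  # moves away from the original class
                    best_cf, best_prob = cand, prob
        cf = best_cf
        if model.predict(cf.reshape(1, -1))[0] != original_label:
            return cf  # label flipped: "what if this had been different?"
    return None  # no counterfactual found within the iteration budget

x0 = X[0]
cf = find_counterfactual(clf, x0)
if cf is not None:
    print("original:      ", x0.round(2), "->", clf.predict(x0.reshape(1, -1))[0])
    print("counterfactual:", cf.round(2), "->", clf.predict(cf.reshape(1, -1))[0])
```

Comparing the original sample with the returned counterfactual shows which feature changes drive the label flip; an interactive framework like WiXAI lets users steer such changes themselves rather than accepting a single optimizer-chosen sample.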