Our Paper on Causality-Aware Shapley Values for Global XAI Accepted at the World XAI Conference
I’m excited to share that a paper I co-authored with my talented Master’s student Nils Ole Breuer and collaborators Majid Mohammadi and Erman Acar has been accepted at the World Conference on Explainable AI (XAI). The paper grew out of a very rewarding collaboration and, above all, the hard work of Nils, whom I had the pleasure of supervising for his Master’s thesis.
In the field of XAI, global explanations aim to quantify the overall importance of each input feature to a machine learning model. Shapley values provide a principled game-theoretic framework for attributing a model’s performance to individual features. However, most Shapley-value-based methods assume that features are independent, overlooking potential causal relationships between them. Inspired by recent work incorporating causality into local explanations, we set out to close this gap for global explanations.
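For concreteness, the Shapley attribution underlying these global methods assigns feature $i$ the value below. This is the textbook formula (the notation here is mine, not lifted from the paper), with $v(S)$ denoting the model’s predictive performance when only the feature subset $S \subseteq F$ is known:

$$\phi_i(v) \;=\; \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$

Evaluating $v(S)$ requires filling in the features outside $S$, and that sampling step is exactly where the independence assumption sneaks in.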
Our proposed method, CAGE, introduces a novel sampling procedure that respects the causal relations between features when computing Shapley values. We prove that CAGE satisfies desirable theoretical properties that ensure causally sound explanations. The key insight is to intervene on the “known” features and to sample the “unknown” features from the resulting post-interventional distribution, rather than from the standard conditional distribution.
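To make the intervention idea tangible, here is a minimal sketch (not our released code) of that sampling step for a linear-Gaussian structural causal model with a known DAG: the “known” features are clamped via $do(X_S = x_S)$ and the rest are generated from their structural equations in topological order, so the intervention only propagates to causal descendants. The class and method names are illustrative.

```python
# Minimal sketch, assuming a linear-Gaussian SCM with a known DAG.
# Names (LinearGaussianSCM, interventional_sample) are hypothetical.
import numpy as np

class LinearGaussianSCM:
    """Structural equations X_j = sum_k W[k, j] * X_k + eps_j over a known DAG."""

    def __init__(self, W, noise_std, topo_order):
        self.W = np.asarray(W, dtype=float)     # weighted adjacency (row = parent)
        self.noise_std = np.asarray(noise_std)  # std of each exogenous noise term
        self.topo_order = list(topo_order)      # topological order of the DAG

    def interventional_sample(self, known_idx, known_vals, n_samples, rng):
        """Sample from the post-interventional distribution P(X | do(X_S = x_S))."""
        d = self.W.shape[0]
        X = np.zeros((n_samples, d))
        clamp = dict(zip(known_idx, known_vals))
        for j in self.topo_order:               # parents are filled before children
            if j in clamp:
                X[:, j] = clamp[j]              # intervention: incoming edges are cut
            else:
                eps = rng.normal(0.0, self.noise_std[j], size=n_samples)
                X[:, j] = X @ self.W[:, j] + eps  # structural equation for X_j
        return X

# Example: chain X0 -> X1 -> X2 with edge weights 2. Under do(X1 = 3), X0 keeps
# its observational distribution (mean ~ 0) while X2 shifts to mean ~ 6;
# conditioning on X1 = 3 would instead also shift our beliefs about X0.
rng = np.random.default_rng(0)
W = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
scm = LinearGaussianSCM(W, noise_std=[1.0, 1.0, 1.0], topo_order=[0, 1, 2])
samples = scm.interventional_sample(known_idx=[1], known_vals=[3.0],
                                    n_samples=10_000, rng=rng)
print(samples.mean(axis=0))  # roughly [0.0, 3.0, 6.0]
```

The contrast in the example is the whole point: intervening on a feature leaves its causal ancestors untouched, whereas conditioning on it would also update them.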
We evaluated CAGE on both synthetic and real-world datasets, comparing it to the popular SAGE method, which assumes feature independence. The results show that CAGE provides more intuitive explanations that better align with the actual causal relationships in the data. On synthetic data where the true causal structure is known, CAGE assigns higher importance to root causes and lower importance to features that are merely effects of other features. On the real-world Alzheimer’s dataset, CAGE likewise assigns lower importance to biomarkers that are known to be effects rather than causes.
This work is an important step towards causally grounded global explanations, and it highlights both the benefits and the challenges of incorporating causal knowledge into XAI. While CAGE requires a known causal graph, which may not always be available, we show how partial causal knowledge in the form of a causal chain graph can still be utilized. Future directions include using causal discovery methods to infer causal structures from data. The code for the CAGE method and our experiments is available in an anonymized repository.
Stay tuned for more research on causality and XAI from our group! As always, feel free to reach out with any questions or thoughts.