Rationale Discovery and Explainable AI

Cor Steging, Silja Renooij, Bart Verheij

The justification of an algorithm’s outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracies, not all of the conditions that define the domain of the training data are learned. In this study, we investigate what the system does learn, using state-of-the-art explainable AI techniques. Using SHAP and LIME, we show which features impact the decision-making process and how that impact changes with different distributions of the training data. However, our results also show that even high accuracy and good relevant-feature detection are no guarantee of a sound rationale. Hence these state-of-the-art explainable AI techniques cannot be used to fully expose unsound rationales, further advocating the need for a separate method for rationale evaluation.
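To illustrate the kind of analysis the abstract describes, below is a minimal sketch (not the authors' code) of how SHAP and LIME are typically applied to a tabular classifier. The dataset, model, and feature names are hypothetical stand-ins; the paper's actual experiments use legal-domain data with known ground-truth conditions.

```python
# Minimal sketch: inspecting which features drive a classifier's decisions
# with SHAP (global) and LIME (local). Synthetic stand-in data, not the
# authors' experimental setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical tabular dataset: 6 features, binary label that in truth
# depends only on features 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)
feature_names = [f"f{i}" for i in range(6)]

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: per-feature contributions for every instance; a summary plot of
# these values shows which features impact the decisions overall.
shap_values = shap.TreeExplainer(model).shap_values(X)
# shap.summary_plot(shap_values, X, feature_names=feature_names)

# LIME: a local explanation of one prediction as weighted feature conditions.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=6)
print(lime_exp.as_list())  # (feature condition, weight) pairs
```

In a setup like this, both methods should assign most weight to f0 and f1; the paper's point is that agreement of such attributions with the relevant features still does not guarantee that the learned rationale is sound.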

Manuscript (in PDF format)
Paper at publisher (open access)

Reference:
Steging, C., Renooij, S., & Verheij, B. (2021). Rationale Discovery and Explainable AI. Legal Knowledge and Information Systems. JURIX 2021: The Thirty-fourth Annual Conference (ed. Schweighofer, E.), 225-234. Amsterdam: IOS Press. https://doi.org/10.3233/FAIA210341

