Introducing a new model-agnostic, post hoc XAI approach based on CART that provides local explanations, improving the transparency of AI-assisted decision-making in healthcare
In the realm of artificial intelligence, there is a growing concern regarding the lack of transparency and understandability of complex AI systems. Recent research has been dedicated to addressing this issue by developing explanatory models that shed light on the inner workings of opaque systems like boosting, bagging, and deep learning techniques.
Local and Global Explainability
Explanatory models can shed light on the behavior of AI systems in two distinct ways:
- Global explainability. Global explainers provide a comprehensive understanding of how the AI classifier behaves as a whole. They aim to uncover overarching patterns, trends, biases, and other characteristics that remain consistent across various inputs and scenarios.
- Local explainability. Local explainers, on the other hand, focus on the decision-making process of the AI system for a single instance. By highlighting the features or inputs that most strongly influenced the model’s prediction, a local explainer offers a glimpse into how a specific decision was reached (see the sketch after this list). However, these explanations may not carry over to other instances or provide a complete picture of the model’s overall behavior.
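To make the idea of a local explainer concrete, the sketch below fits a shallow CART surrogate on perturbations of a single instance, labelled by a black-box model, using scikit-learn. It is a minimal illustration under stated assumptions: the helper name `explain_instance`, the Gaussian perturbation scheme, and the toy random forest are placeholders, not the specific method introduced in the article.

```python
# A minimal sketch of a CART-based local surrogate explainer using scikit-learn.
# The helper name `explain_instance`, the Gaussian perturbation scheme, and the
# toy random forest are illustrative assumptions, not the article's exact method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text


def explain_instance(black_box, x, n_samples=2000, scale=0.3, max_depth=3, seed=0):
    """Fit a shallow CART tree on perturbations of x, labelled by the black box."""
    rng = np.random.default_rng(seed)
    # Sample a local neighborhood around the instance (assumed Gaussian noise).
    neighborhood = rng.normal(loc=x, scale=scale, size=(n_samples, x.shape[0]))
    labels = black_box.predict(neighborhood)  # black-box decisions in that neighborhood
    surrogate = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    surrogate.fit(neighborhood, labels)       # interpretable local approximation
    return surrogate


# Toy usage: explain one record classified by an opaque random forest.
X, y = make_classification(n_samples=1000, n_features=5, random_state=42)
feature_names = [f"f{i}" for i in range(X.shape[1])]
black_box = RandomForestClassifier(n_estimators=200, random_state=42).fit(X, y)

x = X[0]
surrogate = explain_instance(black_box, x)
print("Black-box prediction:", black_box.predict(x.reshape(1, -1))[0])
print("Surrogate prediction:", surrogate.predict(x.reshape(1, -1))[0])
print(export_text(surrogate, feature_names=feature_names))  # readable local rules
```

The decision path the instance follows through the shallow tree serves as a human-readable local explanation, and how closely the surrogate mimics the black box in that neighborhood indicates how much the explanation can be trusted.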
The increasing demand for trustworthy and transparent AI systems is fueled not only by the widespread adoption of complex black-box models, which are accurate but offer limited interpretability. It is also motivated by the need to comply with new regulations aimed at safeguarding individuals against the misuse of data and data-driven applications, such as the Artificial Intelligence Act, the General Data Protection Regulation (GDPR), and the U.S. Department of Defense’s Ethical Principles for Artificial Intelligence.