Prototype-based Explanations
Nearest Neighbor Explanation
An interpretability technique that explains a model's decision by retrieving and presenting the training examples most similar (nearest in some feature or embedding space) to the instance being explained. The neighbors act as prototypes: "the model predicted this because it resembles these known cases."
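A minimal sketch of the idea, assuming Euclidean distance in raw feature space and a toy 2-D training set (the function name and data are illustrative, not from the original):

```python
import numpy as np

def nearest_neighbor_explanation(x, X_train, y_train, k=3):
    """Return the k training examples closest to query x as an explanation."""
    # Euclidean distance from the query instance to every training example
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest training examples, closest first
    idx = np.argsort(dists)[:k]
    return [(int(i), y_train[i], float(dists[i])) for i in idx]

# Toy training set: two clusters with binary labels
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y_train = np.array([0, 0, 1, 1])

# Explain a prediction for a query point by showing its closest training cases
for i, label, d in nearest_neighbor_explanation(np.array([5.8, 5.0]), X_train, y_train, k=2):
    print(f"training example {i}: label={label}, distance={d:.2f}")
```

In practice the distance is often computed in a learned embedding space rather than on raw features, so that "nearest" reflects the model's own notion of similarity.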