AI Glossary
A complete dictionary of Artificial Intelligence
Model-agnostic
Explanation approach that works with any predictive model without requiring knowledge of its architecture or internal parameters. Agnostic methods treat the model as a black box and rely solely on its inputs and outputs.
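As a minimal sketch of this idea (not LIME itself), the probe below touches the model only through a prediction callback; the `predict_fn` name, the finite-difference step `eps`, and the stand-in model are assumptions for illustration:

```python
import numpy as np

def sensitivity_scores(predict_fn, x, eps=1e-2):
    """Probe a black box by nudging one feature at a time.

    Only inputs and outputs are used; no access to weights or
    architecture is needed, which is what "model-agnostic" means.
    """
    base = predict_fn(x.reshape(1, -1))[0]
    scores = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        nudged = x.copy()
        nudged[i] += eps
        scores[i] = (predict_fn(nudged.reshape(1, -1))[0] - base) / eps
    return scores

# Works with any model exposing a predict-style callable:
black_box = lambda X: X @ np.array([2.0, -1.0, 0.5])  # stand-in model
print(sensitivity_scores(black_box, np.array([1.0, 1.0, 1.0])))
```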
Local explanation
Interpretation that explains a model's prediction for a specific instance rather than the model's global behavior. Local explanations are especially useful for understanding why an individual decision was made.
Data perturbation
Technique that creates slight variations of an original instance to generate a set of neighboring synthetic data points. In LIME, these perturbations build the training set for the local surrogate model.
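A minimal sketch of tabular-style perturbation, assuming a binary-mask scheme in which switched-off features are replaced by a background value; the `n_samples` default and the mean-replacement strategy are illustrative choices, not LIME's exact implementation:

```python
import numpy as np

def perturb(instance, background, n_samples=500, seed=0):
    """Generate neighbors of `instance` by randomly masking features.

    Each row of `masks` is binary: 1 keeps the original value, 0 swaps
    in a background value, mimicking tabular-LIME-style perturbation.
    """
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, instance.size))
    neighbors = np.where(masks == 1, instance, background)
    return masks, neighbors

x = np.array([5.1, 3.5, 1.4, 0.2])
masks, neighbors = perturb(x, background=x.mean())
print(neighbors[:3])
```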
Feature weights
Quantitative measures indicating the relative importance of each feature in the model's local decision. These weights make it possible to identify the most influential factors behind a specific prediction.
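A small illustration of turning raw weights into a ranking; the feature names and weight values below are hypothetical:

```python
import numpy as np

def rank_features(names, weights):
    """Order features by the magnitude of their local weights."""
    order = np.argsort(-np.abs(weights))
    return [(names[i], float(weights[i])) for i in order]

# Hypothetical weights from a fitted local surrogate:
print(rank_features(["age", "income", "tenure"], np.array([0.1, -0.8, 0.3])))
```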
Prediction neighborhood
Set of data points close to the original instance in the feature space, used to train the local surrogate model. The definition of this neighborhood is crucial to the relevance of the generated explanation.
Surrogate model
Simple, interpretable model (such as a linear regression or a decision tree) that locally approximates the behavior of the complex model. The surrogate is trained on the perturbed data to generate explanations.
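A sketch of fitting such a surrogate with scikit-learn, assuming a hypothetical black box, Gaussian perturbations, and exponential proximity weights; LIME's actual pipeline differs in its details:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box (a fixed logistic scorer) and instance to explain.
black_box = lambda X: 1 / (1 + np.exp(-(X @ np.array([1.5, -2.0, 0.7]))))
x = np.array([0.5, 1.0, -0.3])

Z = x + rng.normal(scale=0.1, size=(200, 3))        # perturbed neighbors
y = black_box(Z)                                    # black-box outputs
w = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.25)    # proximity weights

# Interpretable weighted linear model fit only on the neighborhood.
surrogate = Ridge(alpha=1.0)
surrogate.fit(Z, y, sample_weight=w)
print(surrogate.coef_)  # local feature weights, i.e. the explanation
```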
Explanation fidelity
Measure of how accurately the local surrogate model reproduces the predictions of the complex model in the considered neighborhood. High fidelity ensures that the explanation faithfully represents the behavior of the original model.
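One common way to quantify fidelity is a proximity-weighted R² between the two models' outputs on the neighborhood; the sketch below assumes that choice:

```python
import numpy as np

def local_fidelity(y_black_box, y_surrogate, proximity_weights):
    """Proximity-weighted R^2 of the surrogate against the black box."""
    w = proximity_weights
    mean = np.average(y_black_box, weights=w)
    ss_res = np.sum(w * (y_black_box - y_surrogate) ** 2)
    ss_tot = np.sum(w * (y_black_box - mean) ** 2)
    return 1.0 - ss_res / ss_tot

# Perfect agreement gives 1.0; disagreement pushes the score down.
y = np.array([0.2, 0.5, 0.9])
print(local_fidelity(y, y, np.ones(3)))          # 1.0
print(local_fidelity(y, y[::-1], np.ones(3)))    # well below 1.0
```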
Segment
Contiguous region of an input (adjacent pixels in an image, consecutive words in a text) used as the unit of explanation in LIME. Grouping adjacent pixels or words into segments yields a more coherent explanation than per-pixel or per-word attributions.
Superpixel
Group of adjacent pixels sharing similar characteristics (color, texture, intensity), used as the basic unit for image explanations in LIME. Superpixels reduce computational complexity while preserving relevant visual information.
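A sketch of superpixel segmentation, assuming scikit-image is available; the random image and the `n_segments`/`compactness` values are placeholders:

```python
import numpy as np
from skimage.segmentation import slic

# Random stand-in image; in practice, the input image to be explained.
image = np.random.rand(64, 64, 3)

# Partition into ~50 regions of similar color and position.
segments = slic(image, n_segments=50, compactness=10.0)
print(np.unique(segments).size, "superpixels")
```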
Relevance score
Numerical value assigned to each feature or segment indicating its influence on the local prediction. This score makes it possible to rank the most important elements in the model's decision-making.
Similarity kernel
Mathematical function that defines the proximity between the original instance and the perturbed instances in the feature space. This kernel weights the importance of points in learning the local model.
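A sketch of an exponential similarity kernel, similar in form to LIME's default; the `kernel_width` value is an illustrative assumption:

```python
import numpy as np

def exponential_kernel(distances, kernel_width=0.75):
    """Map distances to similarity weights in (0, 1].

    Closer perturbed points get weights near 1, distant ones near 0;
    kernel_width controls how fast influence decays.
    """
    return np.exp(-(distances ** 2) / kernel_width ** 2)

d = np.array([0.0, 0.5, 1.0, 2.0])
print(exponential_kernel(d))
```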
Counterfactual explanation
Type of explanation that shows how the prediction would change if certain features were modified. Complementary to LIME, this approach helps understand the necessary conditions to obtain a different prediction.
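A deliberately naive sketch: a brute-force search for the smallest single-feature change that flips a black-box label. The candidate grids and the toy model are assumptions, and real counterfactual methods optimize over much richer spaces:

```python
import numpy as np

def single_feature_counterfactual(predict_fn, x, candidates):
    """Find the cheapest single-feature change that flips the label.

    `candidates` maps feature index -> values to try; returns
    (change magnitude, feature index, new value) or None.
    """
    original = predict_fn(x.reshape(1, -1))[0]
    best = None
    for i, values in candidates.items():
        for v in values:
            cf = x.copy()
            cf[i] = v
            if predict_fn(cf.reshape(1, -1))[0] != original:
                cost = abs(v - x[i])
                if best is None or cost < best[0]:
                    best = (cost, i, v)
    return best

model = lambda X: (X @ np.array([1.0, 1.0]) > 1.5).astype(int)
print(single_feature_counterfactual(model, np.array([1.0, 0.2]),
                                    {0: [0.0, 2.0], 1: [0.0, 1.0]}))
```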
Explanation stability
Measure of the consistency of explanations generated for similar instances or across repeated runs. Good stability is essential for trusting the produced explanations.
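One simple way to quantify stability, assumed here for illustration, is the mean pairwise Jaccard overlap of the top-k features across runs:

```python
import numpy as np

def topk_jaccard(weight_runs, k=3):
    """Mean pairwise Jaccard overlap of top-k feature sets across runs."""
    tops = [set(np.argsort(-np.abs(w))[:k]) for w in weight_runs]
    pairs = [(a, b) for i, a in enumerate(tops) for b in tops[i + 1:]]
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

runs = [np.array([0.9, -0.7, 0.1, 0.0]),
        np.array([0.8, -0.6, 0.0, 0.2]),
        np.array([0.7, -0.8, 0.2, 0.1])]
print(topk_jaccard(runs, k=2))  # 1.0 here: every run agrees on the top 2
```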
Area of influence
Region in the feature space where the local explanation remains valid and faithful to the complex model's behavior. Determining this area is crucial for evaluating the scope of the explanation.
Explanation complexity
Number of features or segments used in the local explanation, often capped to preserve interpretability. A trade-off between fidelity and simplicity must be struck for explanations to be effective.
Heatmap
Visualization that spatially represents the importance of different regions of an image in the model's prediction. In LIME, heatmaps use relevance scores to highlight influential areas.
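A sketch of assembling such a heatmap from superpixel labels and relevance scores; the tiny 3×3 segment map and the score values are placeholders:

```python
import numpy as np

def relevance_heatmap(segments, scores):
    """Paint each superpixel with its relevance score.

    `segments` is an H x W label map (e.g. from slic); `scores` maps
    label -> relevance. Unscored regions stay at 0.
    """
    heat = np.zeros(segments.shape, dtype=float)
    for label, score in scores.items():
        heat[segments == label] = score
    return heat

segments = np.array([[0, 0, 1],
                     [0, 2, 1],
                     [2, 2, 1]])
print(relevance_heatmap(segments, {0: 0.8, 1: -0.3, 2: 0.1}))
```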