
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms
📖 Terms

Model agnostic

Explanation approach that works with any predictive model without requiring knowledge of its architecture or internal parameters. Agnostic methods treat the model as a black box and rely solely on its inputs and outputs.


Local explanation

Interpretation that explains a model's prediction for a specific instance rather than the model's global behavior. Local explanations are particularly useful for understanding why a particular decision was made.


Data perturbation

Technique that creates slight variations of an original instance to generate a set of neighboring synthetic data points. In LIME, these perturbations serve to build a training set for the local simple model.
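
As a minimal sketch (plain Python, not LIME's actual implementation), the perturbation step for LIME's binary on/off interpretable representation could look like this — each sample keeps or zeroes out features at random:

```python
import random

def perturb(instance, n_samples, p_keep=0.5, seed=0):
    """Generate perturbed neighbors of an instance by randomly masking features.

    Each sample keeps a feature with probability p_keep and zeroes it out
    otherwise -- the simple on/off scheme used for LIME's binary
    interpretable representation.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        mask = [1 if rng.random() < p_keep else 0 for _ in instance]
        samples.append([x * m for x, m in zip(instance, mask)])
    return samples

# Five synthetic neighbors of a hypothetical 3-feature instance.
neighbors = perturb([3.0, 1.5, -2.0], n_samples=5)
```

The black-box model is then queried on each neighbor to obtain labels for the local surrogate.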


Feature weights

Quantitative measures indicating the relative importance of each feature in the model's local decision. These weights make it possible to identify the most influential factors for a specific prediction.


Prediction neighborhood

Set of data points close to the original instance in the feature space, used to learn the local simple model. The definition of this neighborhood is crucial for the relevance of the generated explanation.


Surrogate model

Simple and interpretable model (such as linear regression or decision tree) that locally approximates the behavior of the complex model. This model is trained on perturbed data to generate explanations.
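
A self-contained sketch of fitting such a surrogate, assuming a hypothetical `black_box` model and using a kernel-weighted linear fit via the normal equations (stdlib only; real implementations would use a library solver):

```python
import math
import random

def black_box(x):
    # Stand-in for a complex model: nonlinear in its first feature.
    return x[0] ** 2 + 3 * x[1]

def fit_surrogate(x0, n=500, sigma=0.3, seed=0):
    """Fit a local linear surrogate around x0 via weighted least squares."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n):
        z = [xi + rng.gauss(0, sigma) for xi in x0]
        d2 = sum((a - b) ** 2 for a, b in zip(z, x0))
        X.append([1.0] + z)                       # intercept + features
        y.append(black_box(z))                    # query the complex model
        w.append(math.exp(-d2 / (2 * sigma ** 2)))  # proximity weight
    # Weighted normal equations: (X^T W X) beta = X^T W y
    k = len(x0) + 1
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(n)) for r in range(k)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta  # [intercept, weight_0, weight_1]

coef = fit_surrogate([1.0, 1.0])
```

Around (1, 1) the local gradient of `black_box` is (2, 3), so the surrogate's feature weights should land near those values even though the model itself is nonlinear.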


Explanation fidelity

Measure of how accurately the local simple model reproduces the predictions of the complex model in the considered neighborhood. High fidelity ensures that the explanation faithfully represents the behavior of the original model.
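
One common way to quantify this is a kernel-weighted R² between the two models' outputs over the neighborhood. A minimal sketch (hypothetical function names, stdlib only):

```python
import math

def local_fidelity(black_box, surrogate, samples, x0, sigma=1.0):
    """Weighted R^2 of the surrogate against the complex model over a
    neighborhood; 1.0 means the surrogate reproduces it perfectly there."""
    w = [math.exp(-sum((a - b) ** 2 for a, b in zip(z, x0)) / sigma ** 2)
         for z in samples]
    f = [black_box(z) for z in samples]   # complex model's predictions
    g = [surrogate(z) for z in samples]   # surrogate's predictions
    mean_f = sum(wi * fi for wi, fi in zip(w, f)) / sum(w)
    ss_res = sum(wi * (fi - gi) ** 2 for wi, fi, gi in zip(w, f, g))
    ss_tot = sum(wi * (fi - mean_f) ** 2 for wi, fi in zip(w, f))
    return 1.0 - ss_res / ss_tot

# A surrogate that matches the model exactly in the neighborhood scores 1.0.
samples = [[0.0], [0.5], [1.0], [1.5]]
score = local_fidelity(lambda z: 2 * z[0], lambda z: 2 * z[0], samples, [1.0])
```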


Segment

Contiguous region of data used to explain predictions on images or texts in LIME. Segments allow grouping adjacent pixels or words for a more coherent explanation.


Superpixel

Group of adjacent pixels sharing similar characteristics (color, texture, intensity), used as the basic unit for image explanations in LIME. Superpixels reduce computational complexity while preserving relevant visual information.


Relevance score

Numerical value assigned to each feature or segment indicating its influence on the local prediction. This score allows for ranking the most important elements in the model's decision-making.
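
Ranking by the magnitude of these scores is how an explanation is usually presented. A small sketch (feature names are hypothetical):

```python
def rank_features(names, scores, top_k=3):
    """Rank features by the magnitude of their relevance scores; the sign
    indicates whether a feature pushed the prediction up or down."""
    order = sorted(zip(names, scores), key=lambda p: -abs(p[1]))
    return order[:top_k]

# Hypothetical scores for three features; the strongest two are kept.
ranked = rank_features(["age", "income", "zip"], [0.05, -0.40, 0.10], top_k=2)
```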


Similarity kernel

Mathematical function that defines the proximity between the original instance and the perturbed instances in the feature space. This kernel weights the importance of points in learning the local model.
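
An exponential (RBF-style) kernel of the following form is the typical choice in LIME-style methods — weight near 1 for close neighbors, decaying toward 0 with distance (the kernel width is a tunable assumption):

```python
import math

def exponential_kernel(distance, width=0.75):
    """Exponential similarity kernel: maps a distance to a proximity
    weight in (0, 1], with `width` controlling how fast weights decay."""
    return math.exp(-(distance ** 2) / (width ** 2))

w_near = exponential_kernel(0.0)  # identical instance: maximal weight
w_far = exponential_kernel(3.0)   # distant perturbation: negligible weight
```

These weights are exactly what the surrogate's weighted least-squares fit consumes.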


Counterfactual explanation

Type of explanation that shows how the prediction would change if certain features were modified. Complementary to LIME, this approach helps understand the necessary conditions to obtain a different prediction.
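
A toy sketch of the idea, assuming binary features and a hypothetical linear classifier: greedily flip the feature that moves the score most toward the target class until the prediction changes.

```python
WEIGHTS = (2.0, -1.0, 1.5)   # hypothetical model coefficients

def predict(x, threshold=1.0):
    """Toy linear classifier: class 1 if the weighted sum crosses the threshold."""
    return int(sum(w * v for w, v in zip(WEIGHTS, x)) >= threshold)

def counterfactual(x, target=1):
    """Greedy counterfactual search over binary features: flip the feature
    with the largest score gain toward the target until the class changes."""
    x = list(x)
    flips = []
    while predict(x) != target and len(flips) < len(x):
        # Gain from flipping feature i: signed change in the decision score.
        gains = [(WEIGHTS[i] * (1 - 2 * x[i]), i)
                 for i in range(len(x)) if i not in flips]
        _, i = max(gains)
        x[i] = 1 - x[i]
        flips.append(i)
    return x, flips

cf, changed = counterfactual([0, 1, 0])
```

The returned `flips` list is the counterfactual statement itself: "had these features been different, the prediction would have changed."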


Explanation stability

Measure of the consistency of explanations generated for similar instances or during multiple executions. Good stability is essential for trust in the produced explanations.
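
One simple proxy is to rerun the explanation with different random seeds and measure the overlap of the top-ranked features. A sketch using a toy covariance-based explainer (hypothetical, not LIME's estimator):

```python
import random

def explain(x0, model, seed, n=300, sigma=0.2):
    """Toy explainer: estimate each feature's local influence as the
    sample covariance between its perturbation and the output change."""
    rng = random.Random(seed)
    scores = [0.0] * len(x0)
    for _ in range(n):
        z = [xi + rng.gauss(0, sigma) for xi in x0]
        dy = model(z) - model(x0)
        for i in range(len(x0)):
            scores[i] += (z[i] - x0[i]) * dy / n
    return scores

def stability(x0, model, seeds, k=1):
    """Jaccard overlap of the top-k features across repeated runs --
    1.0 means every run agrees on the most important features."""
    tops = []
    for s in seeds:
        sc = explain(x0, model, s)
        order = sorted(range(len(sc)), key=lambda i: -abs(sc[i]))
        tops.append(set(order[:k]))
    return len(set.intersection(*tops)) / len(set.union(*tops))

# Feature 0 dominates this toy model, so every run should agree on it.
stab = stability([1.0, 1.0], lambda z: 5 * z[0] + 0.1 * z[1], seeds=[0, 1, 2])
```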


Area of influence

Region in the feature space where the local explanation remains valid and faithful to the complex model's behavior. Determining this area is crucial for evaluating the scope of the explanation.


Explanation complexity

Number of features or segments used in the local explanation, often limited to maintain interpretability. A trade-off between fidelity and simplicity must be found for effective explanations.


Heatmap

Visualization that spatially represents the importance of different regions of an image in the model's prediction. In LIME, heatmaps use relevance scores to highlight influential areas.
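
Given a segment map and per-segment relevance scores, producing the heatmap is just painting each score back onto its pixels. A minimal sketch (segment layout and scores are hypothetical):

```python
def heatmap(segments, scores):
    """Map per-segment relevance scores back onto the pixel grid,
    yielding a heatmap that highlights influential regions."""
    return [[scores[sid] for sid in row] for row in segments]

# A 2x3 image with three segments and their relevance scores.
segments = [[0, 0, 1],
            [2, 2, 1]]
scores = {0: 0.8, 1: -0.1, 2: 0.3}
hm = heatmap(segments, scores)
```

In practice the resulting grid is rendered with a diverging colormap so positive and negative contributions are visually distinct.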
