AI Glossary
The Complete Dictionary of Artificial Intelligence
Saliency Maps
Heat maps that indicate how important each pixel or input feature is to a model's prediction, computed by measuring how sensitive the output is to small variations in the input, typically via the gradient of the output with respect to the input.
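A minimal sketch of the idea, assuming a hypothetical toy model (a fixed linear scorer): the saliency of each feature is the absolute gradient of the output with respect to that feature, estimated here by central finite differences instead of autodiff.

```python
def model(x):
    # hypothetical toy "network": a weighted sum of the input features
    w = [0.5, -2.0, 0.1, 1.5]
    return sum(wi * xi for wi, xi in zip(w, x))

def saliency(f, x, eps=1e-5):
    """Absolute sensitivity of f to each input feature (finite differences)."""
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grads.append(abs((f(xp) - f(xm)) / (2 * eps)))
    return grads

x = [1.0, 0.5, -0.3, 2.0]
sal = saliency(model, x)
# the feature with the largest-magnitude weight (-2.0) is the most salient
```

In a real network the gradient would come from backpropagation rather than finite differences, but the interpretation is the same: large values mark inputs the prediction is most sensitive to.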
Feature Visualization
Process of generating synthetic inputs that maximize the activation of specific neurons or entire layers, allowing visual understanding of what each neuron has learned to detect.
Activation Maximization
Optimization technique that finds input patterns that most strongly activate a specific neuron or layer, revealing the abstract features learned by the network.
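The optimization loop can be sketched in a few lines. This assumes a hypothetical one-dimensional "neuron" whose activation peaks at x = 3; gradient ascent on the input (with a finite-difference gradient standing in for backprop) recovers the input pattern that maximizes the activation.

```python
def neuron(x):
    # hypothetical neuron: activation is maximal at x = 3
    return -(x - 3.0) ** 2

def activation_maximization(act, x0, lr=0.1, steps=200, eps=1e-5):
    """Gradient ascent on the input to maximize the neuron's activation."""
    x = x0
    for _ in range(steps):
        g = (act(x + eps) - act(x - eps)) / (2 * eps)  # d(activation)/dx
        x += lr * g
    return x

x_star = activation_maximization(neuron, 0.0)
# x_star converges toward 3.0, the input this neuron "prefers"
```

For image models the same loop runs over pixels, usually with extra regularization (jitter, blurring) to keep the synthesized input visually coherent.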
SmoothGrad
Saliency-map smoothing technique that averages saliency maps computed on multiple noisy copies of the input, thereby reducing visual noise and improving the clarity of the interpretation.
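A minimal sketch of the averaging step, assuming a hypothetical linear toy model: gradients are computed on several Gaussian-noised copies of the input and averaged. (For a linear model the gradient is constant, so the smoothed map equals the plain one; on a real nonlinear network the averaging is what suppresses the noise.)

```python
import random

def model(x):
    # hypothetical toy model
    w = [0.5, -2.0, 1.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def grad(f, x, eps=1e-5):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def smoothgrad(f, x, n=50, sigma=0.1, seed=0):
    """Average the gradient over n noisy copies of the input."""
    rng = random.Random(seed)
    avg = [0.0] * len(x)
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        avg = [a + gi / n for a, gi in zip(avg, grad(f, noisy))]
    return avg

sg = smoothgrad(model, [1.0, 0.5, -0.3])
```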
Guided Backpropagation
Backpropagation variant that modifies the gradient flow through ReLU units: the gradient is propagated only where both the forward activation and the incoming gradient are positive, suppressing negative contributions to produce cleaner visualizations of important features.
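A hand-worked sketch of the modified backward pass on a hypothetical two-neuron ReLU layer: at each ReLU, the gradient is kept only where the pre-activation is positive (the standard ReLU rule) and the upstream gradient is positive (the guided rule).

```python
# hypothetical toy net: h = relu(W x), y = sum(c * h)
W = [[1.0, -1.0],
     [2.0,  1.0]]
c = [1.0, -1.0]

def guided_backprop(x):
    pre = [sum(wij * xj for wij, xj in zip(row, x)) for row in W]
    # gradient of y w.r.t. h is c; guided rule zeroes it unless
    # BOTH the pre-activation and the upstream gradient are positive
    gh = [ci if (p > 0 and ci > 0) else 0.0 for ci, p in zip(c, pre)]
    # backprop through the linear layer: gx = W^T gh
    gx = [sum(W[i][j] * gh[i] for i in range(len(W)))
          for j in range(len(x))]
    return gx

gx = guided_backprop([1.0, 0.5])
# the second neuron's negative upstream gradient (-1.0) is filtered out
```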
Class Activation Maps (CAM)
Visualization method that identifies discriminative regions in an image by weighting the feature maps of the last convolutional layer with the weights of the final classification layer.
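The weighting step is simple enough to sketch directly. Assuming two hypothetical 2x2 feature maps from the last convolutional layer and the target class's classifier weights, the CAM is the per-location weighted sum of the maps:

```python
# hypothetical feature maps of the last conv layer (2 channels, 2x2 grid)
fmaps = [
    [[1.0, 0.0], [0.0, 2.0]],
    [[0.0, 3.0], [1.0, 0.0]],
]
w = [0.5, 1.0]  # hypothetical classifier weights for the target class

def class_activation_map(fmaps, w):
    """CAM[i][j] = sum over channels k of w[k] * fmaps[k][i][j]."""
    h, wd = len(fmaps[0]), len(fmaps[0][0])
    cam = [[0.0] * wd for _ in range(h)]
    for wk, fm in zip(w, fmaps):
        for i in range(h):
            for j in range(wd):
                cam[i][j] += wk * fm[i][j]
    return cam

cam = class_activation_map(fmaps, w)
# high values mark spatial regions most discriminative for the class
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image as a heat map.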
Attention Mechanisms Visualization
Graphical representation of attention weights in Transformer models, explicitly showing the relationships and dependencies that the model establishes between different parts of the input during the decision-making process.
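The weights being visualized are the softmax of scaled dot products between a query and the keys. A minimal sketch with hypothetical vectors:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(q, keys):
    """Scaled dot-product attention weights of query q over the keys."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    return softmax(scores)

# hypothetical query and keys (e.g. token representations)
weights = attention_weights([1.0, 0.0],
                            [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
# weights sum to 1; the key most similar to the query dominates
```

Visualization tools render exactly these rows of weights, one per query token, as the familiar attention heat maps.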
Neural Network Pruning for Interpretability
Process of selectively removing unimportant connections or neurons to simplify the model structure while preserving its performance, thereby facilitating understanding of the network's internal functioning.
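One common criterion for "unimportant" is weight magnitude. A minimal sketch of magnitude-based pruning on a hypothetical weight vector, keeping only the largest-magnitude fraction:

```python
def prune_by_magnitude(weights, keep_ratio=0.5):
    """Zero out all but the top keep_ratio fraction of weights by |magnitude|."""
    ranked = sorted((abs(w) for w in weights), reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    threshold = ranked[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

pruned = prune_by_magnitude([0.1, -2.0, 0.05, 1.5], keep_ratio=0.5)
# only the two largest-magnitude weights survive
```

Real pruning pipelines iterate this with fine-tuning to recover accuracy; the simplified structure that remains is what makes the network easier to inspect.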
Testing with Concept Activation Vectors (TCAV)
Global explanation method that tests the importance of human-interpretable concepts in model decisions by measuring the sensitivity of predictions to the presence of these concepts via directional activation vectors.
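A sketch of the final scoring step only, under simplifying assumptions: given a concept activation vector (in practice learned by a linear classifier separating concept examples from random examples in activation space) and per-example gradients, the TCAV score is the fraction of examples whose directional derivative along the CAV is positive.

```python
def tcav_score(grads, cav):
    """Fraction of examples whose gradient has a positive dot product with the CAV."""
    positive = sum(
        1 for g in grads
        if sum(gi * ci for gi, ci in zip(g, cav)) > 0
    )
    return positive / len(grads)

# hypothetical per-example gradients (in activation space) and CAV
grads = [[1.0, 0.0], [0.5, 0.2], [-1.0, -1.0], [0.2, -0.1]]
cav = [1.0, 0.0]
score = tcav_score(grads, cav)
# 3 of 4 examples move toward the concept direction
```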
Neuron Coverage
Interpretability evaluation metric measuring the proportion of neurons activated by a test set, indicating which parts of the network are actually used and how the model distributes its processing.
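The metric reduces to counting which neurons fire at least once over the test set. A minimal sketch, assuming hypothetical per-input activation vectors and a fixed activation threshold:

```python
def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons whose activation exceeds threshold on at least one input."""
    n = len(activations[0])  # number of neurons
    covered = set()
    for act in activations:
        for i, a in enumerate(act):
            if a > threshold:
                covered.add(i)
    return len(covered) / n

# hypothetical activations for 3 neurons over 2 test inputs:
# neuron 2 never fires, so coverage is 2/3
cov = neuron_coverage([[0.5, 0.0, 0.0],
                       [0.0, 0.2, 0.0]])
```

Low coverage suggests parts of the network are never exercised by the test set, a useful signal for both testing adequacy and interpretability analyses.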