Interpretability of Neural Networks
Feature Visualization
The process of generating synthetic inputs that maximize the activation of a specific neuron (or an entire layer), giving a visual indication of what that neuron has learned to detect. In practice this is done by gradient ascent on the input: starting from noise, the input is iteratively adjusted in the direction that increases the chosen activation, usually with regularization to keep the result interpretable.
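The optimization loop can be sketched in a minimal, self-contained way. The example below is a hypothetical toy: the "neuron" is a single linear unit with fixed random weights, so the gradient of its activation with respect to the input is just the weight vector, and projecting the input back onto the unit sphere stands in for the regularization a real feature-visualization pipeline would use. In a real network the gradient would come from backpropagation through the model instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neuron: activation(x) = w . x, with fixed random weights.
w = rng.normal(size=16)

# Start from a random synthetic input on the unit sphere.
x = rng.normal(size=16)
x /= np.linalg.norm(x)

for _ in range(200):
    # Gradient ascent: d(w . x)/dx = w, so step in the direction of w.
    x += 0.1 * w
    # Project back onto the unit sphere (a crude stand-in for regularization).
    x /= np.linalg.norm(x)

# The optimized input aligns with w, so the activation approaches ||w||,
# i.e. x now "shows" the pattern this neuron responds to most strongly.
activation = float(w @ x)
```

For a convolutional network the same loop runs over an image tensor, the target is typically the mean activation of one channel, and extra terms (jitter, blur, frequency penalties) keep the optimized image from degenerating into adversarial noise.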