AI Glossary
The complete dictionary of Artificial Intelligence
Teacher-Student Architecture
Framework in which a large teacher model trains a smaller student model by transferring its implicit knowledge through soft targets and regularization terms.
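To make the soft-target mechanism concrete, here is a minimal PyTorch sketch of the classic distillation loss in the style of Hinton et al. (2015); the names `kd_loss`, `temperature`, and `alpha` are illustrative choices, not fixed terminology.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Soft targets: the teacher's softened class distribution.
    soft_targets = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions, scaled by T^2 so that
    # gradient magnitudes stay comparable across temperatures.
    distill = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    # Standard cross-entropy on the hard labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard
```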
Feature Map Distillation
Knowledge transfer method that operates on the model's intermediate representations (feature maps) rather than on its final predictions.
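A minimal sketch of how this can look in PyTorch, in the style of FitNets: the student's intermediate activations are regressed onto the teacher's. The 1x1 `adapter` convolution and the `FeatureMapDistiller` name are assumptions for illustration, used here to reconcile differing channel counts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMapDistiller(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # Project student features into the teacher's channel space.
        self.adapter = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        # MSE between adapted student features and (detached) teacher features.
        return F.mse_loss(self.adapter(student_feat), teacher_feat.detach())
```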
Attention Transfer
Transfer of the teacher's attention maps to the student so that the regions the larger model identifies as important are preserved.
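A minimal sketch of activation-based attention transfer in the style of Zagoruyko & Komodakis (2017): spatial attention maps are derived by pooling squared activations over channels, then matched between teacher and student in L2 norm.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    # (B, C, H, W) -> (B, H*W): channel-wise activation energy,
    # flattened and L2-normalized per sample.
    attn = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(attn, dim=1)

def attention_transfer_loss(student_feat, teacher_feat):
    # Pull the student's attention maps toward the teacher's.
    return (attention_map(student_feat)
            - attention_map(teacher_feat.detach())).pow(2).mean()
```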
Relation Knowledge Distillation
Approach that preserves the structural relationships among training samples, such as pairwise distances, rather than transferring per-sample knowledge individually.
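A minimal sketch of the distance-wise variant in the style of RKD (Park et al., 2019): the matrix of pairwise distances within a batch, not the per-sample outputs, is what the student must reproduce.

```python
import torch
import torch.nn.functional as F

def pairwise_distances(embeddings):
    # (B, D) -> (B, B) Euclidean distance matrix, scaled by its mean
    # so teacher and student embedding spaces become comparable.
    d = torch.cdist(embeddings, embeddings)
    return d / (d[d > 0].mean() + 1e-8)

def rkd_distance_loss(student_emb, teacher_emb):
    # Huber loss between the two relational structures.
    return F.smooth_l1_loss(pairwise_distances(student_emb),
                            pairwise_distances(teacher_emb.detach()))
```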
Self-Distillation
Process in which a model improves itself by transferring its knowledge to a fresh instance of the same architecture, or from its deeper layers to its shallower ones.
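A minimal sketch of the born-again flavor of self-distillation: a fresh copy of the same architecture is trained on the converged model's soft targets and then replaces it. `make_model`, `make_optimizer`, and `kd_loss` are hypothetical helpers (the latter as sketched under Teacher-Student Architecture).

```python
import torch

def born_again(trained_model, make_model, make_optimizer, loader, kd_loss,
               generations=2):
    teacher = trained_model
    for _ in range(generations):
        teacher.eval()
        student = make_model()         # same architecture, fresh weights
        opt = make_optimizer(student)
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher(x)  # the model supervises its own successor
            loss = kd_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        teacher = student              # the new generation becomes the teacher
    return teacher
```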
Progressive Distillation
Iterative distillation method in which each trained student in turn becomes the teacher for an even more compact model.
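A minimal sketch of the chain this implies: knowledge flows down a sequence of increasingly compact models. `models` is assumed ordered from largest to smallest, and `distill` is a hypothetical helper wrapping one full distillation run.

```python
def progressive_distill(models, distill):
    teacher = models[0]                # largest model, already trained
    for student in models[1:]:         # each stage shrinks the model further
        distill(teacher=teacher, student=student)
        teacher = student              # today's student, tomorrow's teacher
    return teacher                     # the most compact model in the chain
```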
Online Knowledge Distillation
Approach in which multiple models train each other simultaneously, without requiring a pre-trained teacher.
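A minimal sketch in the style of deep mutual learning (Zhang et al., 2018): two peers train at the same time, each using the other's softened, detached predictions as a moving target.

```python
import torch
import torch.nn.functional as F

def mutual_step(model_a, model_b, x, y, opt_a, opt_b, T=2.0):
    logits_a, logits_b = model_a(x), model_b(x)

    def peer_loss(own, peer):
        # Hard-label cross-entropy plus KL toward the peer's soft predictions.
        kl = F.kl_div(F.log_softmax(own / T, dim=-1),
                      F.softmax(peer.detach() / T, dim=-1),
                      reduction="batchmean") * T ** 2
        return F.cross_entropy(own, y) + kl

    loss_a = peer_loss(logits_a, logits_b)
    loss_b = peer_loss(logits_b, logits_a)
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()
    opt_b.zero_grad()
    loss_b.backward()
    opt_b.step()
```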
Cross-Domain Distillation
Technique for transferring knowledge between models operating on different domains but sharing similar underlying structures.
Lifelong Learning via Distillation
Application of distillation to preserve acquired knowledge during continual learning and avoid catastrophic forgetting.
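A minimal sketch in the style of Learning without Forgetting (Li & Hoiem, 2017), assuming a model with separate old-task and new-task heads: outputs of the old head recorded before training (`old_logits_recorded`, an assumed name) serve as distillation targets while the new task's labels drive learning.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, old_logits_now, old_logits_recorded, labels,
             T=2.0, lam=1.0):
    # Supervised loss on the new task's labels.
    new_task = F.cross_entropy(new_logits, labels)
    # Distillation loss keeping old-task outputs close to the recorded ones,
    # which is what counteracts catastrophic forgetting.
    preserve = F.kl_div(F.log_softmax(old_logits_now / T, dim=-1),
                        F.softmax(old_logits_recorded / T, dim=-1),
                        reduction="batchmean") * T ** 2
    return new_task + lam * preserve
```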
Ensemble Distillation
Compression of an ensemble of models into a single compact model that preserves the diversity of the ensemble's collective knowledge.
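A minimal sketch of the usual recipe: the teachers' averaged predictions act as a single soft target for one compact student, fed into a distillation loss such as the hypothetical `kd_loss` sketched under Teacher-Student Architecture.

```python
import torch

def ensemble_targets(teachers, x):
    with torch.no_grad():
        # Averaging the teachers' logits pools the ensemble's knowledge
        # into one target distribution for the student.
        return torch.stack([t(x) for t in teachers]).mean(dim=0)

# Usage: loss = kd_loss(student(x), ensemble_targets(teachers, x), labels)
```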
Neural Architecture Search with Distillation
Integration of distillation into the NAS process to guide the search toward efficient architectures that preserve the teacher's performance.
Contrastive Knowledge Distillation
Approach that uses positive and negative pairs to transfer discriminative representations from the teacher to the student.
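A minimal sketch in the spirit of CRD (Tian et al., 2020), simplified here to in-batch negatives: each student embedding must identify its own teacher embedding (the positive) among all teacher embeddings in the batch.

```python
import torch
import torch.nn.functional as F

def contrastive_kd_loss(student_emb, teacher_emb, tau=0.1):
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb.detach(), dim=1)
    # Similarity of every student embedding to every teacher embedding;
    # the diagonal holds the positive (same-sample) pairs.
    logits = s @ t.t() / tau
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)
```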