Communication and Network Optimization
Model Compression via Knowledge Distillation
A method in which a compact student model learns to mimic the predictions of a larger teacher model, reducing the size of the model that must be transmitted while preserving most of the larger model's performance.
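As a concrete illustration, the classic temperature-scaled distillation loss (Hinton et al.) trains the student to match the teacher's softened output distribution. The sketch below is a minimal, framework-free version of that loss; the specific logits and temperature value are illustrative assumptions, not values from the source.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the logits: a higher temperature spreads probability
    # mass over more classes, exposing the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's, scaled by T^2 so gradients stay comparable across T.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Hypothetical logits for one input: the student is pushed to
# reproduce the teacher's full output distribution, not just its top class.
teacher = [4.0, 1.0, 0.2]
student = [3.0, 1.5, 0.5]
loss = distillation_loss(student, teacher)
```

In practice this term is combined with the ordinary cross-entropy on the true labels; only the small student model is then transmitted or deployed.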