Revolutionizing Neural Network Compression: Teacher-Guided One-Shot Pruning via Context-Aware Knowledge Distillation
Highlights:
- Introduces a new one-shot pruning method for deep neural networks guided by a teacher model.
- Integrates Knowledge Distillation during pruning to improve performance retention.
- Eliminates repetitive train-prune-retrain cycles, reducing…
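The two core ingredients named in the highlights, one-shot magnitude pruning and a softened-softmax distillation loss, can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual method: the scoring rule (weight magnitude), the temperature value, and the function names are all assumptions for the sake of the example.

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Standard KD loss sketch: KL divergence between temperature-softened
    teacher and student distributions (temperature T is illustrative)."""
    def softmax(x, T):
        z = np.exp((x - x.max(axis=-1, keepdims=True)) / T)
        return z / z.sum(axis=-1, keepdims=True)
    p = softmax(teacher_logits, T)  # softened teacher distribution
    q = softmax(student_logits, T)  # softened student distribution
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

def one_shot_prune(weights, sparsity):
    """Remove the smallest-magnitude weights in a single pass
    (no train-prune-retrain loop)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask
```

In a teacher-guided setup, the distillation loss would be evaluated on the pruned student's outputs against the frozen teacher's outputs, steering a brief fine-tuning phase rather than full retraining cycles.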
