1 G. K. Nayak, "Zero-shot knowledge distillation in deep networks"
2 K. Ullrich, "Soft weight-sharing for neural network compression" 2017
3 R. Krishnamoorthi, "Quantizing deep convolutional networks for efficient inference: A whitepaper"
4 Y. Wei, "Quantization mimic: Towards very tiny CNN for object detection" 267-283, 2018
5 J. Kim, "QKD: Quantization-aware knowledge distillation"
6 L. Deng, "Model compression and hardware acceleration for neural networks: A comprehensive survey" 108(4): 485-532, 2020
7 A. G. Howard, "MobileNets: Efficient convolutional neural networks for mobile vision applications"
8 S. Narang, "Mixed precision training" 2018
9 S. Jung, "Learning to quantize deep networks by optimizing quantization intervals with task loss" 2019
10 Z. Liu, "Learning efficient convolutional networks through network slimming" 2736-2744, 2017
11 A. Zhou, "Incremental network quantization: Towards lossless CNNs with low-precision weights" 2017
12 S. I. Mirzadeh, "Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher"
13 G. Jeon, "Improved knowledge distillation via softer target" 993-994, 2019
14 A. Romero, "FitNets: Hints for thin deep nets" 2015
15 A. Zhou, "Explicit loss-error-aware quantization for low-bit deep neural networks" 2018
16 G. Hinton, "Distilling the knowledge in a neural network" 2014
17 K. He, "Deep residual learning for image recognition" 770-778, 2016
18 S. Han, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding" 2016
19 R. Banner, Advances in Neural Information Processing Systems, 2018
20 X. Zhang, "Adaptive precision training: Quantify back propagation in neural networks with fixed-point numbers"