An Improved Hardware-Friendly Quantization Technique for Batch Normalization
Hyeonseung Je, Hyuk-Jae Lee, Kyujoong Lee. The Institute of Electronics and Information Engineers (IEIE), 2020 IEIE Conference, Vol.2020 No.8
In this paper, a hardware-friendly quantization method for batch normalization is proposed. The previous approach, in which the batch normalization scale is folded into the weight values, is incompatible with general fixed-bit multiplication hardware because the resulting extension of the weight range produces inaccurate quantized values. In the proposed method, only the weight values are quantized, and the scale factors of the weight quantization are folded into the batch normalization parameters, so the weight range is not enlarged. As a result, the pixel accuracy improves from 94.96% to 95.00%, and the SQNR (Signal-to-Quantization-Noise Ratio) of the weights also improves.
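The abstract folds the weight-quantization scale into the batch normalization parameters rather than folding the BN scale into the weights. Below is a minimal NumPy sketch of that folding direction, assuming a simple symmetric per-tensor quantizer; the names (quantize_weights, gamma_folded, and the layer shapes) are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Symmetric per-tensor quantization: returns integer weights and the scale."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    w_q = np.round(w / scale).astype(np.int8)
    return w_q, scale

# Stand-in layer parameters (random values, not from the paper)
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(16, 3, 3, 3))          # conv weights
gamma, beta = rng.normal(1.0, 0.1, 16), np.zeros(16)  # BN scale/shift
mean, var, eps = np.zeros(16), np.ones(16), 1e-5      # BN running stats

# Quantize only the weights; the integer conv output y_int then satisfies
# y_real ~= s_w * y_int, since w ~= s_w * w_q.
w_q, s_w = quantize_weights(w)

# Fold the quantization scale s_w into BN instead of the weights:
# BN(y_real) = gamma * (s_w * y_int - mean) / sqrt(var + eps) + beta
#            = gamma_folded * y_int + beta_folded
gamma_folded = gamma * s_w / np.sqrt(var + eps)
beta_folded = beta - gamma * mean / np.sqrt(var + eps)
```

Because s_w is absorbed by gamma_folded, the multiplier hardware only ever sees the original fixed-range integer weights, which matches the abstract's claim that the weight range is not enlarged.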
Improving the Corruption Robustness of Neural Networks by Changing the Convolution Kernel Size
Min-Cheol Cha, Hyunha Hwang, Kyujoong Lee, Hyuk-Jae Lee. The Institute of Electronics and Information Engineers (IEIE), 2023 IEIE Conference, Vol.2023 No.11
This paper presents a simple yet effective modification of an object detection network to improve corruption robustness. We use RetinaNet as the base network and a PCB dataset as the reference dataset. To improve corruption robustness, our design changes the kernel size of the lateral convolutions placed between the backbone and the neck of RetinaNet. We conduct experiments with kernel sizes of 3x3, 5x5, and 7x7, and achieve the best result with a 5x5 convolution kernel. The results show that the modified network achieves a 6% increase in mAP for defocus blur and slight increases for several other blur-type corruptions while maintaining the baseline mAP.
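As a rough illustration of the change described above, the PyTorch sketch below (a hypothetical module with stand-in channel sizes, not the paper's code) enlarges an FPN-style lateral convolution, which is 1x1 in a standard RetinaNet, to the 5x5 kernel the abstract reports as best. Padding is chosen to preserve the spatial size so the lateral output can still be fused with the top-down path.

```python
import torch
import torch.nn as nn

class LateralConv(nn.Module):
    """Lateral convolution between backbone and neck with a configurable kernel.

    A standard FPN uses kernel_size=1 here; the experiment enlarges it
    (3x3, 5x5, 7x7), with 5x5 reported as the best trade-off.
    """
    def __init__(self, in_channels, out_channels, kernel_size=5):
        super().__init__()
        # 'same'-style padding keeps the spatial resolution unchanged,
        # which FPN feature fusion requires.
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        return self.conv(x)

# Example: a C5-level backbone feature map mapped to the 256-channel neck width
lateral = LateralConv(in_channels=2048, out_channels=256, kernel_size=5)
c5 = torch.randn(1, 2048, 7, 7)
p5 = lateral(c5)  # spatial size preserved: (1, 256, 7, 7)
```

The larger receptive field of the 5x5 lateral kernel is a plausible reason for the improved robustness to defocus blur, since blurred evidence is spread over a wider neighborhood.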