Development of a Lightweight YOLOv4-Based Object Detection Model for Deployment on an Embedded Autonomous Driving Platform
Isaac Shim, Ju-Hyung Lim, Young-Wan Jang, JiHwan You, SeonTaek Oh, Young-Keun Kim — Transactions of the Korean Society of Automotive Engineers, Vol.29 No.10, 2021
The latest CNN-based object detection models are quite accurate, but they require a high-performance GPU to run in real time. For an embedded system with limited memory, they are still heavy in terms of memory footprint and speed. Since object detection for an autonomous system runs on an embedded processor, it is preferable to compress the detection network as much as possible while preserving detection accuracy. There are several popular lightweight detection models; however, their accuracy is too low for safe driving applications. Therefore, this paper proposes YOffleNet, a new object detection model that is compressed at a high ratio while minimizing the accuracy loss, for real-time and safe driving applications on an autonomous system. The backbone architecture is based on YOLOv4, but the network is significantly compressed by replacing the computation-heavy CSP DenseNet with the lighter modules of ShuffleNet. Experiments on the KITTI dataset showed that the proposed YOffleNet is 4.7 times smaller than YOLOv4-s and runs as fast as 32 FPS on an embedded GPU system (NVIDIA Jetson AGX Xavier). Despite the high compression ratio, the accuracy is reduced only slightly, to 85.8 % mAP, which is just 3.6 % lower than YOLOv4-s. As a result, the proposed network shows high potential for deployment on the embedded system of an autonomous vehicle for real-time and accurate object detection.
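The ShuffleNet modules mentioned in the abstract cut computation by using grouped convolutions, and they rely on a channel-shuffle step to let information mix across groups. As a minimal illustrative sketch (the index permutation only, not the authors' implementation), the shuffle reshapes the channel dimension into (groups, channels/groups), transposes, and flattens:

```python
def channel_shuffle(channels, groups):
    """ShuffleNet-style channel shuffle on a list of channel indices.

    Equivalent to reshaping the channel axis to (groups, n // groups),
    transposing the two axes, and flattening back to length n.
    """
    n = len(channels)
    per_group = n // groups
    # Read the (groups x per_group) grid column by column instead of row by row.
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

# Six channels split into two groups [0,1,2] and [3,4,5] are interleaved:
print(channel_shuffle(list(range(6)), 2))  # → [0, 3, 1, 4, 2, 5]
```

After this permutation, a subsequent grouped convolution sees channels originating from every previous group, which is what keeps accuracy reasonable despite the reduced computation.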