유찬희, 김유선, 박경석. Korean Institute of Information Scientists and Engineers (KIISE), 2023. KIISE Transactions on Computing Practices, Vol. 29, No. 8.
With the recent development of deep learning technology, models with enhanced prediction performance have been proposed. However, if a model with many weights or large input data is used to improve prediction performance, the accelerator's memory may be exceeded during training. To overcome this problem, model parallelism has been proposed, but it suffers from slow training speed due to pipeline bubbles. G-Pipe reduces these bubbles using a micro-batch technique. In this paper, we analyze the limitations of G-Pipe and propose a bypass technique that accelerates G-Pipe with minimal additional memory. Experimental results show that, on the DenseNet201 model, the bypass technique achieved a performance improvement of about 13.34% over G-Pipe while using 67.51 MB of additional memory.
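To illustrate why micro-batching reduces pipeline bubbles, a minimal sketch follows. It uses the standard idle-time ratio for a synchronous pipeline with K stages and M micro-batches, bubble fraction = (K - 1) / (M + K - 1); the function name and parameters are illustrative, not from the paper.

```python
def bubble_fraction(stages: int, micro_batches: int) -> float:
    """Idle-time fraction of a synchronous K-stage pipeline (GPipe-style).

    With M micro-batches flowing through K stages, each stage is busy for
    M slots out of a schedule of (M + K - 1) slots, so the idle (bubble)
    fraction is (K - 1) / (M + K - 1).
    """
    if stages < 1 or micro_batches < 1:
        raise ValueError("stages and micro_batches must be positive")
    return (stages - 1) / (micro_batches + stages - 1)


# A 4-stage pipeline with no micro-batching idles 75% of the time;
# splitting the mini-batch into 8 micro-batches cuts that sharply.
print(bubble_fraction(stages=4, micro_batches=1))  # 0.75
print(bubble_fraction(stages=4, micro_batches=8))  # ~0.27
```

This shows the trade-off the paper targets: increasing M shrinks bubbles but never removes them entirely, which motivates complementary techniques such as the proposed bypass.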