Materials viewed today
Deep Switch: A Specialized Lightweight Neural Network Switching System for Models Adapting to Dynamic Data Changes on Resource-Constrained Devices
HakBin Kim, JongYeong Kim, HongJun Choi, YeongHwa Jin, SeongWoong Kim, KeonHo Lee, HyunJun Kim, YeJi Han, DaSol Kim, DeokHwan Kim, DongWan Choi. Korean Institute of Information Scientists and Engineers (KIISE), 2020. Proceedings of the KIISE Conference, Vol. 2020, No. 12.
Hakbin Kim, Dong-Wan Choi. Korean Institute of Information Scientists and Engineers (KIISE), 2021. Journal of KIISE, Vol. 48, No. 4.
Many recent works on model compression in neural networks are based on knowledge distillation (KD). However, since the basic goal of KD is to transfer the entire knowledge set of a teacher model to a student model, standard KD may not represent the best use of the student's capacity when a user wishes to classify only a small subset of classes. Moreover, KD requires the original dataset used to train the teacher model, but for various practical reasons, such as privacy issues, the entire dataset may not be available. Thus, this paper proposes conditional knowledge distillation (CKD), which distills only the specialized knowledge corresponding to a given subset of classes, as well as data-free CKD (DF-CKD), which does not require the original data. As a major extension, we devise Joint-CKD, which jointly performs DF-CKD and CKD with only a small additional dataset collected by a client. Our experimental results show that the CKD and DF-CKD methods are superior to standard KD, and also confirm that the joint use of CKD and DF-CKD is effective at further improving the overall accuracy of a specialized model.
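The abstract does not spell out the exact CKD objective, but the core idea, distilling only the teacher's knowledge about a chosen subset of classes, can be sketched as a standard soft-target KD loss restricted to the logits of that subset. The snippet below is an illustrative PyTorch sketch under that assumption, not the paper's implementation; the function name subset_kd_loss, the temperature value, and the example class indices are hypothetical.

import torch
import torch.nn.functional as F

def subset_kd_loss(student_logits, teacher_logits, class_subset, temperature=4.0):
    # Keep only the logits for the classes the client cares about before softening,
    # so the soft targets describe relative confidences within that subset only.
    s = student_logits[:, class_subset] / temperature
    t = teacher_logits[:, class_subset] / temperature
    # Usual soft-target KD objective (Hinton et al.), scaled by T^2.
    return F.kl_div(F.log_softmax(s, dim=1),
                    F.softmax(t, dim=1),
                    reduction="batchmean") * temperature ** 2

# Hypothetical usage with random logits standing in for real model outputs.
student_logits = torch.randn(8, 100)     # batch of 8, models trained on 100 classes
teacher_logits = torch.randn(8, 100)
class_subset = torch.tensor([3, 7, 21])  # the classes the specialized student must cover
loss = subset_kd_loss(student_logits, teacher_logits, class_subset)

In this reading, the specialized student spends its limited capacity on the chosen classes only, which is the capacity argument the abstract makes for CKD over standard KD.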