Preventing Loss of Generalization Performance in Pretrained Models via Efficient Fine-Tuning
Gyuseong Lee, Wooseok Jang, Jin Hyeon Kim, Jaewoo Jung, Hansang Cho, Seungjun An, Gyeongnyeon Kim, Seungryong Kim. The Institute of Electronics and Information Engineers (IEIE), 2023 IEIE Conference, Vol. 2023, No. 6
Learning robust vision models that perform well on out-of-distribution (OOD) data is essential for deploying models in real-world settings. Despite long-standing research in this field, many proposed methods achieve only small gains over the simplest baseline, empirical risk minimization (ERM), when evaluated on benchmarks with a restricted hyperparameter search space. We verify that the most effective strategies are ensembling diverse models and scaling up the size of pretraining. We then focus on exploiting the knowledge of large pretrained models to solve domain generalization problems. However, prior work has shown that naively fine-tuning a large pretrained model harms its OOD robustness. By incorporating parameter-efficient fine-tuning (PEFT) methods into large pretrained models, we can effectively mitigate the loss of OOD robustness and achieve state-of-the-art performance on domain generalization tasks.
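The abstract does not specify which PEFT method the authors use, so the sketch below shows one representative instance, a LoRA-style low-rank adapter: the pretrained weights are frozen (preserving the robustness acquired during pretraining) and only a small number of new parameters are trained. The class name `LoRALinear` and all hyperparameter values here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank update.

    Illustrative sketch of a PEFT adapter; rank and alpha are assumed values.
    """
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights intact
        # Low-rank factors: A is small random, B starts at zero so the
        # adapted model initially matches the pretrained model exactly.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus a small learned correction.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

# Usage: adapt a pretrained projection layer while training only the
# adapter parameters (a small fraction of the model's total parameters).
base = nn.Linear(768, 768)  # stand-in for a layer from a pretrained backbone
peft_layer = LoRALinear(base, rank=4)
trainable = [p for p in peft_layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because the pretrained weights never move, the fine-tuned model cannot drift far from the pretrained solution, which is one mechanism by which PEFT can avoid the OOD robustness loss that full fine-tuning incurs.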