RISS Academic Research Information Service

      • KCI-indexed

        Knowledge Transfer Based Spatial Embedding Network for Plant Leaf Instance Segmentation

        Joo-Yeon Jung, Sang-Ho Lee, Jong-Ok Kim. Institute of Electronics and Information Engineers (IEIE), 2023. IEIE Transactions on Smart Processing & Computing Vol.12 No.2

        This paper proposes a method to segment plant leaves using knowledge distillation. Unlike existing knowledge distillation methods aimed at model compression, we use knowledge distillation to achieve good performance even with a small dataset. Plants have many leaves, and each leaf is very small, so leaf instance segmentation is performed based on spatial embedding. The teacher network is trained on a large dataset and then distills its segmentation knowledge into the student network. Two types of knowledge are distilled from the teacher network: attention distillation and region affinity distillation. The experimental results demonstrate that better instance segmentation is achieved when knowledge distillation is used.
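
        The abstract names two distillation terms without giving their formulas. The sketch below shows one common way such terms are defined (spatial attention maps compared with MSE, and pairwise affinities over pooled regions); the exact definitions used in the paper may differ, so treat the functions as illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def attention_map(feat):
            # Spatial attention: channel-wise mean of squared activations, L2-normalized.
            att = feat.pow(2).mean(dim=1, keepdim=True)           # (B, 1, H, W)
            return F.normalize(att.flatten(1), dim=1)

        def attention_distill_loss(f_student, f_teacher):
            return F.mse_loss(attention_map(f_student), attention_map(f_teacher))

        def region_affinity_loss(f_student, f_teacher, grid=4):
            # Pool features into a coarse grid of regions and match the pairwise
            # cosine-similarity (affinity) matrices of teacher and student.
            def affinity(feat):
                pooled = F.adaptive_avg_pool2d(feat, grid)        # (B, C, g, g)
                v = F.normalize(pooled.flatten(2).transpose(1, 2), dim=2)
                return v @ v.transpose(1, 2)                      # (B, g*g, g*g)
            return F.mse_loss(affinity(f_student), affinity(f_teacher))

        # Random feature maps stand in for teacher/student activations.
        fs, ft = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
        loss = attention_distill_loss(fs, ft) + region_affinity_loss(fs, ft)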

      • HSKD: Battery SoC Prediction Model Compression Using Hidden State Knowledge Distillation

        강수혁(Soohyeok Kang), 박규도(Guydo Park), 심동훈(Donghoon Sim), 조선영(Sunyoung Cho). Korean Society of Automotive Engineers (KSAE), 2023. KSAE Annual Conference & Exhibition Vol.2023 No.11

        Knowledge Distillation (KD) is one of the representative methods for AI model compression, in which a student model learns by imitating the output of a teacher model. The student model has a smaller network than the teacher model, which reduces inference time and memory usage, making it well suited to efficient AI inference in limited computing environments such as a vehicle controller. In this paper, we apply Hidden State Knowledge Distillation (HSKD) to a Bi-LSTM (Bidirectional Long Short-Term Memory) model that predicts the State of Charge (SoC) of an electric vehicle battery. The model predicts the SoC 5 minutes ahead from the SoC of the past 5 minutes. In the experiment, we selected a teacher model with a hidden size of 1,024, which showed the highest accuracy, and compared hidden state knowledge distillation against ordinary knowledge distillation for models with hidden sizes smaller than 1,024. We also measured the inference time of the compressed models on controllers equipped with an ARM Cortex-A53. As a result, the model with a hidden size of 32 lost 0.008 in R² score relative to the teacher model, but its inference time was reduced by approximately 20.1x and its file size was compressed by 750.6x, from 33,028 KB to 44 KB.
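
        A minimal sketch of hidden state knowledge distillation for a Bi-LSTM SoC predictor of this kind is shown below. The linear projection that aligns the student's hidden width with the teacher's, the loss weight, and the sequence length are assumptions; the abstract does not specify them.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BiLSTMSoC(nn.Module):
            def __init__(self, hidden):
                super().__init__()
                self.lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, 1)   # predicts SoC 5 minutes ahead
            def forward(self, x):                      # x: (B, T, 1) past SoC samples
                h, _ = self.lstm(x)                    # (B, T, 2*hidden)
                return self.head(h[:, -1]), h

        teacher, student = BiLSTMSoC(1024), BiLSTMSoC(32)
        proj = nn.Linear(2 * 32, 2 * 1024)             # aligns hidden widths (assumed)

        def hskd_loss(x, y, alpha=0.5):
            with torch.no_grad():
                _, h_t = teacher(x)
            y_s, h_s = student(x)
            task = F.mse_loss(y_s, y)                  # ground-truth SoC target
            hidden = F.mse_loss(proj(h_s), h_t)        # hidden state distillation term
            return task + alpha * hidden

        x, y = torch.randn(8, 300, 1), torch.randn(8, 1)
        print(hskd_loss(x, y))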

      • KCI-indexed

        A Method for Improving the Learning Convergence Curve and Training Time of a DA-FSL Model Using Knowledge Distillation

        임혜연(Hye-Youn Lim), 이준목(Jun-Mock Lee), 강대성(Dae-Seong Kang). Korean Institute of Information Technology (KIIT), 2020. Journal of Korean Institute of Information Technology Vol.18 No.10

        It is easy to train a machine learning model well in a field with abundant data, but difficult where training data is scarce. To address this problem, knowledge distillation through transfer learning has recently attracted attention. Knowledge distillation refers to transferring the knowledge of one or more large networks, often trained with ensemble techniques, to a single small network. In this paper, we propose a method that improves both learning accuracy and training time by applying knowledge distillation to the DA-FSL model. Three knowledge distillation methods were applied to the DA-FSL model, and experiments evaluated the curve converging to the initial target learning accuracy. We compared the training time required by the DA-FSL model with knowledge distillation applied through the proposed method against the normal DA-FSL model without knowledge distillation, and experiments confirmed a reduction of between 3% and 35%.
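
        The abstract does not name the three distillation methods that were applied. As one representative formulation, the sketch below shows the standard soft-target distillation loss (a temperature-scaled KL term combined with cross-entropy on hard labels); the temperature and weighting are placeholder values.

        import torch
        import torch.nn.functional as F

        def soft_target_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
            # KL divergence between temperature-softened teacher and student distributions,
            # plus ordinary cross-entropy on the hard labels.
            kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                          F.softmax(teacher_logits / T, dim=1),
                          reduction="batchmean") * (T * T)
            ce = F.cross_entropy(student_logits, labels)
            return alpha * kd + (1 - alpha) * ce

        logits_s, logits_t = torch.randn(16, 10), torch.randn(16, 10)
        labels = torch.randint(0, 10, (16,))
        print(soft_target_kd_loss(logits_s, logits_t, labels))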

      • KCI-indexed

        Overcoming Data Shortage and Lightweight-Model Limitations in AI Models for Unmanned Military Image Classification Using Knowledge Distillation

        정자훈, 송윤호, 강인욱, 류준열. Korea Academia-Industrial Cooperation Society (KAIS), 2024. Journal of the Korea Academia-Industrial Cooperation Society Vol.25 No.3

        Developing AI models for military unmanned systems requires consideration of their unique operational environment. Constraints such as limited battery power and the high risk of destruction at the front line rule out costly, high-performance chips. In this study, we explored methods to improve the image classification performance of AI models under two key challenges. First, constraints such as power and cost limit the use of high-capacity, high-performance models in unmanned systems. Second, there is a shortage of training data sufficient to ensure the performance of military AI models. To address these issues, we propose knowledge distillation. We selected EfficientNetB4, known for its superior performance despite high computational complexity, as the teacher model, and SqueezeNet, ShuffleNetV2, and MobileNetV3-small as the student models. Through knowledge distillation, the high-accuracy knowledge of the teacher model effectively enhanced the student models, improving classification performance even under these constraints. These results are expected to enhance military utility by addressing the performance limitations of lightweight models used for on-device AI in scenarios with limited training data.
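
        The sketch below pairs torchvision's EfficientNet-B4 teacher with a MobileNetV3-Small student, matching two of the models named above, in a single soft-target distillation step. The number of classes, temperature, loss weight, and optimizer settings are placeholders rather than the paper's values.

        import torch
        import torch.nn.functional as F
        from torchvision.models import efficientnet_b4, mobilenet_v3_small

        num_classes = 10                                # placeholder class count
        teacher = efficientnet_b4(num_classes=num_classes).eval()
        student = mobilenet_v3_small(num_classes=num_classes)
        opt = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)

        def kd_step(images, labels, T=4.0, alpha=0.7):
            with torch.no_grad():
                t_logits = teacher(images)
            s_logits = student(images)
            kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                          F.softmax(t_logits / T, dim=1),
                          reduction="batchmean") * T * T
            loss = alpha * kd + (1 - alpha) * F.cross_entropy(s_logits, labels)
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()

        images = torch.randn(4, 3, 224, 224)
        labels = torch.randint(0, num_classes, (4,))
        print(kd_step(images, labels))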

      • [AI-11] DualKD-Net: Dual Knowledge Distillation for Real-Time Object Detection

        Akhrorjon Rakhmonov, Taehun Kim, Jeonghong Kim. Korea Institute of Information and Communication Engineering (KIICE), 2024. INTERNATIONAL CONFERENCE ON FUTURE INFORMATION & C Vol.15 No.1

        Knowledge distillation thrives in many computer vision areas, yet struggles in object detection due to its complexity. In this paper we therefore propose a novel object detector that uses dual knowledge distillation: intrinsic distillation and broad distillation. The former compels the student to concentrate on the teacher's essential pixels and channels, while the latter restores the connections between various pixels and transfers this relational information from teacher to student. Experiments on ResNet50-based Faster R-CNN and RetinaNet using the proposed method achieve 3.6% and 3.4% higher mAP than the baseline on the COCO2017 dataset, and 4.1% and 4.0% higher mAP than the baseline on a custom dataset.
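
        The abstract names the two distillation terms but not their formulas. The sketch below interprets intrinsic distillation as feature imitation weighted by teacher-derived spatial and channel attention, and broad distillation as matching pairwise pixel affinities; both readings are assumptions about the exact formulation.

        import torch
        import torch.nn.functional as F

        def intrinsic_distill(f_s, f_t):
            # Teacher-derived masks emphasize its essential pixels and channels.
            spatial = torch.softmax(f_t.abs().mean(1).flatten(1), dim=1)   # (B, H*W)
            channel = torch.softmax(f_t.abs().mean(dim=(2, 3)), dim=1)     # (B, C)
            weight = channel[:, :, None, None] * spatial.view(f_t.size(0), 1, *f_t.shape[2:])
            return (weight * (f_s - f_t).pow(2)).sum() / f_t.size(0)

        def broad_distill(f_s, f_t):
            # Match the pixel-to-pixel relation (affinity) matrices of student and teacher.
            def affinity(f):
                v = F.normalize(f.flatten(2), dim=1)     # (B, C, H*W) unit channel vectors
                return v.transpose(1, 2) @ v             # (B, H*W, H*W)
            return F.mse_loss(affinity(f_s), affinity(f_t))

        # Same-shaped feature maps are assumed (e.g., after a 1x1 adapter on the student).
        f_s, f_t = torch.randn(2, 256, 16, 16), torch.randn(2, 256, 16, 16)
        print(intrinsic_distill(f_s, f_t) + broad_distill(f_s, f_t))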

      • KCI-indexed

        Lightweight fault diagnosis method in embedded system based on knowledge distillation

        Ran Gong, Chenlin Wang, Jinxiao Li, Yi Xu. Korean Society of Mechanical Engineers (KSME), 2023. Journal of Mechanical Science and Technology Vol.37 No.11

        Deep learning (DL) has garnered attention in mechanical device health management for its ability to accurately identify faults and predict component life. However, its high computational cost presents a significant challenge for resource-limited embedded devices. To address this issue, we propose a lightweight fault diagnosis model based on knowledge distillation. The model employs complex residual networks with high classification accuracy as teachers and simple combinatorial convolutional networks as students. The student model has a structure similar to the teacher's, with fewer layers, and uses pixel-wise convolution and channel-wise convolution instead of standard convolution. The students learn the probability distribution of the teacher's output layer to improve their fault classification accuracy while achieving model compression; this process is called knowledge distillation. The combination of a lightweight model structure and training with knowledge distillation yields a model that not only achieves higher classification accuracy than other small classical models, but also runs faster on embedded devices.
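
        The sketch below shows the kind of lightweight building block the student could use, reading "channel-wise convolution" as depthwise convolution and "pixel-wise convolution" as 1x1 pointwise convolution; this reading, and the block layout, are assumptions rather than the paper's exact design.

        import torch
        import torch.nn as nn

        class DepthwiseSeparableBlock(nn.Module):
            def __init__(self, in_ch, out_ch):
                super().__init__()
                # Channel-wise (depthwise) 3x3 convolution followed by a
                # pixel-wise (1x1 pointwise) convolution.
                self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
                self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
                self.bn = nn.BatchNorm2d(out_ch)
                self.act = nn.ReLU(inplace=True)
            def forward(self, x):
                return self.act(self.bn(self.pointwise(self.depthwise(x))))

        # A plain 3x3 conv with 64 input and 64 output channels has 36,864 weights;
        # the two convolutions above need only 64*9 + 64*64 = 4,672 (plus BatchNorm).
        block = DepthwiseSeparableBlock(64, 64)
        x = torch.randn(1, 64, 32, 32)
        print(block(x).shape, sum(p.numel() for p in block.parameters()))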

      • KCI-indexed

        Robust Few-Shot Classification via Bilevel Knowledge Distillation

        홍정빈, 최장훈. Korea Multimedia Society, 2024. Journal of Korea Multimedia Society Vol.27 No.3

        In this paper, we propose a few-shot learning method for robust image classification using knowledge distillation. Existing benchmark datasets for few-shot learning consist only of clean images, so they cannot reflect real-world degradations such as noise and corruption. In addition, there are still few studies on few-shot learning for robust image classification under corruption. In this study, we therefore first propose four new datasets, Mini-ImageNet-C, CUB-200-C, CIFAR-FS-C, and FGVC-Aircraft-C, for evaluating the robustness of few-shot learning algorithms. As the baseline few-shot learning method, we employ the most representative meta-learning approach, Model-Agnostic Meta-Learning (MAML). We then incorporate knowledge distillation (KD) into MAML to distill corruption robustness from a large teacher model into a small student model, performing KD in both the inner loop and the outer loop of MAML. Our 'Bilevel KD' allows the student models to achieve better performance while maintaining low memory usage. At the meta-test stage, experiments showed that our method performed significantly better than the baseline in all cases.
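
        A rough first-order sketch of the bilevel idea is shown below: a distillation term from a frozen teacher is added to both the inner-loop (support set) loss and the outer-loop (query set) loss of MAML. The backbone choices, loss weights, and the first-order meta-update are all simplifying assumptions; the paper's MAML variant may differ.

        import copy
        import torch
        import torch.nn.functional as F
        from torchvision.models import resnet18, mobilenet_v3_small

        def kd(s_logits, t_logits, T=4.0):
            return F.kl_div(F.log_softmax(s_logits / T, dim=1),
                            F.softmax(t_logits / T, dim=1),
                            reduction="batchmean") * T * T

        teacher = resnet18(num_classes=5).eval()        # placeholder teacher, 5-way head
        student = mobilenet_v3_small(num_classes=5)     # placeholder student
        meta_opt = torch.optim.Adam(student.parameters(), lr=1e-3)

        def meta_step(xs, ys, xq, yq, inner_lr=0.01, alpha=0.5):
            fast = copy.deepcopy(student)               # task-specific copy of the student
            inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
            with torch.no_grad():
                ts, tq = teacher(xs), teacher(xq)
            # Inner loop: adapt on the support set with cross-entropy + KD.
            out_s = fast(xs)
            loss_in = F.cross_entropy(out_s, ys) + alpha * kd(out_s, ts)
            inner_opt.zero_grad(); loss_in.backward(); inner_opt.step()
            # Outer loop: query loss (cross-entropy + KD) on the adapted copy,
            # applied to the original student as a first-order meta-gradient.
            out_q = fast(xq)
            loss_out = F.cross_entropy(out_q, yq) + alpha * kd(out_q, tq)
            fast.zero_grad(); loss_out.backward()
            meta_opt.zero_grad()
            for p, fp in zip(student.parameters(), fast.parameters()):
                p.grad = fp.grad.clone()
            meta_opt.step()
            return loss_out.item()

        xs, xq = torch.randn(5, 3, 84, 84), torch.randn(10, 3, 84, 84)   # support / query
        ys, yq = torch.randint(0, 5, (5,)), torch.randint(0, 5, (10,))
        print(meta_step(xs, ys, xq, yq))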

      • KCI-indexed

        Prompt Engineering with Symbolic Distilled Knowledge between T5 and GPT

        백지수, 방나모, 연희연, 김민주, 구명완. Korean Institute of Information Scientists and Engineers (KIISE), 2024. KIISE Transactions on Computing Practices Vol.30 No.3

        This study proposes a prompt engineering method for 'Cross-model Symbolic Knowledge Distillation' between generative language models (LMs). Our approach defines the text outputs generated by a generative LM's reasoning on a specific downstream task as 'Symbolic Distilled Knowledge (SDK)'. We aim to improve the downstream reasoning ability of each generative LM by training each model on the SDK produced by its counterpart, with the goal of minimizing human labor. We implemented our approach using GPT-J and T5, which differ in model structure and parameter scale. The models semi-pretrained by prompting for cross-model symbolic knowledge distillation showed better downstream task performance than the baseline. For example, on the SLURP benchmark, used for the intent classification task, the GPT-J-distilled T5 achieved an accuracy of 81.95%, approximately 10% higher than the standard T5 models. The T5-distilled GPT-J achieved an accuracy of 29.76% on the SLURP benchmark, an improvement of approximately 7.38% over the standard GPT-J.
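
        The flow described above can be sketched as two steps: prompt one model to generate task-labeled text (the SDK), then train the counterpart model on that generated data. The checkpoints, prompt template, and example utterances below are placeholders, not the ones used in the paper.

        from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM

        gen_name, tgt_name = "EleutherAI/gpt-j-6b", "t5-base"    # placeholder checkpoints
        gen_tok = AutoTokenizer.from_pretrained(gen_name)
        gen_lm = AutoModelForCausalLM.from_pretrained(gen_name)

        # 1) Prompt the generator LM to produce symbolic distilled knowledge (SDK)
        #    for an intent-classification-style task (hypothetical prompt and labels).
        prompt = ("utterance: wake me up at seven tomorrow\nintent: alarm_set\n"
                  "utterance: play some jazz music\nintent:")
        ids = gen_tok(prompt, return_tensors="pt").input_ids
        out = gen_lm.generate(ids, max_new_tokens=8, do_sample=True, top_p=0.9)
        sdk_text = gen_tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

        # 2) Use the generated (utterance, intent) pair as training data for the
        #    counterpart model, shown here as one seq2seq training step on T5.
        tgt_tok = AutoTokenizer.from_pretrained(tgt_name)
        counterpart = AutoModelForSeq2SeqLM.from_pretrained(tgt_name)
        batch = tgt_tok("classify intent: play some jazz music", return_tensors="pt")
        labels = tgt_tok(sdk_text.strip(), return_tensors="pt").input_ids
        loss = counterpart(input_ids=batch.input_ids,
                           attention_mask=batch.attention_mask,
                           labels=labels).loss
        loss.backward()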

      • KCI-indexed

        A Study on a Low-Resolution Object Detection Method Using Data Augmentation-Based Saliency U-Net

        안명수, 임혜연, 강대성. Korean Institute of Information Technology (KIIT), 2023. Journal of Korean Institute of Information Technology Vol.21 No.8

        Object detection in low-resolution images is vulnerable to noise, and relying only on traditional object detection algorithms in industrial applications degrades performance because of the low image quality. To address this problem, this paper applies a knowledge distillation-based GAN (Generative Adversarial Network) to transform and generate high-resolution image data through variations such as rotation, scaling, inversion, and resolution changes. In addition, we study a method for detecting and tracking a specific object on the generated data using a U-Net based on a saliency mechanism. Comparative experiments with existing detection methods verified that the proposed method provides higher accuracy and robustness on low-resolution images. Performance was evaluated on three open datasets related to industrial detection, confirming an mAP improvement of more than 5% over the existing method.
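
        Purely to illustrate the kinds of variations mentioned (rotation, size, inversion, resolution), the sketch below builds them from standard torchvision transforms; the paper itself produces such data with a knowledge distillation-based GAN, which is not reproduced here.

        import torch
        from torchvision import transforms

        augment = transforms.Compose([
            transforms.RandomRotation(degrees=30),                 # rotation
            transforms.RandomResizedCrop(256, scale=(0.5, 1.0)),   # size / resolution change
            transforms.RandomHorizontalFlip(),                     # horizontal inversion
            transforms.RandomVerticalFlip(),                       # vertical inversion
        ])

        image = torch.rand(3, 512, 512)        # stand-in for a high-resolution image
        print(augment(image).shape)            # torch.Size([3, 256, 256])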

      • KCI-indexed

        Efficiency Enhanced Super Resolution Generative Adversarial Network via Advanced Knowledge Distillation

        Hussain, 신정훈, Syed Asif Raza Shah, 조금원. Korea Multimedia Society, 2023. Journal of Korea Multimedia Society Vol.26 No.12

        Super-resolution (SR) is a prominent challenge in computer vision with diverse applications. Generative adversarial networks (GANs) yield impressive SR results by restoring high-quality images from low-resolution input. However, GAN-based SR models (particularly their generators) have high memory demands, leading to performance degradation and energy consumption that make them unsuitable for resource-limited devices. To address this concern, our paper introduces a novel and efficient SR-GAN generator architecture that strategically leverages knowledge distillation, reducing storage demands by 58% while enhancing performance. Our approach extracts feature maps from a resource-intensive model to design a lightweight model with minimal computational and memory requirements. Experiments across several benchmarks demonstrate that the proposed compressed model outperforms existing knowledge distillation-based techniques, particularly in SSIM, PSNR, and overall image quality on x4 super-resolution tasks. In the future, this compressed model will be implemented and benchmarked against existing models on resource-limited devices such as tablets and wearables.
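
        A minimal sketch of compressing an SR generator with feature-map distillation is shown below: the student mimics the teacher's intermediate feature maps (through a 1x1 adapter) as well as its super-resolved output. The tiny generators, adapter, and loss weights are stand-ins, not the paper's architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinySRGenerator(nn.Module):
            def __init__(self, width, scale=4):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(3, width, 3, padding=1), nn.PReLU(),
                    nn.Conv2d(width, width, 3, padding=1), nn.PReLU())
                self.up = nn.Sequential(
                    nn.Conv2d(width, 3 * scale * scale, 3, padding=1),
                    nn.PixelShuffle(scale))
            def forward(self, x):
                feat = self.body(x)
                return self.up(feat), feat

        teacher, student = TinySRGenerator(64).eval(), TinySRGenerator(16)
        adapter = nn.Conv2d(16, 64, 1)                   # aligns student/teacher channels

        def distill_loss(lr, hr, beta=0.1):
            with torch.no_grad():
                sr_t, f_t = teacher(lr)
            sr_s, f_s = student(lr)
            rec = F.l1_loss(sr_s, hr)                    # reconstruction vs. ground truth
            out_kd = F.l1_loss(sr_s, sr_t)               # output (response) distillation
            feat_kd = F.mse_loss(adapter(f_s), f_t)      # feature-map distillation
            return rec + out_kd + beta * feat_kd

        lr, hr = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 128, 128)   # LR input, x4 target
        print(distill_loss(lr, hr))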
