RISS Research Information Sharing Service

      • KCI-indexed

        Knowledge Transfer Based Spatial Embedding Network for Plant Leaf Instance Segmentation

        Joo-Yeon Jung, Sang-Ho Lee, Jong-Ok Kim 대한전자공학회 2023 IEIE Transactions on Smart Processing & Computing Vol.12 No.2

        This paper proposes a method to segment plant leaves using knowledge distillation. Unlike existing knowledge distillation methods aimed at lightening the model, we use knowledge distillation to achieve good performance even with a small dataset. Plants have many leaves, and each leaf is very small, so leaf instance segmentation is performed based on spatial embedding. The teacher network is trained on a large dataset and then distills its segmentation knowledge into the student network. Two forms of knowledge are transferred from the teacher network, via attention distillation and region affinity distillation. The experimental results demonstrate that better instance segmentation is achieved when knowledge distillation is used.
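
        The abstract does not give exact loss formulas, so the PyTorch sketch below only illustrates one plausible way to combine attention distillation and region-affinity distillation on teacher and student feature maps; the map definitions and the weights alpha and beta are illustrative assumptions, not the authors' formulation.

        # A minimal sketch, assuming feature maps of shape (B, C, H, W) with matching spatial size.
        import torch
        import torch.nn.functional as F

        def attention_map(feat):
            # Spatial attention: channel-wise mean of squared activations, normalized per sample.
            att = feat.pow(2).mean(dim=1).flatten(1)          # (B, H*W)
            return F.normalize(att, dim=1)

        def region_affinity(feat):
            # Pairwise cosine similarity between spatial positions ("region affinity").
            f = F.normalize(feat.flatten(2), dim=1)           # (B, C, H*W)
            return torch.bmm(f.transpose(1, 2), f)            # (B, H*W, H*W)

        def distill_loss(student_feat, teacher_feat, alpha=1.0, beta=1.0):
            att = F.mse_loss(attention_map(student_feat), attention_map(teacher_feat))
            aff = F.mse_loss(region_affinity(student_feat), region_affinity(teacher_feat))
            return alpha * att + beta * aff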

      • HSKD: Lightweighting a Battery SoC Prediction Model Using Hidden State Knowledge Distillation

        강수혁(Soohyeok Kang), 박규도(Guydo Park), 심동훈(Donghoon Sim), 조선영(Sunyoung Cho) 한국자동차공학회 2023 한국자동차공학회 학술대회 및 전시회 Vol.2023 No.11

        Knowledge Distillation (KD) is one of the representative methods for AI model compression, in which a student model learns by imitating the output of a teacher model. The student model has a smaller network than the teacher model, which reduces inference time and saves memory, making it well suited to efficient AI model inference in limited computing environments such as a vehicle controller. In this paper, we applied the Hidden State Knowledge Distillation (HSKD) method to a Bi-LSTM (Bidirectional Long Short-Term Memory) model for predicting the State of Charge (SoC) of an electric vehicle battery. The model predicts the SoC 5 minutes ahead using the SoC of the past 5 minutes. In the experiment, we selected a teacher model with a hidden size of 1,024, which showed the highest accuracy, and compared the performance of hidden state knowledge distillation and general knowledge distillation for models with a hidden size smaller than 1,024. We also measured the inference time of the compressed models on controllers equipped with an ARM Cortex-A53. As a result, the model with a hidden size of 32 lost 0.008 in R2 score compared to the teacher model, but its inference time was reduced by approximately 20.1x and its file size was compressed by 750.6x, from 33,028 KB to 44 KB.
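
        The abstract does not spell out the HSKD loss, so the PyTorch sketch below shows one assumed form: the student's Bi-LSTM hidden states are projected to the teacher's width and matched with an MSE term alongside the SoC regression loss. The model definitions, the projection layer, and the weight lam are illustrative, not the paper's exact design.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BiLSTMRegressor(nn.Module):
            def __init__(self, hidden_size):
                super().__init__()
                self.lstm = nn.LSTM(1, hidden_size, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden_size, 1)

            def forward(self, x):                    # x: (B, T, 1) past SoC values
                h, _ = self.lstm(x)                  # (B, T, 2*hidden) hidden-state sequence
                return self.head(h[:, -1]), h        # SoC prediction and hidden states

        teacher, student = BiLSTMRegressor(1024), BiLSTMRegressor(32)
        proj = nn.Linear(2 * 32, 2 * 1024)           # map student states to teacher width

        def hskd_loss(x, y, lam=0.5):
            with torch.no_grad():
                _, t_h = teacher(x)
            s_pred, s_h = student(x)
            task = F.mse_loss(s_pred, y)             # SoC regression loss
            hidden = F.mse_loss(proj(s_h), t_h)      # match hidden-state trajectories
            return task + lam * hidden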

      • KCI-indexed

        Overcoming Data Shortage and Lightweight-Model Limitations in Unmanned Military Image Classification AI Models Using Knowledge Distillation

        정자훈, 송윤호, 강인욱, 류준열 한국산학기술학회 2024 한국산학기술학회논문지 Vol.25 No.3

        Developing AI models for military unmanned systems requires consideration of their unique operational environment. Constraints such as limited battery power and the high risk of destruction at the front line necessitate restrictions on using costly, high-performance chips. In this study, we explored methods to enhance the image classification performance of AI models under two key challenges. First, constraints such as power and cost limit the use of high-capacity, high-performance models in unmanned systems. Second, there is a shortage of training data sufficient to ensure the performance of military AI models. To address these issues, we propose knowledge distillation. We selected EfficientNetB4, known for its superior performance despite high computational complexity, as the teacher model, and SqueezeNet, ShuffleNetV2, and MobileNetV3-Small as student models. Through knowledge distillation, the high-accuracy knowledge of the teacher model effectively enhanced the student models, improving classification performance even under these constraints. Such results are expected to enhance military utility by addressing the performance limitations of lightweight models applied as on-device AI models in scenarios with limited training data.
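
        The study applies the classical response-based knowledge distillation recipe from a large teacher to compact students; a minimal PyTorch sketch of the usual softened-logit loss is shown below. The temperature T and weight alpha are illustrative, not values reported in the paper.

        import torch.nn.functional as F

        def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
            # Softened teacher/student distributions; multiplying by T*T rescales the gradient magnitude.
            soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                            F.softmax(teacher_logits / T, dim=1),
                            reduction="batchmean") * (T * T)
            hard = F.cross_entropy(student_logits, labels)   # standard supervised term
            return alpha * soft + (1 - alpha) * hard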

      • KCI-indexed

        Lightweight fault diagnosis method in embedded system based on knowledge distillation

        Ran Gong, Chenlin Wang, Jinxiao Li, Yi Xu 대한기계학회 2023 JOURNAL OF MECHANICAL SCIENCE AND TECHNOLOGY Vol.37 No.11

        Deep learning (DL) has garnered attention in mechanical device health management for its ability to accurately identify faults and predict component life. However, its high computational cost presents a significant challenge for resource-limited embedded devices. To address this issue, we propose a lightweight fault diagnosis model based on knowledge distillation. The model employs complex residual networks with high classification accuracy as teachers and simple combinatorial convolutional networks as students. The student model has a structure similar to the teacher model but with fewer layers, and it uses pixel-wise convolution and channel-wise convolution in place of the original convolution. The student learns the probability distribution of the teacher model's output layer to enhance its fault classification accuracy and achieve model compression; this process is called knowledge distillation. The combination of a lightweight model structure and the knowledge distillation training method results in a model that not only achieves higher classification accuracy than other small-sized classical models, but also has faster inference speed on embedded devices.
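
        The abstract describes replacing standard convolutions in the student with channel-wise and pixel-wise convolutions; the snippet below is a generic PyTorch sketch of such a depthwise-plus-pointwise block. Channel counts, normalization, and activation are illustrative choices, not the paper's exact design.

        import torch.nn as nn

        def separable_conv(in_ch, out_ch, kernel_size=3):
            return nn.Sequential(
                nn.Conv2d(in_ch, in_ch, kernel_size, padding=kernel_size // 2,
                          groups=in_ch, bias=False),      # channel-wise (depthwise) convolution
                nn.Conv2d(in_ch, out_ch, 1, bias=False),  # pixel-wise (1x1) convolution
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )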

      • KCI-indexed

        Prompt Engineering Using Symbolic Distilled Knowledge between T5 and GPT

        백지수, 방나모, 연희연, 김민주, 구명완 한국정보과학회 2024 정보과학회 컴퓨팅의 실제 논문지 Vol.30 No.3

        This study proposes a prompt engineering method for 'Cross-model Symbolic Knowledge Distillation' of generative natural language models (LMs). Our approach defines the text outputs generated by a generative LM's reasoning on a specific downstream task as 'Symbolic Distilled Knowledge (SDK)'. We aim to improve the reasoning abilities of each generative LM on downstream tasks by training each model with the SDK from the counterpart model, with the goal of minimizing human labor. We implemented our approach using GPT-J and T5, which differ in model structure and parameter scale. The models that were semi-pretrained by prompting for cross-model symbolic knowledge distillation showed better downstream task performance than the baseline. For example, on the SLURP benchmark, which is used for the intent classification task, the GPT-J-distilled T5 showed an accuracy of 81.95%, approximately 10% higher than that achieved by the standard T5 models. The T5-distilled GPT-J also showed an accuracy of 29.76% on the SLURP benchmark, an improvement of approximately 7.38% over the standard GPT-J.
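
        The abstract describes prompting one LM to generate 'symbolic distilled knowledge' that is then used to train the counterpart model; a rough sketch of that generation step with Hugging Face transformers is below. The prompt format, the label-extraction step, and the downstream use are illustrative assumptions, not the paper's exact pipeline.

        from transformers import pipeline

        # Illustrative: use GPT-J to generate intent labels ("SDK") for unlabeled utterances.
        generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

        def make_sdk(utterance):
            prompt = f"Utterance: {utterance}\nIntent:"        # assumed prompt format
            out = generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"]
            return out[len(prompt):].strip()                   # keep only the generated label text

        # Pseudo-labelled pairs like these would then fine-tune the counterpart model (e.g., T5).
        sdk_pairs = [(u, make_sdk(u)) for u in ["play some jazz", "set an alarm for 7 am"]]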

      • KCI-indexed

        Robust Few-Shot Classification via Bilevel Knowledge Distillation

        홍정빈, 최장훈 한국멀티미디어학회 2024 멀티미디어학회논문지 Vol.27 No.3

        In this paper, we propose a few-shot learning method for robust image classification using knowledge distillation. Existing benchmark datasets for few-shot learning consist only of clean images, so they cannot reflect real-world degradations such as noise and corruption. In addition, there are still few studies on few-shot learning for robust image classification under corruption. Therefore, in this study, we first propose four novel datasets, Mini-ImageNet-C, CUB-200-C, CIFAR-FS-C, and FGVC-Aircraft-C, for evaluating the robustness of few-shot learning algorithms. As the baseline few-shot learning method, we employ the most representative meta-learning approach, Model-Agnostic Meta-Learning (MAML). We then incorporate knowledge distillation (KD) into MAML to distill corruption robustness from the large teacher model to the small student model, where KD is performed in both the inner loop and the outer loop of MAML. Our ‘Bilevel KD’ allows the student models to achieve better performance while maintaining low memory usage. At the meta-test stage, experiments showed that in all cases our method performed significantly better than the baseline.
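
        The abstract states only that KD is applied in both the inner and outer loops of MAML; the PyTorch sketch below shows one assumed way to write that. The functional-call adaptation step, the loss weight lam, and the temperature are illustrative, and the paper's actual formulation may differ.

        import torch
        import torch.nn.functional as F

        def distill(student_logits, teacher_logits, T=2.0):
            return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                            F.softmax(teacher_logits / T, dim=1),
                            reduction="batchmean") * (T * T)

        def bilevel_kd_step(student, teacher, support, query, inner_lr=0.01, lam=0.5):
            xs, ys = support                          # support set of one task
            xq, yq = query                            # query set of the same task
            params = dict(student.named_parameters())

            # Inner loop: adapt on the support set with task loss + distillation loss.
            logits = torch.func.functional_call(student, params, (xs,))
            with torch.no_grad():
                t_logits = teacher(xs)
            inner = F.cross_entropy(logits, ys) + lam * distill(logits, t_logits)
            grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
            adapted = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}

            # Outer loop: evaluate adapted parameters on the query set, again with distillation.
            q_logits = torch.func.functional_call(student, adapted, (xq,))
            with torch.no_grad():
                tq_logits = teacher(xq)
            return F.cross_entropy(q_logits, yq) + lam * distill(q_logits, tq_logits)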

      • KCI-indexed

        A Study on a Low-Resolution Object Detection Method Using a Data-Augmentation-Based Saliency U-Net

        안명수, 임혜연, 강대성 한국정보기술학회 2023 한국정보기술학회논문지 Vol.21 No.8

        Object detection in low-resolution images is often vulnerable to noise, and relying only on traditional object detection algorithms in industrial applications degrades performance due to low image quality. To solve this problem, in this paper a knowledge distillation-based GAN (generative adversarial network) algorithm is applied to transform and generate high-resolution image data through changes such as rotation, size, inversion, and resolution. In addition, we study a method of detecting and tracking a specific object in the generated data using a U-Net based on the saliency mechanism. Comparative experiments with existing detection methods verified that the proposed method provides higher accuracy and robustness on low-resolution images. For the experimental evaluation, performance was measured on three open datasets related to industrial detection, confirming that mAP improved by more than 5% compared to the existing method.
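
        The paper's augmentation is produced by a knowledge distillation-based GAN, which is not reproduced here; as a plain stand-in, the torchvision sketch below only illustrates the kinds of variations the abstract lists (rotation, size, inversion, resolution), with illustrative parameters.

        from torchvision import transforms

        augment = transforms.Compose([
            transforms.RandomRotation(degrees=15),                      # rotation
            transforms.RandomResizedCrop(size=256, scale=(0.5, 1.0)),   # size changes
            transforms.RandomHorizontalFlip(p=0.5),                     # inversion
            transforms.Resize(64),                                      # degrade resolution
            transforms.Resize(256),                                     # upsample back
            transforms.ToTensor(),
        ])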

      • KCI-indexed

        Efficiency Enhanced Super Resolution Generative Adversarial Network via Advanced Knowledge Distillation

        Hussain, 신정훈, Syed Asif Raza Shah, 조금원 한국멀티미디어학회 2023 멀티미디어학회논문지 Vol.26 No.12

        Super-resolution (SR) stands as a prominent challenge in computer vision with diverse applications. Generative adversarial networks (GANs) yield impressive SR outcomes by restoring high-quality images from low-resolution input. However, GAN-based SR models (particularly their generators) have high memory demands, leading to performance degradation and energy consumption that make them unsuitable for resource-limited devices. Addressing this concern, our paper introduces a novel and efficient SR-GAN (generator) model architecture that strategically leverages knowledge distillation, reducing storage demands by 58% while enhancing performance. Our approach involves extracting feature maps from a resource-intensive model to design a lightweight model with minimal computational and memory requirements. Experiments across several benchmarks demonstrate that the proposed compressed model outperforms existing knowledge distillation-based techniques, particularly with regard to SSIM, PSNR, and overall image quality in x4 super-resolution tasks. In the future, this compressed model will be implemented and benchmarked against existing models on resource-limited devices such as tablets and wearable devices.
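
        The abstract mentions extracting feature maps from the heavy generator to guide the lightweight one; the PyTorch sketch below shows one assumed form of such feature-map distillation combined with a pixel loss. Channel sizes, the 1x1 adapter, and the weight lam are illustrative assumptions.

        import torch.nn as nn
        import torch.nn.functional as F

        adapter = nn.Conv2d(32, 64, kernel_size=1)    # align student channels (32) with teacher (64)

        def sr_kd_loss(student_sr, teacher_sr, student_feat, teacher_feat, hr_target, lam=0.1):
            recon = F.l1_loss(student_sr, hr_target)                      # pixel reconstruction
            out_kd = F.l1_loss(student_sr, teacher_sr)                    # mimic teacher's SR output
            feat_kd = F.mse_loss(adapter(student_feat), teacher_feat)     # feature-map matching
            return recon + lam * (out_kd + feat_kd)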

      • KCI-indexed

        Improving Neural Network Ensembles by Applying a Smoothed Softmax Function

        이재민, 최태영 한국콘텐츠학회 2024 한국콘텐츠학회논문지 Vol.24 No.4

        When solving classification problems with AI, ensembles are often used as a technique to improve performance. However, when the standard softmax, which converts class scores into a probability distribution, is applied to the ensemble as-is, the exponential function in softmax pushes the probability of the class with the highest score too high, which reduces the effectiveness of the ensemble. In this paper, we propose a smoothed softmax that introduces a smoothing factor to alleviate this problem, and show that ensemble performance can be improved. The paper also shows that the smoothed softmax improves accuracy when a neural network trained with coarse labels is included in the ensemble, and when the ensemble is used as a teacher model to distill knowledge into a small student network. Compared to the conventional softmax, the smoothed softmax proposed in the paper increased accuracy by 1.38% when ensembled with a neural network trained with fine labels, by 5.21% when a neural network trained with coarse labels was added to the ensemble, and by up to 7.0% when the ensemble network was reduced in size through knowledge distillation.
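
        The abstract does not give the exact smoothing formula, so the sketch below shows one common temperature-style way to flatten softmax outputs before averaging them in an ensemble; the factor s and the averaging scheme are illustrative assumptions, not necessarily the paper's definition.

        import torch
        import torch.nn.functional as F

        def smoothed_softmax(logits, s=4.0):
            # Larger s flattens the distribution so the top class does not dominate the ensemble.
            return F.softmax(logits / s, dim=-1)

        def ensemble_predict(logit_list, s=4.0):
            probs = torch.stack([smoothed_softmax(l, s) for l in logit_list])
            return probs.mean(dim=0).argmax(dim=-1)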

      • KCI-indexed

        The Rise of Soju (燒酒): The Transmission of Distillation Technology from "China" to the Korean Peninsula during the Mongol Period (1206-1368)

        박현희 (Hyunhee Park) 중앙아시아학회 2016 中央아시아硏究 Vol.21 No.1

        The paper re-examines the rise of soju at the end of the Koryo period, which marked a new era in Korean drinking history, from the perspective of distillation-technology transfer in Eurasia during the Mongol period. While making use of the sources available to date, the relative lack of material forces us to rely on reasoning and inference to create the most comprehensive and convincing explanation possible. By comparing it with earlier traditional Korean alcoholic drinks, we can clearly see how soju was distinctive and new. Yet the sources do not clearly say when and how soju spread to and within Koryo at that time, which is why many different theories have competed for preeminence. The paper reviews earlier theories, including those by Chang Chihyon and Yi Songu, and also examines the most recent studies in different languages as well as new archaeological findings. We can propose the following provisional conclusion from the current examination. First, distillation developed independently in China. Yet it was the Mongols who adopted distillation technology from other cultures such as China to make distilled alcohols from the mare’s-milk drink that they enjoyed, named it arakhi, a foreign word from West Asia that migrated through overland and sea routes, and popularized it in large parts of Eurasia, including China and Korea, under Mongol influence in the course of mobilizing goods and people, including soldiers and merchants. Merchants from different societies active in international trade along the expanded trade routes of the time probably accelerated the transfer process. The case of Korea, where soju became popular right after the coming of the Mongols, is supported by a good number of documents and historical contexts. That some Mongol soldiers recruited to Korean army camps possibly came from craftsmen families able to introduce distillation technology suggests a quite likely scenario. While we cannot deny the possibility that soju was transferred earlier from China to Korea, no evidence supports this so far. The available pieces of evidence all clearly indicate that distilled alcohol spread widely only after it was transferred from China to Korea during the late Koryo period. The case of soju’s transfer clearly shows that a large cultural influence could occur through exceptional historical changes. Unlike some foreign alcoholic drinks, which were transferred beyond their cultural zone as tribute and then spread very slowly among kings and nobles, soju spread quickly over a short period of time under unprecedented historical conditions, such as “Korea’s close connection to wider parts of Eurasia” through the Mongol empire. It is furthermore intriguing as it involves a transfer of technological knowledge. The story of soju’s rise in Korea is a good example of the rise of a new cultural element based on tradition and innovation, involving both the adaptation and localization of new technologies. A further investigation as part of a larger study of the history of distillation on a worldwide basis will help us explore the significance of the case of Korean soju in global history.
