RISS Academic Research Information Service: search results

      • KCI-indexed

        Fingertip Detection Using Atrous Convolution and Grad-CAM

        노대철,김태영 (사)한국컴퓨터그래픽스학회 2019 컴퓨터그래픽스학회논문지 Vol.25 No.5

        With the development of deep learning technology, user-friendly interfaces suited to virtual reality and augmented reality applications are being actively studied. To support an interface driven by the user's hand, this paper proposes a deep learning-based fingertip detection method that tracks fingertip coordinates so that users can select virtual objects or write and draw in the air. Grad-CAM is first used to crop the approximate region of the fingertip from the input image, and a convolutional neural network with atrous convolution is then applied to the cropped image to locate the fingertip. The method requires no separate annotation preprocessing and is simpler and easier to implement than existing object detection algorithms. To verify it, an air-writing application was implemented; with an average recognition rate of 81% and a processing time of 76 ms, characters could be written smoothly in the air without noticeable delay, showing that the method is usable in real time.
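
        The atrous convolution stage is only outlined above, so the following is a minimal PyTorch sketch of a dilated-convolution regression head that maps a Grad-CAM crop of the hand to normalized fingertip (x, y) coordinates. The layer widths, dilation rates, and 96x96 crop size are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AtrousFingertipNet(nn.Module):
    """Toy regressor: Grad-CAM hand crop -> normalized fingertip (x, y)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            # Atrous (dilated) convolutions widen the receptive field
            # without extra pooling layers or parameters.
            nn.Conv2d(32, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # fingertip (x, y), squashed to [0, 1]

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

if __name__ == "__main__":
    crop = torch.rand(1, 3, 96, 96)       # hypothetical 96x96 crop from Grad-CAM
    print(AtrousFingertipNet()(crop))     # tensor of shape (1, 2)
```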

      • KCI-indexed

        Inspection of a CNN Model for Ulcer-Diseased Citrus Image Classification Based on XAI Grad-CAM

        이동찬(Dongchan Lee),변상영(Sangyoung Byeon),김기환(Keewhan Kim) 한국자료분석학회 2022 Journal of the Korean Data Analysis Society Vol.24 No.6

        With the rapid development of hardware performance and information-processing technology, interest in processing unstructured data and extracting value from it is increasing. Various AI architectures are being developed for this purpose, and as the number of decision branches in these models has grown exponentially, performance has improved substantially. However, complex model structures make it harder for researchers to interpret results, and explanatory ability has progressed slowly compared with model performance. Explainable artificial intelligence (XAI) emerged to address this problem: it decomposes the model's black box to an understandable level, improving interpretability and reliability. In this study we approach the classification of ulcer-diseased citrus images with a CNN (Convolutional Neural Network); the final model achieved approximately 97% accuracy. To improve the model's reliability and identify directions for improvement, Gradient-weighted Class Activation Mapping (Grad-CAM), one of the XAI techniques, was then applied to identify the image regions that played the major role in the model's final decisions. The inspection revealed two sources of misclassification: cases where shapes at the image border could not be distinguished from the object and therefore dominated the prediction, and cases where the distinctive shape of a particular object was itself the cause of the error.
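
        Grad-CAM itself is a generic computation: the gradient of a class score with respect to the last convolutional feature maps is global-average-pooled into per-channel weights, and the ReLU of the weighted sum of those maps gives the heatmap. Below is a minimal PyTorch sketch using hooks; the torchvision ResNet-18 and the random input are stand-ins for the paper's citrus CNN and data, which are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, layer, x, class_idx=None):
    """Grad-CAM heatmap (H, W) for a single image tensor x of shape (1, 3, H, W)."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # channel weights alpha_k
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))  # ReLU(sum alpha_k * A^k)
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
        return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0].detach()
    finally:
        h1.remove()
        h2.remove()

if __name__ == "__main__":
    cnn = models.resnet18(weights=None).eval()     # stand-in for the citrus classifier
    image = torch.rand(1, 3, 224, 224)             # stand-in for a citrus image
    print(grad_cam(cnn, cnn.layer4, image).shape)  # torch.Size([224, 224])
```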

      • KCI-indexed

        Comparison of Ball Bearing Fault Diagnosis Methods and Interpretation of Classification Results Using XAI Grad-CAM

        김영근,김예진,전현직 대한전기학회 2022 전기학회논문지 Vol.71 No.9

        Various machine learning and deep learning methods have been proposed to monitor and classify a bearing's health state from vibration signals, since bearing faults are one of the most common causes of failure in rotating machinery. The machine learning process for diagnosing bearing faults is as follows. First, features that carry the fault characteristics of the vibration signals are extracted, and a subset of these features is selected to reduce dimensionality. The selected features are fed into a machine learning classifier to diagnose the system's health. In addition to machine learning methods, the CNN, one of the deep learning models, is widely used; because a deep learning model extracts features by itself, only a preprocessing step that converts the bearing signals into 2D representations is needed. The fault classification accuracy of two vibration-signal transformation methods used as preprocessing for the CNN model was compared. This paper compares the bearing fault classification performance of several commonly used machine learning methods and a CNN model on a lab-made wind turbine machinery testbed. By comparing different feature extraction, feature selection, and classification methods, the most appropriate pipeline is selected for the testbed. Grad-CAM, an explainable AI (XAI) technique, is also applied to interpret the CNN-based classification in terms of the frequency bands of interest. The XAI analysis was verified by designing preprocessing filters based on the Grad-CAM outputs to enhance classification performance.
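
        The machine-learning branch of that pipeline (statistical features, feature selection, then a classifier) can be sketched with scikit-learn as follows. The specific features, the SelectKBest selector, and the SVM classifier are common choices for bearing diagnosis rather than necessarily the ones chosen for the paper's testbed, and the data here are synthetic.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def segment_features(seg):
    """Time-domain statistics of one vibration segment (1-D array)."""
    rms = np.sqrt(np.mean(seg ** 2))
    return [rms, np.ptp(seg), kurtosis(seg), skew(seg),
            np.max(np.abs(seg)) / (rms + 1e-12)]   # crest factor

# Synthetic stand-in: 200 segments of 2048 samples with two health states.
rng = np.random.default_rng(0)
segments = rng.normal(size=(200, 2048))
labels = rng.integers(0, 2, size=200)

X = np.array([segment_features(s) for s in segments])
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=3), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())  # ~0.5 on random data
```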

      • Automatic detection of icing wind turbine using deep learning method

        Hasan Basri Başağa,Selen Ayas,Mohammad Tordi Karimi 한국풍공학회 2022 Wind and Structures, An International Journal (WAS) Vol.34 No.6

        Detecting icing on the blades of wind turbines built in cold regions with conventional methods is laborious, expensive, and difficult. Smart systems have therefore recently come onto the agenda for this problem, and deep learning is well suited to addressing it. In this study, an application was implemented that detects icing in images of wind turbine blades using deep learning together with visualization techniques. Pre-trained models of ResNet-50, VGG-16, VGG-19, and Inception-V3, which are well-known deep learning approaches, are used to classify the images automatically. Grad-CAM, Grad-CAM++, and Score-CAM visualization techniques were applied, depending on the deep learning model used, to predict the location of icing regions on the blades accurately. Score-CAM was clearly shown to be the best visualization technique for localization. Finally, visualization performance was analyzed with Score-CAM on ResNet-50 for various cases, including close-up and distant photographs of a turbine, different densities of icing, and different lighting. The results show that these methods can detect icing on wind turbines with acceptably high accuracy.
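
        Score-CAM, which the study found to localize best, needs no gradients: each activation map of a chosen layer is upsampled, normalized, and used to mask the input, and the masked image's class score becomes that channel's weight. A simplified PyTorch sketch follows; the untrained ResNet-50 and random image are stand-ins, and practical implementations batch the masked forward passes rather than looping over channels.

```python
import torch
import torch.nn.functional as F
from torchvision import models

@torch.no_grad()
def score_cam(model, layer, x, class_idx):
    """Simplified Score-CAM heatmap for x of shape (1, 3, H, W); gradient-free."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    model(x)
    handle.remove()
    maps = F.interpolate(feats["a"], size=x.shape[2:], mode="bilinear",
                         align_corners=False)                 # (1, C, H, W)
    weights = []
    for k in range(maps.shape[1]):       # slow: one forward pass per channel
        m = maps[:, k:k + 1]
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)        # normalize mask to [0, 1]
        weights.append(F.softmax(model(x * m), dim=1)[0, class_idx])
    w = torch.stack(weights).view(1, -1, 1, 1)
    cam = F.relu((w * maps).sum(dim=1))
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

if __name__ == "__main__":
    cnn = models.resnet50(weights=None).eval()    # icing / no-icing stand-in classifier
    image = torch.rand(1, 3, 224, 224)
    print(score_cam(cnn, cnn.layer4, image, class_idx=0).shape)  # torch.Size([1, 224, 224])
```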

      • KCI-indexed

        Analyze weeds classification with visual explanation based on Convolutional Neural Networks

        Vo, Hoang-Trong,Yu, Gwang-Hyun,Nguyen, Huy-Toan,Lee, Ju-Hwan,Dang, Thanh-Vu,Kim, Jin-Young THE KOREAN INSTITUTE OF SMART MEDIA 2019 스마트미디어저널 Vol.8 No.3

        To understand how a Convolutional Neural Network (CNN) captures the features of a pattern to determine which class it belongs to, this paper uses Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize and analyze how well a CNN model behaves on the CNU weeds dataset. We apply the technique to a ResNet model and examine which features the model relies on to decide a specific class, what leads the model to a correct or incorrect classification, and how wrongly labeled images can harm a CNN during training. In the experiments, Grad-CAM highlights the important regions of the weeds according to the patterns ResNet has learned, such as the lobe and limb of 미국가막사리 (Bidens frondosa) or the entire leaf surface of 단풍잎돼지풀 (Ambrosia trifida). Grad-CAM also shows that a CNN can localize an object even though it was trained only for classification.
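
        Producing the highlighted-region figures described above is mostly an overlay step once a heatmap exists. A small sketch, assuming a [0, 1] heatmap has already been computed (for example with a Grad-CAM helper like the one sketched earlier) and using a hypothetical image file name:

```python
import numpy as np
from matplotlib import cm
from PIL import Image

def overlay_cam(image_path, heatmap, alpha=0.4, out_path="cam_overlay.png"):
    """Blend a [0, 1] heatmap (H, W numpy array) onto an RGB image and save it."""
    img = Image.open(image_path).convert("RGB")
    hm = Image.fromarray(np.uint8(255 * heatmap)).resize(img.size)
    colored = cm.jet(np.asarray(hm) / 255.0)[..., :3]          # jet colormap, drop alpha channel
    colored = Image.fromarray(np.uint8(255 * colored))
    Image.blend(img, colored, alpha).save(out_path)

if __name__ == "__main__":
    dummy_cam = np.random.rand(7, 7)           # stand-in for a real Grad-CAM map
    overlay_cam("weed_sample.jpg", dummy_cam)  # hypothetical image file name
```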

      • KCI-indexed

        A Study on EfficientNetV2-Based Wafer Map Defect Analysis Using Grad-CAM

        이한성,조현종 대한전기학회 2023 전기학회논문지 Vol.72 No.4

        In semiconductor manufacturing, various defect patterns appear on the wafer map due to problems in design and in the fabrication process. Analyzing these defect patterns reduces the defect rate and enables the production of high-quality semiconductors, but given the volume of semiconductors produced, performing the analysis manually is inefficient. As hardware performance has improved, high-performance deep learning models have been designed that achieve high image classification accuracy and fast processing. This paper therefore uses EfficientNetV2, designed for maximum efficiency with few parameters, for semiconductor defect analysis. Defect analysis requires not only classifying defect patterns but also identifying where the defects are, so Grad-CAM is used to obtain both the predicted defect class and its approximate location. Wafer map data are difficult to collect because they expose defects in manufacturers' processes; to train EfficientNetV2 we therefore used WM-811K, a publicly available dataset on Kaggle. This dataset is imbalanced across classes, so we augmented it with flips and rotations, which ultimately improved classification performance. The test results showed an accuracy of 0.944 and an F1-score of 0.929.
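
        The flip-and-rotate balancing step can be sketched as a plain oversampling loop: minority-class wafer maps are duplicated under the eight flip/rotation symmetries until each class reaches a target count. The array shapes, target count, and toy labels below are illustrative, not the authors' WM-811K preprocessing.

```python
import numpy as np

def dihedral_variants(wafer):
    """The eight flip/rotation symmetries of a 2-D wafer map array."""
    rots = [np.rot90(wafer, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def balance_classes(maps, labels, target=200, seed=0):
    """Oversample minority classes with random flip/rotate copies."""
    rng = np.random.default_rng(seed)
    out_maps, out_labels = list(maps), list(labels)
    for cls in np.unique(labels):
        idx = np.flatnonzero(np.asarray(labels) == cls)
        for _ in range(max(target - len(idx), 0)):
            src = maps[rng.choice(idx)]
            out_maps.append(dihedral_variants(src)[rng.integers(8)])
            out_labels.append(cls)
    return out_maps, out_labels

if __name__ == "__main__":
    demo_maps = [np.random.randint(0, 3, size=(26, 26)) for _ in range(30)]
    demo_labels = [0] * 25 + [1] * 5                      # imbalanced toy labels
    maps, labels = balance_classes(demo_maps, demo_labels, target=25)
    print(len(maps), np.bincount(labels))                 # 50 [25 25]
```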

      • KCI-indexed

        Improving the Performance of Image-Classification AI with Limited Military Data: Applying Grad-CAM-Based Semi-Supervised Learning

        정자훈(Ja-Hoon Jeong),김용기(Yong-Gi Kim),나성중(Seong-Jung Na),류준열(Jun-yeol Ryu) 한국산학기술학회 2023 한국산학기술학회논문지 Vol.24 No.9

        AI models embedded in autonomous unmanned systems, such as unmanned ground vehicles and unmanned aerial vehicles, detect and classify enemy personnel and weapon systems acquired through their sensors. Accurately classifying weapon systems is crucial to operations such as employing fires and obstacles, and improving the performance of these AI models requires training data on enemy personnel and weapon systems. However, image data on enemy weapon systems are hard to obtain in peacetime, and in the early stage of a war the model must classify weapon systems whose appearance differs from what it was trained on because of camouflage, changes in attached armament, and other factors. In such cases the model's classification performance must be improved by training on the limited enemy weapon-system data secured early on. In this study, Grad-CAM is used to analyze the image regions an image classification model has learned, and a semi-supervised learning approach is proposed that adds noise aligned with the regions of interest for classification, improving classification performance even when data on enemy weapon systems are scarce. With this approach, the classification performance of VGG-16 and MobileNetV2 improved even when trained on relatively small amounts of data. We expect this approach to help maintain operational capability under limited military data in the future.
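
        The core augmentation idea, adding noise only inside the Grad-CAM region of interest, can be sketched as follows. The heatmap is assumed to be precomputed for each training image (for instance with a Grad-CAM helper such as the one sketched earlier), and the threshold and noise scale are illustrative values rather than the paper's settings.

```python
import torch

def noise_on_cam_region(image, heatmap, threshold=0.5, sigma=0.1):
    """Add Gaussian noise only where the Grad-CAM heatmap exceeds the threshold.

    image:   (3, H, W) tensor in [0, 1]
    heatmap: (H, W) tensor in [0, 1]
    """
    mask = (heatmap >= threshold).float().unsqueeze(0)   # (1, H, W), broadcast over channels
    noisy = image + sigma * torch.randn_like(image) * mask
    return noisy.clamp(0.0, 1.0)

if __name__ == "__main__":
    img = torch.rand(3, 224, 224)            # stand-in training image
    cam = torch.rand(224, 224)               # stand-in Grad-CAM heatmap
    aug = noise_on_cam_region(img, cam)
    print(aug.shape, (aug != img).float().mean().item())   # fraction of perturbed pixels
```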

      • KCI-indexed

        A Study on Fabric Defect Detection Using Deep Learning

        남은수,이충권,최윤성 (사)한국스마트미디어학회 2022 스마트미디어저널 Vol.11 No.11

        Identifying defects in textiles is a key procedure in quality control. This study developed models that detect defects by analyzing images of fabrics. The models used were the deep learning-based VGGNet and ResNet, and their defect-detection performance was compared: the VGGNet model reached an accuracy of 0.859 and the ResNet model 0.893, so ResNet was the more accurate of the two. In addition, to locate the regions that the deep learning model recognized as defects in the fabric images, the region of attention of the model was derived with the Grad-CAM algorithm, an eXplainable Artificial Intelligence (XAI) technique. The regions the model identified as defects were confirmed to be genuinely defective on visual inspection. By applying deep learning-based artificial intelligence to defect detection in the textile industry, the results of this study are expected to reduce the time and cost incurred in the fabric production process.
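
        A comparison of this kind amounts to replacing the classification head of each torchvision backbone with a two-class (normal/defect) layer and scoring both models on the same held-out loader. A minimal sketch is shown below; the random tensors stand in for the fabric test set, and the training loop and the reported 0.859/0.893 accuracies are not reproduced.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def build_models(num_classes=2):
    """VGG16 and ResNet50 with their heads replaced for normal/defect output."""
    vgg = models.vgg16(weights=None)
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)
    resnet = models.resnet50(weights=None)
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
    return {"VGG16": vgg, "ResNet50": resnet}

@torch.no_grad()
def accuracy(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

if __name__ == "__main__":
    # Tiny random stand-in for a held-out fabric test set.
    test_loader = DataLoader(TensorDataset(torch.rand(8, 3, 224, 224),
                                           torch.randint(0, 2, (8,))), batch_size=4)
    for name, net in build_models().items():
        print(name, accuracy(net, test_loader))
```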

      • KCI-indexed

        A Study on Deep Learning-Based Classification and Visualization of Tomato Leaf Diseases and Pests

        채종욱,신영학 한국지능시스템학회 2022 한국지능시스템학회논문지 Vol.32 No.2

        Crops are damaged mainly by natural disasters and by diseases and pests; diseases and pests occur more frequently than natural disasters and spread easily, so they must be prevented and, when an outbreak does occur, suppressed immediately at its onset. In this study, representative tomato leaf diseases and pests are classified with a deep learning model, and Grad-CAM is used to visualize which part of the image the model focused on when making its decision, giving the user a basis for the classification. A web application was also developed that reports the presence and type of disease or pest from a photograph of a tomato leaf and visualizes the basis for the judgment. To classify efficiently with a small amount of training data, various deep learning classification models and training methods were compared and analyzed. As a result, the EfficientNetB0 model achieved a test classification accuracy of 99.16% with an inference time of 0.032 s.
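
        The web application described above reduces to an upload endpoint that runs the classifier and returns the predicted class (plus, in the paper, a Grad-CAM overlay, omitted here for brevity). A minimal Flask sketch with a torchvision EfficientNet-B0 stand-in follows; the route name, label set, and preprocessing are assumptions, and the model weights are untrained.

```python
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)
CLASSES = ["healthy", "early_blight", "late_blight"]   # illustrative label set only

model = models.efficientnet_b0(weights=None)           # untrained stand-in classifier
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, len(CLASSES))
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    with torch.no_grad():
        probs = torch.softmax(model(preprocess(img).unsqueeze(0)), dim=1)[0]
    top = int(probs.argmax())
    return jsonify({"label": CLASSES[top], "confidence": float(probs[top])})

if __name__ == "__main__":
    app.run(port=5000)
```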

      • KCI-indexed

        A Study on Diagnosing Indirect-Inspection Signals of Buried Pipe Coating Flaws Using a Deep Learning Algorithm

        조상진,오영진,신수용 한국압력기기공학회 2023 한국압력기기공학회 논문집 Vol.19 No.2

        In this study, a deep learning algorithm was used to diagnose electric potential signals obtained through CIPS and DCVG, indirect inspection methods used to confirm the soundness of buried pipes. The algorithm consists of a CNN (Convolutional Neural Network) model that diagnoses the potential signal and Grad-CAM (Gradient-weighted Class Activation Mapping), which indicates the predicted flaw location. The CNN classifies input data as normal or abnormal according to the presence or absence of a coating flaw in the buried pipe, and for abnormal data Grad-CAM generates a heat map that visualizes the predicted flaw region. CIPS/DCVG signals and the piping layout obtained from a 3D finite element model were used as training data for the CNN. The trained CNN classified normal and abnormal data with 93% accuracy, and Grad-CAM predicted flaw locations with an average error of 2 m. These results confirm that the electric potential signals of buried pipes can be diagnosed with a CNN-based deep learning algorithm.
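
        The diagnosis CNN is described only at a high level, so the following is a minimal 1-D convolutional classifier over a potential-versus-distance trace. The two input channels (CIPS and DCVG), layer sizes, trace length, and two-class output are assumptions, and the finite-element training data are not reproduced.

```python
import torch
import torch.nn as nn

class PotentialSignalCNN(nn.Module):
    """Toy 1-D CNN: CIPS/DCVG potential trace -> normal/abnormal logits."""

    def __init__(self, in_channels=2, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):                  # x: (batch, 2, length), CIPS and DCVG channels
        return self.fc(self.net(x).flatten(1))

if __name__ == "__main__":
    traces = torch.rand(4, 2, 512)         # four synthetic traces, 512 points along the pipe
    print(PotentialSignalCNN()(traces).shape)   # torch.Size([4, 2])
```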
