RISS (Academic Research Information Service)

      • KCI-indexed

        A VGG16-Based AI Model for Weapon System State Classification of Armored and Mechanized Units

        류준열, 김태완, 정자훈 한국산학기술학회 2022 한국산학기술학회논문지 Vol.23 No.12

        In a future battlefield environment where MUM-T (manned-unmanned teaming) is established, unmanned systems are expected to be deployed before manned systems, mainly in areas with a high risk of human casualties. State information about enemy weapon systems is critical when performing combat missions. After preparatory fires, damage assessments, such as whether an enemy weapon system can still perform combat missions, are conducted based on information collected by a robotic combat vehicle (RCV). Because the damage assessment and countermeasures must be determined as quickly as possible, classifying weapon system states with AI models is an important problem. This study proposes a military classifier (MC) model, a convolutional neural network built on VGG16, that can effectively classify the state of enemy weapon systems. Weapon system state data of armored and mechanized units obtained from the Russia-Ukraine war were analyzed. A performance comparison confirmed that the proposed MC model improves state-classification performance even when trained on fewer images than the existing VGG16 model.

      • KCI-indexed

        Classification of Carious and Sound Teeth Using the VGG-16 Deep Learning Algorithm

        변민지(Min-ji Byon), 전은주(Eun-joo Jun), 김지수(Ji-soo Kim), 황재준(Jae-joon Hwang), 정승화(Seung-hwa Jeong) 대한예방치과·구강보건학회 2021 大韓口腔保健學會誌 Vol.45 No.4

        Objectives: Diagnosis of dental caries is based on the dentist's observation and subjective judgment; therefore, a reliable and objective approach for diagnosing caries is required. Intraoral camera images combined with deep learning technology can be a useful tool to diagnose caries. This study aimed to evaluate the accuracy of the VGG-16 convolutional neural network (CNN) model in detecting dental caries in intraoral camera images. Methods: Images were obtained from the Internet using keywords linked to teeth and dental caries. The 670 images obtained were categorized by an investigator as either sound (404 sound teeth) or carious (266 carious teeth) and used in this study. The training and test datasets were split in a 7:3 ratio, and four-fold cross-validation was performed. Keras, a TensorFlow-based Python package, was used to train and validate the CNN model. Accuracy, kappa value, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were calculated for the test datasets. Results: The accuracy of the VGG-16 deep learning model for the four randomly sampled datasets was between 0.77 and 0.81, with 0.81 being the highest. The kappa value was 0.51-0.60, indicating moderate agreement. Positive predictive values were 0.77-0.82 and negative predictive values were 0.80-0.85. Sensitivity, specificity, and AUC were 0.66-0.74, 0.81-0.88, and 0.88-0.91, respectively. Conclusions: The VGG-16 CNN model showed good discriminatory performance in detecting dental caries in intraoral camera images. The deep learning model can be beneficial in monitoring dental caries in the population.
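All of the evaluation metrics reported in the abstract above (accuracy, kappa, sensitivity, specificity, positive and negative predictive value) can be derived from a binary confusion matrix. A minimal pure-Python sketch, using illustrative counts rather than the study's actual data:

```python
# Derive standard binary classification metrics from a confusion matrix.
# The counts passed in below are illustrative placeholders, not results
# from the paper.

def binary_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # recall on the positive (caries) class
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    p_e = p_yes + p_no
    kappa = (accuracy - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "kappa": kappa,
            "sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv}

m = binary_metrics(tp=56, fp=18, fn=24, tn=103)
print({k: round(v, 2) for k, v in m.items()})
```

With these illustrative counts the kappa lands in the 0.5-0.6 "moderate agreement" band the abstract describes; in practice one would compute the matrix from model predictions on the held-out test fold.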

      • KCI-indexed

        An Efficient Disease Diagnosis Model for Untrained Crops Using VGG16

        정석봉, 윤협상 한국시뮬레이션학회 2020 한국시뮬레이션학회 논문지 Vol.29 No.4

        Early detection and classification of crop diseases play a significant role in helping farmers reduce disease spread and increase agricultural productivity. Recently, many researchers have used deep learning techniques such as convolutional neural network (CNN) classifiers for crop disease inspection with datasets of crop leaf images (e.g., the PlantVillage dataset). These studies report over 90% classification accuracy for crop diseases, but they can detect only the diseases the model was trained on. This paper proposes an efficient disease inspection CNN model for new crops not used in the pre-trained model. First, we present a benchmark crop disease classifier (CDC) for the crops in the PlantVillage dataset using VGG16. Then we build a modified crop disease classifier (mCDC) to inspect diseases of untrained crops. The performance evaluation results show that the proposed model outperforms the benchmark classifier.

      • KCI-indexed

        A Comparative Performance Analysis of CNN-Based Algorithm Models for Discriminating Short-Circuit Marks and Molten Marks

        박형균, 방준호, 김준호, 소병문, 송제호, 박광묵 국제차세대융합기술학회 2023 차세대융합기술학회논문지 Vol.7 No.4

        In this paper, the performance of four CNN-based classification algorithms, Inception v3, GoogLeNet, VGG16, and ResNet50, was compared and analyzed to discriminate the primary and secondary short-circuit marks (arc beads) and molten marks generated in electrical fires, and the algorithm most suitable for this task was selected. The training data consisted of about 2,000 microscope photographs each of primary short-circuit marks, secondary short-circuit marks, and molten marks on HIV wire samples. The validation accuracies were 97.80% for Inception v3, 96.11% for GoogLeNet, 93.27% for VGG16, and 96.54% for ResNet50, with Inception v3 showing the highest validation accuracy.

      • KCI-indexed

        Classification of Short Circuit Marks in Electric Fire Case with Transfer Learning and Fine-Tuning the Convolutional Neural Network Models

        Batool Shazia, Bang Junho 대한전기학회 2023 Journal of Electrical Engineering & Technology Vol.18 No.6

        Short-circuit marks are among the most essential pieces of evidence for detecting electric fires; their traces can be found before and after the fire, as the short circuit occurs. There are different kinds of short-circuit marks, for instance grounded, primary, and secondary molten marks. Primary short-circuit marks appear before the electric fire occurs, and secondary short-circuit marks appear after it. Identifying and classifying them is crucial, and time-consuming steps and procedures are needed for that purpose. In this study, we used five convolutional neural network models, VGG16, VGG19, Xception, InceptionV3, and ResNet50, to classify short-circuit mark image data. In our experiments on this dataset, the best result among the five models came from VGG16, which performed well without overfitting when we trained on the electric fire short-circuit image data using data augmentation, transfer learning, and fine-tuning. The validation accuracy of the VGG16 model at 50 epochs was 92.7%, with a validation loss of 0.2.
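The freeze-and-fine-tune idea used in the study above (keep the pretrained convolutional backbone fixed, train only a new classification head) can be illustrated without any deep learning framework. The sketch below is a toy stand-in under stated assumptions: a fixed random projection plays the role of the frozen VGG16 backbone, a single logistic unit trained by gradient descent plays the role of the new head, and the data and dimensions are entirely hypothetical.

```python
# Toy sketch of transfer learning: only the head is trained, the
# "backbone" (here a fixed random projection) is frozen throughout.
import math
import random

random.seed(0)
DIM, FEAT = 8, 4

# Frozen "backbone": stands in for the pretrained convolutional layers.
W_frozen = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(FEAT)]

def extract(x):  # never updated during training
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_frozen]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy labeled dataset: the label depends on the sum of the inputs.
xs = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(200)]
data = [(x, 1 if sum(x) > 0 else 0) for x in xs]

# Trainable head: one logistic unit on top of the frozen features.
w_head = [0.0] * FEAT
b_head = 0.0

def loss():  # mean cross-entropy of the head on the frozen features
    eps = 1e-12
    total = 0.0
    for x, y in data:
        p = sigmoid(sum(w * f for w, f in zip(w_head, extract(x))) + b_head)
        total -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return total / len(data)

before = loss()
lr = 0.5
for _ in range(200):  # plain gradient descent on the head parameters only
    gw, gb = [0.0] * FEAT, 0.0
    for x, y in data:
        f = extract(x)
        err = sigmoid(sum(w * fi for w, fi in zip(w_head, f)) + b_head) - y
        gw = [g + err * fi for g, fi in zip(gw, f)]
        gb += err
    w_head = [w - lr * g / len(data) for w, g in zip(w_head, gw)]
    b_head -= lr * gb / len(data)
after = loss()
print(round(before, 3), round(after, 3))
```

In a real pipeline the same structure appears as freezing the VGG16 convolutional layers and training a small dense head, optionally unfreezing the top layers afterwards for fine-tuning at a lower learning rate.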

      • Optimized Deep Learning Techniques for Disease Detection in Rice Crop using Merged Datasets

        Muhammad Junaid, Sohail Jabbar, Muhammad Munwar Iqbal, Saqib Majeed, Mubarak Albathan, Qaisar Abbas, Ayyaz Hussain International Journal of Computer Science and Network Security 2023 Vol.23 No.3

        Rice is an important food crop for much of the world's population, and it is largely cultivated in Pakistan. It not only fulfills food demand in the country but also contributes to the wealth of Pakistan. However, its production can be affected by climate change. Irregularities in the climate can cause several diseases such as brown spot, bacterial blight, tungro, and leaf blast. Detection of these diseases is necessary for suitable treatment. These diseases can be effectively detected using deep learning techniques such as convolutional neural networks. Because the dataset is small, transfer learning models such as VGG16 can detect the diseases effectively. In this paper, the VGG16, Inception, and Xception models are used; they achieved 99.22%, 88.48%, and 93.92% validation accuracy, respectively, when the epoch value was set to 10. The models were also evaluated using accuracy, recall, precision, and the confusion matrix.

      • A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

        Amal Alshahrani, Jenan Mustafa, Manar Almatrafi, Layan Albaqami, Raneem Aljabri, Shahad Almuntashri International Journal of Computer Science and Network Security 2024 Vol.24 No.5

        Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills, until the person loses the ability to function in society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease through medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification: CNNs can recognize patterns and objects in images, which makes them well suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods, You Only Look Once (YOLO), a CNN-based object recognition algorithm, and VGG16 (Visual Geometry Group), a deep convolutional neural network used primarily for image classification, instead of using a plain CNN as in previous research. The results showed different levels of accuracy for the various versions of YOLO and for the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached 84% overall accuracy at 100 epochs. YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% accuracy in training after 25 epochs but only 78% accuracy in testing. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.

      • KCI-indexed

        A Study on Similar Design Detection Using a Deep Learning-Based Image Feature Extraction Model

        이병우(Byoung Woo Lee), 이우창(Woo Chang Lee), 채승완(Seung Wan Chae), 김동현(Dong Hyun Kim), 이충권(Choong Kwon Lee) 한국스마트미디어학회 2020 스마트미디어저널 Vol.9 No.4

        Design is a key factor that determines the competitiveness of products in the textile and fashion industry. Measuring the similarity of a proposed design is very important for preventing unauthorized copying and confirming originality. In this study, a deep learning technique was used to quantify features from images of textile designs, and similarity was measured using Spearman correlation coefficients. To verify that similar samples were actually detected, 300 images were randomly rotated and color-changed, and the Top-3 and Top-5 results, in order of similarity value, were checked for the presence of the rotated or color-changed samples. As a result, the VGG-16 model recorded significantly higher performance than AlexNet. The VGG-16 model performed best on rotated images at 64% and 73.67% in Top-3 and Top-5, respectively, and on color-changed images at 86.33% and 90% in Top-3 and Top-5, respectively.
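The retrieval step described above, comparing feature vectors with Spearman's rank correlation and returning the Top-k most similar designs, can be sketched in pure Python. The random vectors below are stand-ins for real VGG-16 features, and all names (`design_7`, `top_k`, dimensions) are illustrative, not from the paper.

```python
# Rank-correlation-based similarity retrieval over feature vectors.
import random

random.seed(42)

def ranks(values):
    # 1-based ranks with average ranks for ties (standard for Spearman).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    # Pearson correlation computed on the ranks of a and b.
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

def top_k(query, gallery, k=3):
    # Return the names of the k gallery entries most similar to the query.
    scored = [(spearman(query, feat), name) for name, feat in gallery]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

# Toy gallery of 50 "designs"; the query is a near-copy of design_7.
gallery = [(f"design_{i}", [random.random() for _ in range(16)]) for i in range(50)]
query = [v + random.gauss(0, 0.01) for v in gallery[7][1]]
print(top_k(query, gallery, k=3))
```

Because Spearman correlation depends only on the ordering of feature values, a lightly perturbed copy of a design keeps nearly the same ranks and surfaces at the top of the list, which mirrors the rotated/color-changed detection test in the study.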
