RISS Academic Research Information Service

      • KCI-indexed

        A Video Analysis System for Detecting Per-Object Actions and Facial Expressions Using Hierarchical-Clustering-Based Re-ID

        Sang-Hyun Lee, Seong-Hun Yang, Seung-Jin Oh, Jinbeom Kang, Korea Intelligence Information Systems Society, 2022, Journal of Intelligence and Information Systems Vol.28 No.1

        Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the demand for analysis and utilization. Because many industries lack the skilled manpower to analyze video, machine learning and artificial intelligence are actively used to assist. In this situation, demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and Re-ID has also grown rapidly. However, object detection and tracking suffers from conditions that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Consequently, action and emotion detection models built on object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed system uses single-linkage hierarchical-clustering-based Re-ID together with processing methods that maximize hardware throughput. It is more accurate than re-identification using simple metrics, achieves near-real-time processing, and prevents tracking failures caused by object departure and re-appearance, occlusion, and similar conditions. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently.
        The re-identification model extracts a feature vector from the bounding box of each object image detected by the tracking model in every frame, then applies single-linkage hierarchical clustering over the feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object that re-appears after leaving the scene or being occluded can be re-tracked, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue that reduce RAM requirements while maximizing GPU memory throughput, as well as an IoF (Intersection over Face) algorithm that links facial emotions recognized through AWS Rekognition to object tracking information. The academic significance of this study is that the two-stage re-identification model achieves real-time performance, even in the high-cost setting of simultaneous action and facial emotion detection, without sacrificing the accuracy that is lost when simple metrics are used to reach real time. The practical implication is that industrial fields which require action and facial emotion detection but are hampered by tracking failures can analyze videos effectively with the proposed model. Its high re-tracking accuracy and processing performance suit fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value.
        In the future, to measure object tracking performance more precisely, experiments should be conducted using the MOT Challenge dataset, which is used by many international conferences. We will invest
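The single-linkage re-identification step described in the abstract can be sketched with off-the-shelf hierarchical clustering. This is a minimal illustration, not the authors' implementation: the feature vectors, the distance threshold, and the function name are assumptions; a real system would use appearance embeddings from the Re-ID network.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def reid_by_single_linkage(features, distance_threshold):
    """Group appearance feature vectors (one per detection, possibly from
    different frames) so detections of the same identity share a cluster.
    `distance_threshold` is an assumed tuning knob, not a value from the paper."""
    Z = linkage(features, method="single", metric="euclidean")
    # Cut the dendrogram at the threshold to get flat identity labels.
    return fcluster(Z, t=distance_threshold, criterion="distance")

# Toy data: two well-separated identities, three detections each.
feats = np.array([
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # identity A across frames
    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],   # identity B across frames
])
labels = reid_by_single_linkage(feats, distance_threshold=1.0)
```

Because single linkage merges clusters through their closest pair, a track that is briefly lost and re-appears with a similar feature vector is pulled back into the same identity cluster.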

      • KCI-indexed (Excellent)

        A Robust Track Management Technique Based on Multi-Object Tracking for Improving Object Detection Recognition Rates

        김민기, 이동석, 최병인, The Institute of Electronics and Information Engineers, 2023, Journal of the Institute of Electronics and Information Engineers Vol.60 No.12

        Recently, deep learning-based object detection and multi-object tracking used in autonomous driving have been widely studied. The two technologies form a single sequence, and we exploit this to propose robust track management based on multi-object tracking that compensates for the limitations of object detection from a single image. This paper proposes the following methods: a Hungarian-algorithm cost matrix across classes using class information, a track management technique that induces data association through template matching, and a track update that improves detection reliability by using the additional class and score information. Through these three forms of robust track management, the method compensates for missed detections and misclassifications that occur in object detection, and yields stable results in both object detection and multi-object tracking. As a result, compared with a model using object detection alone, mAP increased by about 4% and precision by about 10%. The proposed method was tested in an actual autonomous driving environment and recorded large performance improvements on objects with little training data or of small size. It also enabled stable object detection and tracking under sudden image shaking, such as from speed bumps.
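A class-aware Hungarian cost matrix of the kind the abstract describes might look like the following sketch. The (1 − IoU) base cost, the penalty value, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def class_aware_assignment(track_boxes, track_classes, det_boxes, det_classes,
                           class_penalty=10.0):
    """Build a cost matrix from (1 - IoU) plus a penalty when the track's
    class differs from the detection's class, then solve it with the
    Hungarian algorithm. `class_penalty` is an illustrative value."""
    def iou(a, b):
        # Boxes are (x1, y1, x2, y2).
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for i, (tb, tc) in enumerate(zip(track_boxes, track_classes)):
        for j, (db, dc) in enumerate(zip(det_boxes, det_classes)):
            cost[i, j] = 1.0 - iou(tb, db)
            if tc != dc:                      # discourage cross-class matches
                cost[i, j] += class_penalty
    rows, cols = linear_sum_assignment(cost)  # Hungarian solve
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: detections arrive in swapped order; matching recovers the pairing.
pairs = class_aware_assignment(
    [[0, 0, 10, 10], [100, 100, 110, 110]], ["car", "person"],
    [[100, 100, 110, 110], [0, 0, 10, 10]], ["person", "car"])
```

Folding class information into the cost, rather than hard-filtering by class, still allows a match when the detector momentarily misclassifies an object, which is the misclassification robustness the paper targets.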

      • KCI-indexed

        A Study on Self-Improving Inference Performance through Iterative Retraining of False-Positive Data from Deep-Learning Object Detection in Tunnels

        이규범, 신휴성, Korean Tunnelling and Underground Space Association, 2024, Journal of Korean Tunnelling and Underground Space Association Vol.26 No.2

        In the application of deep learning object detection via CCTV in tunnels, a large number of false positive detections occur due to the poor environmental conditions of tunnels, such as low illumination and severe perspective effects. This problem directly impacts the reliability of the tunnel CCTV-based accident detection system, which depends on object detection performance, so false positives must be reduced alongside improving true positives. Based on a deep learning object detection model, this paper proposes a false-positive training method that not only reduces false positives but also improves true-positive detection performance through retraining on false-positive data.
        The method follows these steps: initial training on a training dataset, inference on a validation dataset, correction of false-positive data and dataset composition, then addition to the training dataset and retraining. Experiments were conducted to verify its performance. First, the optimal hyperparameters of the object detection model were determined through preliminary experiments. Then the training image format was determined, and experiments were run sequentially to check long-term performance improvement through retraining on repeated false-detection datasets. In the first experiment, including the background in the inferred image proved more advantageous for detection performance than removing everything but the object. In the second experiment, retraining on false positives accumulated across rounds proved more advantageous for continuous improvement than retraining on each round's false positives independently. After retraining the false-positive data with the method determined in the two experiments, the car object class showed excellent inference performance with an AP of 0.95 or higher from the first retraining onward, improving about 1.06 times over the initial inference by the fifth retraining. The person object class continued to improve as retraining progressed, self-improving by more than 2.3 times over the initial inference by the 18th retraining.
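The accumulating retraining schedule the abstract compares against independent per-round retraining reduces to a simple loop. This is a structural sketch only; the function name and list-of-samples data shapes are toy placeholders, not the paper's pipeline.

```python
def retraining_rounds(initial_train, rounds_of_false_positives):
    """Sketch of the accumulating schedule: at every round, the corrected
    false-positive samples from ALL previous rounds stay in the training set
    before retraining, instead of training on each round's false positives
    in isolation. Returns the training-set size used at each round."""
    train_set = list(initial_train)
    sizes = []
    for fp_batch in rounds_of_false_positives:
        train_set.extend(fp_batch)      # accumulate corrections, never discard
        sizes.append(len(train_set))    # this round's retraining set size
    return sizes

# Two initial samples, then rounds contributing 1 and 2 corrected samples.
sizes = retraining_rounds(["img_a", "img_b"], [["fp_1"], ["fp_2", "fp_3"]])
```

The point of accumulation is that corrections from early rounds keep constraining later models, which matches the paper's finding that cumulative retraining sustains improvement where independent retraining does not.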

      • KCI-indexed

        Object-Based Change Detection Using Various Pixel-Based Change Detection Results and Registration Noise

        정세정, 김태헌, 이원희, 한유경, Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 2019, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography Vol.37 No.6

        Change detection, one of the main applications of multi-temporal satellite images, is an indicator that directly reflects changes in human activity. It divides into pixel-based change detection (PBCD) and object-based change detection (OBCD). Although pixel-based change detection is the traditional, widely used approach thanks to its simple algorithms and relatively easy quantitative analysis, applying it to VHR (Very High Resolution) images causes misdetections and noise, so it is less useful there. In addition, the attitude of the sensor at acquisition and geographic characteristics produce geometric mismatches even after co-registration. This mismatch, called registration noise, reduces accuracy when extracting spatial information from VHR images. In this study, object-based change detection of VHR images was performed with registration noise taken into account. Object-based results were derived that consider various pixel-based change detection methods, applying majority voting within a segmentation image. The proposed method was evaluated against the pixel-based change detection results, and against those results extended to object-based change detection, using reference data, demonstrating its superiority.
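The per-segment majority voting the abstract describes can be sketched as follows. The function name, the strict-majority rule, and the toy arrays are illustrative assumptions, not the authors' code; the inputs are several binary pixel-based change maps plus a segment-label image.

```python
import numpy as np

def object_based_change(change_maps, segments):
    """Fuse several binary pixel-based change maps into an object-based
    result: within each segment, every map casts one vote per pixel, and
    the whole segment is labelled 'changed' when changed votes win a strict
    majority. Tie handling and the threshold are assumptions."""
    votes = np.sum(change_maps, axis=0)            # changed votes per pixel
    n_maps = len(change_maps)
    result = np.zeros_like(segments, dtype=bool)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        changed_votes = votes[mask].sum()
        if changed_votes > mask.sum() * n_maps / 2:  # strict majority of all votes
            result[mask] = True                      # label the entire object
    return result

# Three 2x2 pixel-based maps; segment 0 is the top row, segment 1 the bottom.
maps = np.array([
    [[1, 1], [0, 0]],
    [[1, 0], [0, 0]],
    [[1, 1], [1, 0]],
])
segments = np.array([[0, 0], [1, 1]])
changed = object_based_change(maps, segments)
```

Voting at the segment level is what suppresses the isolated pixel noise and registration-noise slivers that plague pixel-based change detection in VHR imagery.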

      • Performance Indicator Survey for Object Detection

        Inho Park, Sungho Kim, Institute of Control, Robotics and Systems (ICROS), 2020, ICROS International Conference Proceedings Vol.2020 No.10

        In recent image processing, beyond object recognition, deep learning has been applied to tasks such as object detection and semantic segmentation, and classic technique-based detection is also widely used. These technologies appear in systems such as factory automation, automatic target recognition (ATR), and autonomous driving. Object detection covers categories such as people, vehicles, and animals, and operates across situations with different object sizes, image sizes, distance ranges from near to remote, and changing environments. For situation analysis, indicators must be chosen appropriately: without evaluation indicators, a new detection algorithm cannot be validated, so knowing the performance indicators of object detection is important. This paper therefore surveys the performance indicators used in object detection. Its main purpose is to help researchers find the proper indicator for their task and to compare detection results across different algorithms exactly and effectively.
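Most indicators such a survey covers build on IoU, precision, and recall. A minimal reference implementation, with box format and function names chosen here for illustration:

```python
def iou(box_a, box_b):
    """Intersection over Union, the base indicator most detection metrics
    (precision, recall, AP) build on. Boxes are (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(num_tp, num_pred, num_gt):
    """Precision = TP / predictions; recall = TP / ground-truth objects.
    A prediction usually counts as a TP when its IoU with a ground-truth
    box exceeds a threshold such as 0.5."""
    return num_tp / num_pred, num_tp / num_gt
```

A detector that made 10 predictions, 8 of them correct, against 16 ground-truth objects would score precision 0.8 and recall 0.5; the IoU threshold used to decide "correct" is itself one of the choices such indicator surveys discuss.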

      • KCI-indexed

        Object-Based Building Change Detection in Multi-Sensor Images Using the Azimuth and Elevation Angles of the Sun and the Platform

        정세정, 박주언, 이원희, 한유경, Korean Society of Remote Sensing, 2020, Korean Journal of Remote Sensing Vol.36 No.5

        Building change monitoring based on building detection is one of the most important tasks in monitoring artificial structures with high-resolution multi-temporal images such as those from CAS500-1 and 2, which are scheduled for launch. However, the varied shapes and sizes of buildings on the surface of the Earth, as well as the shadows or trees around them, make accurate building detection difficult, and relief displacement due to the azimuth and elevation angles of the platform causes many false change detections. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection; the platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was applied to high-resolution imagery, shadow objects were classified via shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and area was computed for each object to detect building candidates. Final buildings were then detected using the direction and distance between the center of each candidate object and its shadow according to the Sun's azimuth angle. Three methods were proposed for change detection between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the platform's elevation angle, and consideration of direction between objects according to the platform's azimuth angle. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV).
        Experimental results showed F1-scores of 0.488 and 0.696 for building detection using feature information alone in the KOMPSAT-3 and UAV images respectively, versus 0.876 and 0.867 when shadows were considered, indicating higher accuracy for the shadow-aware method. Among the three proposed change detection methods, considering direction between objects according to the platform's azimuth angle scored highest, with an F1-score of 0.891.
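The geometric core of the shadow check, pairing a building candidate with a shadow lying in the anti-solar direction, can be sketched as below. The coordinate convention (x east, y north, azimuth clockwise from north) and the function name are assumptions for illustration, not the paper's formulation.

```python
import math

def expected_shadow_offset(sun_azimuth_deg, shadow_length):
    """Offset from a building's centroid toward where its shadow should fall.
    Shadows are cast opposite to the Sun's azimuth (measured clockwise from
    north); `shadow_length` would come from object height and Sun elevation.
    Assumes map-like coordinates: x to the east, y to the north."""
    az = math.radians((sun_azimuth_deg + 180.0) % 360.0)  # anti-solar direction
    dx = shadow_length * math.sin(az)   # east component
    dy = shadow_length * math.cos(az)   # north component
    return dx, dy
```

A candidate object would be confirmed as a building when a shadow object sits near its centroid plus this offset; with the Sun due south (azimuth 180°), the shadow is expected due north of the building.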

      • KCI-indexed

        Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

        Harry Wooseuk Ryu, Joo Ho Tai, Korean Society of Veterinary Science, 2022, Journal of Veterinary Science Vol.23 No.1

        Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects in farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. With the use of a high-performance artificial intelligence (AI)-based 3D depth camera, the aim is to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking; consistent identification after crossing over serves as evidence of tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was also processed with the aforementioned AI models to test the possibility of 3D object detection. Results: Both AI models successfully produced a 3D bounding box, an identification number, and a distance from the camera for each individual human. The accurate detection model outperformed the fast detection model in 3D object tracking and showed potential for application to pigs as livestock. Conclusions: Preparing a custom dataset to train the AI models on an appropriate farm is required for 3D object detection to support pig tracking at an ideal level. This would allow farms to transition smoothly from traditional methods to ASF-preventing precision livestock farming.

      • Detection of Mobile Object in Workspace Area

        Shah, H.N.M., Rashid, M.Z.A., Abdollah, M.F., Kamarudin, M.N., Kamis, Z., Khamis, A., Science and Engineering Research Support Society, 2016, International Journal of Signal Processing, Image Vol.9 No.4

        This paper introduces the detection of mobile objects in an intelligent space robot application. There are three major algorithms: object detection, object classification, and object tracking. The core of mobile object detection comprises two processes, offline and online. The offline process consists of training the model using different input sources depending on the application; the online process consists of the matching process and output of the object poses. The main idea of object classification is to sort objects into two categories by dimension: mobile and non-mobile. By splitting the work into offline and online processes, the whole pipeline becomes faster, because only object classification and object tracking run in real time. The positions of mobile objects are represented by the symbol X in different colors for easy comparison with non-mobile objects. A unique advantage noted in this paper is that mobile object detection uses only image processing generated by the algorithms, without additional sensors such as sonar or IR.

      • KCI-indexed

        A Polar-View-Based Object Recognition Algorithm Using Low-Channel 3D LiDAR

        권순섭, 박태형, Institute of Control, Robotics and Systems (ICROS), 2019, Journal of Institute of Control, Robotics and Systems Vol.25 No.1

        For an autonomous vehicle to move, object detection is required to recognize the surrounding environment. The sensors used for detection are mainly cameras and lidar; however, cameras are strongly affected by the surrounding environment, which makes reliable detection difficult, so lidar-based detection is needed. Lidar-based detection typically relies on a high-resolution, high-channel lidar, but high-channel lidar is expensive and hard to commercialize. To address this, studies on detection with low-channel lidar are underway. In this paper, we present an algorithm to find objects (vehicles or pedestrians) using three low-channel 3D lidar systems. First, we convert the lidar data into a polar view. The converted polar view is fed into YOLOv3 to predict the class and region of interest (ROI) of each object. Within the predicted ROI, K-means is used to separate the object from the background, and the remaining object area is converted back into 3D space to find the object's location.
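The polar-view conversion step described above can be sketched as a projection of 3D points into a 2D image indexed by horizontal angle and a vertical bin. The bin counts, the assumed vertical field of view, and the choice to store range per cell are illustrative assumptions; a real low-channel lidar would use its channel count for the vertical bins.

```python
import numpy as np

def to_polar_view(points, h_bins=360, v_bins=16):
    """Project 3D lidar points (x, y, z) into a 2D polar-view image:
    columns index horizontal angle, rows index a vertical (elevation) bin,
    and each cell stores the measured range. Assumed FoV: -15..+15 degrees."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                        # range per point
    az = (np.degrees(np.arctan2(y, x)) + 360.0) % 360.0    # azimuth, 0..360
    el = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))    # elevation angle
    img = np.zeros((v_bins, h_bins), dtype=np.float32)
    col = np.minimum((az / 360.0 * h_bins).astype(int), h_bins - 1)
    el_min, el_max = -15.0, 15.0                           # assumed vertical FoV
    row = np.clip(((el - el_min) / (el_max - el_min) * v_bins).astype(int),
                  0, v_bins - 1)
    img[row, col] = r                                      # last hit wins per cell
    return img

# Two points straight ahead (east) and to the left (north), both at range 1.
cloud = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
view = to_polar_view(cloud)
```

The resulting 2D image is what gets handed to an image detector such as YOLOv3, which is the trick that lets a 2D network consume sparse 3D lidar data.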

      • Comparison of Object Detection Performance in Heavy-Rain Environments Using a Residual Network for Rain-Streak Removal and Fast R-CNN

        Ju-Chan Kim, Chang-Hwan Son, Korean Institute of Information Technology, 2019, Proceedings of KIIT Conference Vol.2019 No.11

        Recently, as autonomous driving technology has developed on the basis of artificial intelligence, research on object detection has been actively conducted. However, computer-vision-based object detection is significantly affected by bad weather such as heavy snow, heavy rain, and fog. To improve detection performance on images collected in bad weather, it is necessary to investigate whether rain-streak patterns negatively affect the feature extraction required for detection. Existing studies treat object detection and rain removal independently, so they have not established the correlation between rain patterns and detection performance in heavy-rain environments. This paper therefore investigates the effect of rain patterns on object detection performance. To achieve this, a residual network for rain removal and a Fast R-CNN network for object detection are trained separately, and detection performance before and after rain removal is compared quantitatively. Experiments confirmed that average precision improved by 11.1 after removing the rain streaks from rain images.
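The before/after comparison in this abstract is quantified with average precision. A minimal sketch of AP as the area under the precision envelope; the function name and the assumption that inputs are sorted by increasing recall are illustrative, and real evaluations often use interpolated variants (e.g. 11-point).

```python
def average_precision(recalls, precisions):
    """Area under the precision envelope of a PR curve. `recalls` must be
    sorted ascending; each precision is replaced by the maximum precision
    at any equal-or-higher recall (the envelope) before integrating."""
    ap, prev_r = 0.0, 0.0
    for i, r in enumerate(recalls):
        p = max(precisions[i:])   # precision envelope to the right
        ap += (r - prev_r) * p    # rectangle under this recall step
        prev_r = r
    return ap

# A detector that reaches recall 0.5 at precision 1.0, then recall 1.0 at 0.5.
ap = average_precision([0.5, 1.0], [1.0, 0.5])
```

Computing AP on the same test set before and after rain removal, as the paper does, isolates the effect of the rain streaks on the detector rather than on any single confidence threshold.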
