RISS Academic Research Information Service

      • YOLOTransfer-DF: A deep and transfer learning framework for aerial collision detection and avoidance in virtual environments

        난 라오 이? Department of Aerospace Information Systems Engineering, Konkuk University, 2022, Master's thesis (Korea)


        Unmanned aerial vehicles (UAVs) equipped with intelligence and vision techniques such as object detection and collision avoidance can be used in a wide range of real-world scenarios. The primary contribution of this thesis is a deep and transfer learning framework that enables an autonomous vehicle to detect aerial collisions, avoid them, and complete its mission in a single pipeline, accomplishing visual tasks such as object identification and avoidance in complicated scenarios by combining three methodologies: object detection, object tracking, and an avoidance algorithm. The difficulty for autonomous vehicles is that they must detect and avoid collisions along their routes. It is not simply a matter of detecting obstacles on the vehicle's route; it is also a matter of anticipating where they will be at any particular moment. This draws interest at the decision-making level, which includes vision-based detection and navigation that senses and acts autonomously. Object detection is the process of finding a specific object in an image or a single video frame; object tracking, on the other hand, is trained to follow an object's path. To maximize performance, the autonomous vehicle must be able to forecast situation awareness and react or plan accordingly. The proposed framework incorporates real-time detection, tracking, and avoidance, using an object detection model (YOLOv3), transfer learning, and a tracker (DeepSort) to extract information from the environment. Training a deep neural network model requires a large image dataset; by incorporating transfer learning into the algorithm, improved performance can be obtained with a minimal amount of data. The idea of transfer learning is essentially to apply what a network has learned in one task to a network that performs a new task.
        Reporting crucial situations, such as safety-distance alerts when obstacles are too close to the vehicle, falls under the object detection and tracking category. When dynamic or static obstacles are recognized on the route, the framework employs a trajectory planner in the Frenet coordinate system for obstacle avoidance. The proposed framework was tested in a simulation that mimicked a real-world environment, including buildings, planes, trees, roads, and lakes, and rendered the autonomous UAV in the scenario. Taking autonomous vehicles out into the real world and letting them fly around is the best way to train them, but this comes at a steep price and considerable risk. This research tests how a deep learning model can be used to simulate autonomous UAV training before deployment in the actual world. Three primary steps were explored with the proposed framework (YOLOTransfer-DF): detection, tracking, and trajectory planning; the avoidance stage is where decisions are made based on detection and tracking accuracy. The framework's results demonstrate the benefits of combining YOLOv3 and transfer learning to reduce data volume and training time while improving accuracy. The purpose of this study is to develop autonomy technology for intelligent unmanned aerial vehicles (UAVs), focusing on the use of deep learning and transfer learning for situation awareness and prediction. Intelligent UAVs are now actively used for both civilian and military purposes, and as their range of application expands into diverse and complex domains, the core of UAV autonomy lies in having the vehicle autonomously perform most of the tasks previously handled by a pilot in order to cope with complex mission demands; this requires technology that allows the UAV to perceive (awareness) and judge (decision) situations on its own using various sensor data. One way for a UAV to accurately perceive its surroundings is to detect and track nearby objects using visual sensors such as cameras. Object detection identifies a target object from an image or a single video frame, while object tracking determines whether a detected target is the same as a previously recognized one and continues to detect it as it moves. Detection and tracking are not merely a matter of sensing obstacles on the route; they are directly tied to predicting, from the measured data, where an object will move or what action it will take in the near future.
        In this thesis, an artificial intelligence (AI) model based on deep learning and transfer learning was created for object detection and tracking, and the model was verified in a simulation environment built on AirSim. The YOLOv3 algorithm was used for object detection, and transfer learning together with the DeepSort algorithm was used for object tracking. In addition, by integrating a trajectory-planning algorithm based on the Frenet coordinate system, this study presents a new unified framework (YOLO Transfer Framework) that combines real-time object detection, tracking, and avoidance. Simulation results show that, compared with the baseline YOLOv3 algorithm, the proposed YOLO Transfer framework reduced the amount of training data and the training time required while improving the accuracy of object detection and tracking.
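The detect-track-avoid loop described in this abstract can be illustrated with a minimal sketch. The snippet below is not the thesis code: the detector and tracker are stand-ins for YOLOv3 and DeepSort, and the IoU-based greedy association, function names, and threshold are illustrative assumptions only.

```python
# Hypothetical sketch of the association step of a detect-track pipeline:
# match existing tracks to new detections by bounding-box overlap (IoU),
# as a stand-in for the YOLOv3 + DeepSort stack described in the abstract.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy IoU association: map each track id to its best unused
    detection index, skipping pairs below the overlap threshold."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, thresh
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

For example, `associate({'t1': (0, 0, 10, 10)}, [(1, 1, 11, 11)])` matches track `t1` to detection 0, since the boxes overlap well above the threshold; production trackers like DeepSort add appearance features and motion prediction on top of this kind of association.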

      • Object-Spatial Layout-Route-Based Hybrid Map and Its Application to Mobile Robot Navigation

        박순용 Graduate School, Yonsei University, 2010, Doctoral thesis (Korea)


        This thesis proposes a novel object-spatial layout-route-based hybrid map (OSR map) that integrates objects and spatial layouts into a topological map. By representing objects as high-level features in a map, a robot can deal more effectively with different contexts such as dynamic environments, human-robot interaction, and semantic information. However, using objects alone for map representation has inherent problems. For example, it is difficult to represent empty spaces for robot navigation, and objects are limited to readily recognizable things. One way to overcome these problems is to develop a hybrid map that includes objects and the spatial layout of a local space. In contrast to conventional object-containing maps, the proposed map gives the robot complete information for autonomous navigation. The topological structure reflects route knowledge, so the robot can obtain an optimal route that it should pass through to get from a start local space to a goal local space. The spatial layouts of local spaces and the locations of objects are defined with respect to the reference frame of each local space. Therefore, the robot can estimate its pose from the objects and spatial layouts, and it can safely move between adjacent local spaces by using the spatial layout information. On the basis of the OSR map, we also suggest navigational functions including mapping, exploration, scan matching, localization, path planning, path tracking, and obstacle avoidance, as follows. The OSR map integrates a global topological map and local hybrid maps under the assumption that an indoor environment can be represented by a set of local spaces. The topological map describes spatial relationships between the local spaces and includes all the spaces, where the local spaces form the nodes of the topological map.
        The local hybrid map provides detailed information about local spaces in terms of both the objects found in those spaces and their spatial layouts. It is composed of an object location map and a spatial layout map. To define a topological representation of the environment, we propose a node extraction method that applies a simple image-processing algorithm to range scan data. We also suggest a strategy for topological exploration using the concept of a concave node. Two scan matching approaches are proposed: probabilistic scan matching (PSM) and spectral scan matching (SSM). The PSM method is based on a Markov Chain Monte Carlo (MCMC) approach, in which random samples are iteratively generated until they converge into a spot from their initial positions. The SSM method uses pairwise geometric relationships between scan points to find geometrically consistent correspondences; it can estimate the robot pose without knowing an initial alignment between two scans. Global localization and local pose tracking approaches are presented. For global localization, we propose two kinds of methods: vision-based and range scan-based. The vision-based global localization is performed in three stages: perception, coarse pose estimation, and fine pose estimation. The perception stage carries out object recognition and selects candidate nodes where the robot is expected to be located. In the coarse pose estimation stage, relative poses with respect to all candidate nodes are computed by point cloud fitting. Finally, the fine pose estimation stage uses the PSM method to determine the correct node among the candidates and compute a fine pose relative to it. The range scan-based global localization applies the framework of the SSM method; unlike the Monte Carlo localization method, it requires neither an initial pose nor wandering motion.
        The local localization method is based on the particle filter, in which the PSM method is used to estimate a more accurate deterministic pose at every time step. A route-based navigational strategy using the OSR map is proposed, comprising path planning, path tracking, and obstacle avoidance. Path planning along a route is performed in global and local schemes. Global path planning finds an optimal route from the start node to the goal node; the optimal route contains the minimum number of nodes the robot should pass through to reach the goal in the shortest distance. Local path planning creates a smooth, destination-directed path between two neighboring nodes on the optimal route, generated from a local grid map that integrates the spatial layout maps of the two neighboring nodes. While the robot tracks the local path, it continuously performs local localization. The local grid map can be updated with the latest laser range scan for local localization, so the local path can also be updated with the modified local grid map, and obstacle avoidance is then naturally carried out by following the updated local path. The methods proposed in this thesis have been implemented on our mobile robot and tested intensively in real-robot experiments as well as in simulation.
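The global path planning step described above — finding the route with the fewest nodes between a start and goal local space in the topological map — can be sketched as a breadth-first search over the node graph. This is a generic illustration, not the thesis implementation; the graph layout and node names are hypothetical.

```python
from collections import deque

def shortest_route(graph, start, goal):
    """BFS over a topological map (dict: node -> list of adjacent nodes).
    Returns the node sequence with the fewest local spaces the robot must
    pass through to reach the goal, or None if the goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

On a small corridor graph such as `{'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}`, the search returns a three-node route from `A` to `D`; a real system would then hand each consecutive node pair to the local planner for grid-map-based path generation.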

      • Pedestrians Detection Based on Laser Scanner and Camera Sensors For Unmanned Ground Vehicle

        체나소파니스 Graduate School, Konkuk University, 2013, Master's thesis (Korea)


        In this thesis, we propose a real-time system for rapidly detecting pedestrians and measuring the distance between the vehicle and pedestrians, based on laser scanner and camera data fusion, for an unmanned ground vehicle (UGV) system. The fast reflection of the laser's beams provides accurate range measurements for computing the distance to any object appearing in front of the vehicle, while the image captured by the camera is used to classify object shapes. First, the laser scanner point data are clustered into segments, each of which indicates a candidate pedestrian position. Then, because the laser and image frames differ, the Inverse Perspective Mapping (IPM) algorithm is used to transform the image plane into the real-world plane and match it with the laser frame to form regions of interest (ROIs) on the image. Once the ROIs are defined, pedestrians are extracted by applying a Support Vector Machine (SVM) classifier to Histogram of Oriented Gradients (HOG) features. The proposed system was tested on a standard x86 machine and shows good real-time performance.
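The first stage of this pipeline — clustering laser scanner points into candidate segments — can be sketched as a simple gap-threshold pass over consecutive range points. The data layout and the 0.5 m threshold below are illustrative assumptions, not parameters from the thesis.

```python
# Hypothetical sketch of laser-scan segmentation: an ordered sweep of
# (x, y) points is split into a new segment whenever the Euclidean gap
# between consecutive points exceeds a threshold.

def cluster_scan(points, gap=0.5):
    """Split an ordered list of (x, y) laser points into segments;
    each segment is a candidate object (e.g. pedestrian) position."""
    segments, current, prev = [], [], None
    for p in points:
        if prev is not None:
            dx, dy = p[0] - prev[0], p[1] - prev[1]
            if (dx * dx + dy * dy) ** 0.5 > gap:  # gap too large: close segment
                segments.append(current)
                current = []
        current.append(p)
        prev = p
    if current:
        segments.append(current)
    return segments
```

Each resulting segment would then be projected into the image via IPM to form an ROI for the HOG+SVM classifier; for instance, five points with a 4.8 m gap in the middle split into two segments.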

      • A study on a V2I-based reinforced collision avoidance algorithm for safe autonomous driving in intersection blind spots

        SANGYONG HAN Graduate School of Automotive Engineering, Kookmin University, 2023, Doctoral thesis (Korea)


        This study was conducted to prevent accidents that occur during autonomous driving in intersection blind spots and to secure the safety of the vehicle, the driver, and nearby pedestrians. Two types of sensor fusion using a camera and LiDAR reliably and accurately detect the objects relevant to vehicle operation. For blind spots — sections where neither the driver nor the driving vehicle's sensors can perceive surrounding objects on the route — collision situations are predicted, and a connected car is realized through infrastructure- and V2I-based transmission and reception of vehicle-stop or hazard signals, preventing accidents in advance. A Reinforced Anti-Collision Safety Algorithm (RACSA) that can guarantee this in real time is proposed, and to verify it, an accident-prevention signal generation system was developed and tested through real-vehicle experiments with both autonomous and non-autonomous vehicles. In this study, LiDAR and a monocular camera were installed in the infrastructure at blind spots where an autonomous vehicle might fail to perceive an obstacle, so that unexpected situations are recognized in advance and driving-related signals obtained through RACSA are transmitted to nearby vehicles, implementing a V2I-based collision avoidance system. The target objects to detect were selected as vehicles, two-wheeled vehicles, and pedestrians. In RACSA, the multiple object detection and tracking process uses YOLOv4, a deep-learning-based multiple object detector, for camera-based object detection and classification, and IMM-UKF-JPDAF, a LiDAR-based object tracker, to track the motion of the detected target objects. The monocular camera recognizes objects entering the intersection; through sensor fusion with the LiDAR sensor, the risk level is classified considering each target object's characteristic parameters and a priority for potential accidents is assigned. The expected braking distance and braking time for each risk situation are then computed from the predicted collision time and braking distance, a warning or stop signal is generated and transmitted to the target vehicle, and the control matching each signal prevents the accident. To prevent traffic accidents occurring in blind spots outside the perception range during intersection autonomous driving, this thesis proposes the V2I-based Reinforced Anti-Collision Safety Algorithm and verifies, through real-vehicle driving experiments in scenario-specific blind spots, that safe driving can be secured by applying the algorithm to both autonomous and non-autonomous vehicles. This study was aimed at preventing accidents occurring in the blind spots of intersections while driving autonomously and securing the safety of vehicles, passengers, and nearby pedestrians. To reliably and accurately detect surrounding objects while driving, two types of sensor fusion using a camera and a laser scanner were used. Scenarios in which the driver and the sensors installed on the driving vehicle do not recognize objects on the path were considered.
        A reinforced anti-collision safety algorithm (RACSA) that can prevent accidents in advance, in real time, through infrastructure- and I2V-based signal transmission and reception in blind spots was proposed and verified through actual vehicle experiments. Existing advanced driver-assistance systems and autonomous vehicles detect nearby obstacles using a fusion of different sensor types, combining cameras, radar, and light detection and ranging (LiDAR) as object recognition sensors. However, in unexpected driving environments or in situations in which obstacles appear suddenly, accidents may still occur during autonomous driving. To address this, in this study, LiDAR and a monocular camera were installed in the infrastructure of a vehicle-driving environment prone to unexpected situations. Driving-related signals were acquired through the RACSA by recognizing the hazard in advance, and an I2V-based collision avoidance system was implemented to prevent accidents by issuing the signals to nearby vehicles. The target objects to be detected were vehicles (passenger cars, trucks, and buses), two-wheeled vehicles, and pedestrians. In the RACSA, the multiple object detection and tracking process is implemented as follows. First, YOLOv4, a deep learning-based multiple object detector, detects and classifies objects in the camera image. Once objects are detected, the IMM-UKF-JPDAF, an object tracker using LiDAR, tracks their trajectories. The camera detects objects entering the intersection; the severity of each object is then classified by considering its characteristic parameters through sensor fusion with the LiDAR sensor, and this classification is used to assign a priority for potential collision.
        The expected braking distance and braking time are then calculated for each dangerous situation from the predicted collision time and braking distance. A warning or stop signal is generated and transmitted to the target vehicle, which is controlled in accordance with the signal to prevent the accident. This paper proposes an I2V-based RACSA to prevent traffic accidents that occur in blind spots outside the perception range during autonomous driving. For verification, the RACSA was applied to both autonomous and non-autonomous vehicles through actual driving experiments in scenario-specific blind spots to confirm that safe driving can be secured.
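The braking-distance check described above follows textbook stopping-distance kinematics: reaction-time travel plus v²/(2a). The sketch below is a generic illustration, not the thesis algorithm; the deceleration value, reaction time, safety margin, and signal names are all illustrative assumptions.

```python
# Hypothetical sketch of a braking-distance-based signal decision,
# in the spirit of the warning/stop logic described in the abstract.

def braking_distance(speed_mps, decel=6.0, reaction_s=0.5):
    """Stopping distance in metres: reaction-time travel plus v^2 / (2a).
    decel ~6 m/s^2 approximates hard braking on dry asphalt (assumption)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel)

def signal_for(obstacle_dist_m, speed_mps, margin=1.5):
    """Return 'stop' if the obstacle lies inside the margin-scaled
    stopping distance, 'warn' if inside twice that, else 'clear'."""
    d = braking_distance(speed_mps) * margin
    if obstacle_dist_m <= d:
        return "stop"
    if obstacle_dist_m <= 2 * d:
        return "warn"
    return "clear"
```

For a vehicle at 10 m/s these assumptions give a stopping distance of about 13.3 m, so an obstacle 10 m ahead triggers a stop signal while one 100 m ahead is clear; an infrastructure unit would transmit the resulting signal to the target vehicle over I2V.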

      • Object detection and distance measurement algorithm for precast concrete collision avoidance in crane lifting process

        Yong, Yik Pong Sungkyunkwan University, 2022, Master's thesis (Korea)


        In the construction industry, carrying heavy loads from one place to another is inevitable. Depending on on-site conditions and requirements, various types of construction lifting equipment are used; this is even more apparent in high-rise building projects. Moreover, as the market for the Off-Site Construction (OSC) method has begun to grow, cranes have become the main supporting machine throughout the on-site life cycle, with precast concrete (PC) becoming the default material for building structure and architecture. With the increased use of cranes on construction sites, construction safety and the effectiveness of load collision prevention systems are attracting more attention from the government and other project stakeholders, since the unpredictable movement of on-site workers and the presence of detection blind spots are the main causes of accidents during lifting operations. Therefore, an improved load collision avoidance system is necessary to reduce collision accidents and their fatal consequences. This study applies deep learning-based object detection and sensor integration in a complementary way to achieve this purpose: the two work together to detect the presence of workers and other obstacles with minimal blind spots. Object detection with an Internet Protocol (IP) camera was used to detect approaching workers, and ultrasonic sensors were used to measure the distance to surrounding obstacles. The integrated system was developed, and its function was verified through usability testing on a real crawler crane. On construction sites, various types of lifting equipment are used, and lifting and moving heavy objects is a frequent part of the work. As the Off-Site Construction (OSC) market grows, the use of precast concrete as the basic material for structural members is gradually increasing, and cranes serve as the primary heavy equipment throughout the life cycle of a construction project.
        As crane use on construction sites increases, the importance of crane collision-prevention systems for on-site safety management is growing. In particular, blind spots on construction sites are a major cause of accidents during lifting operations, so improvements in load collision avoidance are needed to reduce collision accidents and the resulting damage. This study proposes a collision avoidance system for tower-crane loads based on the fusion of computer vision and ultrasonic sensor technology. Deep-learning-based object recognition and ultrasonic sensing are applied together to minimize blind spots and detect the approach of workers and other obstacles in real time. IP (Internet Protocol) camera-based object recognition detects approaching workers, and ultrasonic sensors measure the distance to surrounding obstacles. The vision-based object recognition and ultrasonic distance measurement performance were verified through on-site application of the crane load collision avoidance system. This work is expected to help prevent safety accidents among field workers and create a safer working environment.
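The ultrasonic distance measurement used here relies on echo round-trip time: the pulse travels out to the obstacle and back, so the distance is the speed of sound times half the round trip. The sketch below illustrates that conversion plus a simple proximity flag; the alert radius is an illustrative assumption, not a parameter from the thesis.

```python
# Hypothetical sketch of ultrasonic ranging: convert an echo round-trip
# time to a distance and flag obstacles inside an alert radius.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_distance(round_trip_s):
    """Distance in metres: the pulse covers the gap twice (out and back),
    so halve the round-trip travel."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def proximity_alert(round_trip_s, alert_radius_m=2.0):
    """True when the echo places an obstacle inside the alert radius."""
    return echo_to_distance(round_trip_s) <= alert_radius_m
```

A 10 ms round trip corresponds to about 1.7 m, inside a 2 m alert radius; in the proposed system such an alert would complement the IP-camera object detection to cover its blind spots.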
