3D Omni-Directional Vision SLAM Using a Fisheye Lens and a Laser Scanner
최윤원(Yun Won Choi),최정원(Jeong Won Choi),이석규(Suk Gyu Lee) 제어로봇시스템학회 2015 제어·로봇·시스템학회 논문지 Vol.21 No.7
This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is bulky and computing depth over omni-directional images is slow. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculate fusion points from the plane coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omni-directional image sensor, which captures the surrounding view in a single shot. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
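The abstract does not give the camera model or the calibration; as a minimal sketch, assume an equidistant fisheye projection (r = f·θ) for the downward-facing camera, hypothetical calibration constants, and a laser that scans near floor level. The first function projects a laser point into the fisheye image to find the matching obstacle outline; the second combines the laser range with the outline's elevation angle into a 3D fusion point.

```python
import numpy as np

# Hypothetical calibration constants (not from the paper).
F_PIX = 320.0          # fisheye focal length in pixels (equidistant model)
CX, CY = 640.0, 480.0  # image center of the downward-facing fisheye camera
CAM_HEIGHT = 0.45      # camera height above the floor [m]

def laser_to_fisheye(x, y):
    """Project a 2D laser point (x, y) in the camera-centered floor plane
    into the fisheye image, assuming the equidistant model r = f * theta."""
    rng = np.hypot(x, y)                 # horizontal range to the obstacle
    theta = np.arctan2(rng, CAM_HEIGHT)  # angle from the downward optical axis
    azimuth = np.arctan2(y, x)           # direction of the obstacle
    r = F_PIX * theta                    # radial distance in the image
    return CX + r * np.cos(azimuth), CY + r * np.sin(azimuth)

def fusion_point(x, y, outline_r_px):
    """Combine a laser point with the obstacle-outline radius found along
    the same azimuth (pixels from the image center) into a 3D point; the
    outline radius encodes the elevation angle of the obstacle's top."""
    rng = np.hypot(x, y)
    theta_top = outline_r_px / F_PIX          # invert the equidistant model
    z = CAM_HEIGHT - rng / np.tan(theta_top)  # top height above the floor
    return np.array([x, y, z])
```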
Real-Time Omni-Directional Obstacle Detection Using Background Subtraction on Fisheye Images
최윤원(Yun-Won Choi),권기구(Kee-Koo Kwon),김종효(Jong-Hyo Kim),나경진(Kyung-Jin Na),이석규(Suk-Gyu Lee) 제어로봇시스템학회 2015 제어·로봇·시스템학회 논문지 Vol.21 No.8
This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection on a desktop computer, the embedded system installed in a vehicle cannot easily run a complicated algorithm because of its inherently low processing performance; in general, it requires a system-dependent algorithm. In this paper, the location of an object is estimated from its motion, obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
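A minimal OpenCV sketch of the frame-differencing idea the abstract describes: difference the previous and current frames, threshold, and return bounding boxes of the moving blobs. The thresholds, area limit, and camera index are illustrative, not the paper's parameters.

```python
import cv2

def detect_moving_objects(prev_gray, curr_gray, min_area=200):
    """Locate moving objects by differencing the previous and current
    frames, thresholding, and extracting blob bounding boxes."""
    diff = cv2.absdiff(prev_gray, curr_gray)     # per-pixel frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture(0)                        # e.g. the fisheye camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detect_moving_objects(prev_gray, gray):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
```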
A Localization Algorithm Using Ego-Motion Based on Fisheye Warping Images
최윤원(Yun Won Choi),최경식(Kyung Sik Choi),최정원(Jeong Won Choi),이석규(Suk Gyu Lee) 제어로봇시스템학회 2014 제어·로봇·시스템학회 논문지 Vol.20 No.1
This paper proposes a novel localization algorithm based on ego-motion which used Lucas-Kanade Optical Flow and warping image obtained through fish-eye lenses mounted on the robots. The omnidirectional image sensor is a desirable sensor for real-time view-based recognition of a robot because the all information around the robot can be obtained simultaneously. The preprocessing (distortion correction, image merge, etc.) of the omnidirectional image which obtained by camera using reflect in mirror or by connection of multiple camera images is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around a robot through fish-eye lenses which are mounted in the bottom direction. Second, we extract motion vectors using Lucas-Kanade Optical Flow in preprocessed image. Third, we estimate the robot position and angle using ego-motion method which used direction of vector and vanishing point obtained by RANSAC. We confirmed the reliability of localization algorithm using ego-motion based on fisheye warping image through comparison between results (position and angle) of the experiment obtained using the proposed algorithm and results of the experiment measured from Global Vision Localization System.
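A sketch of the motion-vector step only, assuming OpenCV's pyramidal Lucas-Kanade tracker; the warping, RANSAC vanishing-point step, and the full ego-motion recovery are omitted, and averaging the flow is a crude stand-in for them, not the paper's method.

```python
import cv2
import numpy as np

def flow_vectors(prev_gray, curr_gray):
    """Track sparse corner features between two frames with pyramidal
    Lucas-Kanade optical flow and return matched (start, end) points."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)

def mean_flow(prev_gray, curr_gray):
    """Average translation of the tracked features, a simplification of
    the paper's RANSAC vanishing-point ego-motion estimate."""
    p0, p1 = flow_vectors(prev_gray, curr_gray)
    if len(p0) == 0:
        return np.zeros(2)
    return (p1 - p0).mean(axis=0)
```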
Development of a Smart Crosswalk System Using Edge-Camera-Based Local Optimization
최윤원(Yun Won Choi),이준구(Joon-Goo Lee),백장운(Jang Woon Baek),임길택(Kil-Taek Lim) 한국통신학회 2021 한국통신학회 학술대회논문집 Vol.2021 No.6
Recently, demand for smart crosswalks has been growing to improve pedestrian traffic safety, and pilot projects in several regions are examining the feasibility of smart crosswalk systems. Unlike earlier systems that relied on various sensors, vision-based smart crosswalk systems are now being deployed, combined with image-based vehicle and pedestrian recognition, but they suffer from low recognition performance and high cost. This paper proposes a smart crosswalk system consisting of an edge camera running deep-learning-based real-time object detection and a local optimization technique that tunes detection performance to the installation site. Using the objects detected and tracked by the edge camera, the system delivers pedestrian information to drivers through a signboard and road hazard information to pedestrians through sound and pavement lights. Experiments verified the improved object detection performance of the edge camera with local optimization and the feasibility of the smart crosswalk system built on it.
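The abstract does not detail the local optimization technique; as one plausible reading, this sketch assumes it includes a site-specific crosswalk region of interest and per-class confidence thresholds tuned at installation time. The detection tuples stand in for the output of any real-time detector (e.g., a YOLO-family model on the edge camera); all names and values here are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical site configuration produced by "local optimization":
# a crosswalk polygon and per-class thresholds tuned for one site.
SITE_CONFIG = {
    "roi": np.array([[100, 400], [1180, 400], [1280, 720], [0, 720]],
                    dtype=np.int32),
    "thresholds": {"person": 0.35, "car": 0.50},
}

def in_roi(box, roi):
    """True if the bottom-center of a detection box lies inside the
    site's crosswalk polygon."""
    x1, y1, x2, y2 = box
    pt = (float((x1 + x2) / 2), float(y2))
    return cv2.pointPolygonTest(roi.reshape(-1, 1, 2), pt, False) >= 0

def filter_detections(detections, cfg=SITE_CONFIG):
    """Keep detections that clear the per-class threshold for this site
    and fall inside its region of interest."""
    kept = []
    for cls, score, box in detections:  # (label, confidence, (x1,y1,x2,y2))
        thr = cfg["thresholds"].get(cls)
        if thr is not None and score >= thr and in_roi(box, cfg["roi"]):
            kept.append((cls, score, box))
    return kept
```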
Formation Control of Identical Multi-Robots Based on Object Tracking Using Fisheye Images
최윤원(Yun Won Choi),김종욱(Jong Uk Kim),최정원(Jeong Won Choi),이석규(Suk Gyu Lee) 제어로봇시스템학회 2013 제어·로봇·시스템학회 논문지 Vol.19 No.6
This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omnidirectional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multi-robots often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view, to enlarge the field of view. In addition, to make up for the lack of image information about the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which carry 360° of image information, without merging images. The system controls the robot formation based on the moving directions and velocities of the robots, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multi-robots through both simulation and experiment.
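A sketch of the robot-region estimation step. The paper uses SURF, which is patented and only available in opencv-contrib "nonfree" builds (cv2.xfeatures2d.SURF_create), so ORB is used here as a freely available stand-in; the template image and match threshold are assumptions.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_robot(template_gray, fisheye_gray, min_matches=10):
    """Find a teammate robot in a fisheye frame by matching keypoints
    against a robot template and returning the matched points' centroid,
    a rough center of the robot's region for optical-flow tracking."""
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_f, des_f = orb.detectAndCompute(fisheye_gray, None)
    if des_t is None or des_f is None:
        return None
    matches = matcher.match(des_t, des_f)
    if len(matches) < min_matches:
        return None                      # not enough evidence of the robot
    pts = np.float32([kp_f[m.trainIdx].pt for m in matches])
    return pts.mean(axis=0)
```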
Omni-Directional Vision SLAM Using a Motion Estimation Method Based on Fisheye Images
최윤원(Yun Won Choi),최정원(Jeong Won Choi),대염염(Yanyan Dai),이석규(Suk Gyu Lee) 제어로봇시스템학회 2014 제어·로봇·시스템학회 논문지 Vol.20 No.8
This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow motion detection in images obtained through fisheye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fisheye lens or a mirror, but they make real-time image processing feasible for mobile robots because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which yields faster processing. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around the robot through fisheye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, we estimate the robot position with an extended Kalman filter based on the obstacle positions obtained by LKOF and create a map. We confirmed the reliability of the mapping algorithm using motion estimation based on fisheye images by comparing maps obtained using the proposed algorithm with real maps.
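A minimal sketch of the final EKF step: one predict/update cycle for a planar robot pose given odometry and a range-bearing measurement to a known obstacle position (standing in for an LKOF-detected obstacle). The motion model, noise covariances, and time step are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ekf_step(mu, P, u, z, landmark, dt=0.1):
    """One EKF cycle for a robot pose mu = [x, y, theta] with covariance P.
    u = (v, w) odometry; z = (range, bearing) to a known obstacle."""
    Q = np.diag([0.02, 0.02, 0.01])  # motion noise (illustrative)
    R = np.diag([0.1, 0.05])         # measurement noise (illustrative)
    x, y, th = mu
    v, w = u
    # Predict with a unicycle motion model and its Jacobian.
    mu = np.array([x + v * dt * np.cos(th),
                   y + v * dt * np.sin(th),
                   th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    P = F @ P @ F.T + Q
    # Update with the expected range/bearing to the obstacle.
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [dy / q, -dx / q, -1]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the bearing
    mu = mu + K @ innov
    P = (np.eye(3) - K @ H) @ P
    return mu, P
```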
Collision Avoidance Using Omni-Directional Vision SLAM Based on Fisheye Images
최윤원(Yun Won Choi),최정원(Jeong Won Choi),임성규(Sung Gyu Im),이석규(Suk Gyu Lee) 제어로봇시스템학회 2016 제어·로봇·시스템학회 논문지 Vol.22 No.3
This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of the robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robots. Conventional methods derive avoidance paths by constructing an artificial force field around the obstacles found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on robot modeling and curved motion paths, and recent research has improved these algorithms by optimizing them for actual robots. However, comparatively little work has used omni-directional vision SLAM, which acquires the surrounding information at once. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map obtained through omni-directional vision SLAM using fisheye images, and then returns to its original path. In particular, it avoids obstacles with varying speed and direction using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through comparison between positions obtained by the proposed algorithm and the real positions recorded while avoiding the obstacles.
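The abstract does not give the control law, so this is only a minimal sketch of the idea of adjusting a velocity command with an acceleration component derived from a (possibly moving) obstacle's relative position and velocity; the repulsive form, gains, and influence radius are all hypothetical.

```python
import numpy as np

def avoidance_velocity(v_des, robot_pos, obstacle_pos, obstacle_vel,
                       k_rep=0.8, influence=1.5, dt=0.1):
    """Adjust the desired velocity v_des with an acceleration component
    that pushes away from the obstacle and grows as the robot nears it.
    Gains and the influence radius are illustrative, not from the paper."""
    rel = robot_pos - obstacle_pos
    dist = np.linalg.norm(rel)
    if dist > influence or dist < 1e-6:
        return v_des                     # obstacle too far to matter
    away = rel / dist                    # unit vector away from the obstacle
    # Repulsive term plus a component that anticipates the obstacle's motion.
    a_rep = k_rep * (1.0 / dist - 1.0 / influence) * away - 0.5 * obstacle_vel
    return v_des + a_rep * dt
```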