Seokyeong Baek, Jeongin Park, Junyong Yun, Woosuk Sung. The Institute of Electronics and Information Engineers (IEIE), 2019 IEIE Conference, Vol.2019 No.11
In this work, we incorporate lidar point clouds into a camera image in order to map obstacles onto the lane. To this end, two processes are required: spatial fusion and temporal fusion. Spatial fusion transforms the lidar coordinates into the camera coordinates so that the lidar point clouds can be projected onto the camera image. This allows us to combine the lidar point clouds representing the obstacles with the camera image indicating the lane markings. Temporal fusion down-samples the lidar point clouds to achieve time synchronization with the camera image. Through these two processes, the lidar point clouds are fused with the camera image, generating a virtual lane that enables our mobile robot to avoid obstacles while following the lane.
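The spatial fusion described above amounts to a rigid transform from the lidar frame to the camera frame followed by a pinhole projection. A minimal sketch, assuming a pinhole camera model with illustrative extrinsics (R, t) and intrinsics K (none of these values come from the paper):

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Transform lidar points into the camera frame and project
    them onto the image plane with a pinhole model."""
    # Rigid transform: lidar frame -> camera frame
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera (positive depth)
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Identity extrinsics and simple intrinsics (placeholder values)
R = np.eye(3)
t = np.zeros(3)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0],   # straight ahead, 2 m away
                [1.0, 0.0, 2.0]])  # 1 m to the right
uv = project_lidar_to_image(pts, R, t, K)  # -> [[320, 240], [570, 240]]
```

Once the points land in pixel coordinates, they can simply be overlaid on the lane-marking image for the fusion step.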
Seokyeong Baek, Jeongin Park, Junyong Yun, Woosuk Sung. The Korean Society of Automotive Engineers (KSAE), 2019 KSAE Spring Conference, Vol.2019 No.5
In this work, we apply an object detection algorithm to a TurtleBot3, a small, low-cost robot that is well known as a ROS (robot operating system) standard platform. The goal is real-time detection of traffic signs on the AutoRace track, where the TurtleBot3 attempts to complete missions while self-driving. To help guarantee real-time operation, YOLO (you only look once) is selected as a unified object detection algorithm. To run this deep-neural-network-based algorithm in real time, an NVIDIA Jetson TX2 is employed as the single-board computer on the TurtleBot3. While training the YOLO network, we suffered much lower recall in distinguishing between left-turn and right-turn signs than for the other classes. It turns out that this stems from horizontal flipping, one of YOLO's built-in data augmentation methods. By disabling horizontal flipping, we finally obtain recall above 90% across all 12 classes of traffic signs at a speed of 10 fps. The achieved performance is good enough for the TurtleBot3 to complete its missions in real time.
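The failure mode identified above is easy to reproduce in miniature: a horizontal flip turns a left-pointing arrow into a right-pointing one, but the augmentation leaves the class label unchanged, so mirrored sign classes poison each other's training data. A toy illustration (the array and labels are invented for the example, not taken from the paper):

```python
import numpy as np

def hflip(image, label):
    """Horizontal-flip augmentation. Note the label is NOT changed,
    which is exactly why a flipped left-turn sign becomes a
    right-turn arrow still labelled 'left-turn'."""
    return image[:, ::-1], label

# Toy 1x3 "arrow": a bright pixel on the left means 'left-turn'
left_arrow = np.array([[1, 0, 0]])
flipped, label = hflip(left_arrow, "left-turn")

# The flipped image now looks like a right-turn arrow...
assert (flipped == np.array([[0, 0, 1]])).all()
# ...but still carries the left-turn label, corrupting the training set.
assert label == "left-turn"
```

This is why flipping is harmless for symmetric classes (e.g. a stop sign) but must be disabled, or paired with a label swap, whenever mirrored classes are semantically distinct.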
Jeongin Park, Seokyeong Baek, Junyong Yun, Woosuk Sung. The Institute of Electronics and Information Engineers (IEIE), 2019 IEIE Conference, Vol.2019 No.11
We propose a lane detection pipeline that compensates for the degradation in detection performance caused by image blurring during perspective transformation. The proposed pipeline combines the advantages of two different lane detection pipelines. Before the perspective transform blurs the image, we first detect lanes using their color features. In the top view, we then perform additional image processing, such as applying an ROI (region of interest) and generating virtual lanes. Experimental results show that the proposed pipeline improves detection performance, particularly for distant lanes in dark environments.
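The ordering above is the key idea: color-feature thresholding runs on the original (sharp) image, and the ROI masking runs afterwards on the transformed top view. A minimal sketch of those two stages using plain NumPy (the color bounds and ROI boundary are illustrative placeholders):

```python
import numpy as np

def detect_lane_mask(rgb, lower, upper):
    """Color-feature lane detection: keep pixels whose channel values
    fall inside [lower, upper], applied BEFORE the perspective
    transform blurs distant lane markings."""
    return np.all((rgb >= lower) & (rgb <= upper), axis=-1)

def apply_roi(mask, top_row):
    """On the (already transformed) top view, discard everything
    above the region of interest."""
    out = mask.copy()
    out[:top_row, :] = False
    return out

# Toy 2x2 RGB image: two yellow-ish pixels, two dark pixels
img = np.array([[[255, 255, 0], [10, 10, 10]],
                [[250, 250, 5], [0, 0, 0]]], dtype=np.uint8)
mask = detect_lane_mask(img, np.array([200, 200, 0]), np.array([255, 255, 50]))
roi = apply_roi(mask, 1)  # keep only the bottom row
```

In the real pipeline the perspective transform would sit between these two calls; it is omitted here to keep the sketch dependency-free.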
Validation of an Obstacle Avoidance Algorithm for a Mobile Robot Using the Gazebo Simulator
Jeongin Park, Seokyeong Baek, Junyong Yun, Woosuk Sung. The Korean Society of Automotive Engineers (KSAE), 2019 KSAE Spring Conference, Vol.2019 No.5
This paper deals with the validation of an obstacle avoidance algorithm, the dynamic window approach, in different indoor navigation environments. The environments come from the AutoRace challenge, in which a ROS-enabled TurtleBot3 completes missions while self-driving. Among the six missions, we selected the two requiring obstacle avoidance, called roadworks and tunnel. They differ in that the roadworks feature static obstacles, while the tunnel is filled with obstacles whose number and positions are not fixed in advance. So that the TurtleBot3 can cope with the many different cases that arise while navigating through obstacles, a ROS-compatible robot simulator, Gazebo, is used to validate fine-tuned parameters in the ROS navigation package. By applying Gazebo prior to actual tests, the validation can be done in a time-effective way. This enables the TurtleBot3 to pass through the roadworks and the tunnel, each with 3 obstacles, in 13 and 14 seconds, respectively.
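For readers unfamiliar with the dynamic window approach, its core loop is: sample velocity commands within the robot's dynamic limits, forward-simulate each for a short horizon, discard colliding trajectories, and score the rest. A simplified sketch; the velocity limits match the published TurtleBot3 Burger specifications, but the cost weights, horizon, and clearance threshold are invented for illustration and are exactly the kind of parameters the paper tunes and validates in Gazebo:

```python
import math

def dwa_step(x, y, theta, goal, obstacles,
             v_max=0.22, w_max=2.84, dt=0.1, horizon=10):
    """One step of a simplified dynamic window approach: sample
    (linear, angular) velocity pairs, roll out short trajectories,
    and score them by goal heading, clearance, and speed."""
    best_score, best_cmd = -math.inf, (0.0, 0.0)
    for v in [v_max * i / 4 for i in range(5)]:            # linear samples
        for w in [w_max * (i - 4) / 4 for i in range(9)]:  # angular samples
            px, py, pth = x, y, theta
            clearance = math.inf
            for _ in range(horizon):                       # forward simulate
                pth += w * dt
                px += v * math.cos(pth) * dt
                py += v * math.sin(pth) * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(px - ox, py - oy))
            if clearance < 0.15:                           # would collide: reject
                continue
            heading = -math.hypot(goal[0] - px, goal[1] - py)
            score = heading + 0.1 * min(clearance, 1.0) + 0.05 * v
            if score > best_score:
                best_score, best_cmd = score, (v, w)
    return best_cmd

# Goal straight ahead, one obstacle off to the side: the planner
# should keep moving forward rather than stop.
v, w = dwa_step(0.0, 0.0, 0.0, (1.0, 0.0), [(0.5, 0.5)])
```

Tuning the weights and thresholds against many obstacle configurations is tedious on hardware, which is precisely why the paper validates them in Gazebo first.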
Yongha Lee, Yongmun Kang, Seokyeong Baek, Woosuk Sung. The Korean Society of Automotive Engineers (KSAE), 2022 KSAE Annual Conference and Exhibition, Vol.2022 No.11
We develop an autonomous driving system for environmental-monitoring vehicles. The vehicle is designed to monitor environmental quality while autonomously driving around its operational area, the Pyeong-dong industrial complex in Gwangju. This paper presents two key subsystems that enable the vehicle to achieve its design goals. The perception subsystem is based on four LiDARs placed so as to reduce blind spots all around the vehicle. Deploying multiple LiDARs is all the more important because the box-shaped body of the vehicle enlarges the blind spots. The deployed LiDARs are fused through extrinsic calibration, pointcloud concatenation, and validation. The control, indication, and warning subsystems are required for the vehicle to operate unattended. The warning subsystem is based on a fail-safe function that stops the vehicle when a failure occurs in the autonomous driving system. Preventing unsafe consequences of a system failure is all the more critical for a vehicle with no attendant to respond to take-over requests.
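The extrinsic calibration and pointcloud concatenation mentioned above reduce, in code, to transforming each sensor's points into a common vehicle frame and stacking them. A minimal sketch, assuming each lidar's calibration is given as a rotation R and translation t (the example mounts a rear lidar 1 m behind the origin, rotated 180° about the vertical axis; these poses are invented for illustration):

```python
import numpy as np

def fuse_clouds(clouds, extrinsics):
    """Bring each lidar's pointcloud into the common vehicle frame
    using its calibrated extrinsic (R, t), then concatenate."""
    transformed = [cloud @ R.T + t for cloud, (R, t) in zip(clouds, extrinsics)]
    return np.vstack(transformed)

# Front lidar at the vehicle origin; rear lidar 1 m behind,
# facing backwards (180-degree yaw).
front = np.array([[1.0, 0.0, 0.0]])   # 1 m ahead of the front lidar
rear = np.array([[1.0, 0.0, 0.0]])    # 1 m "ahead" of the rear lidar
R_rear = np.array([[-1.0, 0.0, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.0, 0.0, 1.0]])
t_rear = np.array([-1.0, 0.0, 0.0])
fused = fuse_clouds([front, rear],
                    [(np.eye(3), np.zeros(3)), (R_rear, t_rear)])
# The rear point lands at (-2, 0, 0) in the vehicle frame.
```

A validation pass, as the paper notes, would then check that overlapping regions of the fused cloud agree, catching calibration drift.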
Junyong Yun, Jeongin Park, Seokyeong Baek, Woosuk Sung. The Institute of Electronics and Information Engineers (IEIE), 2019 IEIE Conference, Vol.2019 No.11
In this paper, we propose virtual lane generation schemes that enable our mobile robot to traverse intersections where one of the two lane markings does not exist. The proposed schemes all detect the line on one side and, based on its characteristics, generate a virtual line on the other side. However, the details of how the line is virtualized differ considerably. Three schemes are comparatively validated in our test environment. We demonstrate that the first two schemes are limited to relatively low curvatures, whereas the final scheme yields better virtual lines irrespective of the radius of curvature at the intersection.
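The abstract does not spell out the three schemes, but one natural way to generate a virtual line from a detected one, and the kind of approach that stays valid at any radius of curvature, is to offset each detected point by the lane width along the local normal. A hedged sketch of that idea (the function, its parameters, and the offset direction are assumptions for illustration, not the paper's method):

```python
import numpy as np

def virtual_line(detected, lane_width):
    """Given points of the detected line (ordered along the lane),
    offset each point by lane_width along the local normal to
    synthesize the missing line on the other side."""
    detected = np.asarray(detected, dtype=float)
    # Local tangent via finite differences along the line
    tangent = np.gradient(detected, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    # Rotate the tangent by -90 degrees to get the normal;
    # the sign picks which side the virtual line falls on.
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    return detected + lane_width * normal

# Straight detected line along x = 0; virtual line appears at x = 0.5
pts = [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
virt = virtual_line(pts, 0.5)
```

Because the offset follows the local tangent point by point, the construction bends with the detected line rather than assuming it is straight, which is consistent with the paper's finding that only the final scheme handles high curvatures well.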