A Convenient Projector-Camera Calibration Technique for Structured-Light 3D Reconstruction
Park, Soon-Yong; Park, Go-Gwang; Zhang, Lei. Korea Information Processing Society, 2010, KIPS Transactions Part B, Vol.17 No.3
The structured-light 3D reconstruction technique uses a coded pattern to find correspondences between the camera image and the projector image. To calculate the 3D coordinates of the correspondences, the camera and the projector must first be calibrated, and the calibration results affect the accuracy of the reconstructed 3D shape. Conventional camera-projector calibration techniques commonly require either expensive hardware rigs or complex algorithms, making them inefficient in time and cost. In this paper, we propose an easy yet accurate camera-projector calibration technique. The proposed technique needs no complex hardware or algorithm and can be implemented with image processing alone, so it improves the efficiency of structured-light 3D reconstruction. We present calibration results for two camera-projector systems, and we analyze the precision of the proposed algorithm by measuring the projection errors of the calibrated camera and projector and the 3D reconstruction errors of world reference points.
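The abstract reports projection error as its accuracy measure but gives no implementation details. As a minimal sketch of how the reprojection error of a calibrated pinhole camera or projector can be computed (the function names, matrix shapes, and test values below are my own illustrative assumptions, not the authors' code):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N,3) through a pinhole model with
    intrinsics K (3,3), rotation R (3,3), and translation t (3,)."""
    Xc = X @ R.T + t              # world -> camera/projector coordinates
    x = Xc @ K.T                  # apply intrinsic matrix
    return x[:, :2] / x[:, 2:3]   # perspective divide -> pixel coordinates

def reprojection_error(K, R, t, X, u):
    """Mean Euclidean distance (pixels) between projected points and
    the observed pixel locations u (N,2)."""
    return float(np.mean(np.linalg.norm(project(K, R, t, X) - u, axis=1)))
```

The same two functions apply to the projector by treating it as an inverse camera with its own K, R, and t.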
Simulation of Radiation Imaging via Spatial Scanning of a Pinhole Stereo Vision Sensor
Park, Soon-Yong; Baek, Seung-Hae; Choi, Chang-Won. Korea Institute of Information and Communication Engineering, 2014, Journal of the Korea Institute of Information and Communication Engineering, Vol.18 No.7
There is always much concern about the leakage of radioactive material in the event of the dismantling of, or an unexpected accident at, a nuclear power plant. To remove leaked radioactive material, appropriate dispersion-detection techniques are necessary. However, because direct handling of radioactive material is highly restricted and risky, developing radiation-related techniques requires computer simulation in advance to evaluate feasibility. In this paper, we propose a radiation imaging technique that can acquire 3D dispersion information of radioactive material, and we test it by simulation. Using two virtual 1D radiation sensors, we obtain stereo radiation images and acquire the 3D depth of virtual radiation sources from the stereo disparity. For point-type and plane-type virtual radiation sources, the feasibility of acquiring stereo radiation images and 3D information is simulated.
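The depth recovery the abstract describes rests on standard rectified-stereo triangulation. A minimal sketch of that step (the function name and numbers are illustrative assumptions, not taken from the paper):

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a feature matched at column x_left in the left image and
    x_right in the right image of a rectified stereo pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px          # disparity d, in pixels
    return focal_px * baseline_m / disparity    # depth Z, in meters
```

For example, with a 500 px focal length and a 0.1 m baseline, a 10 px disparity corresponds to a depth of 5 m; larger disparities mean closer sources.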
Park, Soon-Yong; Kim, Shae-Kwang; Lee, Dong-Bok. The Korean Institute of Metals and Materials (formerly the Korean Institute of Metals), 2016, Korean Journal of Metals and Materials, Vol.54 No.6
CaO-added Mg alloys were cast in air, hot-extruded into thin plates, and oxidized in air at high temperatures. During the casting process, the CaO decomposed into Ca in the α-Mg matrix and formed Al2Ca precipitates along grain boundaries. A thin, nonuniform CaO-rich layer formed on the surface during casting and oxidation. Based on this study, an oxidation mechanism of the CaO-added Mg alloys during casting and oxidation is proposed. (Received August 25, 2015)
High-Temperature Oxidation of Automotive Turbocharger Steels Manufactured by Powder Metallurgy and Casting
Park, Soon-Yong; Lee, Dong-Bok. The Corrosion Science Society of Korea, 2015, Corrosion Science and Technology, Vol.14 No.3
Turbocharger steels were manufactured by the powder metallurgical and casting methods. They consisted primarily of a large amount of γ-Fe, a small amount of α-Fe, and fine Nb6C5 precipitates. The casting method was better than the powder metallurgical method because it yielded a sound matrix with few oxides. When the turbocharger steels were oxidized at 900 °C for 50 h, Mn2VO4 and (Mn,Si) oxides formed along grain boundaries, while Mn2O3 and CrMn2O4 formed intragranularly. Fe, Nb, and Ni were depleted in the oxide scale.
Artificial Intelligence: Convenient Viewpoint Calibration of Multiple RGB-D Cameras Using a Spherical Object
Park, Soon-Yong; Choi, Sung-In. Korea Information Processing Society, 2014, KIPS Transactions, Vol.3 No.8
To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the cameras. This paper proposes a convenient view-calibration technique using a spherical object. Conventional view-calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. In this paper, we propose a convenient view-calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely in the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
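The method hinges on locating the sphere center in each camera's depth data. One common way to do that, sketched here under my own assumptions about data layout (the paper does not publish its fitting code), is a linear least-squares sphere fit to the depth points on the sphere's surface:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit to surface samples points (N,3).
    Uses |p|^2 = 2 p.c + (r^2 - |c|^2), which is linear in the unknowns
    (c, r^2 - |c|^2). Returns (center, radius)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, radius
```

Fitting the sphere in each camera's frame yields one 3D point (the center) per camera per pose; collecting these over many poses gives the correspondences needed to solve each camera's external parameters.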
Localization of an Unmanned Mobile Robot Using Ortho-Edge Image Matching between 3D Range Data and a DSM
Park, Soon-Yong; Choi, Sung-In. Korea Information Processing Society, 2012, KIPS Transactions, Vol.1 No.1
This paper presents a new localization technique for a UGV (Unmanned Ground Vehicle) that matches ortho-edge images generated from a DSM (Digital Surface Model), which represents the 3D geometric information of an outdoor navigation environment, and from the 3D range data obtained by a LIDAR (Light Detection and Ranging) sensor mounted on the UGV. Recent UGV localization techniques mostly combine positioning sensors such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), and LIDAR. In particular, ICP (Iterative Closest Point)-based geometric registration techniques have been developed for UGV localization. However, ICP-based geometric registration tends to fail to register the LIDAR range data against the DSM because the sensing directions of the two data sets are too different. In this paper, we introduce and match ortho-edge images between the two sensor data sets, 3D LIDAR and DSM, for UGV localization. Details of the new techniques for generating and matching ortho-edge images between LIDAR and DSM are presented, followed by experimental results from four different navigation paths. The performance of the proposed technique is compared to a conventional ICP-based technique.
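Generating an ortho-edge image starts from a top-down orthographic rasterization of the 3D points, which removes the viewpoint difference between LIDAR and DSM. A rough sketch of that first step (grid layout, cell size, and names are my own assumptions; the paper's edge extraction and matching are not reproduced here):

```python
import numpy as np

def ortho_height_image(points, cell, shape):
    """Rasterize 3D points (N,3) into a top-down height image of the
    given (rows, cols) shape: each cell of size `cell` meters keeps the
    maximum z of the points falling into it (NaN where empty)."""
    img = np.full(shape, np.nan)
    cols = (points[:, 0] / cell).astype(int)
    rows = (points[:, 1] / cell).astype(int)
    ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
    for r, c, z in zip(rows[ok], cols[ok], points[ok, 2]):
        img[r, c] = z if np.isnan(img[r, c]) else max(img[r, c], z)
    return img
```

Once both the LIDAR scan and the DSM are rasterized this way, edges extracted from the two height images live in the same top-down view and can be matched in 2D.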
Real-Time Localization of an Unmanned Ground Vehicle Using Uniform Arc-Length Sampling of an Omnidirectional Range Sensor
Park, Soon-Yong; Choi, Sung-In. The Institute of Electronics Engineers of Korea, 2011, Journal of the IEEK, Part CI (Computer and Information), Vol.48 No.6
We propose an automatic localization technique based on Uniform Arc Length Sampling (UALS) of 360-degree range sensor data. The proposed method samples 3D points from a dense point cloud acquired by the sensor, registers the sampled points to a Digital Surface Model (DSM) in real time, and determines the location of an Unmanned Ground Vehicle (UGV). To reduce the sampling and registration time for a sequence of dense range data, 3D range points are sampled uniformly in terms of ground sample distance (GSD); this reduces the number of 3D points while keeping them uniformly distributed over the range data, and a projection-based registration technique further reduces the search time for 3D correspondences. We verified the proposed method in experiments on two real navigation paths, comparing its registration speed and accuracy against a conventional sampling method while varying the number of sampled points.
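The core idea of UALS, subsampling an ordered 360-degree scan so that kept points are evenly spaced along the ground, can be sketched as follows (a simplified illustration under my own assumptions about the scan layout, not the authors' implementation):

```python
import numpy as np

def uniform_arc_length_sample(points, gsd):
    """Subsample an ordered ring of 3D scan points (N,3) so that a point
    is kept each time the accumulated travel along the (x, y) ground
    plane since the last kept point reaches `gsd` meters."""
    kept = [points[0]]
    travelled = 0.0
    for prev, cur in zip(points[:-1], points[1:]):
        travelled += np.linalg.norm(cur[:2] - prev[:2])
        if travelled >= gsd:
            kept.append(cur)
            travelled = 0.0
    return np.asarray(kept)
```

Because the spacing criterion is distance on the ground rather than sensor angle, nearby surfaces (densely scanned) are thinned aggressively while distant surfaces keep most of their points, which is what keeps the GSD uniform.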