RISS Academic Research Information Service


      비전을 이용한 이동로봇의 자가측위와 VRML과의 영상 오버레이 = The Localization of Mobile Robot using Vision and Overlay with VRML


      https://www.riss.kr/link?id=T10354868

      • Author
      • Publication

        Jeonju: Jeonbuk National University Graduate School, 2006

      • Thesis information

        Thesis (Master's) -- Jeonbuk National University Graduate School, Department of Control and Instrumentation Engineering, 2006

      • Year of publication

        2006

      • Language

        Korean

      • Country (city) of publication

        Jeonbuk Special Self-Governing Province

      • Physical description

        viii, 81 p.: illustrations; 26 cm

      • Holding institutions
        • Kunsan National University Library
        • Jeonbuk National University Central Library

      Additional Information

      Multilingual Abstract

      Inaccurate localization exposes a robot to many hazards: it may move in the wrong direction or be damaged by collisions with surrounding obstacles. There are numerous approaches to self-positioning, using different sensing modalities (vision, laser range finders, ultrasonic sonar). Because sensor information is generally uncertain and noisy, much research has focused on reducing this noise, but accuracy remains limited because most of that work relies on statistical approaches.
      The goal of our research is to measure the robot's location more exactly by matching a pre-built 3D model against the real camera image. To determine the position of the mobile robot, a landmark-localization technique is applied. A landmark is any detectable structure in the physical environment: some approaches use vertical lines, while others use specially designed markers such as crosses or patterns of concentric circles. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separations between the landmarks' lines of sight. If the world positions of the landmarks are known, these angular separations can be used to compute the robot's position and heading relative to a 2D floor map. That is, the robot identifies landmarks in the environment and carries out self-positioning. Image processing and neural-network pattern matching are employed to recognize the landmarks placed in the robot's working environment. After self-positioning, the 2D camera scene is overlaid with the VRML scene. This paper describes how the self-positioning is realized, shows the result of overlapping the 2D scene with the VRML scene, and describes the advantages expected from overlapping the two scenes.

      Table of Contents

      • 1. Introduction = 1
      • 1.1 Research background = 1
      • 1.2 Thesis organization = 3
      • 2. Linear Position Estimation Algorithm = 4
      • 2.1 Conventional triangulation techniques = 4
      • 2.1.1 Triangulation with three landmarks = 4
      • 2.1.2 Triangulation with n landmarks = 5
      • 2.2 Linear Position Estimation = 8
      • 3. Landmark extraction and recognition = 15
      • 3.1 Image processing procedure = 15
      • 3.1.1 Histogram equalization = 15
      • 3.1.2 Color image binarization = 18
      • 3.1.3 Noise removal from the binarized image = 20
      • 3.1.4 Boundary-tracing algorithm = 21
      • 3.1.5 Error removal and center-mark separation = 25
      • 3.2 Feature definition = 26
      • 3.2.1 Feature value extraction = 26
      • 3.2.2 Feature scaling = 28
      • 3.3 Neural networks = 29
      • 3.3.1 Overview of neural networks = 29
      • 3.3.2 Back-propagation algorithm = 30
      • 3.3.3 Pattern-recognition network architecture = 32
      • 3.3.4 Pattern-recognition network training = 33
      • 4. VRML and 3D image rendering = 35
      • 4.1 VRML overview = 35
      • 4.1.1 Characteristics of VRML = 36
      • 4.1.2 VRML syntax = 37
      • 4.1.3 VRML fields and nodes = 40
      • 4.1.4 The IndexedFaceSet node = 43
      • 4.1.5 Controlling VRML with the Java EAI = 44
      • 4.2 3D image rendering = 47
      • 4.2.1 Introduction to 3D computer graphics = 48
      • 4.2.2 Coordinate transformation by viewing direction = 49
      • 4.2.3 Perspective projection = 52
      • 4.3 3D image rendering in VRML = 55
      • 5. Overlay of the robot localization system with the 3D image = 56
      • 5.1 System overview = 56
      • 5.2 Vertical-line extraction from the image = 59
      • 5.3 Discussion of the system = 61
      • 6. Simulation = 62
      • 6.1 Simulation environment = 62
      • 6.1.1 Experimental equipment = 62
      • 6.1.2 Robot working-environment setup = 62
      • 6.1.3 Coordinate systems = 64
      • 6.2 Neural-network pattern-recognition experiments = 65
      • 6.2.1 Method for feature extraction and recognition-rate measurement = 65
      • 6.2.2 Pattern-recognition experiment results = 66
      • 6.3 Landmark-localization experiments = 68
      • 6.3.1 Landmark-localization experimental method = 68
      • 6.3.2 Landmark-localization experimental results = 69
      • 6.4 Image overlay with VRML and position-correction results = 71
      • 6.4.1 Image overlay with VRML = 71
      • 6.4.2 Position correction using VRML = 72
      • 7. Conclusions = 77
      • 7.1 Conclusions = 77
      • 7.2 Future work = 78
