RISS Academic Research Information Service

      AI기반 다중센서 융합 무인 이동체의 사람 인식 시스템 안전성 고찰 = Study on Safety of AI-Based Multi-Sensor Fusion Human Recognition in Unmanned Vehicles : A Review

      https://www.riss.kr/link?id=A109510188

      Additional Information

      Multilingual Abstract

      This study introduces an AI-based multi-sensor fusion system to enhance human detection and collision avoidance in autonomous vehicles and drones. Utilizing camera, LiDAR, and radar data, the system integrates deep learning models, such as YOLO, Faster R-CNN, DQN, and PPO, to leverage the strengths of different sensors. Object recognition and path prediction are managed with CNN, RNN, and reinforcement learning algorithms, ensuring real-time collision avoidance even in complex environments. A key innovation is the interaction capability between vehicles and drones, allowing shared object detection from aerial and ground views for cooperative collision avoidance based on predicted paths. The system implements distributed learning that merges cloud and edge computing to improve real-time responsiveness and optimize energy efficiency, facilitating data sharing without imposing heavy computational demands. This strategy contrasts with previous research by reducing processing load and supporting coordinated functionality. Additionally, ethical considerations are embedded through algorithms designed for optimal decisions in high-risk scenarios, promoting safer, cooperative operation and boosting public trust. The integration of these technologies aims to enhance both the effectiveness and societal acceptance of autonomous systems.
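
      The abstract describes fusing camera, LiDAR, and radar detections for human recognition but does not spell out the fusion step itself. The Python sketch below illustrates one common approach, a confidence-weighted late fusion of per-sensor person detections in a shared ground-plane frame; the Detection class, fuse_detections(), and the 1.5 m association radius are illustrative assumptions, not the paper's actual design.

# A minimal late-fusion sketch, assuming each sensor's detections have already
# been projected into a shared ground-plane frame.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Detection:
    sensor: str        # "camera", "lidar", "radar", or "fused"
    xy: np.ndarray     # (x, y) position in the shared frame, metres
    confidence: float  # detector score in [0, 1]


def fuse_detections(dets: List[Detection], radius: float = 1.5) -> List[Detection]:
    """Greedily cluster detections lying within `radius` metres of each other
    and merge each cluster into one confidence-weighted estimate."""
    fused, used = [], [False] * len(dets)
    for i, d in enumerate(dets):
        if used[i]:
            continue
        cluster, used[i] = [d], True
        for j in range(i + 1, len(dets)):
            if not used[j] and np.linalg.norm(dets[j].xy - d.xy) < radius:
                cluster.append(dets[j])
                used[j] = True
        weights = np.array([c.confidence for c in cluster])
        xy = np.average(np.stack([c.xy for c in cluster]), axis=0, weights=weights)
        # A person corroborated by several sensors gets a higher fused score.
        confidence = 1.0 - float(np.prod(1.0 - weights))
        fused.append(Detection("fused", xy, confidence))
    return fused


if __name__ == "__main__":
    detections = [
        Detection("camera", np.array([4.9, 2.1]), 0.80),    # e.g. a YOLO box projected to ground
        Detection("lidar",  np.array([5.1, 2.0]), 0.65),
        Detection("radar",  np.array([5.0, 2.2]), 0.50),
        Detection("camera", np.array([12.0, -3.0]), 0.40),  # seen by one sensor only
    ]
    for det in fuse_detections(detections):
        print(f"person at ({det.xy[0]:.1f}, {det.xy[1]:.1f}) m, confidence {det.confidence:.2f}")

      The first three detections merge into a single high-confidence person estimate near (5.0, 2.1) m, while the camera-only detection is kept with its original score; in the paper's setting the fused list would then feed the path-prediction and collision-avoidance stage.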
