RISS Academic Research Information Service

      • Semantic Indoor Scene Recognition of Time-Series Aerial Images from a Micro Air Vehicle Mounted Monocular Camera

        Hirokazu Madokoro, Shinya Ueda, Kazuhito Sato. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2018, Vol.2018 No.10

        This paper presents a semantic scene recognition method for indoor aerial time-series images obtained using a micro air vehicle (MAV). Using category maps, topologies of image features are mapped into a low-dimensional space based on competitive and neighborhood learning. The proposed method comprises two phases: a codebook feature description phase and a recognition phase using category maps. In the former phase, codebooks are created automatically as visual words using self-organizing maps (SOMs), after extracting part-based local features from the time-series scene images with a part-based descriptor. In the latter phase, category maps are created using counter propagation networks (CPNs), with category boundaries extracted using a unified distance matrix (U-Matrix). With manual MAV operation, we obtained five aerial time-series image datasets for two flight routes: a round flight route and a zigzag flight route. Experimental results with leave-one-out cross-validation (LOOCV) on datasets divided into 10 zones revealed mean recognition accuracies of 71.7% for the round flight datasets and 65.5% for the zigzag flight datasets. The created category maps captured the complexity of the scenes through the segmented categories in both flight datasets.
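
The codebook step described above (local features quantized by a self-organizing map into "visual words", then histogrammed per image) can be sketched in miniature. The data, dimensions, and learning schedule below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def som_codebook(features, n_codes=2, epochs=50, lr=0.5, seed=0):
    """Toy self-organizing map: quantize local feature vectors into a
    small codebook of 'visual words' via competitive learning."""
    rng = np.random.default_rng(seed)
    codes = rng.normal(size=(n_codes, features.shape[1]))
    for t in range(epochs):
        alpha = lr * (1.0 - t / epochs)          # decaying learning rate
        for x in features:
            w = np.argmin(np.linalg.norm(codes - x, axis=1))  # best match
            codes[w] += alpha * (x - codes[w])   # pull winner toward input
    return codes

def bag_of_words(features, codes):
    """Histogram of nearest-code assignments: the image descriptor."""
    idx = [np.argmin(np.linalg.norm(codes - x, axis=1)) for x in features]
    return np.bincount(idx, minlength=len(codes))

# Two synthetic clusters of 2-D "local features"
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
codes = som_codebook(feats, n_codes=2)
hist = bag_of_words(feats, codes)
```

A supervised layer such as the paper's CPN would then learn scene labels over these histograms; only the quantization step is shown here.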

      • Development of Octo-Rotor UAV Prototype with Night-vision Stereo Camera System Used for Nighttime Visual Inspection

        Hirokazu Madokoro, Hanwool Woo, Kazuhito Sato, Nobuhiro Shimoi. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2019, Vol.2019 No.10

        This paper presents an octo-rotor unmanned air vehicle (UAV) prototype and its vision system for nighttime visual infrastructure inspection. After developing a stereo vision system using two inexpensive night-vision cameras to obtain depth information, we conducted a comprehensive evaluation experiment to assess its practical use, to support the design and manufacture of our prototype, to build a camera system including a dedicated camera mount, and to compare and evaluate stereo matching algorithms. For nighttime inspection, we generated depth images from parallax image pairs taken in nighttime aerial photography using stereo matching algorithms of four types, and evaluated which algorithm is optimal for nighttime aerial photographs. The experimentally obtained results revealed that the contrast between structure outlines and depth information was extracted most clearly by the highest-accuracy stereo matching result. Results show that our system concept can open up a new field of structure inspection using nighttime aerial photography.
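
The stereo matching evaluated above can be illustrated with its simplest variant, sum-of-absolute-differences block matching on a rectified pair. The images and block size below are synthetic assumptions, not the paper's cameras or its four candidate algorithms:

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """Naive SAD block matching on rectified grayscale images: for each
    pixel, find the horizontal shift that best aligns a small block in
    the left image with the right image."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: a bright square shifted 4 px between the two views
left = np.zeros((20, 30)); left[8:12, 16:20] = 1.0
right = np.zeros((20, 30)); right[8:12, 12:16] = 1.0
d = block_match_disparity(left, right)
```

The disparity recovered at the square's right edge reflects the 4 px shift; real night-vision imagery would need the more robust matchers the paper compares.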

      • Development of Micro Air Vehicle Using Aerial Photography for Safe Rowing and Coaching

        Hirokazu Madokoro, Kazuhito Sato, Nobuhiro Shimoi. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2016, Vol.2016 No.10

        This study was undertaken to establish basic technologies and knowledge of aerial photography and its application to support safe rowing. In the water sport of rowing, managers and coaches use a motorboat to follow a rowing boat for coaching and safety observation. Using a motorboat gives rise to numerous problems: wake waves, narrow visual ranges, a limited number of boats that can be tracked at any one time, fuel consumption, and maintenance costs. Moreover, rowing boats present collision risks with other rowing boats or obstacles floating on the water, especially for a cox-less rowing boat, because the rowers face opposite to the direction of motion. The aim of this study is to actualize rowing aerial photography using a Micro Air Vehicle (MAV): a radio-controlled small multi-rotor helicopter that has recently become popular for numerous applications. We obtained rowing movies using three camera compositional patterns with changing altitudes and tilt angles. We examined the benefits of rowing aerial photography, compared with movies obtained from a motorboat, with consideration of safety improvement.

      • Bed-Leaving Behavior Detection and Recognition Based on Time-Series Learning Using Elman-Type Counter Propagation Networks

        Hirokazu Madokoro, Kantarou Kakuta, Ryo Fujisawa, Nobuhiro Shimoi, Kazuhito Sato, Li Xu. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2014, Vol.2014 No.10

        This paper presents a bed-leaving detection method using Elman-type Counter Propagation Networks (ECPNs), a novel machine-learning method for time-series signals. In our earlier study, we used CPNs, a supervised form of Self-Organizing Maps (SOMs), to produce category maps that learn relations between input and teaching signals. For this study, we inserted a feedback loop as a second Grossberg layer for learning time-series features. Moreover, we developed an original caster-stand sensor using piezoelectric films that measures the weight changes transmitted through the bed legs as a subject moves on the bed. The features of our sensor are that it requires no power supply for operation and that it can be installed on existing beds. We evaluated our sensor system on 10 people in an environment representing a clinical site. The mean recognition accuracy for seven behavior patterns was 71.1%. Furthermore, the recognition accuracy for the three behavior patterns of sleeping, sitting, and leaving the bed was 83.6%. Falsely recognized patterns remained within the respective sleeping and sitting categories. We infer that this system is applicable to an actual environment as a novel sensor system requiring no restraint of patients.
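
The Elman-type feedback added above can be illustrated generically: the hidden state at one time step is fed back as context at the next, which is what gives the network short-term memory. The weights and input series below are random placeholders, not the trained ECPN:

```python
import numpy as np

def elman_step(x, h_prev, W_in, W_rec, W_out):
    """One step of a minimal Elman network: the previous hidden state
    is fed back as context alongside the new input."""
    h = np.tanh(W_in @ x + W_rec @ h_prev)   # hidden state with context
    y = W_out @ h                            # readout
    return y, h

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 2
W_in = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.5, size=(n_out, n_hid))

h = np.zeros(n_hid)
outputs = []
for t in range(4):                            # a short time series
    y, h = elman_step(np.ones(n_in) * t, h, W_in, W_rec, W_out)
    outputs.append(y)
```

In the paper's ECPN the feedback loop sits in a Grossberg layer of a CPN rather than a plain recurrent layer; this sketch shows only the shared recurrence idea.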

      • Unrestrained Sensors Using Piezoelectric Elements for Bed-Leaving Prediction

        Hirokazu Madokoro, Nobuhiro Shimoi, Kazuhito Sato. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2013, Vol.2013 No.10

        This paper presents a sensor system that predicts the behavior patterns that occur when a patient leaves a bed. We originally developed plate-shaped sensors using piezoelectric elements. Existing sensors, such as clip sensors and mat sensors, require that patients be restrained. The features of our sensors are that they require no power supply and no patient restraint, which avoids privacy problems. Moreover, we developed machine-learning algorithms that predict behavior patterns without setting thresholds. We evaluated our system with three subjects in an experimental environment constructed in reference to a clinical site. The mean recognition accuracy was 78.6% for seven behavior patterns. In particular, the recognition accuracies for lateral sitting and terminal sitting were each 94.4%. We consider these capabilities useful for bed-leaving prediction in practical use.

      • Calibration and 3D Reconstruction of Images Obtained Using Spherical Panoramic Camera

        Hirokazu Madokoro, Satoshi Yamamoto, Yo Nishimura, Stephanie Nix, Hanwool Woo, Kazuhito Sato. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2021, Vol.2021 No.10

        This study was conducted to develop a 3D reconstruction procedure for application to crop monitoring. For 3D reconstruction of the same target object, we compared images obtained from two camera types: a compact digital camera (CDC) and a spherical panoramic camera (SPC). First, we calculate camera parameters from images that include a checkerboard. Subsequently, we correct the image distortion, including that of the target object, using the camera parameters. Finally, we estimate camera positions and perform three-dimensional (3D) reconstruction based on structure from motion (SfM). Experimentally obtained results demonstrated that the 3D reconstruction of a target object was improved after calibration compared with that before calibration. Moreover, we conducted an application experiment using a tree in an outdoor environment as a trial of practical use at a farm.
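
The distortion-correction step above relies on a standard radial distortion model. A minimal sketch, assuming the common Brown polynomial form with two coefficients (the coefficients and points below are illustrative, not calibrated values from the paper):

```python
import numpy as np

def radial_distort(pts, k1, k2):
    """Brown radial distortion model in normalized image coordinates:
    each point is scaled by (1 + k1*r^2 + k2*r^4), where r is the
    distance from the optical axis."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

# A few normalized image points; k1 < 0 models barrel distortion,
# which pulls off-axis points toward the center.
grid = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
distorted = radial_distort(grid, k1=-0.2, k2=0.05)
```

Calibration estimates k1 and k2 from checkerboard images; undistortion then inverts this mapping before SfM is run.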

      • Prediction of Local PM2.5 Concentrations Based on Time-Series Feature Learning Using Multivariate LSTM

        Hirokazu Madokoro, Saki Nemoto, Stephanie Nix, Osamu Kiguchi, Atsushi Suetsugu, Takeshi Nagayoshi, Kazuhito Sato. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2022, Vol.2022 No.11

        Air pollution causes various health problems and diseases. Long-term PM2.5 monitoring and prediction of its occurrence and sources are necessary not only over global areas, based on public monitoring stations, but also in local areas, using cost-effective sensor systems. For this study, we developed a sensor system to achieve simplified and high-frequency PM2.5 measurements. We attempted to learn and predict local PM2.5 concentrations from observed data using long short-term memory (LSTM), a dominant time-series feature learning network. To improve the learning and prediction accuracy, evaluated using the root mean square error (RMSE), sensor calibration was performed against a higher-grade sensor. Moreover, we strove to reduce the RMSE by optimizing five major parameters of the network. Experimentally obtained results demonstrate that the prediction accuracy improved gradually after calibration and parameter optimization. As an ablation experiment, five meteorological factors were imported externally to verify which factors contribute to reducing the RMSE. The results verify strong effects of local pressure and temperature for training, and of relative humidity and temperature for testing as validation.
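
The RMSE used above as the accuracy score is simply the square root of the mean squared prediction error. The readings below are hypothetical numbers, not the paper's data:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

observed  = [12.0, 15.5, 30.2, 22.1]   # hypothetical PM2.5 readings (ug/m3)
predicted = [11.0, 16.0, 28.0, 23.5]   # hypothetical LSTM outputs
error = rmse(predicted, observed)
```

Lower RMSE is better; a perfect prediction gives 0.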

      • Occlusion-Robust Segmentation for Multiple Objects using a Micro Air Vehicle

        Asahi Kainuma, Hirokazu Madokoro, Kazuhito Sato, Nobuhiro Shimoi. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2016, Vol.2016 No.10

        This paper presents a novel object extraction method using a micro air vehicle (MAV) to improve robustness to occlusion. The proposed method is based on object saliency, extracting regions of interest (RoIs) using scale-invariant feature transform (SIFT) features and segmenting target objects using GrabCut, which requires no advance learning. We obtained original aerial photographic time-series image datasets using a MAV. Results of experiments revealed that object extraction accuracies, measured using precision, recall, and F-measure, improved with the MAV movement for images with changing rates of occlusion between two objects: a chair and a table. Especially for images of the chair, which is smaller than the table, our method extracted object regions well. To improve extraction accuracy on the table results, an advanced mechanism combined with flight patterns is necessary to maintain a suitable distance between the MAV and a target object.
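
The precision, recall, and F-measure used above to score extraction can be computed pixel-wise from a predicted and a ground-truth mask. The toy 1-D masks below are illustrative:

```python
def segmentation_scores(pred, truth):
    """Pixel-wise precision, recall, and F-measure for binary masks
    given as sequences of 0/1 values."""
    tp = sum(p and t for p, t in zip(pred, truth))          # hits
    fp = sum(p and not t for p, t in zip(pred, truth))      # false alarms
    fn = sum(t and not p for p, t in zip(pred, truth))      # misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Toy 1-D masks: 1 = object pixel
truth = [0, 1, 1, 1, 0, 0]
pred  = [0, 1, 1, 0, 1, 0]
p, r, f = segmentation_scores(pred, truth)
```

F-measure is the harmonic mean of precision and recall, so it penalizes a method that is strong on only one of the two.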

      • Visual Saliency Based Segmentation of Multiple Objects Using Variable Regions of Interest

        Ayaka Yamanashi, Hirokazu Madokoro, Yutaka Ishioka, Kazuhito Sato. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2014, Vol.2014 No.10

        This paper presents a segmentation method for multiple object regions based on visual saliency. Our method comprises three steps. First, attentional points are detected using saliency maps (SMs). Subsequently, regions of interest (RoIs) are extracted using the scale-invariant feature transform (SIFT). Finally, foreground regions are extracted as object regions using GrabCut. Using RoIs as teaching signals, our method achieves automatic segmentation of multiple objects without learning in advance. In experiments using the PASCAL 2011 dataset, attentional points were extracted correctly from 18 images with two objects and from 25 images with single objects. We obtained segmentation accuracies of 64.1% precision, 62.1% recall, and 57.4% F-measure. Moreover, we applied our method to time-series images obtained using a mobile robot: from ten images, attentional points were extracted correctly for seven images with two objects and for three images with single objects. We obtained segmentation accuracies of 58.0% precision, 63.1% recall, and 58.1% F-measure.

      • Semantic Scene Recognition and Zone Labeling for Mobile Robot Benchmark Datasets based on Category Maps

        Ryoma Fukushi, Hirokazu Madokoro, Kazuhito Sato. Institute of Control, Robotics and Systems (ICROS) International Conference Proceedings, 2018, Vol.2018 No.10

        For this study, we focus on autonomous locomotion based on visual landmarks, recognizing surrounding environments from saliency characteristics. This paper presents a feature extraction method that combines saliency maps (SMs), histograms of oriented gradients (HOG) features, and accelerated KAZE (AKAZE) descriptors to describe image features as visual landmarks without removing human regions as dynamic objects. For semantic scene recognition, we used a method combining self-organizing maps (SOMs), based on bag of features, for creating codebooks as visual words, and counter propagation networks (CPNs), based on topological learning with neighborhood and competition, for creating category maps (CMs) that convert input features into a low-dimensional space. We used a mobile robot to obtain clockwise datasets (CWDs) and counter-CW datasets (CCWDs). The experimentally obtained results revealed that the recognition accuracies (RAs) for CWDs and CCWDs were, respectively, 70.76% for 26 categories and 72.24% for 25 categories. Using this result as the original ground truth (GT) pattern, we varied label patterns (LPs) of five types according to the mapping results on the CMs for selection.
