Hyoyoung Jang, Hyojoong Kim, Yikweon Jang. Korean Society of Applied Entomology (한국응용곤충학회), 2010. Proceedings of the Korean Society of Applied Entomology Conference, Vol.2010 No.10
Plant penetration by aphids can be monitored electrically with the electrical penetration graph (EPG) technique. To determine whether specific behaviors are correlated with particular EPG waveform patterns, we analyzed two synchronized data streams: EPG recordings and video records. We recorded the EPG and the behaviors of aphids simultaneously, then compared the visible behaviors of the aphids with the recorded EPG waveforms in order to match the visible behaviors with the invisible activities of the stylet. The visible behaviors were categorized as walking, wagging, honeydew production, and reproduction. When the aphids were generally motionless, the EPG showed feeding-related waveforms (E1, E2, F, and G), whereas probing waveforms (B and pd) occurred frequently while they were wagging. We aim to present the correlation between observed behaviors and EPG patterns.
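The matching of behaviors to waveforms described above amounts to counting how often each video-coded behavior co-occurs with each EPG waveform label over time-aligned bins. A minimal sketch in Python, assuming hypothetical per-bin annotation lists (the paper's actual scoring procedure is not specified in the abstract); the waveform labels follow the abstract (E1, E2, F, G, B, pd):

```python
from collections import Counter

def cooccurrence(behaviors, epg_labels):
    """Count (behavior, waveform) pairs across synchronized time bins.

    behaviors and epg_labels are equal-length lists, one entry per bin.
    """
    assert len(behaviors) == len(epg_labels)
    return Counter(zip(behaviors, epg_labels))

# Toy example: motionless bins with feeding waveforms, wagging with probing.
epg = ["E2", "E2", "pd", "B", "E1"]
beh = ["motionless", "motionless", "wagging", "wagging", "motionless"]
counts = cooccurrence(beh, epg)
# counts[("motionless", "E2")] == 2; counts[("wagging", "pd")] == 1
```

A table of such counts (or the row-normalized proportions) directly expresses the correlation between visible behaviors and EPG patterns that the abstract reports.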
Decomposition approach for hand-pose recognition
Hyoyoung Jang, Dae-Jin Kim, Jung-Bae Kim, Jin-Woo Jung, Kwang-Hyun Park, Z. Zenn Bien. KAIST Human-friendly Welfare Robot System Research Center (한국과학기술원 인간친화 복지 로봇 시스템 연구센터), 2004. International Journal of Assistive Robotics and Me, Vol.5 No.1
This paper presents hand-pose recognition by a decomposition approach, built on a hierarchical structure of hand-pose classifiers and subgroup classifiers. The new method makes it possible to recognize specific hand-poses with a simple structure. To show the effectiveness of the proposed system, experiments were carried out with 32 hand-poses grouped into 12 subgroups, demonstrating a 96.46% success rate for individual poses.
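The decomposition idea can be sketched as a two-stage decision: a top-level classifier assigns a feature vector to a subgroup, and a per-subgroup classifier then picks the individual pose. A minimal illustration in Python, using nearest-centroid classifiers as stand-ins (the paper's actual classifiers are not specified in the abstract, so all centroids and labels here are hypothetical):

```python
def nearest(centroids, x):
    """Return the key whose centroid is closest to x (squared distance)."""
    return min(centroids,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(centroids[k], x)))

def classify(x, subgroup_centroids, pose_centroids):
    g = nearest(subgroup_centroids, x)        # stage 1: pick a subgroup
    return g, nearest(pose_centroids[g], x)   # stage 2: pick a pose inside it

# Toy database: 2 subgroups, 2 poses each.
subgroups = {"open": (0.0, 0.0), "closed": (1.0, 1.0)}
poses = {"open":   {"palm": (0.0, 0.0), "five": (0.1, 0.2)},
         "closed": {"fist": (1.0, 1.0), "thumb-up": (0.9, 1.2)}}
classify((0.95, 1.05), subgroups, poses)  # -> ("closed", "fist")
```

The benefit mirrored here is that each stage only discriminates among a few candidates, keeping each classifier simple even when the total pose vocabulary (32 poses in the paper) is large.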
Hyoyoung Jang, Amanda Wee Huixin, Sim Sze Ching Danielle, Yeo Qin Yi, Hyojoong Kim, Yikweon Jang. Korean Society of Applied Entomology (한국응용곤충학회), 2011. Proceedings of the Korean Society of Applied Entomology Conference, Vol.2011 No.05
Aphids feed on host plants by penetrating the stems or leaves with their stylets. The feeding behavior of aphids consists of probing, penetration, salivation, and sap ingestion. To assess the effects of sound on feeding behavior, we monitored the stylet activity of the green peach aphid, Myzus persicae (Sulzer), using the electrical penetration graph (EPG) technique. The use of EPG was critical for determining the stage, frequency, and duration of feeding in aphids. We played back three acoustic stimuli, sine waves at frequencies of 100, 1000, and 5000 Hz, to adult aphids. In the sound treatment group, the frequencies of probing, penetration, and salivation increased, whereas the duration of sap ingestion decreased. The EPG results suggest that acoustic stimuli may restrict aphid feeding by inhibiting sap ingestion.
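The comparison reported above reduces to summarizing each EPG recording as per-stage frequencies and total durations, then contrasting the sound-treated and control groups. A minimal sketch in Python, assuming a hypothetical event-list representation of a recording (stage name plus duration in seconds; the paper's actual data format is not given in the abstract):

```python
def summarize(events):
    """Per-stage frequency (count) and total duration from (stage, seconds) events."""
    freq, dur = {}, {}
    for stage, seconds in events:
        freq[stage] = freq.get(stage, 0) + 1
        dur[stage] = dur.get(stage, 0.0) + seconds
    return freq, dur

# Toy recording: two probes and one long ingestion bout.
events = [("probing", 5.0), ("ingestion", 120.0), ("probing", 3.0)]
freq, dur = summarize(events)
# freq["probing"] == 2, dur["ingestion"] == 120.0
```

Applying `summarize` to recordings from each group yields the quantities behind the abstract's conclusion: higher probing/penetration/salivation counts but shorter ingestion duration under sound treatment.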
View-invariant Hand-posture Recognition Method for Soft Remocon System
Hyoyoung Jang, Jun-Hyeong Do, Jin-Woo Jung, Kwang-Hyun Park, Z. Zenn Bien. KAIST Human-friendly Welfare Robot System Research Center (한국과학기술원 인간친화 복지 로봇 시스템 연구센터), 2004. International Journal of Assistive Robotics and Me, Vol.5 No.2
This paper proposes a gesture recognition method that is robust to variation in camera viewpoint. The proposed system uses both 2-dimensional appearance features and finger angles, with three cameras attached to the ceiling. During database construction, a dataglove is used to capture the finger angles. Because of variance in the viewing direction, the same appearance feature does not guarantee the same 3-dimensional structure; likewise, the same 3-dimensional structure does not guarantee the same appearance. The proposed hand-posture database provides robust decisions under variations in viewing direction. Consequently, a more natural interaction environment can be built, and no additional glove devices are required by this method.
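One simple way to read the database design above: each entry pairs a posture label with an appearance feature captured from some viewpoint, and at recognition time the features from the three ceiling cameras are all matched against the database, with the closest match over all views winning. A hypothetical sketch in Python (the distance measure, feature dimensionality, and decision rule are illustrative assumptions, not the paper's exact method):

```python
def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(view_features, database):
    """Pick the posture whose database entry is nearest to any camera view.

    view_features: one appearance feature per camera.
    database: list of (posture_label, appearance_feature) pairs,
              covering many viewpoints per posture.
    """
    best_label, _ = min(
        ((label, dist(f, feat)) for f in view_features
                                for label, feat in database),
        key=lambda t: t[1])
    return best_label

db = [("point", (0.0, 1.0)), ("fist", (1.0, 0.0))]
recognize([(0.9, 0.1), (0.8, 0.2)], db)  # -> "fist"
```

Because the database stores entries from multiple viewpoints per posture, at least one camera is likely to see a view close to a stored one, which is the source of the viewpoint robustness claimed above.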
Development of a Vision-based Hand-posture Recognizer Using a 3D Hand Model
Hyoyoung Jang (장효영), Zeungnam Bien (변증남). The Korean Institute of Electrical Engineers (대한전기학회), 2007. Proceedings of the KIEE Conference (대한전기학회 학술대회 논문집), Vol.2007 No.4
The recent shift toward ubiquitous computing requires more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gestures, i.e., gestures performed by one or two hands, are emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition, in which database construction is an important step. The human hand is composed of 27 bones, and the movement of its joints is modeled with 23 degrees of freedom. Even for the same hand-posture, grabbed images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To resolve the difficulty of defining hand-postures and to construct a database of manageable size, we present a method using a 3D hand model. The database is built from the hand joint angles for each hand-posture and the corresponding silhouette images obtained by projecting the model onto image planes from many viewpoints. The proposed method does not require additional equations to define the movement constraints of each joint, and it makes it easy to obtain images of one hand-posture from many viewpoints and distances. Hence the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.
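The database-construction loop described above, i.e., rendering one posed hand model from many viewpoints, can be sketched in a few lines. A minimal illustration in Python, assuming each posture is already reduced to a set of 3D surface points by applying its joint angles, and using a simple orthographic projection with rotation about the vertical axis (the paper's actual rendering pipeline is not detailed in the abstract):

```python
import math

def project(points3d, yaw):
    """Orthographic projection after rotating the model about the y axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x + s * z, y) for x, y, z in points3d]  # drop depth

def build_database(postures, n_views=8):
    """Silhouette point sets for each posture from n_views evenly spaced yaws."""
    db = {}
    for name, pts in postures.items():
        db[name] = [project(pts, 2 * math.pi * k / n_views)
                    for k in range(n_views)]
    return db

# Toy posture: a single fingertip point.
db = build_database({"point": [(1.0, 0.0, 0.0)]}, n_views=4)
# db["point"] holds 4 projected views of the posture
```

Scaling the points before projection would likewise give views at different distances, which is how one posture definition yields the many viewpoint/distance images mentioned above without any per-joint constraint equations.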
A Learning-based Method for Deciding the Similarity between Hand Shapes and Hand Structures for View-robust Hand-posture Recognition
Hyoyoung Jang (장효영), Jin-Woo Jung (정진우), Zeungnam Bien (변증남). Korean Institute of Intelligent Systems (한국지능시스템학회), 2006. Journal of Korean Institute of Intelligent Systems (한국지능시스템학회논문지), Vol.16 No.3
This paper deals with a learning-based method for deciding the similarity between the appearance of hand-postures and their structures, to improve the performance of vision-based hand-posture recognition systems. Hand-posture recognition with vision sensors is difficult because the human hand is an object with high degrees of freedom: grabbed images exhibit complex self-occlusion effects and, even for a single hand-posture, vary in appearance with the viewing direction. Therefore many approaches either limit the relative angle between the cameras and the hand or use multiple cameras. The former restricts the user's operating area, while the latter requires additional consideration of how the recognition results from each camera image are merged into the final result. To address these problems, we divide the hand-posture features used in recognition into structural joint-angle information and hand-image appearance features, and define the association between the two feature types, i.e., their similarity, through learning.
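One way to picture the learned similarity decision above: a combined score mixes an appearance-feature distance with a joint-angle distance, and the mixing weight is fit so that same-posture pairs score as more similar than different-posture pairs. A hypothetical sketch in Python (the distance measures, the linear mixing form, and the grid-search fitting are illustrative assumptions, not the paper's actual learning procedure):

```python
def dist(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def combined(w, pair):
    """Mix appearance distance and joint-angle distance with weight w."""
    (app1, ang1), (app2, ang2) = pair
    return w * dist(app1, app2) + (1 - w) * dist(ang1, ang2)

def learn_weight(same_pairs, diff_pairs, steps=100):
    """Grid-search w in [0, 1] maximizing the margin between pair classes."""
    def margin(w):
        return (min(combined(w, p) for p in diff_pairs)
                - max(combined(w, p) for p in same_pairs))
    return max((k / steps for k in range(steps + 1)), key=margin)

# Toy data: same-posture pairs agree in angles but not appearance,
# so the learned weight should favor the angle distance (small w).
same = [(((0.0,), (0.0,)), ((1.0,), (0.1,)))]
diff = [(((0.0,), (0.0,)), ((0.1,), (1.0,)))]
w = learn_weight(same, diff)
```

The learned weight then plays the role of the paper's trained association: it decides, per feature type, how much each distance should count when judging whether an observed appearance matches a stored hand structure.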