RISS Academic Research Information Service


      • KCI-indexed

        Emotion Recognition using Facial Thermal Images

        Jin-Sup Eom, Jin-Hun Sohn 대한인간공학회 2012 大韓人間工學會誌 Vol.31 No.3

        Objective: The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion using facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eyes, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% across the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly, to 56.7%, when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
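
        As a rough illustration of the classification step described in this abstract, the sketch below runs scikit-learn's LinearDiscriminantAnalysis on a matrix of baseline-to-emotion temperature differences. The data, region list, and cross-validation setup are hypothetical placeholders, not the authors' pipeline.

```python
# Minimal sketch: linear discriminant analysis over facial-region temperature
# differences, one row per participant. All data are synthetic placeholders;
# the original study's features and protocol are not reproduced.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Columns: temperature change (emotion minus baseline) per facial region.
regions = ["eyes", "mouth", "glabella", "forehead", "nose", "cheeks"]
X = rng.normal(size=(231, len(regions)))   # placeholder measurements
y = rng.integers(0, 4, size=231)           # anger, fear, boredom, neutral

lda = LinearDiscriminantAnalysis()
print("mean CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```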

      • KCI-indexed

        Neuro-facial Fusion for Emotion AI: Improved Federated Learning GAN for Collaborative Multimodal Emotion Recognition

        D. Saisanthiya, P. Supraja 대한전자공학회 2024 IEIE Transactions on Smart Processing & Computing Vol.13 No.1

        In the context of artificial intelligence technology, emotion recognition (ER) plays numerous roles in human lives. On the other hand, the emotion recognition techniques most commonly used today perform poorly at recognizing emotions, which limits their widespread use in practical applications. To reduce this issue, Collaborative Multimodal Emotion Recognition through an Improved Federated Learning Generative Adversarial Network (MER-IFLGAN) was proposed for facial expressions and electroencephalogram (EEG) signals. Multi-resolution binarized image feature extraction (MBIFE) was first used for facial expression feature extraction. The EEG features were extracted using the Dwarf Mongoose Optimization (DMO) algorithm. Finally, IFLGAN completes the emotion recognition task. The proposed technique was simulated in MATLAB. It achieved 25.45% and 19.71% higher accuracy and a 32.01% and 39.11% shorter average processing time than existing models, i.e., the EEG-based Cross-subject and Cross-modal Model (CSCM) for Multimodal Emotion Recognition (MERCSCM) and the Long Short-Term Memory Model (LSTM) for EEG Emotion Recognition (MERLSTM), respectively. The experimental results show that complementing EEG signals with facial expression features can identify four types of emotion: happy, sad, fear, and neutral. Furthermore, the IFLGAN classifier can enhance the capacity of multimodal emotion recognition.
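
        The federated-learning ingredient of approaches like the one above can be illustrated with a plain weighted averaging step (FedAvg-style). The sketch below is a generic illustration with made-up weights; it does not reproduce the paper's IFLGAN architecture or feature extractors.

```python
# Minimal FedAvg-style sketch: average client model weights, weighted by
# local dataset size. Purely illustrative; not the paper's IFLGAN.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client weight lists."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

rng = np.random.default_rng(1)
# Three hypothetical clients, each holding a weight matrix and a bias vector.
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_weights = fed_avg(clients, client_sizes=[120, 80, 200])
print(global_weights[1])  # the averaged bias vector
```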

      • KCI-indexed

        Impairment of Facial Emotion Recognition by Declined Cognitive Function: The Normal Elderly and Patients with Dementia

        최성진 한국건강심리학회 2013 한국심리학회지 건강 Vol.18 No.3

        Recognizing facial emotion in others is important for interpersonal communication and for meaningful social behavior. In the case of patients with dementia, most studies have found selective impairments of facial emotion recognition, whereas other studies have disagreed. This study investigated the impairment of facial emotion recognition by declined cognitive function in the normal elderly and patients with dementia. Participants took two rating tasks, of emotion dimension and emotion distinction. In the emotion-dimension rating task, both the normal elderly and patients with dementia rated disgust, anger, fear, sadness, surprise, and neutral negatively, in that order, and rated happiness positively. Patients with dementia rated fear, anger, and sadness more negatively, and happiness less positively, than the normal elderly. In the emotion-distinction task, both groups showed high rates of accuracy for happiness, surprise, neutral, anger, sadness, disgust, and fear, in that order. However, patients with dementia demonstrated lower accuracy for fear, sadness, disgust, and neutral than the normal elderly. The results imply that selective impairments of facial emotion recognition are caused by declined cognitive function. A better understanding of facial emotion recognition in patients with dementia will help design therapeutic rehabilitation programs and support social interaction with their remaining abilities.

      • Facial Emotion Recognition Using k-NN and SVM

        Sangsup Choi, Eung-Hee Kim, Byungtae Ahn, Jin-Hun Sohn 대한인간공학회 2012 대한인간공학회 학술대회논문집 Vol.2012 No.11

        Objective: The aim of this study is to build an emotion recognition system that recognizes emotion on the basis of human facial expressions. Background: Emotion recognition is important for intelligent UI (User Interface) of computers. Our approach is to combine insights gained from psychological research with the power of k-NN (k Nearest Neighbors) and SVM (Support Vector Machine) to recognize facial emotions. Method: Our dataset consisted of still images that recorded people's facial expressions at their peak in 3 emotion-inducing situations. The emotions were: "joy", "anger", and "disgust". Using ASM (Active Shape Model), we extracted geometric features from recorded videos. The selection of features was based on findings of previous research. For k-NN, when a new image is presented, the model finds the k most similar instances from the training data and recognizes the emotion most often associated with those k instances. For SVM, we trained 3 SVMs (joy vs. anger, anger vs. disgust, disgust vs. joy) to discriminate the geometric patterns of facial expressions. Then we presented a new image to the models to classify the most likely emotion category. Results: LOOCV (Leave-One-Out Cross Validation) was used to evaluate the performance. The accuracy of correct classification was 96% and 99% for k-NN and SVM, respectively. Conclusion: Our facial emotion recognition system, which uses k-NN and SVM to classify patterns of facial expressions, achieved a very high rate of emotion detection. Application: Our emotion recognition system can be used to build more intelligent user interfaces.
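
        A minimal sketch of the evaluation described here, using scikit-learn: k-NN and SVM classifiers scored with leave-one-out cross-validation. The feature matrix and labels are random placeholders standing in for the ASM geometric features; note that scikit-learn's SVC handles the three-class case with pairwise (one-vs-one) SVMs, which matches the three pairwise classifiers the abstract describes.

```python
# Minimal sketch: k-NN vs. SVM with leave-one-out cross-validation (LOOCV).
# Features and labels are synthetic stand-ins for ASM geometric features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 12))      # placeholder geometric features
y = rng.integers(0, 3, size=90)    # joy, anger, disgust

for name, model in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="linear"))]:  # SVC is one-vs-one
    acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
    print(f"{name} LOOCV accuracy: {acc:.3f}")
```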

      • KCI-indexed

        The Influence of Anxiety on the Recognition of Facial Emotion Depends on the Emotion Category and Race of the Target Faces

        강원준, 김가영, 김혜연, 이수현 한국뇌신경과학회 2019 Experimental Neurobiology Vol.28 No.2

        The recognition of emotional facial expressions is critical for our social interactions. While some prior studies have shown that a high anxiety level is associated with more sensitive recognition of emotion, there are also reports that anxiety does not affect, or even reduces, sensitivity to facial emotions. To reconcile these results, we investigated whether the effect of individual anxiety on the recognition of facial emotions depends on the emotion category and the race of the target faces. We found that, first, there was a significant positive correlation between individual anxiety level and recognition sensitivity for angry faces, but not for sad or happy faces. Second, while the correlation was significant for both low- and high-intensity angry faces when observers recognized own-race faces, it was significant only for low-intensity angry faces for other-race faces. Collectively, our results suggest that the influence of anxiety on the recognition of facial emotions is flexible, depending on characteristics of the target face stimuli, including emotion category and race.
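
        The core analysis here is a correlation between an individual anxiety score and a recognition-sensitivity score. A minimal SciPy sketch, with hypothetical data:

```python
# Minimal sketch: Pearson correlation between anxiety and recognition
# sensitivity. All numbers are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
anxiety = rng.normal(50, 10, size=40)               # e.g., trait-anxiety scores
sensitivity = 0.02 * anxiety + rng.normal(size=40)  # sensitivity to angry faces

r, p = pearsonr(anxiety, sensitivity)
print(f"r = {r:.2f}, p = {p:.3f}")
```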

      • KCI-indexed

        Differences in Reading Facial Expressions by Empathizing-Systemizing Type: Focusing on Emotion Reading and Emotion Discrimination

        Eun Ju Tae, Kyung Ja Cho, Soo Jin Park, Kwang Hee Han, Hei Rhee Ghim 한국감성과학회 2008 감성과학 Vol.11 No.4

        Mind reading is an essential part of normal social functioning, and empathy plays a key role in social understanding. This study investigated how individual differences affect reading emotions in facial expressions, focusing on empathizing and systemizing. Two experiments were conducted. In Study 1, participants performed an emotion recognition test using facial expressions, to investigate how emotion recognition differs by empathizing-systemizing type, facial area, and emotion type. Study 2 examined the same question using an emotion discrimination test instead, with every other condition the same as in Study 1. Study 2 mostly replicated Study 1: there were significant differences among facial areas and emotion types, as well as an interaction effect between facial area and emotion type. In addition, Study 2 found an interaction effect between empathizing-systemizing type and emotion type; that is, how much people empathize and systemize can make a difference in emotion discrimination. These results suggest that the empathizing-systemizing type is more appropriate for explaining emotion discrimination than emotion recognition.
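
        The interaction effects reported here are the kind a two-way ANOVA would test. A minimal statsmodels sketch with hypothetical data (the factor names and effect below are illustrative, not the study's variables):

```python
# Minimal sketch: two-way ANOVA with an interaction term, as one way to
# test effects like "empathizing-systemizing type x emotion type".
# All data and factor levels are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 240
df = pd.DataFrame({
    "es_type": rng.choice(["empathizer", "systemizer"], size=n),
    "emotion": rng.choice(["happy", "sad", "angry"], size=n),
})
# Inject a small interaction so the table has something to find.
bump = ((df["es_type"] == "empathizer") & (df["emotion"] == "sad")).astype(float)
df["score"] = rng.normal(size=n) + 0.8 * bump

model = ols("score ~ C(es_type) * C(emotion)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```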

      • KCI-indexed

        A Study on Seeking an Integrated Model through a Review of Emotion Recognition Methods

        Mi Sook Park, Ji Eun Park, Jin Hun Sohn 한국감성과학회 2011 감성과학 Vol.14 No.1

        Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions, or physiological responses. This study reviewed three representative emotion recognition methods, each grounded in a psychological theory of emotion. First, a literature review was conducted on emotion recognition methods based on facial expressions; these studies are supported by Darwin's theory. Second, emotion recognition methods based on physiological changes were reviewed; these studies rely on James's theory. Last, emotion recognition based on multimodality (i.e., combining signals from the face, dialogue, posture, or the peripheral nervous system) was reviewed; these studies are supported by both Darwin's and James's theories. In each part, research findings were examined along with the theoretical background on which each method relies. This review proposes an integrated model of emotion recognition methods to advance the way emotion is recognized. The integrated model suggests that emotion recognition methods should include other physiological signals, such as brain responses or facial temperature, be based on a multidimensional model, and take cognitive appraisal factors during emotional experience into consideration.

      • KCI-indexed

        Exploring the Relationship between Emotion-Related Word Knowledge and Facial Expression Reading Ability in Hearing-Impaired Students

        Seo Yoo-kyung, Seo Joong-hyun, Ahn Seoung-woo 한국특수아동학회 2016 특수아동교육연구 Vol.18 No.4

        Purpose: This study explored the relationship between emotion-related word knowledge and facial expression recognition in hearing-impaired students. Method: Participants were 26 hearing-impaired middle and high school students, 43 normal-hearing 6th-grade elementary school students, and 41 normal-hearing 3rd-grade middle school students. An emotion-related word knowledge task and the Reading the Mind in the Eyes test of facial expression recognition were administered. Results: First, hearing-impaired students' emotion-related word knowledge was inferior to that of hearing students. Second, hearing-impaired and normal-hearing students showed similar performance in facial expression recognition. Third, hearing-impaired students performed significantly better than 6th-grade elementary school students on the facial expression recognition task after controlling for emotion-related word knowledge. Fourth, emotion-related word knowledge and facial expression recognition were significantly correlated. Conclusion: The results support the hypothesis of a correlation between emotion-related word knowledge and facial expression recognition.
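
        One simple way to approximate the "after controlling for word knowledge" comparison is to residualize the outcome on the covariate and compare groups on the residuals, an ANCOVA-style analysis. The NumPy/SciPy sketch below uses entirely hypothetical data and is not the study's actual analysis.

```python
# Minimal sketch: compare two groups on an outcome after removing the part
# explained by a covariate. Data are hypothetical placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 110
group = rng.integers(0, 2, size=n)        # 0 = hearing, 1 = hearing-impaired
word_knowledge = rng.normal(size=n)       # covariate
face_reading = 0.5 * word_knowledge + 0.3 * group + rng.normal(size=n)

# Residualize the outcome on the covariate, then compare groups.
slope, intercept = np.polyfit(word_knowledge, face_reading, 1)
resid = face_reading - (slope * word_knowledge + intercept)
t, p = ttest_ind(resid[group == 1], resid[group == 0])
print(f"t = {t:.2f}, p = {p:.3f}")
```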

      • SCIE, SSCI, SCOPUS, KCI-indexed

        Korean Facial Emotion Recognition Tasks for Schizophrenia Research

        YongChun Bahk, SeonKeong Jang, JeeYe Lee, KeeHong Choi 대한신경정신의학회 2015 PSYCHIATRY INVESTIGATION Vol.12 No.2

        Objective: Despite the fact that facial emotion recognition (FER) tasks using Western faces should be applied with caution to non-Western participants or patients, there are few psychometrically sound, validated FER tasks featuring Easterners' facial expressions. We therefore aimed to develop the Korean Facial Emotion Identification Task (K-FEIT) and the Korean Facial Emotion Discrimination Task (K-FEDT) for individuals with schizophrenia and to establish their psychometric properties. Methods: The K-FEIT and K-FEDT were administered to 42 Korean individuals with schizophrenia to evaluate their psychometric properties. To test convergent and divergent validity, the Social Behavior Sequencing Task (SBST) and the hinting task were administered as social-cognitive measures, and the Trail Making Test (TMT)-A and -B were administered as neurocognitive measures. Results: Average accuracies on the K-FEIT and K-FEDT were 63% and 74%, respectively, and their internal consistencies were 0.82 and 0.95, respectively. The K-FEIT and K-FEDT correlated significantly with the SBST and the hinting task, but not with the TMT-A and -B. Conclusion: Following replication in a larger sample, the K-FEIT and K-FEDT are expected to facilitate future studies of facial emotion recognition in schizophrenia in Korea. Limitations and directions for future research are discussed.
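
        A standard internal-consistency statistic for task batteries like these is Cronbach's alpha; the sketch below computes it on a hypothetical item-response matrix (that the authors used alpha specifically is an assumption).

```python
# Minimal sketch: Cronbach's alpha for an (n_participants, n_items) matrix.
# The response data are hypothetical; using alpha here is an assumption.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(42, 1))                       # latent trait per person
responses = (rng.normal(size=(42, 30)) + ability > 0.0)  # 30 binary items
print(f"alpha = {cronbach_alpha(responses.astype(float)):.2f}")
```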

      • Literature Survey on the Emotion Recognition Technologies based on Physiological Signals and Facial Expressions

        Hyeji Jang, Sung H. Han, Joohwan Park, Mingyu Lee, Dong Yeong Jeong 대한인간공학회 2015 대한인간공학회 학술대회논문집 Vol.2015 No.10

        This study reviews recent research trends and the current state of emotion recognition technologies. Road rage has become one of the most important social issues. Because road rage is caused by driver anger, it is necessary to detect a driver's anger and provide adequate feedback to prevent the severe damage road rage can cause. For this reason, many researchers have attempted to develop emotion recognition technologies that detect driver anger from signals such as physiological measurements and facial expressions. A literature survey was conducted to collect and analyze academic literature on emotion recognition technologies. The collected technologies were analyzed from several perspectives, such as the types of sensors, signals, recognized emotions, and emotion classification algorithms. Most researchers preferred to classify emotions into several discrete groups, although several studies proposed technologies that estimate the driver's emotional state in a continuous mood space. The most frequently used physiological signals for recognizing driver emotion were heart rate and galvanic skin response. Visible-light and infrared cameras were used for facial expression recognition. The results of this study could be useful for researchers who want to understand the current state of the art in emotion recognition technologies.
