RISS Academic Research Information Service

      • KCI-indexed

        A Study of Facial Expression Acting in Genre Dramas - Focusing on the Drama 'Voice 2' -

        오윤홍 한국엔터테인먼트산업학회 2019 한국엔터테인먼트산업학회논문지 Vol.13 No.8

        For actors on screen, facial expression acting easily tips into 'forced expression' or 'over-acting', while excessive self-restraint produces 'flat acting' with insufficient emotion. Raising questions about such methods, this study analyzed the facial expression acting of actors in genre dramas with a strong commercial orientation. It concludes that facial expression acting in genre dramas is carried out in a typified way: within the visual conventions of screen acting, aesthetic appearance has become the governing standard for actors' facial expressions. In genre dramas, characters' emotions are most often revealed in close-up shots. Within the close-up, the most important expressive medium of the 'zoomed-in face' is the pupil of the eye, and emotions are expressed mainly through movements of the eyes and the surrounding muscles. The second most important medium is the mouth; differences in how far it opens and closes convey diverse emotions together with the expression of the eyes. Tension in the facial muscles strongly hinders emotional expression, and facial muscle movement must be minimized to prevent excessive wrinkles from forming on the surface of the face. Facial expressions are not completed by muscle movement alone; ultimately, muscle movement is the result of emotion, and facial expression acting follows from genuinely felt emotion. For this, the actor needs to go through the process of 'personalizing' a character using Stanislavsky's psychological acting techniques such as 'emotional memory', 'concentration' and 'relaxation', and must understand the characteristics of close-up shots that visually reveal the character's inner world. The study also found that facial expression acting serves as reaction acting that anchors key moments in the unfolding narrative, and that the required expression methods and shot sizes differ between main and supporting roles.

      • KCI-indexed

        A Study on the Effective Delivery of Emotion and State Information through Nonverbal Communication in Visual Media, Based on Viewers' Visual Attention - Focusing on Facial Expressions and Gestures -

        장다윤,김창원,김지윤 한국만화애니메이션학회 2022 만화애니메이션연구 Vol.- No.68

        People express themselves both verbally and nonverbally. Nonverbal expression is communication carried out through the motions of various body parts. Facial expressions and gestures reveal specific mental states, reactions and intentions, delivering the core information that allows the counterpart to infer the expresser's emotion. According to neuropsychological studies, facial expression is linked through neural circuits to the expresser's internal emotional state, so that this internal state information is conveyed to the counterpart; in any situation, people therefore look at the counterpart's face unconsciously to read that information. Motional factors such as gesture and posture likewise vary with the expresser's mental state, so the expresser conveys emotion and state information by controlling arm direction and position, hand shape, motion speed and so on. Building on prior research showing that the gaze is drawn to the facial expressions and gestures that carry the expresser's information, this study examined which of these factors matters more to the audience. Specifically, it analyzed the relationship between facial expression and gesture using an eye-tracking test of where the audience's gaze is attracted and dispersed. The fixation-level results indicate that the eyes focus first on the facial expression as the audience tries to read the expresser's face. The total gaze count, measured as an index of visual interest, shows that the audience's gaze keeps returning to the facial region, out of the habit of trying to read the face, even after the expresser's face is occluded. This agrees with prior findings that audiences grasp the expresser's state and emotion through facial expression. An additional fixation-duration test showed that visual processing is greater for the facial expression than for the gesture, again reflecting the attempt to read information from the face. Overall, the fixation and interest-level tests indicate that the audience's visual interest is higher in the facial expression than in the gesture. Previous visual-attention research on nonverbal expression used characters and still images as test stimuli, so studies analyzing visual attention in video media are scarce. This study therefore compared and analyzed the audience's visual attention to facial expression and gesture in video footage. The results can serve as reference factors for effectively conveying emotion and state information in digital video content production.

      • KCI-indexed

        Phased Visualization of Facial Expression Space Using FCM Clustering

        김성호(Sung-Ho Kim) 한국콘텐츠학회 2008 한국콘텐츠학회논문지 Vol.8 No.2

        This paper presents a phased visualization method for a facial expression space that lets the user control the facial expressions of 3D avatars by selecting a sequence of expression frames from the space. The system builds a 2D expression space from approximately 2,400 facial expression frames covering a neutral expression and 11 motions, and the avatar's expression is controlled in real time as the user navigates this space. Because the space must support expression control ranging from large, radical changes down to subtle ones, a phased visualization method is needed. The space is visualized in phases using fuzzy clustering: initially the 2,400 expressions are clustered around 11 cluster centers, and each time the level increases the number of cluster centers is doubled. Since the computed cluster centers generally do not coincide with actual expressions in the space, the expression closest to each center is taken as that cluster's representative. To evaluate the system, users were asked to perform phased facial expression control of a 3D avatar, and the results are assessed.
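        The clustering scheme this abstract describes (start from 11 fuzzy cluster centers, double the number of centers at each visualization level, and snap each computed center to the nearest real expression frame) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the synthetic frame data, the number of levels, and the helper names `fuzzy_cmeans` and `phased_representatives` are all invented for the example.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, tol=1e-5, seed=None):
    """Minimal fuzzy C-means. X is (n_frames, n_dims); returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships sum to 1 per frame
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means of the frames
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))     # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

def phased_representatives(frames, base_clusters=11, levels=4):
    """For each visualization level, double the cluster count and snap each
    center to the index of the nearest actual expression frame."""
    reps = {}
    for level in range(levels):
        c = base_clusters * (2 ** level)              # 11, 22, 44, 88, ...
        centers, _ = fuzzy_cmeans(frames, c, seed=level)
        # a cluster center rarely coincides with a real frame, so use the closest one
        d = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
        reps[level] = np.unique(d.argmin(axis=0))
    return reps

# Synthetic stand-in for ~2400 expression frames already embedded in a 2D space.
frames = np.random.default_rng(0).random((2400, 2))
print({lvl: idx.shape[0] for lvl, idx in phased_representatives(frames).items()})
```

        Snapping to the nearest actual frame matters because a fuzzy C-means center is a weighted average of frames and generally does not correspond to any captured expression.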

      • KCI-indexed

        Do Hedonic Goods Stimulate Donation among the Socially Excluded?

        오민정(Oh, Min Jung),박기경(Park, Kikyung),박종철(Park, Jong Chul) 한국문화산업학회 2017 문화산업연구 Vol.17 No.3

        This study investigated how product type, when presented as a reference point, affects donation intention under social exclusion (exclusion vs. control), and how donors' attitudes toward donation appeals vary with the recipient's facial expression (sad vs. happy). Previous research has shown that donors who purchased hedonic products report higher donation intention, and that a recipient's sad expression stimulates donors' sympathy and increases donation behavior. However, no study had yet examined the moderating effects of product type and recipient expression under social exclusion, and this study additionally presented product type as a reference point to test a choice-context effect. The results showed that, under social exclusion, product type did not affect donation intention, nor did the recipient's expression act as a moderator. To probe the moderation more deeply, a three-way interaction was examined, testing whether the product was utilitarian or hedonic and whether the recipient's expression was sad or happy. Interestingly, product type and recipient expression jointly influenced donation intention, with the difference most pronounced in the socially excluded group. Specifically, socially excluded donors who were given a utilitarian product as the reference showed higher donation intention when the recipient smiled, whereas those who first encountered a hedonic product showed higher donation intention toward a recipient with a sad expression.

      • KCI-indexed

        Interactive Facial Expression Animation from Motion Data Using CCA Projection

        김성호 ( Sung-ho Kim ) 한국인터넷정보학회 2005 인터넷정보학회논문지 Vol.6 No.1

        This paper describes how to distribute a large amount of high-dimensional facial expression motion data over a two-dimensional space so that an animator can generate facial expression animation by navigating that space and selecting desired expressions in real time. The expression space was built from approximately 2,400 facial expression frames, and its construction reduces to determining the shortest distance between any two expressions. The expression space is a manifold, and the distance between two points in it is approximated as follows: each expression's state is represented by a vector of the pairwise distances between facial markers (a distance matrix), and when two expressions are adjacent, the distance between their state vectors is taken as an approximation of the shortest (manifold) distance between them. Once the distances between adjacent expressions are determined, the Floyd algorithm is used to chain these adjacent distances and compute the shortest distance between any two expression states. To visualize the multi-dimensional expression space, it is projected onto a 2D plane using CCA (Curvilinear Component Analysis). Facial animation is then produced in real time as animators navigate this 2D space through the user interface.
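        The distance construction in this abstract (a per-frame state vector of pairwise marker distances, distances between adjacent expressions taken as local manifold distances, and Floyd's algorithm chaining them into shortest paths) could look roughly like the sketch below. It is an illustrative approximation only: the marker array, the k-nearest-neighbor adjacency rule, and the function names are assumptions, and the final CCA projection to 2D is omitted (a generic embedding such as MDS could stand in for it).

```python
import numpy as np

def expression_state_vectors(markers):
    """markers: (n_frames, n_markers, 3) motion-capture positions.
    Each frame becomes a vector of its pairwise marker distances."""
    n_frames, n_markers, _ = markers.shape
    iu = np.triu_indices(n_markers, k=1)
    diff = markers[:, :, None, :] - markers[:, None, :, :]        # (F, M, M, 3)
    pair = np.linalg.norm(diff, axis=-1)                          # (F, M, M)
    return pair[:, iu[0], iu[1]]                                  # (F, M*(M-1)/2)

def manifold_distances(states, k_neighbors=8):
    """Keep only distances between 'adjacent' expressions (k nearest neighbors),
    then chain them with Floyd-Warshall to approximate manifold distances."""
    n = states.shape[0]
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    adj = np.full((n, n), np.inf)
    np.fill_diagonal(adj, 0.0)
    nearest = np.argsort(d, axis=1)[:, 1:k_neighbors + 1]         # skip self at index 0
    rows = np.repeat(np.arange(n), k_neighbors)
    cols = nearest.ravel()
    adj[rows, cols] = d[rows, cols]
    adj[cols, rows] = d[rows, cols]                               # keep the graph symmetric
    for k in range(n):                                            # Floyd-Warshall relaxation
        adj = np.minimum(adj, adj[:, k:k + 1] + adj[k:k + 1, :])
    return adj

# Synthetic stand-in for the paper's ~2400 frames (kept small here for speed).
rng = np.random.default_rng(0)
states = expression_state_vectors(rng.random((200, 10, 3)))
geodesic = manifold_distances(states)
print(geodesic.shape)                                             # (200, 200) shortest-path distances
```

        With this all-pairs distance matrix in hand, any 2D embedding that preserves the geodesic distances would give the navigable expression map the abstract describes.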

      • KCI-indexed

        Elaboration of Emoticon Features and the Mechanism of Facial Expression Perception

        한현주,최훈 한국인지및생물심리학회 2019 한국심리학회지 인지 및 생물 Vol.31 No.3

        Facial expressions are visual information that reflects our inner states and play an important role in how people communicate with others. Recently, facial expressions have been actively conveyed through emoticons in online communication such as chatting on social media. The current study investigated the underlying mechanism of the perception of facial expressions in emoticons. There are two hypotheses regarding the processing of facial expressions. First, the feature-based processing hypothesis suggests that facial expressions are perceived from information gathered from each facial feature, such as the eyes, nose, and mouth, all of which are processed independently. Second, according to the holistic processing hypothesis, facial expressions are perceived from configural information such as the distance between the eyes. For the perception of facial expressions in real human faces, most studies have supported the holistic processing hypothesis, but results for facial expressions in emoticons are inconsistent. The purpose of the current study is to explore whether the underlying mechanism of facial expression perception in emoticons is influenced by the elaboration level, that is, how much detail the facial features are drawn with. In the experiment, we employed two types of emoticons to manipulate the elaboration level: simple and elaborated. In a simple emoticon, all facial features are represented with a few line segments, whereas in an elaborated emoticon the features are described in detail at the level of a drawing. Participants performed an emotion recognition task in which a human face or an emoticon showing one of five basic facial expressions (anger, fear, happiness, sadness, and surprise) was presented in an aligned or misaligned state. For the elaborated emoticons, the accuracy of facial expression recognition was higher in the aligned condition than in the misaligned condition, whereas for the simple emoticons there was no significant difference between the two conditions. This suggests that the higher the level of elaboration of an emoticon's features, the stronger the effect of holistic processing.

      • KCI-indexed

        Interpretation Biases of the Social Anxiety Group toward Facial Expression Emotional Stimuli

        이대현,백용매 한국임상심리학회 2013 Korean Journal of Clinical Psychology Vol.32 No.1

        The aim of this study was to investigate the interpretation biases of a social anxiety group using facial expression emotional stimuli. The SADS (Social Avoidance and Distress Scale) was administered to 636 college students, and a high social anxiety group (32 persons) and a low social anxiety group (34 persons) were selected based on their scores. A social anxiety situation was induced for all participants in the experiment; then single facial expression stimuli and multiple facial stimuli were presented, and participants rated on a 1-5 scale how positively or negatively they interpreted each stimulus. A three-way ANOVA was performed to examine differences in interpretation biases for the single and multiple stimuli between the high and low social anxiety groups. The results were as follows: the high social anxiety group showed negative interpretation biases for the negative and neutral emotion types of the multiple facial stimuli, whereas no group differences were observed for the single stimuli. In particular, the high social anxiety group showed stronger negative interpretation biases for ambiguous neutral facial stimuli. This means that, compared to the low social anxiety group, participants with high social anxiety interpreted negative and neutral facial expressions more negatively when many facial stimuli were presented. Overall, these results imply that the high social anxiety group shows characteristic interpretation biases in processing facial expression emotional stimuli. The implications and limitations of this study, along with suggestions for further research, are discussed.

      • SCI / SCIE / SCOPUS

        Different patterns in mental rotation of facial expressions in complex regional pain syndrome patients

        Lee, Won Joon,Choi, Soo-Hee,Jang, Joon Hwan,Moon, Jee Youn,Kim, Yong Chul,Noh, EunChung,Shin, Jung Eun,Shin, HyunSoon,Kang, Do-Hyung Williams & Wilkins Co 2017 Medicine Vol.96 No.39

        Supplemental Digital Content is available in the text. Although facial pain expressions are considered the most visible pain behaviors, the association between pain intensity and facial pain expression is known to be weak in chronic pain. The authors hypothesized that facial pain expressiveness is altered in chronic pain and investigated this with a mental rotation task using various facial expressions, which appears to be associated with actual facial movements. Four types of facial stimuli were used: upper (tightening of the eyes and furrowed brows) and lower (raising the upper lip) pain-specific facial expressions, and upper (eyeball deviation) and lower (tongue protrusion) facial movements that do not use facial muscles. Participants were asked to judge whether a stimulus presented at various rotation angles was left- or right-sided. The authors tested 40 patients with complex regional pain syndrome (CRPS) (12 women, age range 21-60) and 35 healthy controls (15 women, age range 26-64). In an analysis of reaction time (RT) using a linear mixed model, patients were slower to react to all types of stimuli (P = .001), and a significant interaction between group (patient or control) and type of facial expression was observed (P = .01). In the post hoc analysis, only patients showed longer RTs to raising of the upper lip than to other types of facial expressions. This reflects a deficit in mental rotation, particularly for lower-face pain expressions, in CRPS, which may be related to the psychosocial aspects of pain. However, comprehensive intra- and interpersonal influences should be investigated further.

      • KCI-indexed

        A Study of the Effect of Non-verbal Communication on the Customer’s Emotional Responses and Customer Loyalty in Service Failure Situations in Bakery Cafés - Based on Comparison between the Service Provider’s Uniform and Expression -

        서광열(Suh, Kwang-Yul) 한국외식경영학회 2016 외식경영연구 Vol.19 No.5

        The aim of this research was to investigate the effect of a service provider's non-verbal communication on customers' emotional responses and customer loyalty in a bakery café, a format that is developing rapidly with the growth of the food service industry in South Korea. For this purpose, videos of the service provider's non-verbal communication in a bakery café were produced, focusing on appearance and body language, and a survey was conducted. The experimental videos consisted of four manipulated clips combining the uniform (a physical appearance element) and the facial expression (a body language element). In the experiments, clip (a), a tidy uniform with a bright facial expression, and clip (c), an untidy uniform with a bright facial expression, produced the same results, while clip (b), a tidy uniform with a blank facial expression, and clip (d), an untidy uniform with a blank facial expression, likewise produced the same results. These results show that facial expressions affected customers' emotional responses and loyalty, and that the service provider's facial expression, a body language element of non-verbal communication, matters more than the uniform, an appearance element. In other words, the image conveyed by employees' facial expressions may elicit positive customer loyalty even in the service failure situations that can occasionally occur in bakery cafés. This study is meaningful in that it compared the two non-verbal communication elements, facial expression and uniform, to determine which has the greater effect on customers' emotional responses and loyalty, a comparison not made in previous studies.

      • KCI-indexed

        The Effects of Emotion Dimensions and Facial Presentation Areas on Facial Expression Recognition: Focusing on Gender Differences

        송인혜,김혜리,조경자 한국여성심리학회 2008 한국심리학회지 여성 Vol.13 No.2

        In this study, we examined the ability to recognize facial expressions as a function of gender, emotion type (basic vs. complex emotions), and facial presentation area (whole face vs. eyes only). We also investigated whether this ability differs according to the dimensions of emotion (pleasantness/unpleasantness and arousal/relaxation). A total of 32 facial expressions of emotional states, each relatively strongly linked to an emotional vocabulary item, were presented. In each trial, an emotion word and four facial expressions were shown, and participants were asked to choose the one facial expression that matched the word. The results showed that participants judged basic emotions better than complex emotions, whole faces better than eyes alone, the pleasantness dimension better than the unpleasantness dimension, and the relaxation dimension better than the arousal dimension. Women were also better at judging facial expressions than men. This study suggests that gender, emotion type, facial presentation area, and emotion dimension all affect the recognition of facial expressions.
