RISS (Research Information Sharing Service)

KCI-listed journal (KCI등재)

An Intelligent Audience Emotion Judgment Model for Implementing Personalized Exhibition Services (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

      https://www.riss.kr/link?id=A60115300

Additional Information (부가정보)

Multilingual Abstract

Recently, with the introduction of high-tech equipment into exhibitions, much attention has been drawn to interactive exhibits, which can multiply the effect of an exhibition through interaction with the audience. An interactive exhibition also makes it possible to measure a variety of audience reactions. Among these reactions, this research uses changes in facial features that can be collected in an interactive exhibition space. We develop an artificial neural network-based prediction model that predicts the audience's response by measuring how facial features change when the audience receives a stimulus from a non-excited state. The emotional state of the audience is represented with a Valence-Arousal model.

The research proposes an overall framework composed of six steps. The first step collects the data for modeling; the data were gathered from visitors who participated in the 2012 Seoul DMC Culture Open and were used for the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of the artificial neural network model. The fourth step uses statistical techniques to select the independent variables that affect the dependent variable. The fifth step builds the artificial neural network model and trains it using the training and test sets. The sixth and final step validates the prediction performance of the neural network model on a separate validation set.

The proposed model was compared with a statistical prediction model to assess its performance. Although the data set contained considerable noise, the proposed model produced better results than a multiple regression model. If such a prediction model of audience reaction were used in a real exhibition, it could provide countermeasures and services appropriate to the reaction of the audience viewing the exhibits. Specifically, if the audience's arousal toward an exhibit is low, actions to increase arousal could be taken, for instance recommending other preferred content to the visitor or using light or sound to draw attention to the exhibit. In other words, future exhibitions could be planned to satisfy a variety of audience preferences, fostering a personalized environment in which visitors can concentrate on the exhibits.

However, the model proposed in this research still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors to a real exhibition, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which lowers accuracy. In further research, data will be collected in a more controlled experimental environment, and work to increase the prediction accuracy of the model will continue. Second, changes in facial expression alone are probably not enough to extract audience emotions; combining facial expression with other responses, such as sound or audience behavior, would yield better results.
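
The paper itself contains no code, but the six-step framework described above can be sketched roughly as follows. This is a minimal illustration under assumptions that are not from the paper: the 64 compensated facial-feature values and the valence/arousal targets are synthetic placeholders, scikit-learn's MLPRegressor stands in for the authors' unspecified artificial neural network, a univariate F-test stands in for the statistical feature-selection technique, and the low-arousal threshold at the end is purely hypothetical.

```python
# Rough sketch of the six-step framework from the abstract (not the authors' code).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))   # step 2: 64 compensated facial features (synthetic here)
y = rng.normal(size=(300, 2))    # step 3: dependent variables (valence, arousal)

# Steps 1 and 6: split the collected data into a learning set and a held-out validation set.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 4: keep the features most associated with the target (univariate F-test on
# valence, as a simple stand-in for the statistical selection used in the paper).
selector = SelectKBest(f_regression, k=16).fit(X_train, y_train[:, 0])
X_train_sel, X_val_sel = selector.transform(X_train), selector.transform(X_val)

# Step 5: build and train the neural-network prediction model.
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X_train_sel, y_train)

# Step 6: validate, and compare against multiple regression as the abstract describes.
baseline = LinearRegression().fit(X_train_sel, y_train)
print("ANN MSE:       ", mean_squared_error(y_val, ann.predict(X_val_sel)))
print("Regression MSE:", mean_squared_error(y_val, baseline.predict(X_val_sel)))

# Application idea from the abstract: if predicted arousal for a visitor is low,
# the exhibit could recommend other content or cue light/sound to regain attention.
valence, arousal = ann.predict(X_val_sel[:1])[0]
if arousal < 0.0:  # hypothetical threshold, for illustration only
    print("Low arousal predicted: recommend other content or cue light/sound.")
```

The comparison against LinearRegression mirrors the abstract's benchmark against multiple regression analysis; with the synthetic data above the two scores are not meaningful, only the shape of the pipeline is.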

Journal History (학술지 이력)

Date       | Event                | Details                                                                                  | Listing
2027       | Evaluation scheduled | Subject to re-accreditation review                                                       | -
2021-01-01 | Evaluation           | Listed journal status maintained (re-accreditation)                                      | KCI listed
2018-01-01 | Evaluation           | Listed journal status maintained                                                         | KCI listed
2015-03-25 | Society name change  | English name: unregistered -> Korea Intelligent Information Systems Society              | KCI listed
2015-03-17 | Journal title change | Foreign-language title: unregistered -> Journal of Intelligence and Information Systems  | KCI listed
2015-01-01 | Evaluation           | Listed journal status maintained                                                         | KCI listed
2011-01-01 | Evaluation           | Listed journal status maintained                                                         | KCI listed
2009-01-01 | Evaluation           | Listed journal status maintained                                                         | KCI listed
2008-02-11 | Journal title change | Korean title: 한국지능정보시스템학회 논문지 -> 지능정보연구                                | KCI listed
2007-01-01 | Evaluation           | Listed journal status maintained                                                         | KCI listed
2004-01-01 | Evaluation           | Selected as listed journal (candidate stage 2)                                           | KCI listed
2003-01-01 | Evaluation           | Passed candidate stage 1                                                                 | KCI candidate
2001-07-01 | Evaluation           | Selected as candidate journal (new evaluation)                                           | KCI candidate

Journal Citation Information (학술지 인용정보)

Base year: 2016
WOS-KCI integrated impact factor (2-year): 1.51
KCI IF (2-year): 1.51
KCI IF (3-year): 1.99
KCI IF (4-year): 1.78
KCI IF (5-year): 1.54
Centrality index (3-year): 2.674
Immediacy index: 0.38
