RISS Academic Research Information Service


Multimodal Korean Emotion Recognition with Consistency Regularization


      https://www.riss.kr/link?id=A107947519


Multilingual Abstract

      Recently, demand is growing for artificial intelligence-based voice services that identify and respond appropriately to user needs from speech. In particular, technology for recognizing emotion, the non-verbal information in the human voice, is receiving significant attention as a way to improve the quality of voice services. Deep learning-based speech emotion recognition models are therefore actively studied on rich English data, and a multimodal emotion recognition framework with a speech recognition module has been proposed to exploit both audio and text information. However, such a framework suffers in real environments with ambient noise: its performance degrades as the speech recognition rate drops. In addition, applying deep learning-based models to Korean emotion recognition is challenging because, unlike for English, emotion data is not abundant. To address this drawback, we propose a consistency regularization learning methodology that lets the model account for the difference between the actual spoken content and the text extracted by the speech recognition module. Considering the limited Korean emotion data, we also adapt models pre-trained in a self-supervised way, such as Wav2vec 2.0 and HanBERT, to the framework. Our experimental results show that the framework with pre-trained models outperforms a model trained on speech alone on a Korean multimodal emotion dataset, and that the proposed learning methodology minimizes the performance degradation caused by poorly performing speech recognition modules.
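      The core idea of the consistency regularization described above, penalizing disagreement between emotion predictions made from the reference transcript and from the (possibly noisy) ASR output, can be sketched as a simple auxiliary loss. The paper's exact formulation is not given in this abstract; the KL-divergence form, the function name `consistency_loss`, and the toy shapes below are illustrative assumptions, written with NumPy:

      ```python
      import numpy as np

      def softmax(x):
          # Numerically stable softmax over the last axis.
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      def consistency_loss(logits_ref, logits_asr):
          """Mean KL divergence KL(p_ref || p_asr) between emotion
          distributions predicted from the reference transcript branch
          and from the ASR-transcript branch of the model."""
          p = softmax(logits_ref)   # target: reference-text predictions
          q = softmax(logits_asr)   # prediction: ASR-text predictions
          return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

      # Toy example: 2 utterances, 7 emotion classes.
      rng = np.random.default_rng(0)
      ref_logits = rng.normal(size=(2, 7))
      asr_logits = ref_logits + 0.5 * rng.normal(size=(2, 7))  # noisy ASR shifts logits
      reg = consistency_loss(ref_logits, asr_logits)  # > 0 when branches disagree
      ```

      In training, a term like this would be added to the usual cross-entropy on emotion labels, weighted by a hyperparameter, so the model stays robust when the speech recognition module produces imperfect text.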


      References

      1 Schneider, S., "wav2vec: Unsupervised Pre-training for Speech Recognition"

      2 Baevski, A., "wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations"

      3 Baevski, A., "vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations"

      4 Karlgren, J., "Usefulness of Sentiment Analysis" 2012

      5 Wu, M., "Transformer Based End-to-End Mispronunciation Detection and Diagnosis" 3954-3958, 2021

      6 Paleari, M., "Towards Multimodal Emotion Recognition: A New Approach" 174-181, 2010

      7 Jiao, X., "TinyBERT: Distilling BERT for Natural Language Understanding" 2020

      8 Seehapoch, T., "Speech Emotion Recognition Using Support Vector Machines" 2013

      9 Nwe, T. L., "Speech Emotion Recognition Using Hidden Markov Models" 41 (41): 603-623, 2003

      10 Han, K., "Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine" 2014


      11 Peng, Z., "Shrinking Bigfoot: Reducing wav2vec 2.0 Footprint"

      12 Liu, Y., "RoBERTa: A Robustly Optimized BERT Pretraining Approach"

      13 Kolesnikov, A., "Revisiting Self-Supervised Visual Representation Learning" 2019

      14 Kim, K.-H., "Predicting the Success of Bank Telemarketing Using Deep Convolutional Neural Network" 2015

      15 Tsai, Y.-H. H., "Multimodal Transformer for Unaligned Multimodal Language Sequences" 2019

      16 Yoon, S., "Multimodal Speech Emotion Recognition Using Audio and Text" 2018

      17 Majumder, N., "Multimodal Sentiment Analysis Using Hierarchical Fusion with Context Modeling" 161: 124-133, 2018

      18 Gu, Y., "Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment" 2225-2235, 2018

      19 Siriwardhana, S., "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion" 8: 176274-176285, 2020

      20 Khan, A. U., "MMFT-BERT: Multimodal Fusion Transformer with BERT Encodings for Visual Question Answering"

      21 Tsai, Y.-H. H., "Learning Factorized Multimodal Representations"

      22 Xu, H., "Learning Alignment for Multimodal Emotion Recognition from Speech"

      23 Bang, J.-U., "KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition" 10 (10): 6936-, 2020

      24 Siriwardhana, S., "Jointly Fine-Tuning 'BERT-like' Self Supervised Models to Improve Multimodal Speech Emotion Recognition"

      25 Park, J., "HanBERT: Pretrained BERT Model for Korean"

      26 Selvaraju, R. R., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization" 2017

      27 Fayek, H. M., "Evaluating Deep Learning Architectures for Speech Emotion Recognition" 92: 60-68, 2017

      28 Bojanowski, P., "Enriching Word Vectors with Subword Information" 5: 135-146, 2017

      29 Kwon, O.-W., "Emotion Recognition by Speech Signals" 2003

      30 Pepino, L., "Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings"

      31 Sanh, V., "DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter"

      32 Rawat, S., "Digital Life Assistant Using Automated Speech Recognition" 2014

      33 Devlin, J., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"

      34 Pantic, M., "Affective Multimodal Human-Computer Interaction" 669-676, 2005

      35 Kingma, D. P., "Adam: A Method for Stochastic Optimization" 2015

      36 Chen, T., "A Simple Framework for Contrastive Learning of Visual Representations" 119: 1597-1607, 2020

      37 McDuff, D., "A Multimodal Emotion Sensing Platform for Building Emotion-Aware Applications"


      Journal History

      Date        Event         Detail                                                          KCI status
      2021        Evaluation    Subject to continued evaluation (listing maintained)
      2016-01-01  Evaluation    Selected as an excellent listed journal (continued evaluation)
      2013-01-01  Evaluation    Listed journal maintained (other)                               KCI listed
      2012-05-25  Title change  Foreign title: Journal of the Korean Insitute of Industrial Engineers -> Journal of the Korean Institute of Industrial Engineers  KCI listed
      2010-01-01  Evaluation    Listed journal maintained                                       KCI listed
      2008-01-01  Evaluation    Listed journal maintained                                       KCI listed
      2006-01-01  Evaluation    Listed journal maintained                                       KCI listed
      2004-01-01  Evaluation    Listed journal maintained                                       KCI listed
      2001-07-01  Evaluation    Selected as a listed journal (2nd candidacy evaluation)         KCI listed
      1999-01-01  Evaluation    Selected as a candidate journal (new evaluation)                KCI candidate

      Journal Citation Information

      Base year: 2016
      WOS-KCI integrated IF (2-year): 0.65
      KCI IF (2-year): 0.65
      KCI IF (3-year): 0.66
      KCI IF (4-year): 0.56
      KCI IF (5-year): 0.47
      Centrality index (3-year): 1.026
      Immediacy index: 0.14
