RISS Academic Research Information Service

      Indexed in: KCI, SCI, SCIE, SCOPUS

      Electroencephalography-based imagined speech recognition using deep long short-term memory network

      https://www.riss.kr/link?id=A108226809

      Additional Information

      Multilingual Abstract

      This article proposes a subject-independent application of brain–computer interfacing (BCI). A 32-channel Electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network has been adopted to recognize the above signals in seven EEG frequency bands individually in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequencies. To reinforce our findings, the above work has been compared by models based on the gated recurrent unit (GRU), convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% more average accuracy in the alpha band and 74.54% less average NPT than CNN. The maximum accuracy of GRU was 8.34% less than the LSTM network. Deep networks performed better than traditional classifiers.
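
      The abstract describes the pipeline only at a high level (band-wise analysis of 32-channel EEG, a deep LSTM classifier over five imagined-speech classes, and comparisons against GRU, CNN, and conventional baselines), and this record contains no implementation details. The sketch below is therefore an illustrative reconstruction, not the authors' code: it isolates the alpha band (8-13 Hz), which the abstract reports as the most discriminative, and classifies the filtered trial with a stacked LSTM. The sampling rate, trial length, hidden size, layer count, and dropout rate are assumptions chosen for the example.

# Illustrative sketch only (not the authors' implementation): alpha-band filtering of
# 32-channel EEG followed by a stacked ("deep") LSTM that scores the five imagined-speech
# classes (sos, stop, medicine, washroom, come-here). FS, trial length, hidden size,
# layer count, and dropout are assumed values chosen for the example.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

FS = 128          # assumed sampling rate in Hz (not stated in this record)
N_CHANNELS = 32   # 32-channel EEG, per the abstract
N_CLASSES = 5     # four words + one phrase

def alpha_band(eeg, fs=FS, low=8.0, high=13.0, order=4):
    """Zero-phase band-pass filter isolating the alpha band (8-13 Hz)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

class DeepLSTMClassifier(nn.Module):
    """Two stacked LSTM layers over time; the 32 channels are the per-step features."""
    def __init__(self, n_channels=N_CHANNELS, hidden=128, layers=2, n_classes=N_CLASSES):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=layers, batch_first=True, dropout=0.3)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the final time step

if __name__ == "__main__":
    # Toy trial: 2 seconds of random data standing in for one imagined-speech recording.
    trial = np.random.randn(N_CHANNELS, 2 * FS)                  # (channels, samples)
    filtered = np.ascontiguousarray(alpha_band(trial).T)         # (time, channels)
    x = torch.tensor(filtered[None], dtype=torch.float32)        # (1, time, channels)
    logits = DeepLSTMClassifier()(x)
    print(logits.shape)                                          # torch.Size([1, 5])

      In the study's setup, the abstract indicates the same kind of network is evaluated separately for each of the seven EEG frequency bands and nine brain regions and then compared against GRU, CNN, and six conventional classifiers; the sketch above covers only a single band over all channels.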

      References

      1 K. Khanna, "“The locked-in syndrome”: Can it be unlocked?" 2 (2): 96-99, 2011

      2 S. Martin, "Word pair classification during imagined speech using direct brain recordings" 6 : 25803-, 2016

      3 X. Glorot, "Understanding the difficulty of training deep feedforward neural networks" 249-256, 2010

      4 P. Agarwal, "Transforming imagined thoughts into speech using a covariance-based subset selection method" 59 (59): 180-183, 2021

      5 G. H. Klem, "The ten twenty electrode system of the international federation. The international federation of clinical neurophysiology" 52 : 3-6, 1999

      6 E. F. González-Castañeda, "Sonification and textification: Proposing methods for classifying unspoken words from EEG signals" 37 : 82-91, 2017

      7 C. S. Dasalla, "Single-trial classification of vowel speech imagery using common spatial patterns" 22 (22): 1334-1339, 2009

      8 P. Agarwal, "Silent speech classification based upon various feature extraction methods" 16-20, 2020

      9 Sandeep Kumar, "Real‐time implementation and performance evaluation of speech classifiers in speech analysis‐synthesis" ETRI 43 (43): 82-94, 2021

      10 J. Hazarika, "Neural modulation in action video game players during inhibitory control function: An EEG study using discrete wavelet transform" 45 : 144-150, 2018

      11 Dipti Pawar ; Sudhir Dhage, "Multiclass covert speech classification using extreme learning machine" Korean Society of Medical and Biological Engineering 10 (10): 217-226, 2020

      12 M. N. I. Qureshi, "Multiclass classification of word imagination speech with hybrid connectivity features" 65 (65): 2168-2177, 2018

      13 S. Hochreiter, "Long Short-Term Memory" 9 (9): 1735-1789, 1997

      14 C. H. Nguyen, "Inferring imagined speech using EEG signals: A new approach using Riemannian manifold features" 15 (15): 016002-, 2017

      15 A. A. Torres-García, "Implementing a fuzzy inference system in a multi-objective EEG channel selection model for imagined speech classification" 59 : 1-12, 2016

      16 K. Brigham, "Imagined speech classification with EEG signals for silent communication: A preliminary investigation into synthetic telepathy" 1-4, 2010

      17 M. D’Zmura, "Human-computer interaction. New trends Vol. 5610" Springer 40-48, 2009

      18 P. Saha, "Hierarchical deep feature learning for decoding imagined speech from EEG" 10019-10020, 2019

      19 T. K. Reddy, "HJB-equation-based optimal learning scheme for neural networks with applications in brain-computer interface" 4 (4): 159-170, 2020

      20 J. M. Lilly, "Generalized Morse wavelets as a superfamily of analytic wavelets" 60 (60): 6036-6041, 2012

      21 S. Wellington, "Fourteen-channel EEG with Imagined Speech (FEIS) dataset, v1.0" University of Edinburgh 3554128-, 2019

      22 C. Ju, "Federated transfer learning for EEG signal classification" 3040-3045, 2020

      23 A. M. Saxe, "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks" 2013

      24 P. Kumar, "Envisioned speech recognition using EEG sensors" 22 : 185-199, 2018

      25 Prabhakar Agarwal, "Electroencephalography based imagined alphabets classification using spatial and time‐domain features" Wiley 32 (32): 111-122, 2021

      26 A. Porbadnigk, "EEG-based speech recognition- impact of temporal effects" 376-381, 2009

      27 P. Kaushik, "EEG-based age and gender prediction using deep BLSTM-LSTM network model" 19 (19): 2634-2641, 2019

      28 S. Siuly, "EEG signal analysis and classification: Techniques and applications" Springer 2016

      29 P. Saha, "Deep learning the EEG manifold for phonological categorization from active thoughts" 2762-2766, 2019

      30 S. Kellis, "Decoding spoken words using local field potentials recorded from the cortical surface" 7 (7): 056007-, 2010

      31 D. Dash, "Decoding imagined and spoken phrases from non-invasive neural (MEG) signals" 14 : 290-, 2020

      32 L. Marple, "Computing the discrete-time “analytic” signal via FFT" 47 (47): 2600-2603, 1999

      33 S. Zhao, "Classifying phonological categories in imagined and articulated speech" 992-996, 2015

      34 M.-O. Tamm, "Classification of vowels from imagined speech with convolutional neural networks" 9 (9): 46-, 2020

      35 E. T. Esfahani, "Classification of primitive shapes using brain-computer interfaces" 44 (44): 1011-1019, 2012

      36 C. Cooney, "Classification of imagined spoken word-pairs using convolutional neural networks" 338-343, 2019

      37 P. Kant, "CWT based transfer learning for motor imagery classification for brain computer interfaces" 345 : 108886-, 2020

      38 R. A. Ramadan, "Brain computer interface:Control signals review" 223 : 26-44, 2017

      39 이미란 ; 류재환 ; 김덕환, "Automated epileptic seizure waveform detection method based on the feature of the mean slope of wavelet coefficient counts using a hidden Markov model and EEG signals" ETRI 42 (42): 217-229, 2020

      40 A. M. Choudhari, "An electrooculography based human machine interface for wheelchair control" 39 (39): 673-685, 2019

      41 O. Özdenizci, "Adversarial deep learning in EEG biometrics" 26 (26): 710-714, 2019

      42 W. He, "A wireless BCI and BMI system for wearable robots" 46 (46): 936-946, 2016

      43 A. Khosla, "A comparative analysis of signal processing and classification methods for different applications based on EEG signals" 40 (40): 649-690, 2020

      44 Sandeep Kumar, "A CNN based graphical user interface controlled by imagined movements" Springer Science and Business Media LLC 2021

      45 Ki-Hong Kim ; Hong Kee Kim ; Jong-Sung Kim ; Wookho Son ; 이수영, "A Biosignal-Based Human Interface Controlling a Power-Wheelchair for People with Motor Disabilities" ETRI 28 (28): 111-114, 2006

      Analysis Information

      View
      Detail record views: 0

      Usage
      Full-text downloads: 0
      Loan requests: 0
      Copy requests: 0
      EDDS requests: 0

      Citation Information

      Journal History

      Date          Event type             Event details                                                              Index status
      2023          Evaluation scheduled   Eligible for overseas-database journal evaluation (overseas-indexed journal review)
      2020-01-01    Evaluation             KCI-indexed journal status maintained (overseas-indexed journal review)    KCI-indexed
      2005-09-27    Journal registration   Korean title: ETRI Journal / Foreign title: ETRI Journal                   KCI-indexed
      2003-01-01    Evaluation             Indexed in SCI (new evaluation)                                            KCI-indexed

      Journal Citation Information

      Base year: 2016
      WOS-KCI combined IF (2-year): 0.78
      KCI IF (2-year): 0.28
      KCI IF (3-year): 0.57
      KCI IF (4-year): 0.47
      KCI IF (5-year): 0.42
      Centrality index (3-year): 0.4
      Immediacy index: 0.06
