RISS (Research Information Sharing Service)

KCI Registered

Hyper Parameter Tuning Method based on Sampling for Optimal LSTM Model

https://www.riss.kr/link?id=A106028666


Additional Information

Multilingual Abstract

As computer performance increases, deep learning, which once faced technical limitations, is being used in increasingly diverse ways. In many fields it has contributed to creating added value, and as its applications diversify it is trained on ever larger amounts of data. Obtaining a better-performing model therefore takes longer than before, which makes it necessary to find the best-performing model more quickly. In artificial neural network modeling, a tuning process that varies the elements of the network model is used to improve performance. Apart from Grid Search and Manual Search, which are widely used as tuning methods, most methodologies have been developed around heuristic algorithms. A heuristic algorithm can produce a result in a short time, but that result is likely to be a local optimum. Obtaining the global optimum eliminates the possibility of settling for a local one. Although the Brute Force Method is the usual way to find a global optimum, it is not applicable here because the number of hyperparameter combinations is practically unbounded. In this paper, we use a statistical technique to reduce the number of cases that must be examined, so that the global optimum can be found.
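
The abstract describes the approach only at a high level; the statistical sampling technique itself is developed in the paper body (Section III, "The Proposed Scheme"), not here. As a rough illustration of the general idea of evaluating only a random sample of an otherwise intractable hyperparameter grid, the following minimal Python sketch may help. The grid values, the validation_loss stand-in, and the sample size are illustrative assumptions, not details taken from the paper.

    import itertools
    import random

    random.seed(0)

    # Hypothetical LSTM hyperparameter grid; names and ranges are
    # illustrative assumptions, not taken from the paper.
    grid = {
        "hidden_units":  [16, 32, 64, 128, 256],
        "window_size":   [5, 10, 20, 40],
        "batch_size":    [16, 32, 64],
        "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3],
    }

    def validation_loss(params):
        """Stand-in objective. In practice this would train an LSTM with
        `params` and return its validation loss; here a synthetic surface
        with one global minimum plays that role."""
        return ((params["hidden_units"] - 64) ** 2 / 4096
                + abs(params["learning_rate"] - 1e-3) * 100
                + abs(params["window_size"] - 20) / 40
                + random.gauss(0, 0.01))  # evaluation noise

    # The full Cartesian product is 5 * 4 * 3 * 4 = 240 combinations here,
    # but easily millions for a realistic grid, which is what rules out
    # brute force.
    all_combos = [dict(zip(grid, values))
                  for values in itertools.product(*grid.values())]

    # Evaluate only a random sample instead of every combination.
    sample = random.sample(all_combos, k=30)
    best = min(sample, key=validation_loss)
    print("best sampled configuration:", best)

For intuition on why a modest sample can work: if configurations are drawn uniformly at random, the probability that at least one of n draws lands in the best fraction e of the grid is 1 - (1 - e)^n, so about 59 draws already give a 95% chance of hitting the top 5% (the standard random-search argument; cf. reference 6 below). The abstract indicates that the paper refines this kind of sampling with a statistical technique aimed at recovering the global optimum rather than a merely good configuration.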


Table of Contents

• Abstract
• I. Introduction
• II. Preliminaries
• III. The Proposed Scheme
• IV. Conclusions
• REFERENCES

References

1 C. Paar, "Understanding cryptography: a textbook for students and practitioners" Springer Science & Business Media 7-, 2009

2 H. K. Lam, "Tuning of the structure and parameters of neural network using an improved genetic algorithm" 25-30, 2001

3 A. Graves, "Supervised sequence labelling with recurrent neural networks" Springer 35-42, 2012

4 R. Socher, "Recursive deep models for semantic compositionality over a sentiment treebank" 1631-1642, 2013

5 T. Mikolov, "Recurrent neural network based language model" 2010

6 J. Bergstra, "Random search for hyper-parameter optimization" 13: 281-305, 2012

7 R. V. Hogg, "Probability and statistical inference" Pearson Education International 204-205, 2015

8 J. Snoek, "Practical Bayesian optimization of machine learning algorithms" 2951-2959, 2012

9 B. Qolomany, "Parameters optimization of deep learning models using particle swarm optimization" 1285-1290, 2017

10 S. Hochreiter, "Long short-term memory" 9 (9): 1735-1780, 1997

11 R. E. Korf, "Depth-first iterative-deepening: An optimal admissible tree search" 27 (27): 97-109, 1985

12 L. Deng, "Deep learning: methods and applications" 7 (7): 197-387, 2014

13 T. Pukkala, "A heuristic optimization method for forest planning and decision making" 8 (8): 560-570, 1993

14 C. Zhang, "A compound structure of ELM based on feature selection and parameter optimization using hybrid backtracking search algorithm for wind speed forecasting" 143: 360-376, 2017

15 Y. Xia, "A boosted decision tree approach using Bayesian hyper-parameter optimization for credit scoring" 78: 225-241, 2017

16 A. Tharwat, "A BA-based algorithm for parameter optimization of support vector machine" 93: 13-22, 2017




Journal History

Date | Event | Detail | Status
2026 | Evaluation scheduled | Subject to re-accreditation review (re-accreditation) | -
2020-01-01 | Evaluation | Registered journal maintained (re-accreditation) | KCI Registered
2017-01-01 | Evaluation | Registered journal maintained (continued evaluation) | KCI Registered
2013-01-01 | Evaluation | Registered journal maintained (registration maintained) | KCI Registered
2010-01-01 | Evaluation | Registered journal maintained (registration maintained) | KCI Registered
2007-01-01 | Evaluation | Registered journal selected (candidate stage 2) | KCI Registered
2006-01-01 | Evaluation | Candidate stage 1 pass (candidate stage 1) | KCI Registration Candidate
2004-07-01 | Evaluation | Candidate journal selected (new evaluation) | KCI Registration Candidate

Journal Citation Information

Base year: 2016
WOS-KCI Combined IF (2yr): 0.44
KCIF (2yr): 0.44
KCIF (3yr): 0.44
KCIF (4yr): 0.43
KCIF (5yr): 0.38
Centrality Index (3yr): 0.58
Immediacy Index: 0.15
