RISS Academic Research Information Service


      KCI-indexed, SCOPUS

      Comparison of Reinforcement Learning Activation Functions to Improve the Performance of the Racing Game Learning Agent


      https://www.riss.kr/link?id=A107106538


      Additional Information

      Multilingual Abstract

      Recently, research has been actively conducted to create artificial intelligence agents that learn games
      through reinforcement learning. Several factors determine performance when an agent learns a game, and the
      choice of activation function is one of the important ones. This paper compares and evaluates which
      activation function gives the best results when an agent learns a game through reinforcement learning in a
      2D racing game environment. We built the agent using a reinforcement learning algorithm and a neural
      network, and evaluated the activation functions by swapping them in the network. We measured the reward,
      the output of the advantage function, and the output of the loss function during training and testing.
      Based on this performance evaluation, we identified the activation function with which the agent learns the
      game best; the difference between the best and the worst was 35.4%.
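
      The record does not include the authors' implementation, so the sketch below only illustrates the kind of
      comparison the abstract describes: building the same small network repeatedly while varying nothing but the
      activation function. The PyTorch actor-critic layout, the layer sizes, the placeholder observation
      dimension, and the class name ActorCritic are assumptions for illustration, not details taken from the
      paper.

```python
# Illustrative sketch only (assumed details, not the authors' code): build one small
# actor-critic network per candidate activation so that the activation function is
# the only variable between runs.
import torch
import torch.nn as nn

# Candidate activations corresponding to functions discussed in the references.
ACTIVATIONS = {
    "relu": nn.ReLU,
    "leaky_relu": nn.LeakyReLU,
    "elu": nn.ELU,
    "selu": nn.SELU,
    "swish": nn.SiLU,  # SiLU is the Swish activation
}


class ActorCritic(nn.Module):
    """Small actor-critic network; only the activation function varies between runs."""

    def __init__(self, obs_dim: int, n_actions: int, activation: type):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 128), activation(),
            nn.Linear(128, 128), activation(),
        )
        self.policy = nn.Linear(128, n_actions)  # action logits
        self.value = nn.Linear(128, 1)           # state-value estimate (used for the advantage)

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        return self.policy(h), self.value(h)


if __name__ == "__main__":
    obs_dim, n_actions = 32, 5  # placeholder sizes for a flattened racing observation
    for name, act in ACTIVATIONS.items():
        net = ActorCritic(obs_dim, n_actions, act)
        logits, value = net(torch.randn(1, obs_dim))
        print(f"{name:10s} -> logits {tuple(logits.shape)}, value {tuple(value.shape)}")
```

      Under these assumptions, each variant would then be trained with the same reinforcement learning algorithm,
      hyperparameters, and 2D racing environment, so that differences in reward, advantage, and loss curves can
      be attributed to the activation function alone.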


      References

      1 G. C. Tripathi, "Swish activation based deep neural network predistorter for RFPA" 1239-1242, 2019

      2 Z. Huang, "SNDCNN: self-normalizing deep CNNs with scaled exponential linear units for speech recognition"

      3 G. Brockman, "OpenAI gym"

      4 M. N. Moghadasi, "Evaluating Markov decision process as a model for decision making under uncertainty environment" 2446-2450, 2007

      5 Z. Wang, "Efficient deep convolutional neural networks using CReLU for ATR with limited SAR images" 2019 (2019): 7615-7618, 2019

      6 L. Lu, "Dying ReLU and initialization: theory and numerical examples"

      7 X. Zhang, "Dilated convolution neural network with LeakyReLU for environmental sound classification" 1-5, 2017

      8 A. Shah, "Deep residual networks with exponential linear unit" 59-65, 2016

      9 R. Yamashita, "Convolutional neural networks : an overview and application in radiology" 9 (9): 611-629, 2018

      10 A. Jeerige, "Comparison of deep reinforcement learning approaches for intelligent game playing" 366-371, 2019

      11 D. W. Lu, "Agent inspired trading using recurrent reinforcement learning and LSTM neural networks"

      12 이동철, "Comparative analysis of deep learning activation functions for improving the performance of a 2D shooting game learning agent" The Institute of Internet, Broadcasting and Communication 19 (19): 135-141, 2019




      Citation Information

      Journal History

      Date          Event                  Detail                                                               Index status
      2023          Evaluation scheduled   Subject to overseas-DB journal evaluation (overseas-index review)    -
      2020-01-01    Evaluation             Registered journal status maintained (overseas-index review)         KCI-indexed
      2012-01-01    Evaluation             Selected as a registered journal (2nd-round candidate review)        KCI-indexed
      2011-01-01    Evaluation             Passed the 1st-round candidate review (1st-round candidate review)   KCI candidate
      2009-01-01    Evaluation             Selected as a candidate journal (new review)                         KCI candidate

      Journal Citation Information

      Base year: 2016
      WOS-KCI integrated IF (2-year): 0.09
      KCI IF (2-year): 0.09
      KCI IF (3-year): 0.09
      KCI IF (4-year): 0.07
      KCI IF (5-year): 0.06
      Centrality index (3-year): 0.254
      Immediacy index: 0.59
