RISS Academic Research Information Service


KCI-indexed

강화학습 모델에 대한 적대적 공격과 이미지 필터링 기법을 이용한 대응 방안 = Adversarial Attacks on Reinforce Learning Model and Countermeasures Using Image Filtering Method

https://www.riss.kr/link?id=A109321684


Additional Information

Multilingual Abstract

Recently, deep neural network-based reinforcement learning models have been applied in various advanced industrial fields such as autonomous driving, smart factories, and home networks, but they have been shown to be vulnerable to malicious adversarial attacks. In this paper, we applied the deep reinforcement learning models DQN and PPO to the autonomous driving simulation environment HighwayEnv and conducted four adversarial attacks: FGSM (Fast Gradient Sign Method), BIM (Basic Iterative Method), PGD (Projected Gradient Descent), and CW (Carlini and Wagner). To counter these attacks, we proposed a method that allows reinforcement learning-based deep learning models to operate normally by removing the noise from adversarial images with a bilateral filter algorithm. Furthermore, we analyzed the performance of the adversarial attacks using two popular metrics: the average episode duration and the average reward obtained by the agent. In our experiments on a model that removes adversarial image noise with a bilateral filter, we confirmed that performance remains as good as when no adversarial attack is performed.
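
The attacks named in the abstract all perturb the image observation the agent receives, guided by the gradient of a loss with respect to the input pixels. As an illustration only (a minimal PyTorch sketch, not the paper's implementation; model, loss_fn, and eps are assumed placeholders), FGSM takes a single signed-gradient step:

    import torch

    def fgsm_attack(model, loss_fn, x, y, eps=8 / 255):
        # Minimal FGSM sketch (assumed, not the paper's code):
        # perturb the input one step in the sign of the loss gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()        # signed-gradient step of size eps
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

BIM and PGD iterate this step with a projection back into the allowed perturbation range, while CW instead solves an optimization problem, so it does not fit this one-step sketch.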
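
The proposed defense denoises the attacked observation before it reaches the agent. A minimal sketch of that preprocessing step, assuming OpenCV (the d, sigmaColor, and sigmaSpace values below are illustrative, not the paper's settings):

    import cv2
    import numpy as np

    def denoise_observation(obs: np.ndarray) -> np.ndarray:
        # A bilateral filter weights neighboring pixels by both spatial
        # distance and intensity difference, so it smooths high-frequency
        # adversarial noise while preserving edges. The input must be
        # uint8 or float32 for cv2.bilateralFilter.
        return cv2.bilateralFilter(obs, d=9, sigmaColor=75, sigmaSpace=75)

The agent would then act on denoise_observation(x_adv) rather than the raw attacked frame, so the policy sees an image close to the clean observation.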
