RISS Academic Research Information Service

Deep Convolutional Neural Network Hyper-Parameters Tuning for Classification Problem

Thavisack Sihalath, Jayanta Kumar Basak, Anil Bhujel, Byeong Eun Moon, Fawad Khan, Elanchezhain Arulmozhi, Deog Hyun Lee, Na Eun Kim, Hyeon Tae Kim. Korean Society for Agricultural Machinery, Proceedings of the 2020 KSAM Conference, Vol. 25, No. 1.

Hyper-parameter search is a central problem in deep learning: hyper-parameters are the parameters a user sets before training begins, rather than values learned from data. Image classification, in turn, is a classical problem in image processing, computer vision, and machine learning. This study presents image classification experiments with a convolutional neural network, namely the pre-trained VGG16 model, to observe how hyper-parameters affect performance. The dataset was assembled from Gyeongsang National University data and a Kaggle dataset. Experiments varied hyper-parameters such as the optimizer, batch size, and number of epochs to observe their effect on classification accuracy. Deep learning models were built with the SGD, RMSProp, Adam, Adagrad, Adadelta, and Adamax optimizers, and the loss under each optimizer was evaluated with a cross-entropy loss function. The models were further evaluated with a confusion matrix, summarizing prediction results for each class. Empirical results show that the model achieved its minimum loss with the Adagrad optimizer at a batch size of 16 and 50 epochs, while at larger batch sizes and epoch counts Adam worked well in practice among the optimizers tested. Interestingly, the maximum accuracy was achieved with the Adamax optimizer at a batch size of 120 and 150 epochs. Classification performance was also measured with confusion-matrix statistics for the binary classification task, namely accuracy, recall, precision, and F1-score. The study creates scope for further experimentation on several datasets under different hyper-parameter conditions to identify suitable optimizers for a given neural network, and it adds to the understanding of how batch size and epoch count can be varied to improve image classification accuracy.
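
The experimental setup the abstract describes can be sketched as follows. This is a minimal Keras illustration, not the authors' code: only the optimizer list, the cross-entropy loss, the batch-size/epoch settings (16/50 and 120/150), and the confusion-matrix metrics come from the abstract, while the input size, frozen backbone, sigmoid binary head, and data-handling details are assumptions made for a self-contained example.

```python
# Sketch of the hyper-parameter grid from the abstract: fine-tune a
# pre-trained VGG16 under several optimizers and batch-size/epoch
# settings, then score each run with confusion-matrix metrics.
import tensorflow as tf
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support


def build_model(optimizer):
    """VGG16 backbone (ImageNet weights) with a small binary head."""
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze pre-trained convolutional layers (assumption)
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary task per abstract
    ])
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy",  # cross-entropy, as in the study
                  metrics=["accuracy"])
    return model


# Optimizers compared in the study; learning rates are library defaults here.
OPTIMIZERS = {
    "SGD": tf.keras.optimizers.SGD,
    "RMSProp": tf.keras.optimizers.RMSprop,
    "Adam": tf.keras.optimizers.Adam,
    "Adagrad": tf.keras.optimizers.Adagrad,
    "Adadelta": tf.keras.optimizers.Adadelta,
    "Adamax": tf.keras.optimizers.Adamax,
}


def run_grid(x_train, y_train, x_test, y_test,
             batch_sizes=(16, 120), epoch_counts=(50, 150)):
    """Train one model per (optimizer, batch size, epochs) combination."""
    for name, opt_cls in OPTIMIZERS.items():
        for batch_size in batch_sizes:
            for epochs in epoch_counts:
                model = build_model(opt_cls())
                model.fit(x_train, y_train, batch_size=batch_size,
                          epochs=epochs, verbose=0)
                # Threshold sigmoid outputs at 0.5 to get class labels.
                y_pred = (model.predict(x_test) > 0.5).astype(int).ravel()
                cm = confusion_matrix(y_test, y_pred)
                precision, recall, f1, _ = precision_recall_fscore_support(
                    y_test, y_pred, average="binary")
                print(f"{name} bs={batch_size} ep={epochs} "
                      f"P={precision:.3f} R={recall:.3f} F1={f1:.3f}\n{cm}")
```

Freezing the backbone keeps the comparison focused on optimizer behavior over the new classification head; unfreezing some convolutional blocks for full fine-tuning would be an equally plausible reading of the setup.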
