RISS Academic Research Information Service

Deep Learning Networks and ICT-based Plant Disease and Animal Activity Detection System for Digital Agriculture

Bhujel Anil · Graduate School, Gyeongsang National University · 2022 · Doctoral dissertation (Korea)


The majority of food for human beings comes from agriculture. Recently, farmers have been under significant pressure to meet the rising demand for agricultural products driven by the growing world population. However, various factors such as catastrophic diseases, urbanization, and climate change limit agricultural production. Moreover, conventional and subsistence farming cannot meet the increased global food requirement. In this context, it is of utmost necessity to apply the latest technologies and tools in agriculture for food safety and increased production. Therefore, the conventional farming concept has been quickly transitioning into digital farming. The main objective of this study was to implement deep learning networks and information and communication technology (ICT) to detect plant diseases, segment and measure disease severity, and detect animal activity. Several deep convolutional neural networks were applied, and their performances were evaluated. This study is divided broadly into two parts. The first part deals with tomato disease classification using a lightweight attention-based convolutional neural network, and with strawberry gray mold disease segmentation and severity measurement. The second part covers a pig posture and locomotion activity detection system using deep learning-based object detection models and a tracking algorithm. Two experiments were conducted on plant disease classification and segmentation, and one on pig posture and walking activity detection. In the first experiment, on plant disease identification, images of ten varieties of tomato disease and of healthy leaves were collected from both an open-source database and a glasshouse located at Gyeongsang National University. A lightweight attention-based deep convolutional neural network (ACNN) was designed to improve the performance of the model for plant disease classification.
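The abstract does not detail the internal design of the attention module; purely as an illustrative sketch, a squeeze-and-excitation-style channel-attention block (with hypothetical, untrained weights `w1` and `w2` and an assumed reduction ratio of 2) can be expressed in a few lines of NumPy:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch only).

    feature_map: (C, H, W) array; w1: (C//r, C); w2: (C, C//r).
    Returns the feature map with each channel rescaled by a weight in (0, 1).
    """
    squeezed = feature_map.mean(axis=(1, 2))          # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid -> per-channel weights
    return feature_map * scale[:, None, None]         # broadcast over H and W

# Toy usage with random (untrained) weights, C = 4 channels, reduction ratio r = 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = channel_attention(x, w1, w2)
```

In a trained network such a block would sit after a convolutional stage, letting the model up- or down-weight channels that respond to disease-specific regions.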
The images were divided into training, testing, and validation datasets at a ratio of 8:1:1. The performance of the proposed model was then compared with a baseline CNN without attention (WACNN) and the standard ResNet50 model. In the second experiment, three concentrations of Botrytis cinerea (the causal agent of gray mold disease) were inoculated onto strawberry plants at an early reproductive stage. The occurrence of disease spots on the leaves and their expansion were recorded daily and non-invasively using a handheld RGB camera. The raw images were pre-processed to remove the cluttered background and to extract the target leaf only. A deep CNN-based pixel-level segmentation model (Unet) was then designed, trained, tested, and validated using the pre-processed images. The performance of the deep learning model was calculated using the standard segmentation metrics (pixel accuracy, intersection over union (IoU) accuracy, and Dice accuracy) and validated using 5-fold cross-validation. Moreover, the performance of the Unet model was compared with the XGBoost and K-means machine learning models and an image processing algorithm. Disease severity was calculated as the percentage of diseased pixels in a leaf. The results of tomato disease classification showed that the deep CNN with the attention mechanism improved tomato disease classification accuracy by 1.2% over the CNN without the attention mechanism, at the cost of a few additional network parameters and some added complexity. A CNN without an attention module extracts global features from the whole image, whereas the characteristics of diseased regions are more specific to an individual disease class. The attention module therefore emphasizes regional features rather than global features, thus boosting the disease classification accuracy of the model.
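The segmentation metrics and the severity measure named above have standard definitions for binary masks; a minimal sketch (function names are illustrative, not from the thesis) might look like:

```python
import numpy as np

def segmentation_metrics(pred, true):
    """Pixel accuracy, IoU, and Dice for binary masks (1 = diseased pixel)."""
    pred, true = pred.astype(bool), true.astype(bool)
    tp = np.logical_and(pred, true).sum()             # correctly detected disease pixels
    union = np.logical_or(pred, true).sum()
    pixel_acc = (pred == true).mean()
    iou = tp / union if union else 1.0
    dice = 2 * tp / (pred.sum() + true.sum()) if (pred.sum() + true.sum()) else 1.0
    return pixel_acc, iou, dice

def disease_severity(disease_mask, leaf_mask):
    """Severity as the percentage of diseased pixels within the leaf region."""
    return 100.0 * np.logical_and(disease_mask, leaf_mask).sum() / leaf_mask.sum()

# Toy example: a 4x4 leaf with 4 diseased pixels; the prediction misses one pixel.
true = np.zeros((4, 4), int); true[0, :4] = 1
pred = np.zeros((4, 4), int); pred[0, :3] = 1
acc, iou, dice = segmentation_metrics(pred, true)
# acc = 15/16, iou = 3/4, dice = 6/7
severity = disease_severity(true, np.ones((4, 4), bool))  # 25.0 (% of leaf diseased)
```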
In terms of gray mold disease segmentation, average pixel, Dice, and IoU accuracies of 98.24%, 89.71%, and 82.12%, respectively, were achieved by the Unet model, followed by XGBoost (98.06%, 87.76%, and 80.12%) on 80 test images. The results showed that the Unet model surpassed the conventional XGBoost, K-means, and image processing techniques in detecting and quantifying gray mold disease. The Unet model has two encoder and decoder blocks without fully connected layers; thus, the number of network parameters is reduced considerably, allowing the model to converge even with a small training dataset. Moreover, the Unet model provided a disease-segmented image of the same size as the input image because it implements an up-convolution block. For pig posture and walking activity detection, an experiment was conducted in the experimental pig barn located at Gyeongsang National University. The concentration of greenhouse gases (GHGs) was elevated by closing the ventilator and door of the pig barn for an hour three times a day (morning, daytime, and night), and the treatment was repeated for three days. The GHG concentrations before, immediately after, and one hour after the treatment were measured by taking air samples from three spatial locations near the center of the house, and were analyzed using gas chromatography (GC). A livestock environment monitoring system (LEMS) collected the other environmental data (temperature and humidity), including CO2 concentration. A top-view network camera (HIKVISION) was installed to record videos of pig activities, which were stored in a network video recorder (NVR). A total of 6,012 frames from the videos were labeled manually using the computer vision annotation tool (CVAT) and split into training and testing datasets (9:1). Three variants of object detection models (YOLOv4, Faster R-CNN, and SSD ResNet) were trained and validated to detect pig postures and walking activity.
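Validating detectors such as YOLOv4 or Faster R-CNN against manually labeled boxes rests on the intersection over union of axis-aligned bounding boxes; a minimal, self-contained sketch:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])       # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x10 region: IoU = 50 / 150 = 1/3.
print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), which is how per-class AP and the reported mAP are then accumulated.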
Then the simple online and realtime tracking with a deep association metric (Deep SORT) algorithm was implemented to track individual pigs in the video clips. Pig posture and walking activity information was extracted for the hour before, during, and after the treatment periods, and the changes in activity due to the compromised environment were analyzed. The pigs' standing, walking, and sternal lying activities decreased with increased GHGs, while the duration of the lateral lying posture increased. The pigs were also more active in the morning than in the daytime, and least active at night. Moreover, the pig posture detection performances of the object detection models were evaluated using average precision (AP) and mean AP (mAP); the YOLO model provided the highest mAP of 98.67%, followed by the Faster R-CNN model (96.42%). Furthermore, the YOLO model also led in detection speed (0.031 s/frame), followed by the SSD model (0.123 s/frame) and the Faster R-CNN model (0.15 s/frame). These results show that deep learning networks can effectively solve complex agricultural problems, although further research is recommended for continued improvement. Finally, a web-based client-server architecture (http://sfsl.gnu.ac.kr) was designed to automatically collect the environmental and image data from the experimental sites. Similarly, JupyterHub, a multi-user interactive Python environment, was installed on the server (https://sfslws.gnu.ac.kr), allowing the deep learning models to run in the cloud.
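Deep SORT combines Kalman-filter motion prediction with a learned appearance metric. As a heavily simplified illustration of only the association step (no motion model or appearance features, which the full algorithm requires), a greedy IoU-based matcher between existing tracks and new-frame detections could be sketched as:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match each existing track to its best-overlapping detection.

    tracks: {track_id: box}; detections: list of boxes.
    Returns {track_id: detection_index} for matches above the threshold.
    """
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_threshold
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            iou = box_iou(tbox, dbox)
            if iou > best_iou:
                best, best_iou = j, iou
        if best is not None:            # keep the pig's identity across frames
            matches[tid] = best
            used.add(best)
    return matches

# Two pigs, each detected slightly to the right of its previous position.
tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
dets = [(51, 50, 61, 60), (1, 0, 11, 10)]
print(associate(tracks, dets))  # {1: 1, 2: 0}
```

The full Deep SORT replaces this greedy pass with Hungarian assignment over a cost that mixes Mahalanobis motion distance and appearance-embedding similarity, which is what makes identities robust through occlusions in a crowded pen.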
