RISS Academic Research Information Service

      • KCI-indexed

        Graph Convolutional Neural Network Architecture Search: Neural Architecture Search Using Graph Convolutional Neural Networks

        최수연, 박종열 국제문화기술진흥원 2023 The Journal of the Convergence on Culture Technology Vol.9 No.1

        This paper proposes the design of a neural architecture search model using graph convolutional neural networks. Because deep learning trains models as black boxes, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a recurrent neural network that generates a model and the convolutional neural network that is the generated network. Conventional neural architecture search models use recurrent neural networks, but in this paper we propose GC-NAS, which uses graph convolutional neural networks instead of recurrent neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to search depth and a Hyper Parameter Prediction Block to search spatial and temporal information (hyperparameters) in parallel, based on the depth information. Because the depth information is reflected, the search space is wider, and because the search is conducted in parallel with depth information, the purpose of the model's search space is clear; GC-NAS is therefore judged to be superior in theoretical structure to existing RNN-based search models. Through its graph convolutional neural network block and graph generation algorithm, GC-NAS is expected to resolve the high-dimensional time-axis problem and the limited spatial search range of the recurrent neural networks used in existing neural architecture search models. We also hope that the GC-NAS proposed in this paper will serve as an opportunity for active research on applying graph convolutional neural networks to neural architecture search.
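The abstract builds on the standard graph convolution operation. As a minimal, hypothetical sketch (not the paper's GC-NAS blocks), a single graph convolution layer H' = ReLU(D^-1/2 (A+I) D^-1/2 H W) over an architecture graph can be written in plain NumPy; all shapes and node meanings below are illustrative assumptions.

```python
import numpy as np

def graph_conv(A, H, W):
    """One graph convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix of the architecture graph (assumed symmetric)
    H: (n, d_in) node features, W: (d_in, d_out) learnable weights
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^-1/2
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)             # ReLU

# Toy example: 4 candidate layers as graph nodes with 8-dim features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))
print(graph_conv(A, H, W).shape)               # (4, 16)
```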

      • KCI-indexed

        Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network Models

        Gong, Sung-Hyun, Baek, Won-Kyung, Jung, Hyung-Sup 대한원격탐사학회 2022 大韓遠隔探査學會誌 Vol.38 No.6

        Landslides are one of the most prevalent natural disasters, threatening both humans and property. Landslides can also cause damage at the national level, so effective prediction and prevention are essential. Research on producing landslide susceptibility maps with high accuracy is steadily being conducted, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide vulnerability using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides have occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). The landslide-related factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and landslide susceptibility maps were verified through average precision (AP) and root mean square error (RMSE), and the verification showed that the patch-based CNN model performed 3.4% better than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use and landslide management policies.
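The pixel-versus-patch contrast in this abstract can be illustrated with a small, hypothetical PyTorch sketch: a pixel-based DNN sees only the 15 factor values at one grid cell, while a patch-based CNN sees a neighbourhood of cells for all 15 factors. Patch size, channel widths and layer counts here are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

# Patch-based binary classifier over the 15 factor layers listed above
# (slope, curvature, SPI, TWI, TPI, ..., NDVI, NDWI).
patch_cnn = nn.Sequential(
    nn.Conv2d(15, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1), nn.Sigmoid(),     # susceptibility score in [0, 1]
)

# Pixel-based DNN: only the 15 factor values at a single cell.
pixel_dnn = nn.Sequential(
    nn.Linear(15, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

patches = torch.randn(8, 15, 16, 16)    # 8 patches of 16x16 cells
pixels = torch.randn(8, 15)             # 8 individual cells
print(patch_cnn(patches).shape, pixel_dnn(pixels).shape)  # [8, 1] [8, 1]
```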

      • KCI-indexed

        Deep Learning Models and Applications

        안성만(Ahn, SungMahn) 한국지능정보시스템학회 2016 지능정보연구 Vol.22 No.2

        A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, those supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because the supervised models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs.
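The three CNN ideas named above (local receptive fields, shared weights, pooling) can be made concrete with a minimal NumPy sketch; the image and kernel sizes are illustrative only.

```python
import numpy as np

def conv2d_valid(image, kernel, bias=0.0):
    """Shared-weight convolution: every local receptive field uses the same
    kernel and bias, producing one feature map."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]    # local receptive field
            out[i, j] = np.sum(patch * kernel) + bias
    return np.maximum(out, 0.0)                  # ReLU activation

def max_pool2x2(fmap):
    """Pooling layer: summarize each 2x2 block of the feature map by its maximum."""
    H2, W2 = fmap.shape[0] // 2, fmap.shape[1] // 2
    return fmap[:H2 * 2, :W2 * 2].reshape(H2, 2, W2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.normal(size=(28, 28))        # e.g. a 28x28 grayscale digit
kernel = rng.normal(size=(5, 5))         # one 5x5 shared filter
fmap = conv2d_valid(image, kernel)       # 24x24 feature map
print(fmap.shape, max_pool2x2(fmap).shape)   # (24, 24) (12, 12)
```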

      • KCI-indexed

        An Efficient Thread Allocation Method for Convolutional Neural Networks on GPGPUs

        김민철,이광엽 사단법인 인문사회과학기술융합학회 2017 예술인문사회융합멀티미디어논문지 Vol.7 No.10

        CNN (convolutional neural network), which among neural networks trained on large amounts of data is used for image classification and speech recognition, has continued to be developed into high-performance structures. However, there are many difficulties in using it in an embedded system with limited resources. Pre-trained weights are therefore used, but because limitations remain, GPGPU (General-Purpose computing on Graphics Processing Units), which exploits the GPU for general-purpose computation, is increasingly used to solve the problem. Since a CNN performs simple and repetitive operations, the computation speed varies greatly depending on how threads are allocated and utilized on a SIMT (Single Instruction Multiple Thread) based GPGPU. When convolution and pooling operations are performed with threads, some threads must sit idle; to solve this problem, the remaining threads are reused for the following feature-map and kernel computations, which increases the computation speed.
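The idea of reusing threads that would otherwise idle during pooling can be illustrated with a deliberately simplified Python model of SIMT work assignment; the thread count and layer sizes below are invented for illustration and this is not the authors' kernel code.

```python
# Simplified model of SIMT-style work assignment. Each output element of a
# layer is one work item; items are mapped onto a fixed thread pool. Pooling
# produces far fewer items than convolution, so many threads would idle; the
# leftover threads can be handed the next feature map's items.
THREADS = 1024                                  # threads per block (illustrative)

def work_items(height, width, maps):
    """One work item per output element of a layer."""
    return [("map%d" % m, y, x) for m in range(maps)
            for y in range(height) for x in range(width)]

conv_items = work_items(24, 24, 6)              # conv layer: 6 maps of 24x24
pool_items = work_items(12, 12, 6)              # after 2x2 pooling: 6 maps of 12x12

pool_threads = len(pool_items)                  # threads kept busy by pooling
idle_threads = THREADS - pool_threads           # threads that would sit idle
print(len(conv_items), pool_threads, idle_threads)   # 3456 864 160

# Idle threads take the first items of the next feature-map computation.
next_conv_items = work_items(8, 8, 16)
reassigned = next_conv_items[:idle_threads]
print(len(reassigned), "items start early on otherwise idle threads")
```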

      • KCI-indexed

        Network Traffic Classification Based on Deep Learning

        Junwei Li, Zhisong Pan 한국인터넷정보학회 2020 KSII Transactions on Internet and Information Systems Vol.14 No.11

        As the network penetrates all aspects of people's lives, the volume and complexity of network traffic are increasing, and traffic classification is becoming more and more important. Classifying traffic effectively is an important prerequisite for network management and planning and for ensuring network security. With the continuous development of deep learning, more and more traffic classification work uses it as the main method and achieves better results than traditional classification methods. In this paper, we provide a comprehensive review of network traffic classification based on deep learning. First, we introduce the research background and progress of network traffic classification. Then, we summarize and compare traffic classification based on deep learning models such as stacked autoencoders, one-dimensional convolutional neural networks, two-dimensional convolutional neural networks, three-dimensional convolutional neural networks, long short-term memory networks and deep belief networks. In addition, we compare traffic classification based on deep learning with other methods, such as those based on port numbers, deep packet inspection and machine learning. Finally, future research directions for network traffic classification based on deep learning are discussed.
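One of the model families the survey covers, a one-dimensional CNN over raw flow bytes, can be sketched in a few lines of PyTorch; the input length (first 784 payload bytes), channel widths and number of traffic classes are assumptions for illustration, not taken from any surveyed paper.

```python
import torch
import torch.nn as nn

N_CLASSES = 12                              # hypothetical number of traffic classes

traffic_cnn = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=25, padding=12), nn.ReLU(),
    nn.MaxPool1d(3),
    nn.Conv1d(32, 64, kernel_size=25, padding=12), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, N_CLASSES),
)

flows = torch.rand(16, 1, 784)              # batch of 16 flows, byte values in [0, 1]
logits = traffic_cnn(flows)
print(logits.shape)                         # torch.Size([16, 12])
```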

      • KCI-indexed

        A Study on the Detection of Excessive Load Using a Convolutional Neural Network

        강정호 한국기계가공학회 2024 한국기계가공학회지 Vol.23 No.1

        Accurate load detection is important for the safe operation of mechanical structures. This study attempts to incorporate convolutional neural network technology to improve the accuracy of load detection. To increase the detection accuracy for an excessive external load applied to a mechanical structure, the amount of training data must be checked along with the application of the convolutional neural network technique. In this study, the amount of analysis data on the mechanical structure was insufficient for convolutional neural network training; therefore, the application of a stacked autoencoder technique was examined as a way of securing the amount of data required for training. In addition, appropriate conditions were suggested by studying the convolutional neural network procedure for load detection and the conditions appropriate for each function and variable. The accuracy was verified by detecting the hoisting load of a crawler crane from the loads on the rollers of its lower structure. The results of this study can be applied to the analysis and design of various mechanical structures, and the possibility of using a convolutional neural network to ensure machine safety and prevent accidents was verified.
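The abstract only names a stacked autoencoder as the means of enlarging the training set, so the following is a generic, hypothetical sketch of that idea rather than the author's method: encode measured roller-load signals, perturb the latent codes, and decode to synthesize extra samples. Signal length, latent size and noise level are assumptions.

```python
import torch
import torch.nn as nn

SIGNAL_LEN, LATENT = 128, 16            # illustrative sizes

encoder = nn.Sequential(nn.Linear(SIGNAL_LEN, 64), nn.ReLU(), nn.Linear(64, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, SIGNAL_LEN))

measured = torch.randn(32, SIGNAL_LEN)  # stand-in for measured/analysed load signals
with torch.no_grad():                   # assume the autoencoder is already trained
    z = encoder(measured)
    z_jittered = z + 0.05 * torch.randn_like(z)   # small latent perturbation
    synthetic = decoder(z_jittered)     # enlarged training set for the CNN
print(synthetic.shape)                  # torch.Size([32, 128])
```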

      • KCI indexing candidate
      • Spatial Augmentation Localization based Deformable Convolutional Neural Networks for Object Detection

        Md Foysal Haque(포이살 하크),Hye-Youn Lim(임혜연),Dae-Seong Kang(강대성) 한국정보기술학회 2019 Proceedings of KIIT Conference Vol.2019 No.6

        In recent years, deep convolutional neural networks have achieved impressive performance in classifying and detecting objects. Much research has contributed to improving network architectures in order to reduce the complexity of the layers. In this work, to improve detection accuracy, we designed an improved deformable convolutional neural network. The deformable convolutional neural network uses an augmented learning module to extract and classify feature maps by spatial feature localization. The network incorporates an improved generalization method that carries it toward state-of-the-art performance on the object detection task.
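A minimal sketch of the deformable-convolution building block, using torchvision's DeformConv2d, shows the core mechanism: a plain convolution predicts a (dy, dx) offset for each kernel sampling position, and the deformable convolution samples the input at those shifted locations. This is a generic block under assumed channel sizes, not the network proposed in the paper.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Plain conv predicts 2*k*k offsets; DeformConv2d samples at shifted positions."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

x = torch.randn(1, 64, 32, 32)           # illustrative feature map
print(DeformBlock(64, 128)(x).shape)     # torch.Size([1, 128, 32, 32])
```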

      • Suitable Activity Function of Neural Networks for Data Enlargement

        Betere Job Isaac,Hiroshi Kinjo,Kunihiko Nakazono,Naoki Oshiro 제어로봇시스템학회 2018 제어로봇시스템학회 국제학술대회 논문집 Vol.2018 No.10

        In this paper, we present a study of activity functions for multi-layered neural networks (MLNNs) and propose a suitable activity function for data enlargement (DE). We carefully studied the training performance of the sigmoid, ReLU, Leaky-ReLU and L & exp. activity functions for three inputs to multiple-output training patterns. Our MLNN model has L hidden layers with two inputs to four or six outputs and is trained by backpropagation (BP). We focused on multiple teacher training signals to investigate and evaluate the training performance of MLNNs and to select the best activity function for data enlargement, which could be applicable to image and signal processing (synaptic divergence). From the four activity functions used, we found that the L & exp. activity function suits data enlargement neural network training (DENN), since it gave the highest training abilities compared to the sigmoid, ReLU and Leaky-ReLU activity functions during simulation and training of data in the network. Finally, we recommend the L & exp. function for MLNNs; because of its training performance with multiple training logic patterns, it may also be applicable to signal processing, data and information enlargement, and hence to convolutional neural networks (CNNs).
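For reference, the three standard activity functions compared in the study can be written directly in NumPy; the paper's "L & exp." function is not defined in this abstract and is therefore deliberately omitted rather than guessed, and the Leaky-ReLU slope shown is a common default that may differ from the paper's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):           # common default slope
    return np.where(x > 0, x, alpha * x)

x = np.linspace(-3, 3, 7)
print(sigmoid(x).round(3))
print(relu(x))
print(leaky_relu(x))
```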

      • KCI-indexed

        Vehicle Image Recognition Using Deep Convolution Neural Network and Compressed Dictionary Learning

        Yanyan Zhou 한국정보처리학회 2021 Journal of Information Processing Systems Vol.17 No.2

        In this paper, a vehicle recognition algorithm based on a deep convolutional neural network and a compressed dictionary is proposed. First, the network structure for fine-grained vehicle recognition based on a convolutional neural network is introduced. Then, a vehicle recognition system based on a multi-scale pyramid convolutional neural network is constructed. The contribution of each network to the recognition result, that is, the proportion of its output in the output of the entire multi-scale network, is adjusted by an adaptive fusion method according to the recognition accuracy of the single network. Then, compressed dictionary learning and data dimension reduction are carried out using an effective block-structure method combined with a very sparse random projection matrix, which addresses the computational complexity caused by high-dimensional features and shortens the dictionary learning time. Finally, the sparse representation classification method is used to realize vehicle type recognition. The experimental results show that the detection performance of the proposed algorithm is stable in sunny, cloudy and rainy weather, and it has strong adaptability to typical application scenarios such as occlusion and blurring, with an average recognition rate of more than 95%.
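The dimension-reduction and sparse-coding stages named in the abstract (very sparse random projection followed by dictionary learning) have standard scikit-learn counterparts; the sketch below uses them under assumed feature sizes, and a plain linear classifier stands in for the sparse-representation classification step, so it is an illustration of the pipeline rather than the paper's algorithm.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2048))        # stand-in for high-dimensional CNN features
y = rng.integers(0, 4, size=200)        # 4 hypothetical vehicle types

# 1) Very sparse random projection reduces the feature dimension cheaply.
proj = SparseRandomProjection(n_components=256, random_state=0)
X_low = proj.fit_transform(X)

# 2) Dictionary learning yields sparse codes over the reduced features.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit(X_low).transform(X_low)

# 3) A linear classifier stands in for sparse-representation classification.
clf = LogisticRegression(max_iter=1000).fit(codes, y)
print(codes.shape, clf.score(codes, y))
```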
