RISS Academic Research Information Service

      • KCI-listed

        A Bidirectional Associative Neural Network Based on the Backpropagation Algorithm

        권상규,최용석 한양대학교 우리춤연구소 2013 우리춤과 과학기술 Vol.9 No.3

        A well-known BAM (Bidirectional Associative Memory) has been widely used to emulate the human capability for bidirectional association, which can recognize a concept from objects and, conversely, recall a representative object from that concept. However, conventional association models like BAM are often unable to solve non-linearly separable problems and are inadequate in association capacity and speed. To relax these inadequacies, we propose a Bidirectional Backpropagation Neural-net (BBN) based on the backpropagation algorithm. We validate the usefulness of BBN with experiments on the Exclusive-OR, Prior Encoder-Decoder, and character recognition problems, and we argue that BBN can outperform the conventional models in bidirectional association capability and efficacy.
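        The entry gives no code; as a loose illustration of the bidirectional idea the abstract describes, the sketch below (plain NumPy, all names hypothetical) trains two ordinary backpropagation MLPs in opposite directions over the same pattern pairs. The authors' actual BBN architecture is not reproduced here and may well differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, Y, hidden=8, lr=0.5, epochs=10000, seed=0):
    """One-hidden-layer MLP trained with plain backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)               # forward pass
        O = sigmoid(H @ W2)
        dO = (O - Y) * O * (1 - O)        # output-layer delta
        dH = (dO @ W2.T) * H * (1 - H)    # hidden-layer delta
        W2 -= lr * H.T @ dO               # gradient steps
        W1 -= lr * X.T @ dH
    return W1, W2

def predict(weights, X):
    W1, W2 = weights
    return sigmoid(sigmoid(X @ W1) @ W2)

# Bijective pattern pairs (4-way one-hot code <-> 2-bit binary code,
# encoder-decoder style): one net learns A -> B, a second B -> A,
# giving association in both directions.
A = np.eye(4)
B = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
fwd = train_mlp(A, B)     # "concept from objects"
bwd = train_mlp(B, A)     # "object from concept"
print(predict(bwd, predict(fwd, A).round()).round())  # ideally recovers A
```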

      • A Basic Study on Neural Network Incident Detection Using INTRAS Data

        장세봉 永同大學校 2001 硏究論叢 Vol.7 No.1

        This study examines whether artificial neural networks, whose speed and accuracy of inference have made them increasingly popular in transportation applications, can overcome the limits of existing freeway incident detection algorithms, which require traffic-flow thresholds to be set in advance and show low detection rates in the field. The model adopted was a multilayer perceptron (MLP) trained with the backpropagation algorithm; training and validation data were prepared with INTRAS, a microscopic freeway traffic simulation model. The results show that while the conventional algorithm detects only about 60% of incidents, the MLP detection rate is roughly 70-100%, suggesting that with further research the neural network model could serve as a real-time freeway incident detection algorithm for ITS.
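        As a hedged illustration of the approach the abstract outlines, the sketch below trains an MLP incident classifier with scikit-learn on synthetic stand-in data; the occupancy-gap and downstream-speed features and all numbers are assumptions, since the study's real inputs came from INTRAS.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 2000
# Hypothetical stand-in for INTRAS output: incidents widen the
# upstream-downstream occupancy gap and depress downstream speed.
incident = rng.integers(0, 2, n)
occ_gap = rng.normal(5 + 15 * incident, 4.0, n)        # percent
speed_down = rng.normal(90 - 30 * incident, 10.0, n)   # km/h
X = np.column_stack([occ_gap, speed_down])
X = (X - X.mean(axis=0)) / X.std(axis=0)               # standardize features

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, incident)
mask = incident == 1
print("detection rate:", clf.score(X[mask], incident[mask]))
```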

      • A Study on Digit Recognition Using the Error Backpropagation Learning Algorithm

        김진숙 동의공업대학 2001 論文集 Vol.27 No.1

        This paper proposes and implements a multilayered neural network system that can learn and recognize bitmap number images. To learn the bitmap numbers, the system uses the Error Backpropagation learning algorithm, which is based on the delta rule. In designing the system, one must consider not only the structure of the neural network but also the initial weight values, the learning rate, and the bias. The system was implemented according to this design and tested with various number formats: it recognizes 100% of test patterns that were input patterns in the learning phase, but only 60-70% of patterns in other formats.
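        For reference, the generalized delta rule the abstract invokes, in its standard textbook form (not quoted from the paper itself):

```latex
% Generalized delta rule behind error backpropagation
% (standard textbook form, not quoted from the paper):
\begin{align*}
\delta_j &= (t_j - o_j)\, f'(\mathrm{net}_j)             && \text{output unit } j \\
\delta_j &= f'(\mathrm{net}_j) \sum_k \delta_k\, w_{kj}  && \text{hidden unit } j \\
\Delta w_{ji} &= \eta\, \delta_j\, o_i                   && \text{update with learning rate } \eta
\end{align*}
```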

      • KCI-listed

        Sliding-Window-Based Stream Data Processing in a u-Health System

        김태연(Kim, Tae-Yeun),송병호(Song, Byoung-Ho),배상현(Bae, Sang-Hyun) 한국정보전자통신기술학회 2011 한국정보전자통신기술학회논문지 Vol.4 No.2

        The data measured by the sensors of a u-Health system must be managed accurately and energy-efficiently, and storing an entire high-volume input stream in a database to process it all at once is not efficient in a sensor network. This paper seeks to raise the processing performance for the multidimensional stream data arriving continuously from multiple sensors while respecting the energy efficiency and accuracy of the sensor network inside the u-Health system. We propose an efficient processing scheme that evaluates queries over sliding windows, builds a multi-query plan with the Mjoin method, and then shrinks the stored data with a backpropagation algorithm. In an experiment on 14,324 data sets, the scheme reduced the required storage by 18.3% relative to the raw input data, demonstrating its effectiveness.
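        To make the sliding-window idea concrete, here is a minimal, entirely hypothetical Python sketch of a fixed-size window that stores a compact summary instead of every raw reading; the paper's actual scheme additionally builds Mjoin multi-query plans and reduces data with a backpropagation step, which this sketch does not attempt.

```python
from collections import deque
from statistics import mean

class SlidingWindow:
    """Fixed-size sliding window over one sensor stream; only a compact
    summary, not every raw reading, would ever be written to storage."""
    def __init__(self, size=32):
        self.buf = deque(maxlen=size)   # old readings fall off the left

    def push(self, reading):
        self.buf.append(reading)

    def summary(self):
        return {"n": len(self.buf), "mean": mean(self.buf),
                "min": min(self.buf), "max": max(self.buf)}

# Continuous queries are evaluated against the window, not the stream.
w = SlidingWindow(size=4)
for hr in [72, 75, 74, 120, 118]:       # hypothetical heart-rate stream
    w.push(hr)
    print(w.summary())
```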

      • KCI-listed (Excellent)

        An accelerated Levenberg-Marquardt algorithm for feedforward network

        Young Tae Kwak 한국데이터정보과학회 2012 한국데이터정보과학회지 Vol.23 No.5

        This paper proposes a new Levenberg-Marquardt algorithm that is accelerated by adjusting a Jacobian matrix and a quasi-Hessian matrix. The proposed method partitions the Jacobian matrix into block matrices and employs the inverse of a partitioned matrix to find the inverse of the quasi-Hessian matrix. Our method can avoid expensive operations and save memory in calculating the inverse of the quasi-Hessian matrix, and it can shorten the training time for fast convergence. In tests on a large application, we were able to save about 20% of the training time compared with other algorithms.
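        For orientation, the standard Levenberg-Marquardt update and the partitioned-matrix (Schur complement) inversion identity that block methods of this kind typically exploit; the paper's specific partitioning scheme is not reproduced here.

```latex
% Standard Levenberg-Marquardt step: J is the Jacobian of the network
% errors e with respect to the weights w, mu a damping factor, and
% J^T J the quasi-Hessian.
\[
  \Delta \mathbf{w} = -\bigl(J^{\top} J + \mu I\bigr)^{-1} J^{\top} \mathbf{e}
\]
% Partitioned-matrix inversion via the Schur complement
% S = D - C A^{-1} B:
\[
  \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
  =
  \begin{pmatrix}
    A^{-1} + A^{-1} B\, S^{-1} C A^{-1} & -A^{-1} B\, S^{-1} \\
    -S^{-1} C A^{-1} & S^{-1}
  \end{pmatrix}
\]
```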

      • Implementation of an Adaptive Filter Using a Neural Network with an Efficient Learning Algorithm

        조용현 대구효성가톨릭대학교 응용과학연구소 1998 응용과학연구논문집 Vol.6 No.2

        This paper proposes an efficient method for implementing an adaptive filter using neural networks, combining the steepest descent method with a dynamic tunneling system. The steepest descent method is applied for high-speed optimization, and the dynamic tunneling system for global optimization. Once the steepest descent method has converged to a local minimum, the proposed method estimates an initial point for escaping the local minimum by applying the dynamic tunneling system. Simulation results show that the proposed adaptive filter performs better than filters using the LMS and the conventional backpropagation algorithms.
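        As a rough sketch of the two-phase structure described (local descent, then an escape step), the toy Python below substitutes a simple stochastic restart for the paper's deterministic dynamic tunneling system; the objective function and all parameters are invented for illustration.

```python
import numpy as np

def f(x):                      # toy multimodal error surface
    return np.sin(3.0 * x) + 0.1 * x ** 2

def grad(x, h=1e-5):           # numerical derivative of f
    return (f(x + h) - f(x - h)) / (2.0 * h)

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):     # phase 1: steepest descent to a local minimum
        x -= lr * grad(x)
    return x

rng = np.random.default_rng(0)
x = descend(4.0)
for _ in range(20):            # phase 2: try to escape the local minimum
    cand = x + rng.normal(0.0, 2.0)
    if f(cand) < f(x):         # found a lower point: descend again from it
        x = descend(cand)
print(x, f(x))
```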

      • KCI-listed

        Deep Learning Models and Applications

        안성만(Ahn, SungMahn) 한국지능정보시스템학회 2016 지능정보연구 Vol.22 No.2

        Deep learning models are a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised models such as deep belief networks, because supervised models have found fashionable applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent: it calculates the gradient of an error function with respect to all the weights in the network, and the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function.

        Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train the deep, multi-layer networks that are very good at classifying images; these days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; what a pooling layer does is simplify the information in the output of the convolutional layer. Recent convolutional architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours.

        Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs: a class of artificial neural network where connections between units form a directed cycle. This creates an internal state which allows the network to exhibit dynamic temporal behavior, and unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem (vanishing and exploding gradients). The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs.
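        The three CNN ideas the abstract names (local receptive fields, shared weights, pooling) can be shown in a few lines of NumPy; this is a generic illustration, not code from the paper.

```python
import numpy as np

def conv2d(img, kernel):
    """Shared weights over local receptive fields: one kernel slides
    across every position of the image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Pooling: summarize each size-by-size block of the feature map."""
    H, W = fmap.shape
    fmap = fmap[:H - H % size, :W - W % size]   # crop to a multiple of size
    return (fmap.reshape(fmap.shape[0] // size, size,
                         fmap.shape[1] // size, size).max(axis=(1, 3)))

img = np.random.default_rng(0).random((8, 8))
edge = np.array([[1.0, -1.0]])                  # tiny edge-detecting kernel
print(max_pool(conv2d(img, edge)).shape)        # (4, 3)
```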

      • KCI-listed

        A Performance Comparison of Classifier Algorithms for Pattern Recognition of Welding Flaws

        윤성운(Sung-Un Yoon),김창현(Chang-Hyun Kim),김재열(Jae-Yeol Kim) 한국생산제조학회 2006 한국생산제조학회지 Vol.15 No.3

        In this study, we used nondestructive testing based on ultrasonic inspection and compared a backpropagation neural network (BPNN) with a probabilistic neural network (PNN) as pattern recognition algorithms for welding flaws. For this purpose, the same variables were fed to both algorithms; the feature variables were zoomed-in flaw signals taken from the whole time-domain signals reflected from the welding flaws. Through this process, we confirmed the advantages and disadvantages of the two algorithms and identified how each can be applied.
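        As a minimal sketch of the PNN side of the comparison (a Parzen-window classifier with one Gaussian kernel per training pattern), with hypothetical 2-D features standing in for the windowed ultrasonic signals:

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.3):
    """Probabilistic neural network as a Parzen-window classifier: one
    Gaussian kernel per training pattern, class score = mean activation."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Hypothetical 2-D echo features for two flaw classes; real inputs
# would be the zoomed time-domain ultrasonic signals.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(1.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_classify(np.array([0.9, 1.1]), X, y))   # expect class 1
```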
