RISS Academic Research Information Service

      • Improved Layer-By-Layer Learning for Optimizing the Hidden Layer of a Multi-Layer Perceptron

        곽영태, Chungnam National University, 2001, doctoral thesis (Korea)

        The EBP (Error Back Propagation) algorithm suffers from slow learning speed due to the gradient descent method, and the number of hidden nodes must be determined anew for each application. We propose a new method that optimizes the hidden layer of the MLP (Multi-Layer Perceptron) by deleting unnecessary hidden nodes, and that accelerates the learning speed of the EBP algorithm by modifying the Layer-By-Layer learning algorithm. The hidden layer of an MLP moves each learning pattern nonlinearly toward the vertices of its hypercube so that the hyperplane of an output node can separate the moved learning patterns. This means that the outputs of a hidden node saturate toward one or both extreme values of the sigmoid function.

        To optimize the hidden layer, the proposed method selects necessary and unnecessary hidden nodes using a cost function defined with both the mean and the variance of each hidden node. Ascending the cost function makes the variance of a hidden node large and moves its mean toward the middle value of the sigmoid function, which magnifies the node's separability. Conversely, descending the cost function makes the variance of a hidden node small and moves its mean toward a saturation region of the sigmoid function, which makes the node effectively constant. As a result, the invariant hidden node can be deleted.

        The learning speed of the Layer-By-Layer algorithm is fast because the weights are updated by the LS (Least Squares) method, which obtains the target vector of the hidden layer from the current weights; its generalization capability, however, is low because the target vector of the hidden layer is separated linearly. On the other hand, the learning speed of the EBP algorithm is slow because the weights are updated along the surface of the MSE (Mean Squared Error) function, but its generalization capability is high when there are sufficient hidden nodes. To increase the generalization capability of the Layer-By-Layer algorithm, we propose a new gradient vector for the hidden layer, formed by adding the gradient vector of the hidden layer in the EBP algorithm to the target vector of the hidden layer in the LS method. The proposed method thus uses the nonlinear property of the EBP algorithm and the linear property of the LS method at the same time. It improves the low generalization capability of the Layer-By-Layer algorithm and keeps learning fast by preventing the gradient vector of the hidden layer from becoming small.

        In CEDAR handwritten digit recognition, we simulated the proposed method, Mozer and Smolensky's method, and Chauvin's method to evaluate hidden-layer optimization performance. The results showed that the proposed method kept the final number of hidden nodes constant, whereas Mozer and Smolensky's method and Chauvin's method deleted hidden nodes in proportion to the number of initial hidden nodes. Overall, learning speed and generalization capability rank, from best to worst, in the order of our method, Mozer and Smolensky's method, and Chauvin's method. To evaluate the modified Layer-By-Layer algorithm, we simulated the EBP algorithm, the modified EBP algorithm, the cross-entropy function, Wang's method, and Yam's method. The learning speed of the proposed method is almost the same as that of the Layer-By-Layer algorithm in the initial learning process, faster than that of the EBP algorithm and the modified EBP algorithm, and equal to or better than that of the cross-entropy function and Wang's method. The results also showed that the generalization capability of the proposed method is the best regardless of the number of hidden nodes. This indicates that our method has the learning-speed advantage of the Layer-By-Layer algorithm and the generalization advantage of the EBP algorithm. An algorithm integrating the two proposed methods can use both the cost function for optimizing the hidden layer and the modified Layer-By-Layer algorithm; in conclusion, this algorithm is suitable for optimizing the hidden layer and accelerating learning.
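
        The pruning criterion above can be illustrated in a few lines. What follows is a minimal sketch, assuming a toy hidden layer and an illustrative cost of the form "variance minus the mean's distance from the sigmoid midpoint"; the abstract does not give the thesis's exact cost function or pruning threshold, so both are labeled as assumptions in the code.

        # Hypothetical sketch of mean/variance-based hidden-node pruning.
        # The cost function and the 20% pruning quantile are illustrative
        # assumptions, not the thesis's exact formulation.
        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Toy MLP hidden layer: 20 input features, 10 hidden nodes.
        X = rng.normal(size=(500, 20))            # learning patterns
        W = rng.normal(scale=0.5, size=(20, 10))
        H = sigmoid(X @ W)                        # hidden outputs in (0, 1)

        mean = H.mean(axis=0)                     # per-node mean
        var = H.var(axis=0)                       # per-node variance

        # A node whose outputs barely vary (low variance) and sit near a
        # saturation region (mean near 0 or 1) behaves as a constant and
        # could be deleted, its mean output absorbed by the next layer's bias.
        cost = var - np.abs(mean - 0.5)           # high cost = separable, low = near-constant
        keep = cost > np.quantile(cost, 0.2)      # prune the weakest 20% (assumed rule)

        print(f"keeping {keep.sum()} of {len(keep)} hidden nodes")
        W_pruned = W[:, keep]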

      • A Study on Neural Networks with Orthogonal Functions in the Hidden Layer

        권성훈, Dongguk University Graduate School, 1999, master's thesis (Korea)

        In this paper, we propose a heterogeneous hidden layer consisting of both sigmoid functions and RBFs (Radial Basis Functions) in multi-layered neural networks. Focusing on the orthogonal relationship between the sigmoid function and its derivative, we use the derivative of the sigmoid function as the RBF in the neural network. The proposed network is therefore called an ONN (Orthogonal Neural Network), and function mapping with the ONN can be treated as a kind of orthogonal function series model. Identification results on a nonlinear function confirm the orthogonal neural network's feasibility and characteristics by comparison with a conventional neural network that has only sigmoid functions or only RBFs in the hidden layer. Simulation results for a DC servo motor demonstrate the applicability of the neural controller to controlling nonlinear systems, and experimental results for controlling a DC servo motor demonstrate its usefulness for practical systems.
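
        As a rough illustration of this heterogeneous hidden layer, the sketch below mixes sigmoid units with units applying the sigmoid's derivative, whose bell shape plays the RBF role described above. The layer sizes and the 50/50 split between unit types are assumptions, not the thesis's configuration.

        # Minimal sketch of an ONN-style heterogeneous hidden layer.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def sigmoid_derivative(x):
            s = sigmoid(x)
            return s * (1.0 - s)        # bell-shaped; used here as the RBF

        rng = np.random.default_rng(1)
        n_in, n_hidden = 4, 8           # assumed toy sizes
        W = rng.normal(size=(n_in, n_hidden))

        def hidden_layer(x):
            z = x @ W
            half = n_hidden // 2
            # First half: sigmoid units; second half: derived-RBF units.
            return np.concatenate([sigmoid(z[:half]), sigmoid_derivative(z[half:])])

        x = rng.normal(size=n_in)
        print(hidden_layer(x))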

      • Learning and Recognition of Banknotes Using a Neural Network

        이정원, Kyungsung University Graduate School of Industry, 1995, master's thesis (Korea)

        Since a neural network has abilities in parallel distributed processing, learning, and fault tolerance compared with existing artificial intelligence methodologies, neural networks have found practical applications in the field of pattern recognition. In this paper, banknotes are used as perception data in order to show the usefulness of neural networks for visual perception. A learning model is developed: a multi-layer perceptron consisting of three layers, an input layer, a hidden layer, and an output layer, trained with the back-propagation algorithm. The objective of the first simulation is to find the optimum number of neurons in the hidden layer; the best perception result is obtained with 35 hidden neurons. Under this condition, the recognition ratio is proportional to the learning ratio, and recognition is at its highest when the learning ratio is 80%. The objective of the second simulation is to determine what kind of learning data affects the recognition result, in order to obtain a satisfactory recognition rate. The analysis compares randomly selected data with specific data; as a result, learning on the specific data turned out to be better.
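
        A minimal sketch of such a three-layer perceptron trained with back-propagation follows. Only the 35 hidden neurons come from the abstract; the input size, output size, learning rate, and toy data are assumptions for illustration.

        # Minimal three-layer perceptron with back-propagation (assumed sizes).
        import numpy as np

        rng = np.random.default_rng(2)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        n_in, n_hidden, n_out = 64, 35, 4   # e.g. an 8x8 patch, 4 note classes
        W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        lr = 0.5

        X = rng.random((200, n_in))                     # toy "banknote" patterns
        Y = np.eye(n_out)[rng.integers(0, n_out, 200)]  # one-hot class labels

        for epoch in range(100):
            # Forward pass.
            H = sigmoid(X @ W1)
            O = sigmoid(H @ W2)
            # Backward pass: gradient of the mean squared error.
            dO = (O - Y) * O * (1 - O)
            dH = (dO @ W2.T) * H * (1 - H)
            W2 -= lr * H.T @ dO / len(X)
            W1 -= lr * X.T @ dH / len(X)

        O = sigmoid(sigmoid(X @ W1) @ W2)
        print("final MSE:", float(((O - Y) ** 2).mean()))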

      • Design of a Fast Multi-Layer Neural Network via Quantization of Hidden-Layer Feature Information

        강명아, Chosun University Graduate School, 1999, doctoral thesis (Korea)

        The multi-layer neural network trained with the back-propagation learning algorithm is often used for complicated artificial perception problems such as pattern recognition, computer vision, and speech recognition. The heavy computation involved, however, makes it necessary to design a suitable optimum network structure for large problems. In the multi-layer case, deciding the number of hidden layers and hidden nodes is especially important: the hidden nodes act as functional units that classify the features of the input patterns in the given problem. Deciding the number of hidden nodes under back-propagation learning is problematic. If too few hidden nodes are used, learning cannot be completed because the given input patterns cannot be classified sufficiently. If too many are used, overfitting occurs through unnecessary computation and wasted memory, so the recognition rate drops and generality suffers. A neural network with a suitable optimum number of hidden nodes has therefore emerged as a factor with an important effect on the result. The existing design process for neural network structures has no fixed principle and has depended entirely on the subjective experience and trial and error of neural network experts.

        Various studies have addressed optimum network structure design; one approach decides the number of hidden nodes using the error sum of the information spread during learning. This has the disadvantage of eliminating useful hidden nodes, because it uses only the output values of the hidden layer when pruning. The power of a neural network is dominated by the parameters of its learning algorithm, especially the weights, the number of hidden layers, and the number of nodes. This thesis therefore proposes a method that decides the number of nodes using feature information built from the parameters of the learning algorithm. The method uses improved weight and offset rules that address problems of the existing back-propagation learning algorithm. When computing a new weight, the improved weight reflects the changing rate of the error with respect to the output value of the hidden node used in the error function, and it prevents the error from changing even when the output value of the hidden node changes. The improved offset rule restrains the oscillation that occurs when approaching the global minimum by reflecting the total error change in the offset, using the sigmoid function as the threshold function.

        The feature information of the hidden layer is computed from the improved weights, offsets, and output values of the hidden layer, and it serves as the estimate for pruning hidden nodes. The node with the maximum feature value is excluded from the pruning targets; the feature value of each remaining hidden node is compared with the average feature value of the rest, and hidden nodes whose feature value is smaller than the average are pruned. In this way, the method decides the optimum structure of the multi-layer neural network and improves its learning speed. A sketch of this pruning rule follows.
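
        In the code below, the per-node feature values are random stand-ins (the thesis derives them from the improved weights, offsets, and hidden outputs); the exempt-the-maximum, prune-below-average logic follows the description above.

        # Sketch of the feature-value pruning rule; feature values are
        # random stand-ins for the thesis's computed feature information.
        import numpy as np

        rng = np.random.default_rng(3)
        feature = rng.random(10)            # one feature value per hidden node

        best = int(np.argmax(feature))      # the max-feature node is exempt
        rest = np.delete(feature, best)
        threshold = rest.mean()             # average over the non-exempt nodes

        keep = feature >= threshold         # prune nodes below the average
        keep[best] = True                   # always keep the exempt node

        print("feature values:", np.round(feature, 2))
        print("kept nodes:    ", np.flatnonzero(keep))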

      • (A) study on machine learning-based USD/KRW exchange rate prediction model using forward exchange rate determination factors

        김영철, Graduate School, Yonsei University, 2019, doctoral thesis (Korea)

        Under the free-floating exchange rate regime, precise exchange rate forecasting is becoming increasingly important. Since the study of Meese & Rogoff (1983), which found that structural analysis models and time series analysis models, as well as the forward exchange rate, cannot outperform the random walk model in predictive power, there has been skepticism about the usefulness of exchange rate forecasting. However, as in the study of Clarida and Taylor (1997), there have been cases in which prediction performance significantly exceeded that of the random walk model through the selection of appropriate data and prediction models. In this study, we forecast the USD/KRW exchange rate at horizons of 1, 3, and 6 months and 1 year using key factors that determine the forward exchange rate: the current USD/KRW spot rate, the nominal interest rate of the Korean won, the nominal interest rate of the US dollar, and the risk premium in dollar borrowing. We adopt a multi-layer perceptron as the predictive model and use the four forward exchange rate determination factors as input variables. We vary the number of hidden layers and the number of nodes to find the multi-layer perceptron structure with the best predictive power. We confirm the superiority of the proposed model by comparing its experimental results with those of the other models, the forward exchange rate, and the random walk model. We hope this will help the decision-making of economic agents involved with foreign exchange, such as policy authorities, import-export companies, and FX investors.
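
        The structure search described here can be sketched as a small grid search over hidden-layer configurations. The synthetic data, the candidate layer sizes, and the use of scikit-learn's MLPRegressor are assumptions for illustration; only the four forward-rate determinants as inputs come from the abstract.

        # Sketch of an MLP structure search for exchange-rate regression.
        # Data and candidate grid are synthetic/assumed.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)

        # Stand-ins for: spot rate, KRW rate, USD rate, dollar risk premium.
        X = rng.normal(size=(600, 4))
        y = X @ np.array([1.0, 0.5, -0.5, 0.3]) + 0.1 * rng.normal(size=600)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)

        best = None
        for layers in [(8,), (16,), (8, 8), (16, 8), (16, 16, 8)]:
            model = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000,
                                 random_state=0).fit(X_tr, y_tr)
            mse = float(((model.predict(X_te) - y_te) ** 2).mean())
            if best is None or mse < best[1]:
                best = (layers, mse)

        print("best structure:", best[0], "test MSE:", round(best[1], 4))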
