The multi-layer neural network trained with the back-propagation learning algorithm is often used to solve complex problems of artificial perception such as pattern recognition, computer vision, and speech recognition. However, the large amount of computation involved makes it necessary to design a suitably optimal neural network structure when tackling large problems.
In the multi-layer neural network in particular, deciding the number of hidden layers and hidden nodes is very important. The hidden nodes act as functional units that classify the features of the input patterns for the given problem. Deciding the number of hidden nodes under the back-propagation learning algorithm is problematic, however. If too few hidden nodes are assigned, learning cannot complete because the given input patterns cannot be classified sufficiently. If too many are assigned, unnecessary computation and wasted memory result and overfitting occurs, so the recognition rate is lowered and generalization degrades. A neural network with a suitably optimal number of hidden nodes has therefore emerged as a factor with an important effect on the result.

No fixed principle exists for designing a neural network structure, so the process has depended entirely on the subjective experience and trial and error of neural network experts. Accordingly, various studies have pursued optimal network structure design; one such method decides the number of hidden nodes from the sum of the errors propagated during training. It has the drawback, however, of eliminating hidden nodes that are still useful, because it prunes hidden nodes using only the output values of the hidden layer, as illustrated below.
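For illustration only, the following minimal sketch shows the kind of output-only criterion just described; the array shape, the threshold tau, and the use of NumPy are assumptions rather than the cited method's exact formulation:

import numpy as np

def prune_by_output_only(H, tau=0.05):
    # H: hidden-layer outputs collected over the training set,
    # shape (n_samples, n_hidden_nodes).
    mean_activation = np.abs(H).mean(axis=0)   # average output magnitude per node
    return np.where(mean_activation < tau)[0]  # nodes judged "inactive" and pruned

Because only the outputs are inspected, a node whose small output is nonetheless weighted heavily downstream can be discarded, which is exactly the weakness the proposed method targets.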
The performance of a neural network is dominated by the parameters of its learning algorithm; in particular, it is influenced by the weights, the number of hidden layers, and the number of nodes.
This paper therefore proposes a method that decides the number of neural network nodes using feature information composed of the learning algorithm's parameters. As those parameters, the method uses improved weight and bias (offset) update rules that address problems raised by the existing back-propagation learning algorithm.
When a new weight is computed, the improved weight update reflects the rate of change of the error with respect to the hidden node's output value as used in the error function, so that the error does not fluctuate even when the hidden node's output value changes.
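As a hedged illustration of such a rule (the paper's exact update is not reproduced here), the standard back-propagation step for a hidden-to-output weight can be damped by the error's sensitivity to the hidden output h_j; the learning rate \eta and the damping form are assumptions:

\[
\Delta w_{jk} = -\eta \frac{\partial E}{\partial w_{jk}} = -\eta\, \delta_k h_j,
\qquad
\frac{\partial E}{\partial h_j} = \sum_k \delta_k w_{jk},
\qquad
\Delta \tilde{w}_{jk} = \frac{\Delta w_{jk}}{1 + \left| \partial E / \partial h_j \right|},
\]

so the step shrinks exactly when the error is most sensitive to changes in the hidden node's output, keeping the error stable as h_j varies.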
The improved bias rule restrains the oscillation that occurs while converging toward the global minimum by reflecting the change in total error in the bias, using the sigmoid function as the threshold function. The feature information of the hidden layer can then be obtained from the improved weight, bias, and hidden-layer output values, and this feature information serves as the estimated value for pruning hidden nodes.
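A minimal sketch of such a bias rule, where the damping form and the coefficient \lambda are assumptions:

\[
\sigma(x) = \frac{1}{1 + e^{-x}},
\qquad
\Delta b_j = \frac{-\eta\, \partial E / \partial b_j}{1 + \lambda \left| \Delta E_{\mathrm{total}} \right|},
\]

where \sigma is the sigmoid threshold function and \Delta E_{\mathrm{total}} is the change in total error between iterations; reflecting it in the bias step shrinks the update while the error is swinging, restraining oscillation near the global minimum.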
Among the feature values obtained, the node with the maximum value is excluded from the pruning candidates; the feature value of each remaining hidden node is then compared with the average of the remaining nodes' feature values, and every hidden node whose feature value is smaller than that average is pruned. By deciding the optimal structure of the multi-layer neural network in this way (a sketch of the procedure follows), the method aims to improve the network's learning speed.
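A minimal, runnable sketch of this pruning step, assuming the per-node feature values have already been computed (how they combine weight, bias, and output is not specified here):

import numpy as np

def select_nodes_to_prune(feature_values):
    # feature_values: 1-D array, one feature value per hidden node.
    f = np.asarray(feature_values, dtype=float)
    keep = np.argmax(f)                        # max-feature node is never pruned
    rest = np.delete(np.arange(f.size), keep)  # indices of the remaining candidates
    avg = f[rest].mean()                       # average feature value of the rest
    return [j for j in rest if f[j] < avg]     # prune below-average nodes

For example, with feature values [0.9, 0.2, 0.5, 0.4], node 0 is excluded, the average of the rest is about 0.37, and only node 1 falls below it, so node 1 is pruned.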