박철영, 中島康治, 대구대학교 과학기술연구소 (Daegu University Institute of Science and Technology), 1998, 科學技術硏究 (Science and Technology Research) Vol. 5, No. 1
Aiming to improve the performance of a neural network as an associative memory or as an optimization problem solver, we propose two models based on a nonmonotone analog neuron model that differs from traditional ones. Using the proposed model, we construct an energy function that can have two minima. It is shown that our model can successfully recall embedded patterns. We also discuss the simulation method and evaluate the model's performance through numerical simulations. The memory capacity depends strongly on the shape of the input-output function as well as on its sharpness. This model should be useful for devising a class of models for associative memory of temporal patterns.
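As a rough illustration of what a nonmonotone input-output function looks like, the sketch below uses a tanh that decays back toward zero once the input magnitude exceeds a threshold. The functional form, parameter names, and values here are assumptions for illustration only, not the form used in the paper:

```python
import numpy as np

def nonmonotone_activation(u, c=5.0, h=1.0):
    """Illustrative nonmonotone input-output function (assumed form).

    Near the origin the unit behaves like a steep tanh (sharpness c),
    but once |u| exceeds the threshold h the output decays back toward
    zero, making the function nonmonotone.
    """
    base = np.tanh(c * u)
    decay = np.where(np.abs(u) > h, np.exp(-(np.abs(u) - h)), 1.0)
    return base * decay
```

Because the output first rises with the input and then falls, the response to a moderate input can exceed the response to a much larger one, which is the qualitative property that distinguishes such a unit from a conventional monotone sigmoid.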
박철영, 中島康治, 대구대학교 과학기술연구소 (Daegu University Institute of Science and Technology), 1998, 科學技術硏究 (Science and Technology Research) Vol. 5, No. 2
We discuss the performance of neural networks with interconnections quantized to +1, -1, and 0 (Quantized Connection Neural Networks: QCNN), and how to choose the connection weights for such networks from a training set of examples. The basic characteristics of the networks and an algorithm for deciding the connection weights are presented. Using the algorithm, we obtain a layered QCNN that solves the parity problem for an arbitrary number N of inputs. The layered QCNN has a single hidden layer and no bias input when N is odd; when N is even, the network requires only one additional input as a bias. Networks that perform arbitrary logic functions can be designed on the basis of the algorithm, in a way that differs slightly from the construction for N-parity problems. The network may be expected to have the same generalization ability as a network trained with learning rules, because the connection weights can be decided even when the given training set is small. Learning the connection weights would take a rather long time; in our case, however, they can be determined without learning. Hence, we may expect some applications of QCNN to real-time processing.
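To make the quantized-connection idea concrete, here is a hand-designed network for the smallest even case, XOR (2-parity), with every weight restricted to {+1, -1, 0} and, as the abstract notes for even N, a single bias input. This construction, the {0, 1} input encoding, and the strict step activation are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def step(u):
    """Strict Heaviside step: 1 where u > 0, else 0."""
    return (np.asarray(u) > 0).astype(int)

# All weights lie in {+1, -1, 0}. The third input column is the bias
# input, fixed at 1 (N = 2 is even, so one bias input is needed).
W_hidden = np.array([[1, 1,  0],   # OR unit:  fires when x1 + x2 > 0
                     [1, 1, -1]])  # AND unit: fires when x1 + x2 - 1 > 0
W_out = np.array([1, -1, 0])       # output:   fires when OR - AND > 0

def qcnn_xor(x1, x2):
    """XOR via a single-hidden-layer network with quantized weights."""
    x = np.array([x1, x2, 1])            # append the fixed bias input
    h = step(W_hidden @ x)               # hidden layer: [OR, AND]
    return int(step(W_out @ np.append(h, 1)))
```

The point of the sketch is that no real-valued learning is involved: the weights are written down directly from the logic of the target function, which mirrors the abstract's claim that the weights can be decided without learning.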