An Error Back-Propagation Learning Algorithm for Multilayer Perceptrons with 8-Bit Data Precision
오상훈,송윤선 대한전자공학회 1996 전자공학회논문지-B Vol.b33 No.4
In this paper, we propose a learning method for multi-layer perceptrons (MLPs) with 8-bit data precision. The proposed method uses the cross-entropy cost function to remove the slope term of the error signal in the output layer. To decrease the possibility of overflow, we convert 16-bit weighted-sum results into 8-bit data with an appropriate range. In the forward propagation, the range for bit-conversion is determined using the saturation property of the sigmoid function. In the backward propagation, the range for bit-conversion is derived using the probability density function of the back-propagated signal. In a simulation study classifying handwritten digits in the CEDAR database, our method shows generalization performance similar to that of error back-propagation learning with 16-bit precision.
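The bit-conversion idea in the forward pass can be sketched as follows. This is a minimal illustration, not the paper's implementation: the clip range of ±8 and the symmetric fixed-point rounding scheme are assumptions chosen to show how sigmoid saturation makes a bounded range safe for 8-bit conversion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quantize(x, n_bits, x_max):
    # Clip to the chosen range, then round to one of 2**(n_bits-1)-1
    # symmetric fixed-point levels per side.
    levels = 2 ** (n_bits - 1) - 1
    x = np.clip(x, -x_max, x_max)
    return np.round(x / x_max * levels) / levels * x_max

# The sigmoid saturates outside roughly [-8, 8], so weighted sums can be
# clipped to that range before 8-bit conversion with little change in the
# resulting activations (the range value here is an illustrative choice).
weighted_sum = np.array([-12.0, -3.5, 0.2, 4.1, 15.0])  # 16-bit results
q = quantize(weighted_sum, n_bits=8, x_max=8.0)
activations = sigmoid(q)
```

Values far outside the range are clipped, but their activations were already saturated near 0 or 1, so the quantization error barely propagates forward.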
Phylogenetic analysis of PISTILLATA sequences in Neillia (Rosaceae)
오상훈 한국식물학회 2013 Journal of Plant Biology Vol.56 No.3
Putative PISTILLATA (PI) genes were determined in nine species of Neillia to examine the phylogenetic relationships among the species and to test the hypothesis of hybrid origin within the genus. The PI genes determined in Neillia have two introns in the I-box region, which is consistent with PI genes in other Rosaceae. Phylogenetic analyses of the I-box region, including the introns, indicated that the species formerly classified in Stephanandra were nested within Neillia, supporting the taxonomic merger of the two genera. The PI data do not have a sufficiently strong signal to reject the hypothesis that Stephanandra is a hybrid in origin. The PI data, in conjunction with nuclear LEAFY, ribosomal DNA, and chloroplast DNA data, suggest that N. affinis might have been derived from hybridization between N. thibetica and N. gracilis. The phylogenetic position of N. affinis in the N. thibetica clade is supported by the PI and rDNA data, whereas N. affinis is also supported as a sister to N. gracilis in the LEAFY and cpDNA data. The pattern of phylogenetic placements of N. affinis in two different clades in two different sets of data suggests that the genome of the species might comprise a combination of the putative parental species.
Effect of Nonlinear Transformations on Entropy of Hidden Nodes
오상훈 한국콘텐츠학회 2014 International Journal of Contents Vol.10 No.1
Hidden nodes play a key role in the information processing of feed-forward neural networks, in which inputs are processed through a series of weighted sums and nonlinear activation functions. In order to understand the role of hidden nodes, we must analyze the effect of the nonlinear activation functions on the weighted sums to hidden nodes. In this paper, we focus on the effect of nonlinear functions from the viewpoint of information theory. Under the assumption that the nonlinear activation function can be approximated piece-wise linearly, we prove that the entropy of weighted sums to hidden nodes decreases after passing through the piece-wise linear functions. Therefore, we argue that the nonlinear activation function decreases the uncertainty among hidden nodes. Furthermore, the more the hidden nodes are saturated, the more the entropy of hidden nodes decreases. Based on this result, we can say that, after successful training of feed-forward neural networks, hidden nodes tend not to be in linear regions but in saturated regions of the activation function, with the effect of uncertainty reduction.
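The entropy-reduction claim can be observed numerically. The sketch below is an assumption-laden illustration, not the paper's proof: it draws Gaussian "weighted sums", passes them through a saturating tanh activation, and compares crude histogram estimates of differential entropy before and after.

```python
import numpy as np

def histogram_entropy(samples, bins=64):
    # Crude differential-entropy estimate from a histogram (in nats).
    hist, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    p = hist * widths          # probability mass per bin
    nz = p > 0
    return -np.sum(p[nz] * np.log(hist[nz]))

rng = np.random.default_rng(0)
pre = rng.normal(0.0, 3.0, size=100_000)   # weighted sums to a hidden node
post = np.tanh(pre)                        # saturating activation output

h_pre = histogram_entropy(pre)
h_post = histogram_entropy(post)
# The saturated output is squeezed into (-1, 1), so its estimated
# entropy is lower than that of the raw weighted sums.
```

Increasing the input standard deviation drives more samples into the saturated tails of tanh, and the post-activation entropy drops further, in line with the abstract's claim that stronger saturation means a larger entropy decrease.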
Adaptive Learning Rate and Limited Error Signal to Remove the Sensitivity of the Error Back-Propagation Algorithm to the n-th Order Cross-Entropy Error Signal
오상훈,이수영 대한전자공학회 1998 電子工學會論文誌, C Vol.c35 No.6
Although the nCE (n-th order cross-entropy) error function resolves the incorrect saturation problem of the conventional EBP (error back-propagation) algorithm, the performance of MLPs (multilayer perceptrons) trained using the nCE function depends heavily on the order of the nCE function. In this paper, we propose an adaptive learning rate that makes the performance of MLPs insensitive to the order of the nCE error. Additionally, we propose a limited error signal at the output nodes to prevent unstable learning due to the adaptive learning rate. The effectiveness of the proposed method is demonstrated in simulations of handwritten digit recognition and thyroid diagnosis tasks.
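The two mechanisms in the abstract can be sketched together. This is a hedged illustration only: the clip bound of 0.5 and the rate-scaling rule below are assumptions for demonstration, not the paper's actual nCE formulas.

```python
import numpy as np

def limited_error_signal(target, output, bound=0.5):
    # Clip the output-node error signal so a single large error cannot
    # destabilize the weight update (bound is an illustrative choice).
    return np.clip(target - output, -bound, bound)

def adaptive_learning_rate(base_lr, raw_delta):
    # Shrink the step size when the raw (pre-clipping) error signal is
    # large, keeping weight updates of comparable magnitude.
    return base_lr / (np.abs(raw_delta).mean() + 1.0)

target = np.array([1.0, 0.0, 0.0])
output = np.array([0.1, 0.8, 0.2])
delta = limited_error_signal(target, output)
lr = adaptive_learning_rate(0.1, target - output)
```

The clipped error signal bounds the per-update change while the adaptive rate compensates globally, which is the stabilizing interplay the abstract describes.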