Phylogenetic analysis of PISTILLATA sequences in Neillia (Rosaceae)
오상훈 Korean Society of Plant Biologists 2013 Journal of Plant Biology Vol.56 No.3
Putative PISTILLATA (PI) sequences were generated from nine species of Neillia to examine the phylogenetic relationships among the species and to test the hypothesis of a hybrid origin within the genus. The PI genes determined in Neillia have two introns in the I-box region, which is consistent with PI genes in other Rosaceae. Phylogenetic analyses of the I-box region, including the introns, indicated that the species formerly classified in Stephanandra were nested within Neillia, supporting the taxonomic merger of the two genera. The PI data do not carry a sufficiently strong signal to reject the hypothesis that Stephanandra is hybrid in origin. The PI data, in conjunction with nuclear LEAFY, ribosomal DNA, and chloroplast DNA data, suggest that N. affinis might have been derived from hybridization between N. thibetica and N. gracilis. The placement of N. affinis in the N. thibetica clade is supported by the PI and rDNA data, whereas N. affinis is supported as sister to N. gracilis in the LEAFY and cpDNA data. This pattern of placement in two different clades across two sets of data suggests that the genome of N. affinis may combine those of the putative parental species.
Effect of Nonlinear Transformations on Entropy of Hidden Nodes
오상훈 Korea Contents Association 2014 International Journal of Contents Vol.10 No.1
Hidden nodes play a key role in the information processing of feed-forward neural networks, in which inputs are processed through a series of weighted sums and nonlinear activation functions. To understand the role of hidden nodes, we must analyze the effect of the nonlinear activation functions on the weighted sums to hidden nodes. In this paper, we examine this effect from the viewpoint of information theory. Under the assumption that the nonlinear activation function can be approximated piece-wise linearly, we prove that the entropy of the weighted sums to hidden nodes decreases after passing through such piece-wise linear functions. We therefore argue that the nonlinear activation function decreases the uncertainty among hidden nodes. Furthermore, the more saturated the hidden nodes are, the more the entropy of the hidden nodes decreases. Based on this result, we can say that, after successful training of a feed-forward neural network, hidden nodes tend to lie not in the linear regions but in the saturated regions of the activation function, with the effect of reducing uncertainty.
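The entropy-reduction claim can be illustrated numerically. The sketch below is not the paper's derivation: it assumes Gaussian-distributed weighted sums and a logistic sigmoid (both illustrative choices), and estimates entropy with a fixed-bin-width histogram. The saturating sigmoid compresses large weighted sums toward 0 and 1, so the estimated entropy drops after the activation.

```python
import numpy as np

def hist_entropy(x, bin_width=0.05):
    """Shannon entropy (nats) of samples discretized into fixed-width bins.
    With a common bin width, this approximates differential entropy up to
    a constant, so before/after values are comparable."""
    bins = np.arange(x.min(), x.max() + bin_width, bin_width)
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
# weighted sums to hidden nodes, assumed roughly Gaussian (illustrative)
pre_activation = rng.normal(0.0, 3.0, size=100_000)
# logistic sigmoid: near-linear around 0, saturated for |x| >~ 4
post_activation = 1.0 / (1.0 + np.exp(-pre_activation))

H_before = hist_entropy(pre_activation)
H_after = hist_entropy(post_activation)
print(f"entropy before: {H_before:.2f} nats, after: {H_after:.2f} nats")
```

Widening the input distribution (pushing more samples into the saturated regions) makes the gap larger, consistent with the abstract's claim that more saturation means a greater entropy decrease.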
An Error Back-Propagation Learning Algorithm for Multilayer Perceptrons with 8-bit Data Precision
오상훈, 송윤선 The Institute of Electronics Engineers of Korea 1996 Journal of the Institute of Electronics Engineers of Korea - B Vol.b33 No.4
In this paper, we propose a learning method for multi-layer perceptrons (MLPs) with 8-bit data precision. The suggested method uses the cross-entropy cost function to remove the slope term from the error signal in the output layer. To reduce the possibility of overflow, we convert 16-bit weighted-sum results into 8-bit data over an appropriate range. In the forward propagation, the range for bit conversion is determined using the saturation property of the sigmoid function. In the backward propagation, the range for bit conversion is derived using the probability density function of the back-propagated signal. In a simulation study classifying handwritten digits from the CEDAR database, our method shows generalization performance similar to that of error back-propagation learning with 16-bit precision.
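The abstract does not give the paper's exact conversion ranges or fixed-point format, but the forward-pass idea can be sketched as follows. This hypothetical example assumes Q8.8 fixed point (scale 256) for the 16-bit weighted sums and clips at ±8, a range outside of which the sigmoid is effectively saturated at 0 or 1, before rescaling into 8 bits.

```python
import numpy as np

def int16_to_int8(sums16, in_scale, clip):
    """Convert 16-bit fixed-point weighted sums to 8-bit values.
    Values beyond +/-clip (in real units) are saturated, exploiting
    the fact that the sigmoid output there is already ~0 or ~1."""
    real = sums16.astype(np.float64) / in_scale   # back to real units
    real = np.clip(real, -clip, clip)             # sigmoid saturation range
    out_scale = 127.0 / clip                      # spread over the int8 range
    return np.round(real * out_scale).astype(np.int8)

rng = np.random.default_rng(1)
# weighted sums in Q8.8 fixed point (scale 256) -- illustrative format
sums16 = (rng.normal(0.0, 4.0, size=1000) * 256).astype(np.int16)
sums8 = int16_to_int8(sums16, in_scale=256, clip=8.0)

# sigmoid on the de-quantized 8-bit values vs. on the full-precision sums
x = sums8.astype(np.float64) * 8.0 / 127.0
y_8bit = 1.0 / (1.0 + np.exp(-x))
y_full = 1.0 / (1.0 + np.exp(-sums16.astype(np.float64) / 256))
```

Because the sigmoid's slope is at most 0.25 and the clipped tails contribute almost nothing, the activation computed from the 8-bit values stays within about 0.01 of the full-precision activation in this setup, which is the intuition behind choosing the conversion range from the saturation property.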