문경일 湖南大學校 情報通信硏究所 2002 정보통신연구 Vol.12 No.-
This paper presents an improved probabilistic formalization of the computational methods of the case-based reasoning paradigm; in particular, it is computationally very efficient. In contrast to earlier probabilistic methods, this method does not require a transformation step between the original case space and the distribution space. The paper concentrates on applying the marginal distribution to the case matching problem and proposes a probabilistic scoring metric for it. In the experiments, the probabilistic method is evaluated empirically on available real-world case bases. The results show that, for cases from which some of the features have been removed, a relatively small number of remaining features is sufficient for retrieving the original case from the case base with the proposed method.
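The matching idea can be sketched roughly as follows. This is a minimal illustration of scoring a partial case against a case base with per-feature marginal probabilities; the feature names and the log-probability form of the metric are assumptions, not the paper's exact formulation.

```python
import math

# Toy case base; each case is a set of binary features (illustrative data).
case_base = [
    {"fever": 1, "cough": 1, "rash": 0},
    {"fever": 0, "cough": 1, "rash": 1},
    {"fever": 1, "cough": 0, "rash": 0},
]

def marginals(cases):
    """Estimate the marginal probability P(feature = value) from the case base."""
    counts = {}
    for case in cases:
        for f, v in case.items():
            counts[(f, v)] = counts.get((f, v), 0) + 1
    n = len(cases)
    return {fv: c / n for fv, c in counts.items()}

def score(query, case, probs):
    """Log-probability score: matches on rarer feature values weigh more."""
    s = 0.0
    for f, v in query.items():
        if case.get(f) == v:
            s += -math.log(probs.get((f, v), 1e-9))
    return s

probs = marginals(case_base)
query = {"fever": 1, "cough": 1}   # a case with one feature removed
best = max(case_base, key=lambda c: score(query, c, probs))
```

Because the score rewards matches on infrequent feature values, the original (complete) case wins even when the query has lost a feature.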
문경일 호남대학교 정보통신연구소 2000 정보통신연구 Vol.10 No.-
Neuro-fuzzy systems have been studied for several years, yet the term neuro-fuzzy still lacks a proper definition and retains the flavour of a buzzword. Few neuro-fuzzy methods actually employ neural networks, even though they are very often depicted in the form of some kind of neural network structure. However, all of these methods display some kind of learning capability, as known from neural networks. This article reviews neuro-fuzzy systems, which combine methods from neural network theory with fuzzy systems. Further, it presents NEFCLASS, an approach to neuro-fuzzy classification, and discusses how a neuro-fuzzy classifier can be initialized by rules generated from initial rule bases.
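A rule-initialized fuzzy classifier of the kind reviewed here can be sketched as below. The triangular membership parameters and the two rules are illustrative assumptions, not NEFCLASS's actual configuration; only the structure (linguistic terms, min-conjunction, winner rule) follows the general scheme.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for one input dimension on [0, 1] (assumed parameters).
terms = {
    "low":  lambda x: tri(x, -0.5, 0.0, 0.5),
    "mid":  lambda x: tri(x, 0.0, 0.5, 1.0),
    "high": lambda x: tri(x, 0.5, 1.0, 1.5),
}

# Initial rule base: (term for x1, term for x2) -> class label.
rules = [
    (("low", "low"), "A"),
    (("high", "high"), "B"),
]

def classify(x1, x2):
    """The rule with the highest activation (min for AND) determines the class."""
    best_label, best_act = None, -1.0
    for (t1, t2), label in rules:
        act = min(terms[t1](x1), terms[t2](x2))  # conjunction via min
        if act > best_act:
            best_label, best_act = label, act
    return best_label
```

In a neuro-fuzzy setting, learning would then tune the `a, b, c` parameters of each term from data while the rule structure stays interpretable.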
문경일 호남대학교 정보통신연구소 2001 정보통신연구 Vol.11 No.-
This paper proposes an architecture of fuzzy neural networks with triangular fuzzy weights. The proposed fuzzy neural network can handle fuzzy input vectors as well as real input vectors. An error function is defined on the level sets of the fuzzy outputs and fuzzy targets, and a learning algorithm is derived for adjusting the parameters of each fuzzy weight.
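The level-set machinery can be sketched as follows: an alpha-cut of a triangular fuzzy weight is an interval, a crisp input scales that interval, and the error compares the endpoints of output and target level sets. The concrete numbers and the squared-endpoint error form are illustrative assumptions.

```python
def alpha_cut(tri, alpha):
    """Level set [l, u] of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_scale(interval, x):
    """Product of an interval with a real input x."""
    l, u = interval
    return (l * x, u * x) if x >= 0 else (u * x, l * x)

def level_set_error(output, target):
    """Squared error between the endpoints of output and target level sets."""
    (ol, ou), (tl, tu) = output, target
    return 0.5 * ((ol - tl) ** 2 + (ou - tu) ** 2)

w = (0.5, 1.0, 1.5)               # triangular fuzzy weight (a, b, c)
cut = alpha_cut(w, 0.5)           # interval at level 0.5
out = interval_scale(cut, 2.0)    # fuzzy weight times crisp input
err = level_set_error(out, (1.4, 2.6))
```

A learning rule would sum such errors over several alpha levels and differentiate with respect to `a`, `b`, and `c`.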
An English Character Input Method Based on Alphabet Shapes for Embedded-System Keypads
문경일,이현엽 湖南大學校 情報通信硏究所 2003 정보통신연구 Vol.13 No.-
In this paper, a new method is introduced for directly inputting English characters on the small keypads used in embedded systems, based on the characteristic shapes of the letters. To input a specific character, the method selects the keypad buttons corresponding to the end points and crossing points of the character's strokes. Because it uses the shape of each character, characters can be entered directly through the keypad and the user does not need to read the small character labels printed on the buttons. As a result, letters can be input rapidly and the user's eye fatigue is greatly reduced, which leads to an improvement in overall embedded system performance.
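The mapping from letter shape to key presses might be sketched like this: each letter's end points and crossing points are placed on a 3x3 grid aligned with the keypad, and the user presses the matching keys. The specific feature points below are hypothetical guesses for illustration, not the paper's actual letter mapping.

```python
# 3x3 keypad laid out as a grid of key labels.
KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"]]

# Hypothetical (row, col) end points / crossing points for a few letters.
SHAPE_POINTS = {
    "T": [(0, 0), (0, 2), (2, 1)],                   # two top ends, one bottom end
    "L": [(0, 0), (2, 0), (2, 2)],                   # top end, corner, right end
    "X": [(0, 0), (0, 2), (1, 1), (2, 0), (2, 2)],   # four ends plus the crossing
}

def key_sequence(letter):
    """Keys the user presses to input `letter` by its shape."""
    return [KEYPAD[r][c] for r, c in SHAPE_POINTS[letter]]
```

The appeal of such a scheme is that the key sequence can be recalled from the letter's visual shape alone, without reading the button labels.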
문경일 호남대학교 정보통신연구소 1998 정보통신연구 Vol.8 No.-
The purpose of this paper is to obtain a single number which can be considered a typical value of a given fuzzy set. Quantities such as the arithmetic mean, the fuzzy expected value and the clustering fuzzy expected value fail to represent a typical value since they are all derived by some process of numerical averaging and lead to compromise solutions. In addition, they always provide a unique typical value even when it does not exist. In this paper a new quantity for fuzzy sets is defined. A given fuzzy set is first clustered and replaced by a finite set of clusters, each represented by the cluster center and its size.
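The cluster-based quantity can be sketched for a discrete fuzzy set: split the support at large gaps, summarize each cluster by its membership-weighted center and total membership (its size), and report the dominant cluster's center, or no value when clusters tie. The gap threshold and the tie rule are illustrative assumptions.

```python
def clusters(points, gap=1.0):
    """Split sorted (x, membership) points wherever consecutive x differ by > gap."""
    points = sorted(points)
    groups, current = [], [points[0]]
    for p in points[1:]:
        if p[0] - current[-1][0] > gap:
            groups.append(current)
            current = []
        current.append(p)
    groups.append(current)
    return groups

def typical_value(points, gap=1.0):
    """Center of the cluster with the largest total membership, else None."""
    summary = []
    for g in clusters(points, gap):
        size = sum(m for _, m in g)
        center = sum(x * m for x, m in g) / size
        summary.append((size, center))
    summary.sort(reverse=True)
    if len(summary) > 1 and summary[0][0] == summary[1][0]:
        return None                       # no unique typical value exists
    return summary[0][1]

fset = [(1.0, 0.2), (1.5, 0.9), (2.0, 0.4), (8.0, 0.3), (8.5, 0.2)]
tv = typical_value(fset)                  # dominant cluster around x ~ 1.5
```

Unlike a plain mean, this construction can refuse to answer: two equally sized clusters yield `None` rather than a compromise value between them.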
An SCG Learning Algorithm Using a Hard-Limit Transfer Function
문경일 호남대학교 1998 호남대학교 학술논문집 Vol.19 No.2
Training the output units of a neural network into saturation means that the derivative is zero and that training is caught in a local minimum. Adding a small offset term to the derivative in the backpropagation method is known to eliminate such local minima and increase learning speed. The same simple trick cannot be applied directly to second-order methods such as the scaled conjugate gradient, because they use not only the derivative but also the error function itself. By simply integrating the derivative that includes the offset term, a formulation of the error function corresponding to this derivative is given. Being able to calculate the actual error as well as the derivative, the offset term becomes directly applicable in second-order methods.
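The integration step can be sketched for a single tanh output unit with squared error: adding an offset `C` to the transfer derivative and integrating back gives an extra `-C*(t*net - log(cosh(net)))` term in the error, so error and gradient stay consistent as SCG requires. The choice of tanh and `C = 0.1` are illustrative assumptions.

```python
import math

C = 0.1  # small offset added to the transfer-function derivative (assumed value)

def error(net, t):
    """Error whose derivative w.r.t. net is -(t - tanh(net)) * (tanh'(net) + C)."""
    f = math.tanh(net)
    # 0.5*(t - f)^2 corresponds to the tanh' part of the derivative; the second
    # term integrates the offset, using log(cosh) as the antiderivative of tanh.
    return 0.5 * (t - f) ** 2 - C * (t * net - math.log(math.cosh(net)))

def gradient(net, t):
    """Closed-form derivative of `error` with respect to net."""
    f = math.tanh(net)
    return -(t - f) * ((1.0 - f * f) + C)

# Finite-difference check that the integrated error matches the offset gradient.
net, t, h = 1.3, 1.0, 1e-6
numeric = (error(net + h, t) - error(net - h, t)) / (2 * h)
```

Because the gradient factor `(1 - f**2) + C` never vanishes, the unit keeps receiving an error signal even deep in saturation, while a second-order method can still evaluate the matching error value.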