An Advanced Design Method for Helmholtz Resonators Using the Genetic Algorithm
황상문, 황성호, 정의봉, 한국소음진동공학회 (Korean Society for Noise and Vibration Engineering), 1998, 소음 진동 Vol.8 No.6
For the analysis of some Helmholtz resonators, it may be more appropriate to consider the acoustic field within the cavity than to rely on the 1-DOF analogous (lumped) model alone. However, a design method that considers more parameters than the lumped model is not a trivial process, due to the trade-off effects among the parameters. In this paper, the genetic algorithm, an optimization technique that converges rapidly and robustly toward the globally fittest solution, is applied to the design process of Helmholtz resonators. Results show that the genetic algorithm can be used successfully and efficiently to find the resonant frequencies for both the lumped model and the distributed model.
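The design loop described in the abstract can be sketched for the lumped (1-DOF) model, whose classical resonant frequency is f = (c/2π)·√(S/(V·L_eff)). The cavity volume, target frequency, bounds, and GA parameters below are illustrative assumptions, not values from the paper:

```python
import math
import random

C = 343.0          # speed of sound in air [m/s]
V = 1.0e-3         # cavity volume [m^3] (assumed for illustration)
F_TARGET = 120.0   # desired resonant frequency [Hz] (assumed)

def resonant_freq(radius, length):
    """Lumped Helmholtz frequency f = c/(2*pi) * sqrt(S / (V * L_eff)),
    with the common flanged-neck end correction L_eff = L + 1.7*a."""
    area = math.pi * radius ** 2
    l_eff = length + 1.7 * radius
    return C / (2 * math.pi) * math.sqrt(area / (V * l_eff))

def fitness(ind):
    # Reward small deviation from the target frequency.
    return -abs(resonant_freq(*ind) - F_TARGET)

def evolve(pop_size=60, generations=100, seed=0):
    rng = random.Random(seed)
    # Individuals: (neck radius [m], neck length [m]) within plausible bounds.
    pop = [(rng.uniform(0.005, 0.05), rng.uniform(0.01, 0.2))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # Uniform crossover plus small Gaussian mutation per gene.
            child = tuple(max(1e-4, rng.choice(pair) + rng.gauss(0, 0.002))
                          for pair in zip(a, b))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(resonant_freq(*best))  # converges near F_TARGET
```

The same loop extends to a distributed model by swapping `resonant_freq` for a cavity-field computation; only the fitness function changes.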
A Study on the Application of the Genetic Algorithm to the Design of Simple Expansion Chambers and Resonator Modules
황상문, 황성호, 정의봉, 김봉준, 정융호, 한국소음진동공학회 (Korean Society for Noise and Vibration Engineering), 2000, 소음 진동 Vol.10 No.1
With increasing requirements on automobile noise, the design of mufflers with higher performance has become more important in recent years. A muffler design must both minimize back pressure and maximize sound attenuation over a broad range of frequencies. Even for a simple Helmholtz resonator, an important element of a muffler, designing a resonator with an accurate resonant frequency is difficult if one wants to account for standing waves within the cavity. In this paper, the genetic algorithm, an optimization technique with a high capability of reaching the globally fittest solution and robust convergence, is applied to the design process of mufflers. Results show that the genetic algorithm can be used successfully and efficiently to find the fittest model for both mufflers and Helmholtz resonators.
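The sound-attenuation side of the trade-off can be illustrated with the classical transmission-loss formula for a simple expansion chamber, TL = 10·log₁₀(1 + ¼(m − 1/m)²·sin²(kL)); the area ratio and chamber length below are arbitrary illustrative values, not taken from the paper:

```python
import math

C = 343.0  # speed of sound in air [m/s]

def transmission_loss(freq, area_ratio, chamber_len):
    """Classical TL of a simple expansion chamber:
    TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(k*L)), k = 2*pi*f/c."""
    k = 2 * math.pi * freq / C
    m = area_ratio
    return 10 * math.log10(
        1 + 0.25 * (m - 1 / m) ** 2 * math.sin(k * chamber_len) ** 2)

# TL vanishes whenever k*L is a multiple of pi (pass-band frequencies)
# and peaks when k*L is an odd multiple of pi/2.
print(round(transmission_loss(C / (2 * 0.3), 9.0, 0.3), 6))  # pass band: 0.0 dB
print(round(transmission_loss(C / (4 * 0.3), 9.0, 0.3), 2))  # peak: ~13 dB
```

The pass bands at kL = nπ are one reason a single chamber is insufficient over a broad frequency range, motivating the resonator modules the paper optimizes.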
황상문, 박인규, 백덕수, 진달복, 대한전자공학회 (The Institute of Electronics Engineers of Korea), 2002, 電子工學會論文誌 IE (Industry Electronics) Vol.39 No.1
This paper proposes a strategy-learning method for a two-person, deterministic janggi (Korean chess) board game, based on the fusion of a back-propagation neural network and the Q-learning algorithm. Learning is accomplished simply by playing games against an opponent process. The system consists of two parts: a move generator, which produces moves according to the board state, and a search kernel, which combines αβ search with back-propagation and Q-learning to learn a good evaluation function for the game. Whereas temporal-difference learning only reduces the discrepancy between adjacent evaluations along a sequence of moves, Q-learning learns the evaluation function from the updated evaluations of each move, and can therefore acquire an optimal policy even without prior knowledge of how its moves affect the environment. In general, once sufficient learning guaranteed the accuracy of the evaluation function, the winning percentage proved to be proportional to the amount of learning.
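The Q-learning update at the core of the approach, Q(s,a) ← Q(s,a) + α·[r + γ·maxₐ′ Q(s′,a′) − Q(s,a)], can be shown with a minimal tabular sketch. The paper combines Q-learning with a back-propagation network and αβ search on janggi; the toy deterministic chain game below is purely illustrative:

```python
import random
from collections import defaultdict

# Toy deterministic "game": states 0..4, actions move left/right,
# reward 1 only on reaching the terminal state 4.
ACTIONS = (-1, +1)

def step(state, action):
    nxt = min(4, max(0, state + action))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
            target = r + (0.0 if done else gamma * max(q[(s2, b)] for b in ACTIONS))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy should move right from every non-terminal state.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(4)])  # → [1, 1, 1, 1]
```

In the paper this table is replaced by a back-propagation network approximating Q, with αβ search supplying the candidate moves.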
황상문, 하은주, 박종욱, 원광대학교 공업기술개발연구소 (Wonkwang University Institute of Industrial Technology Development), 1991, 工業技術開發硏究誌 Vol.11 No.-
Abstract: In this paper, we describe a new edge-detection algorithm that exactly extracts edge pixels in the 45° and 135° directions. For that purpose, two image vectors are obtained by convolving the image function with four operators. Two of the four operators are the first derivatives of the two-dimensional Gaussian along the x-axis and y-axis, and the other two are obtained from the first derivative of the two-dimensional Gaussian rotated by 45°. Edge-detection results are shown for several pictures of real scenes and compared with a conventional method.
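The four-operator idea can be sketched as follows. The σ, kernel radius, and synthetic diagonal-edge image are illustrative choices, and the 45°-rotated pair is formed here by combining the x- and y-derivative responses (a rotated directional derivative), which may differ in detail from the paper's construction:

```python
import math

SIGMA, RADIUS = 1.0, 2

def dgauss_x():
    """Kernel for the first derivative of a 2-D Gaussian along x;
    the y kernel is its transpose."""
    k = []
    for y in range(-RADIUS, RADIUS + 1):
        row = []
        for x in range(-RADIUS, RADIUS + 1):
            g = math.exp(-(x * x + y * y) / (2 * SIGMA * SIGMA))
            row.append(-x / (SIGMA * SIGMA) * g)
        k.append(row)
    return k

def transpose(k):
    return [list(row) for row in zip(*k)]

def respond(img, kernel, cy, cx):
    """Correlate `kernel` with `img` centred on pixel (cy, cx)."""
    total = 0.0
    for dy in range(-RADIUS, RADIUS + 1):
        for dx in range(-RADIUS, RADIUS + 1):
            total += kernel[dy + RADIUS][dx + RADIUS] * img[cy + dy][cx + dx]
    return total

# Synthetic 9x9 image with a step edge along the diagonal y = x.
img = [[1.0 if x > y else 0.0 for x in range(9)] for y in range(9)]

kx = dgauss_x()
ky = transpose(kx)
gx = respond(img, kx, 4, 4)
gy = respond(img, ky, 4, 4)

# Combining the axis responses yields the 45-degree-rotated operator pair.
d_plus = (gx + gy) / math.sqrt(2)   # derivative along the edge: ~0
d_minus = (gx - gy) / math.sqrt(2)  # derivative across the edge: large
print(abs(d_plus) < 1e-9, abs(d_minus) > 1.0)  # → True True
```

The axis-aligned pair alone splits a diagonal edge's energy between gx and gy; the rotated pair responds maximally to it, which is why diagonal edge pixels can be extracted exactly.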