A Proposal of ESDML, a Description Language for Expressing Emotion in Multimedia Environments
조철우 國立 昌原大學校 産業技術硏究所 1998 産技硏論文集 Vol.12 No.-
This paper proposes ESDML (Emotional Speech Description Markup Language), which can describe emotional speech in multimedia form. An ESDML browser is also proposed to output emotional speech and images effectively. The grammar of ESDML is described in detail, along with the elements required for the browser.
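The grammar itself is defined in the paper; as a purely hypothetical illustration of what an emotional-speech markup document of this general kind could look like (the element and attribute names below are invented here for illustration and are not taken from ESDML's actual grammar):

```xml
<!-- Hypothetical sketch only: tag and attribute names are illustrative,
     not ESDML's real syntax as defined in the paper -->
<document>
  <utterance emotion="joy" intensity="0.8">
    <text>It is good to see you again!</text>
    <image src="smile.png"/>
  </utterance>
</document>
```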
조철우 國立 昌原大學校 情報通信硏究所 1999 情報通信論文集 Vol.3 No.-
Recently, interest in audio-visual speech activities has been growing. Knowledge of audio-visual speech can be used in speech recognition to improve the recognition rate, and in speech synthesis to add visual images for enhanced message transfer. Much research has been carried out in the audio-visual speech area, but the way of measuring visual speech activity varies among research groups: some methods use expensive tracking devices, while others use simple markers to trace the movement of the articulatory organs. The main aim of these experiments is to collect audio-visual materials that can be used in later experiments to estimate and model the actions of human articulatory organs such as the mouth and jaw. In this collection process we record audio-visual data from seven directions separately; twelve markers are used to trace the movements.
조철우,리타오 國立 昌原大學校 產業技術硏究院 2004 産技硏論文集 Vol.18 No.-
In this paper we classify pathological voice signals with severe noise components using two parameters: the spectral slope and the ratio of energies in the harmonic and noise components (HNR). The spectral slope is obtained by a curve-fitting method, and the HNR is computed in the cepstral quefrency domain. Speech data from normal subjects and patients were collected, diagnosed, and divided into three classes (normal, relatively less noisy, and severely noisy). The means and standard deviations of the spectral slope and the HNR are computed and compared across the three classes to characterize the severely noisy pathological voice signals and distinguish them from the others.
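The two parameters can be sketched in a few lines of NumPy. This is an independent minimal illustration, not the paper's actual implementation: the window, the linear (rather than more elaborate) curve fit, and the peak-minus-mean cepstral measure are all simplifying assumptions made here.

```python
import numpy as np

def spectral_slope(frame, sr):
    """Fit a straight line to the log-magnitude spectrum; the slope (in dB/Hz)
    is a crude stand-in for the spectral tilt obtained by curve fitting."""
    win = frame * np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(win))
    freqs = np.fft.rfftfreq(len(win), 1.0 / sr)
    log_mag = 20.0 * np.log10(mag + 1e-12)
    slope, _intercept = np.polyfit(freqs, log_mag, 1)
    return slope

def cepstral_hnr(frame, sr, f0_min=60.0, f0_max=400.0):
    """Crude harmonic-to-noise measure in the quefrency domain: height of the
    cepstral peak in the plausible pitch-period range, above the region mean.
    A strongly periodic (harmonic) frame gives a tall peak; a noisy one does not."""
    win = frame * np.hanning(len(frame))
    log_spec = np.log(np.abs(np.fft.rfft(win)) + 1e-12)
    cep = np.fft.irfft(log_spec)               # real cepstrum
    q_lo, q_hi = int(sr / f0_max), int(sr / f0_min)
    region = cep[q_lo:q_hi]
    return float(region.max() - region.mean())
```

A harmonic frame should score clearly higher on `cepstral_hnr` than a white-noise frame, which is the property the classification relies on.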
On the Construction of an Experimental Speech Synthesis System Using a Real-Time Formant Synthesizer
조철우 國立 昌原大學校 産業技術硏究所 1989 産技硏論文集 Vol.3 No.-
This paper describes the implementation of an experimental system for speech synthesis using a real-time formant speech synthesizer. Following the design of the real-time formant synthesizer, an experimental synthesis system was constructed. The system is expected to be useful for extracting phonetic parameters through recursive synthesis procedures.
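The core of a formant synthesizer is a glottal source driven through a cascade of second-order resonators, one per formant. A minimal sketch of that idea is below; it is an assumption-laden toy (impulse-train source, unity-DC-gain resonators, hand-picked formant values), not the real-time synthesizer the paper implements.

```python
import numpy as np

def resonator_coeffs(f, bw, sr):
    """Second-order digital resonator at centre frequency f (Hz) with
    bandwidth bw (Hz): y[n] = b0*x[n] + a1*y[n-1] + a2*y[n-2]."""
    r = np.exp(-np.pi * bw / sr)
    theta = 2.0 * np.pi * f / sr
    a1, a2 = 2.0 * r * np.cos(theta), -r * r
    b0 = 1.0 - a1 - a2          # normalize for unity gain at DC
    return b0, a1, a2

def synth_vowel(f0, formants, sr=16000, dur=0.2):
    """Drive an impulse-train source through a cascade of formant resonators."""
    n = int(sr * dur)
    src = np.zeros(n)
    src[:: int(sr / f0)] = 1.0              # crude glottal pulse train
    out = src
    for f, bw in formants:                  # cascade, one resonator per formant
        b0, a1, a2 = resonator_coeffs(f, bw, sr)
        y = np.zeros(n)
        for i in range(n):
            y[i] = (b0 * out[i]
                    + a1 * (y[i - 1] if i > 0 else 0.0)
                    + a2 * (y[i - 2] if i > 1 else 0.0))
        out = y
    return out / (np.abs(out).max() + 1e-12)
```

For example, `synth_vowel(100.0, [(700, 130), (1200, 70), (2600, 160)])` produces a rough /a/-like buzz; in a real experimental system these formant trajectories would be the parameters adjusted in the recursive synthesis loop.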
조철우 國立 昌原大學校 産業技術硏究所 1997 産技硏論文集 Vol.11 No.-
In this paper, the pitch and durations of speech materials containing emotional content are measured using CECIL, a speech analysis tool. Speech materials were collected from actors and other subjects. The analysis focuses mainly on pitch variation and utterance durations. In addition to the actors' speech, speech elicited by the autobiographical recall method is also analysed. Each measurement is tabulated and graphed to examine the general characteristics of emotional speech. This experiment provides valuable background material for analysing the information contained in emotional speech.
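The kind of pitch measurement described can be illustrated with a minimal autocorrelation tracker for a single voiced frame. CECIL itself is a separate tool whose internals are not described here; this sketch and its search range are independent assumptions, not CECIL's algorithm.

```python
import numpy as np

def acf_pitch(frame, sr, f0_min=60.0, f0_max=400.0):
    """Estimate F0 of one voiced frame from the autocorrelation peak,
    searching only lags that correspond to plausible pitch periods."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(sr / f0_max)
    lag_hi = min(int(sr / f0_min), len(acf) - 1)
    lag = lag_lo + int(np.argmax(acf[lag_lo:lag_hi]))
    return sr / lag
```

Running this frame-by-frame over an utterance yields the pitch contour whose variation, together with utterance durations, is what the analysis compares across emotions.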
조철우 國立 昌原大學校 産業技術硏究所 1994 産技硏論文集 Vol.8 No.-
Several successful implementations of speech synthesizers have been reported as speech synthesis technology improves, but few research reports are available on the systematic assessment of synthesized speech. This paper surveys examples of EC and US research and of Korean synthesis systems and their assessments, and discusses some points about assessment using nonsense word sets.
A Study on the Classification of Pathological Voice from ARS Using Neural Networks
조철우,김광인,김대현,권순복,김기련,김용주,전계록,왕수건 한국음성과학회 2001 음성과학 Vol.8 No.2
Speech material collected from an ARS (Automatic Response System) was analyzed and classified into disease and non-disease states. The material includes 11 different kinds of diseases. Along with the ARS speech, DAT (Digital Audio Tape) speech was collected in parallel as a benchmark. To analyze the speech material, analysis tools developed in our laboratory were used to obtain improved and robust parameters. To classify the speech into disease and non-disease classes, a multi-layered neural network was used. Three different parameter combinations were tested to find the proper network size and the best performance. From the experiment, a classification rate of 92.5% was obtained.
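A multi-layered network of the kind used here can be sketched as a tiny one-hidden-layer classifier in plain NumPy. The architecture details below (8 tanh hidden units, full-batch gradient descent, synthetic 2-D features standing in for the acoustic parameters) are assumptions of this sketch, and the 92.5% figure is the paper's result, not this toy's.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=1000, seed=0):
    """Train a 1-hidden-layer MLP (tanh hidden, sigmoid output) with
    full-batch gradient descent on binary cross-entropy."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                       # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # output probability
        d2 = (p - y[:, None]) / len(X)                 # dL/dz at output
        gW2, gb2 = h.T @ d2, d2.sum(0)
        d1 = (d2 @ W2.T) * (1.0 - h ** 2)              # backprop through tanh
        gW1, gb1 = X.T @ d1, d1.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2))) > 0.5).astype(int).ravel()
```

In the paper's setting, the rows of `X` would be the voice parameters extracted per speaker and `y` the disease/non-disease label.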
A Study on Characteristic Analysis of Speech Signals According to Changes in Emotional Information
조철우, 조은경, 민경환 (Jo, Cheol-Woo; Jo, Eun-Kyung; Min, Kyung-Hwan) 한국음향학회 1997 韓國音響學會誌 Vol.16 No.3
This paper describes experimental results from emotional speech materials analysed by various signal processing methods. The speech materials containing emotional information were collected from stage actors, and the analysis focuses on variations in pitch information and on durations. From the analysed results we obtained the changes in speech parameters that accompany changes in emotional information; the experiment is significant as a basic study for further analysis of emotional speech, and its materials provide a valuable resource for that work.
본 논문은 정서정보를 포함하여 수집된 음성자료를 여러 가지 신호처리 방법으로 분석한 결과에 대하여 기술하고 있다. 정서정보를 포함한 음성은 연극배우로부터 수집하였으며 분석은 주로 피치정보의 변화와 지속시간을 중심으로 행하였다. 수집된 음성에 대한 분석결과 정서정보의 변화에 따른 음성 파라미터의 변화치를 얻을 수 있었으며 이 실험은 앞으로의 정서음성정보의 분석에 필요한 기초적 실험으로 의의가 있다. This paper describes experimental results from emotional speech materials, which is analysed by various signal processing methods. Speech materials with emotional informations are collected from actors. Analysis is focused to the variations of pitch informations and durations. From the analysed results we can observe the characteristics of emotional speech. The materials from this experiment provides valuable resources for analysing emotional speech.