RISS Academic Research Information Service

      Statistical graphical models for scene analysis, source separation and other audio applications.

      https://www.riss.kr/link?id=T11420303

      Additional Information

      Multilingual Abstract

      The problem of separating overlapping sound sources has long been a research goal in sound processing, not least because of the apparent ease with which we as listeners achieve perceptual separation and isolation of sound sources in our everyday experiences.
      Human listeners use their prior knowledge of all the sound classes that they have experienced throughout their lives to impose constraints on the form that elements in a mixture can take. They use information obtained from partial observation of the unmixed context to disambiguate components whose energy is locally swamped by interfering sources.
      Researchers working on this problem (Ellis 1996) argue that, just as human listeners rely on top-down knowledge, prior constraints on the form that the mixture components can take are the critical ingredient in making source separation systems work. In this thesis, we propose to encode these constraints as models that capture the statistical distributions of the features of mixture components, using the framework of statistical graphical models, and then to use those models to estimate obscured or corrupted portions of a signal from partial observations. Our overarching goal is to explain composite data as a composition of the models of the individual sources.
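      To make the idea of estimating obscured portions from partial observations concrete, here is a minimal sketch under a deliberately simplified assumption: the source model is a single joint Gaussian over spectral bins, rather than the richer graphical models the thesis develops. All names, sizes, and data below are illustrative (Python/NumPy).

      # Sketch only: infer masked spectral bins from observed ones under a
      # joint-Gaussian source model (a stand-in for the thesis's models).
      import numpy as np

      rng = np.random.default_rng(0)

      # Fit a Gaussian "source model" to clean training frames (frames x bins).
      train = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))
      mu = train.mean(axis=0)
      cov = np.cov(train, rowvar=False)

      frame = train[0]                 # one clean frame, for demonstration
      obs = np.array([0, 1, 2, 3])     # bins where this source dominates
      hid = np.array([4, 5, 6, 7])     # bins swamped by an interferer

      # Conditional mean: E[x_hid | x_obs] = mu_h + C_ho C_oo^{-1} (x_obs - mu_o)
      gain = cov[np.ix_(hid, obs)] @ np.linalg.inv(cov[np.ix_(obs, obs)])
      estimate = mu[hid] + gain @ (frame[obs] - mu[obs])
      print("inferred hidden bins:", estimate)
      print("true hidden bins:   ", frame[hid])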
      After reviewing the basic statistical tools, this dissertation describes three models of this kind. The first uses multiple-microphone recordings from reverberant rooms, combined in a filter-and-sum setup; the filter coefficients are optimized to match the system output against a model of speech taken from a speech recognizer. The second model addresses the more difficult case of a single-channel recording and, by decomposing the signal into subbands, handles the tractability problems posed by the very large number of states required. The final model provides very precise fits to source signals without an enormous dictionary of prototypes, instead exploiting the observation that much of a real-world signal can be described as systematic local spectral deformations of adjacent time frames; by inferring these deformations between occasional spectral templates, the entire sound is accurately described. For this last model, we show in detail how a mixture of two sources can be segmented at the points where local deformations do not provide an adequate explanation, delineating regions dominated by one source. Individual sources can then be reconstructed by interpolating the deformation parameters, yielding estimates of the mixture components even when they are hidden behind high-energy maskers.
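      The filter-and-sum setup of the first model can be sketched in the same spirit. This is a toy under stated assumptions: where the thesis optimizes the summed output against a speech-recognizer model, the sketch below simply fits the filtered, summed channels to a known reference signal by least squares; all sizes and signals are invented for illustration.

      # Sketch only: least-squares filter-and-sum over M simulated microphones.
      import numpy as np

      rng = np.random.default_rng(1)
      T, M, L = 200, 3, 8            # samples, microphones, filter taps

      s = rng.standard_normal(T)     # stand-in for the target source
      # Each microphone hears the source through a different short room
      # response, plus a little noise.
      mics = [np.convolve(s, rng.standard_normal(5), mode="full")[:T]
              + 0.1 * rng.standard_normal(T) for _ in range(M)]

      # Build A so that A @ h == sum_m conv(x_m, h_m): each column is a
      # delayed copy of one channel (delay d = 0..L-1).
      A = np.hstack([
          np.column_stack([np.concatenate([np.zeros(d), x[:T - d]])
                           for d in range(L)])
          for x in mics
      ])

      # Solve for the M*L filter coefficients that best match the reference.
      h, *_ = np.linalg.lstsq(A, s, rcond=None)
      y = A @ h                      # filter-and-sum output
      print("relative residual energy:", np.sum((y - s) ** 2) / np.sum(s ** 2))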
      Although acoustic scene analysis and source separation serve as the motivating and illustrative applications throughout, the intrinsic descriptions of the nature of sound sources captured by these models could have other, broader applications in signal recognition, compression, and modification, and even beyond audio in other domains where signal properties have the appropriate nontrivial local structure.