RISS Academic Research Information Service

      Towards Multi-Lingual Multi-Modal Dialogue Systems.

      https://www.riss.kr/link?id=T16602230

      • Author
      • Publication

        Ann Arbor : ProQuest Dissertations & Theses, 2022

      • Degree-granting institution

        University of California, Davis, Computer Science

      • Year conferred

        2022

      • Language

        English

      • Keywords
      • Degree

        Ph.D.

      • Page count

        106 p.

      • Advisor/Committee

        Advisor: Yu, Zhou.

      Additional Information

      Multilingual Abstract

      Having an intelligent assistant that can communicate with humans to serve their needs is a fundamental challenge in Artificial Intelligence (AI) research. Recently, owing to the development of deep learning techniques and large-scale datasets, we have witnessed great advances in dialogue systems. Conversational agents are now deployed in millions of smart devices such as Alexa, Google Home, and smartphones (e.g., Siri) to serve as personal assistants or chat companions for human users. Although tremendous success has been achieved, major limitations remain. The majority of current dialogue systems can only process and communicate through language, which limits their application to conversational tasks that require situational understanding, such as language-guided visual navigation or fashion shopping assistance. Additionally, while more than 6,500 different languages are used in the world, dialogue systems are mainly studied in English. To broaden access to such AI techniques for non-English speakers, it is essential to build conversational AI agents that can communicate in multiple languages. To address these limitations, we aim to build multi-lingual multi-modal dialogue systems that learn to process context from multi-modal signals (vision and language) and communicate in various languages by interacting with real users. In this dissertation, we introduce our effort toward this goal in two research directions. 1. Ground Vision and Action: we build multi-modal dialogue systems that can ground conversations in a visual environment and adopt optimal actions to improve task success. We also collect a new benchmark that helps dialogue systems learn cross-modal grounding by simultaneously handling vision generation from textual context and text generation from visual context in a unified conversational task. 2. Cross-lingual Cross-modal Representation Learning: to enable dialogue systems to become multi-lingual speakers, we conduct research to align vision and various languages in a learned semantic space. Specifically, we study multi-modal machine translation and cross-lingual cross-modal pre-training techniques to learn joint representations across languages and modalities. We also introduce how to learn robust universal cross-modal representations without parallel image-text pairs.
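      As a rough illustration of the second direction described above, the sketch below shows a CLIP-style contrastive objective that pulls matched image and caption features together in one shared semantic space; the captions may come from any language the text encoder covers. This is a minimal sketch under assumed choices (PyTorch, linear projection heads, placeholder dimensions and temperature), not the dissertation's actual model.

# A minimal, assumed sketch (not the dissertation's actual model): CLIP-style
# contrastive alignment of image features and multi-lingual text features in a
# single shared semantic space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedSpaceAligner(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512):
        super().__init__()
        # Linear projections stand in for full vision / multi-lingual text encoders.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Learnable temperature, initialised so exp(scale) is roughly 1 / 0.07.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, img_feats, txt_feats):
        # L2-normalise both modalities so the dot product is cosine similarity.
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()
        # Matched image-caption pairs sit on the diagonal of the logit matrix.
        targets = torch.arange(img.size(0), device=logits.device)
        # Symmetric InfoNCE loss over image-to-text and text-to-image directions.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    model = SharedSpaceAligner()
    # Placeholder features for a batch of 8 image-caption pairs; in practice the
    # captions could be in any language encoded by a shared multi-lingual text model.
    loss = model(torch.randn(8, 2048), torch.randn(8, 768))
    print(loss.item())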