RISS Academic Research Information Service

      • KCI indexing candidate

        A Study of the Polysemy of English Perception Verbs (영어지각동사의 다의성 연구)

        지인영 한국현대영어영문학회 2003 현대영어영문학 Vol.47 No.1

        This paper deals with metaphor and metonymy as represented in the use of English perception verbs such as see or hear. Its purpose is to analyse the polysemy of perception verbs in terms of metaphor and metonymy and to suggest a model of their psychological semantic structure. English perception verbs are often used to represent mental, cognitive activity as well as physical, perceptive activity. This paper finds a basis for that polysemous use in the creative system of metaphor and metonymy, especially in the meaning-extension function of the mind-as-body metaphor: English perception verbs are a good example of a metaphor of domain transfer from the physical domain to the mental or cognitive domain. The paper suggests a conceptual chain and a semantic structure for the perception verb to show how its polysemy and contextual modulation arise.

      • KCI-indexed

        A Study of Machine Translation Technology and Accuracy Using Deep Learning (심층학습을 이용한 기계번역 기술과 정확도 연구)

        지인영,김희동 국제언어인문학회 2017 인문언어 Vol.19 No.2

        In this study, we discuss the basic machine-learning technology of deep neural networks for natural language processing (NLP). We explain the distributed vector representation of words; such representations have been shown to carry semantic meaning and to be useful in various NLP tasks. A recurrent neural network (RNN) is employed to obtain vector representations of sentences. We discuss the RNN encoder-decoder model and some modifications of the RNN structure that improve the accuracy of machine translation. To test and verify the accuracy of Google Translate, we performed translations among Korean, English, and Japanese and examined changes of meaning between the original and the translated sentences. In neural network translation we found inaccuracies such as wrong subject-object relations and omission or repetition of parts of the original meaning. To increase the performance and accuracy of machine translation, it is necessary to acquire more training data.
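        As a concrete illustration of the RNN encoder-decoder translation model discussed above, the sketch below shows a minimal PyTorch version: one GRU encodes the source sentence into a fixed-size vector, and a second GRU decodes the target from it. The class name, vocabulary sizes, and dimensions are illustrative assumptions, not the implementation behind the paper or Google Translate.

```python
# Minimal RNN encoder-decoder sketch (illustrative; hyperparameters assumed).
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):  # hypothetical class, for illustration only
    def __init__(self, src_vocab=8000, tgt_vocab=8000, emb=256, hidden=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)  # distributed word vectors
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into its final hidden state.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode conditioned on that state (teacher forcing with target inputs).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # per-position logits over the target vocabulary

model = EncoderDecoder()
src = torch.randint(0, 8000, (2, 10))  # batch of 2 source sentences, length 10
tgt = torch.randint(0, 8000, (2, 12))
print(model(src, tgt).shape)  # torch.Size([2, 12, 8000])
```

        Compressing the whole source sentence into a single vector is precisely the bottleneck that the attention-based modifications mentioned in the abstract were designed to relieve.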

      • KCI-indexed

        An Analysis of Research Methods for Text Summarization Using Deep Learning (심층학습을 이용한 문서요약의 연구방법 분석)

        지인영,김희동 국제언어인문학회 2018 인문언어 Vol.20 No.1

        In this study, we discuss the basic technology of text summarization based on deep neural networks for natural language processing (NLP). The text summarization task divides into extractive and abstractive summarization: an extractive summary builds the output text from words used in the input document, while an abstractive summary requires understanding the input and generating new sentences with the same content. The abstractive sentence-generation system is based on the encoder-decoder model with an attention mechanism, extended with a selector that can pick tokens from the input sentence. The copy network and the pointer network are special mechanisms for this selector, and such selector systems make text summarization a hybrid of abstractive and extractive summarization. In the future, we expect the accuracy of text summarization to improve with the addition of reinforcement learning methods.
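        The copy/pointer selector described above can be made concrete with a short sketch: at each decoding step a gate p_gen mixes the vocabulary (abstractive) distribution with the attention distribution over source tokens (extractive copying). This follows the general shape of pointer-generator models; the function name and shapes are assumptions, not the exact systems the paper analyses.

```python
# Pointer-generator mixing step (illustrative sketch).
import torch

def pointer_generator_step(vocab_logits, attn_weights, src_ids, p_gen):
    """Mix generation and copy distributions for one decoding step.

    vocab_logits: (batch, vocab) decoder logits over the output vocabulary
    attn_weights: (batch, src_len) attention over source positions (rows sum to 1)
    src_ids:      (batch, src_len) source token ids
    p_gen:        (batch, 1) gate: probability of generating vs. copying
    """
    gen_dist = torch.softmax(vocab_logits, dim=-1) * p_gen
    # Scatter the copy probability mass onto the vocabulary ids of source tokens.
    copy_dist = torch.zeros_like(gen_dist).scatter_add_(
        1, src_ids, attn_weights * (1.0 - p_gen))
    return gen_dist + copy_dist  # still a valid distribution (rows sum to 1)

batch, src_len, vocab = 2, 5, 100
dist = pointer_generator_step(
    torch.randn(batch, vocab),
    torch.softmax(torch.randn(batch, src_len), dim=-1),
    torch.randint(0, vocab, (batch, src_len)),
    torch.sigmoid(torch.randn(batch, 1)))
print(dist.sum(dim=-1))  # tensor([1., 1.]) up to floating-point error
```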

      • A Cognitive-Linguistic Analysis of Polysemy in Bodily Perception Verbs: A Comparative Review of English and Korean (신체 지각동사의 다의어 현상에 대한 인지 언어학적 분석: 영어와 한국어의 비교검토)

        지인영 한국체육대학교 2001 論文集 Vol.24 No.-

        This paper deals with the polysemous phenomena of perception verbs, focusing on the English verb hear and the Korean verb tut-ta. Its purpose is to analyse the semantics of these verbs in terms of meaning extension and to propose cognitive conceptual structures for them in the framework of cognitive semantics. Generally, perception verbs are taken to denote sensory or perceiving activities; however, they are also commonly used to indicate mental activities. For instance, hear has the secondary extended meanings 'listen to', 'heed', 'pay attention to', 'follow', and 'obey' as well as its basic meaning of sensory perception with the ears. Likewise, tut-ta means not only hearing but also mental processing such as 'pay attention to', 'take the course of', or 'obey'. These two perception verbs in English and Korean thus show very similar polysemy. This paper analyses the data within cognitive semantics. Traditionally, objectivists have treated the meaning of words as a direct relationship between words and things in the world; such an approach considers the mind a machine that takes words as input and yields things as output, thereby eliminating conceptual structure from the linguistic system. As a result, it cannot explain the metaphorically extended meanings of words, including perception verbs. Cognitive semanticists, by contrast, view the meaning of a word as a relationship between words and the world in the sense of 'the human experiential picture of the world', not the 'actual world'. In that experiential picture, conceptual structure links physical or sensory activities to mental activities. To explain the polysemy of these verbs, meaning extension and Langacker's (1987, 1991) concept of the 'active zone (AZ)' are applied, and a cognitive conceptual representation for them is suggested on the basis of Langacker's framework.

      • KCI-indexed

        A Study of Sentence Embedding Using Contrastive Learning (대조학습을 이용한 문장 임베딩 연구)

        지인영,김희동,김태혁,배홍식 한국외국어대학교 언어연구소 2023 언어와 언어학 Vol.- No.99

        We discuss a sentence-representation method using contrastive learning, in which data augmentation methods such as word replacement, deletion, duplication, and word-order change are used to automatically generate positive samples. These augmentations can make sentences ungrammatical or create sentences with different meanings, which can harm the accuracy and reliability of the representation vectors. We propose a two-step model in which self-supervised learning is followed by supervised learning on datasets reflecting the special characteristics of Korean. Korean allows word-order scrambling, so a positive dataset can be secured through word-order-change augmentation; in addition, various positive datasets can be built from honorific-ordinary pairs as well as active-passive or passive-active pairs. We confirmed the efficiency of the proposed method in an experiment using a dataset with word-order-change augmentation.
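        A minimal sketch of this style of contrastive training, using word-order-change augmentation to form the positive pair, is given below. The mean-of-embeddings encoder and the InfoNCE-style loss are illustrative assumptions; the paper's encoder, Korean datasets, and two-step training schedule are not reproduced.

```python
# Contrastive sentence-embedding sketch with word-order augmentation.
import random
import torch
import torch.nn.functional as F

def scramble(token_ids):
    # Word-order-change augmentation: a shuffled copy serves as the positive
    # sample (plausible for Korean, where scrambling often preserves meaning).
    shuffled = list(token_ids)
    random.shuffle(shuffled)
    return torch.stack(shuffled)

def info_nce(z1, z2, temperature=0.05):
    """Matched rows of z1/z2 are positives; all other pairs are negatives."""
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1)
    return F.cross_entropy(sim / temperature, torch.arange(z1.size(0)))

emb = torch.nn.Embedding(1000, 64)                        # toy encoder weights
batch = [torch.randint(0, 1000, (8,)) for _ in range(4)]  # 4 "sentences" of ids
z1 = torch.stack([emb(s).mean(0) for s in batch])         # mean-of-embeddings
z2 = torch.stack([emb(scramble(s)).mean(0) for s in batch])
loss = info_nce(z1, z2)
loss.backward()  # gradients flow into the embedding table
print(float(loss))
```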

      • KCI-indexed

        A Technical Analysis of Pre-trained Language Models (사전학습 언어모델의 기술 분석 연구)

        지인영,김희동 국제언어인문학회 2020 인문언어 Vol.22 No.1

        The pre-trained language model BERT has achieved great success in natural language processing by transferring knowledge from a resource-rich pre-training task to low-resource downstream tasks. The model has been recognized as a breakthrough that changed the paradigm of natural language processing. In this paper, a number of studies are analyzed to classify and compare research directions, and we examine the technical challenges after BERT. In pre-training, self-supervised learning is performed, which relies entirely on the training data; if linguistic knowledge were introduced in the course of training, better results could be obtained more efficiently. It is therefore necessary to develop methods for inserting external knowledge, such as linguistic information, into the training process. The masked language model and next-sentence prediction are used as BERT's pre-training tasks; however, to reach a much deeper understanding of natural language, other effective methods remain to be studied and developed. Lastly, we should aim to develop explainable artificial intelligence (XAI) technology for natural language processing, helping us look into otherwise opaque processing. The pre-trained language model focuses on capabilities that can serve all natural language understanding tasks, and much research addresses how to adapt a common language model effectively to downstream tasks, even with little data. We also hope that the technical analysis reviewed in this study will give linguists and computer scientists an opportunity to understand recent technological achievements in natural language processing and to seek joint research.
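        Of the two pre-training tasks mentioned, the masked language model can be sketched briefly: tokens are masked at random and the model is trained to recover them, which is what makes the procedure self-supervised. The 15% masking rate and the 80/10/10 replacement split follow the original BERT recipe; the token ids and the helper function below are illustrative assumptions.

```python
# BERT-style masked-language-model input corruption (illustrative sketch).
import torch

MASK_ID, VOCAB = 103, 30522  # conventional BERT WordPiece ids, assumed here

def mask_tokens(ids, mlm_prob=0.15):
    labels = ids.clone()
    masked = torch.bernoulli(torch.full(ids.shape, mlm_prob)).bool()
    labels[~masked] = -100  # compute the loss only on masked positions
    ids = ids.clone()
    # Of the masked positions: 80% -> [MASK], 10% -> random token, 10% -> kept.
    replace = torch.bernoulli(torch.full(ids.shape, 0.8)).bool() & masked
    ids[replace] = MASK_ID
    rand = torch.bernoulli(torch.full(ids.shape, 0.5)).bool() & masked & ~replace
    ids[rand] = torch.randint(0, VOCAB, ids.shape)[rand]
    return ids, labels

inputs, labels = mask_tokens(torch.randint(0, VOCAB, (2, 16)))
print((labels != -100).float().mean())  # roughly 0.15 of positions are scored
```

        Inserting external linguistic knowledge, as the abstract proposes, would amount to biasing which positions are masked or adding auxiliary objectives alongside this one.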

