RISS Academic Research Information Service

      • KCI-registered

        Size-Independent Caption Extraction for Korean Captions with Edge Connected Components

        Je-Hee Jung, Jaekwang Kim, Jee-Hyong Lee · Korean Institute of Intelligent Systems · 2012 · International Journal of Fuzzy Logic and Intelligent Systems Vol.12 No.4

        Captions carry information related to the images they accompany. To obtain this information, methods for extracting text from images have been developed. However, most existing methods apply only to captions with a fixed height or stroke width, using fixed pixel-size or block-size operators derived from morphological assumptions. We propose a method based on edge connected components that can extract Korean captions of various sizes and fonts. We analyze the properties of edge connected components containing captions and build a decision tree that discriminates the components that contain captions from those that do not. The images for the experiment were collected from broadcast programs, such as documentaries and news programs, that include captions of various heights and fonts. We evaluate the proposed method by measuring the performance of latent caption area extraction. The experiment shows that the proposed method can efficiently extract Korean captions of various sizes.
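
        This kind of pipeline lends itself to a compact implementation. Below is a minimal sketch assuming OpenCV and scikit-learn; the component features and the training data are illustrative placeholders, not the paper's actual feature set or classifier.

            import cv2
            import numpy as np
            from sklearn.tree import DecisionTreeClassifier

            def edge_component_features(gray):
                # Edge map, then 8-connected components over the edge pixels.
                edges = cv2.Canny(gray, 100, 200)
                n, _, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
                feats = []
                for i in range(1, n):  # label 0 is the background
                    x, y, w, h, area = stats[i]
                    # Illustrative size-independent features: aspect ratio, fill ratio.
                    feats.append([w / max(h, 1), area / max(w * h, 1)])
                return np.array(feats)

            # Hypothetical labeled set: component features marked caption / non-caption.
            # X_train, y_train = ...
            # tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
            # is_caption = tree.predict(edge_component_features(frame_gray))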

      • KCI-registered

        A Deep Learning-Based Methodology for Expert Image Interpretation through Expertise Transplantation

        Taejin Kim, Namgyu Kim · Korea Intelligent Information Systems Society · 2020 · Journal of Intelligence and Information Systems (지능정보연구) Vol.26 No.2

        Recently, as deep learning has attracted attention, it is being considered as a way to solve problems in various fields. In particular, deep learning is known to perform well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields of AI research owing to its wide applicability. Many studies have also been conducted to improve the performance of image captioning in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to perceive an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. Domain experts, on the contrary, tend to perceive the image by focusing on the specific elements necessary to interpret it in light of their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple adaptation of transfer learning using expertise data may invoke another type of problem: simultaneous learning with captions of various characteristics may cause so-called 'inter-observation interference,' which makes it difficult to learn each characteristic point of view purely. When learning from vast amounts of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.

        To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. In addition, following the advice of an art therapist, about 300 pairs of 'image / expertise captions' were created, and this data was used for the expertise-transplantation experiments. The experiment confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated through conventional transfer learning remain at the general perspective.
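
        As a rough illustration of character-independent transfer learning, the sketch below fine-tunes an independent copy of a pre-trained captioner for each caption characteristic. The pretrain and finetune helpers are hypothetical placeholders standing in for standard captioning training loops, not the authors' code.

            import copy

            def pretrain(model, data): ...   # placeholder: general captioning pre-training
            def finetune(model, data): ...   # placeholder: transfer learning on expert captions

            # One pre-trained base model, one independently fine-tuned copy per
            # caption characteristic, so characteristics cannot interfere in learning.
            def character_independent_transfer(base_model, general_data, expert_sets):
                pretrain(base_model, general_data)        # e.g., an MSCOCO-scale corpus
                specialists = {}
                for character, small_set in expert_sets.items():
                    model = copy.deepcopy(base_model)     # independent copy per characteristic
                    finetune(model, small_set)            # few hundred expert captions
                    specialists[character] = model
                return specialists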

      • KCI-registered

        Cultural Validation of the CAPTION Situational Taxonomy Model

        문희정, 안현의 · Student Life Counseling Research Institute, Sogang University · 2023 · 人間理解 (Human Understanding) Vol.44 No.2

        This study examined whether the CAPTION situational taxonomy model, which classifies the situations people experience in daily life into seven factors, namely Complexity, Adversity, Positive Valence, Importance, Typicality, humOr, and Negative Valence, is validly applicable in the Korean cultural context. To secure this cultural validity, we validated the CAPTIONs-SF, a scale measuring the CAPTION model, on a Korean sample. Participants were 431 adult men and women aged 25 to 39, and the survey was administered through two data collection methods. For scale validation, a confirmatory factor analysis was conducted on the CAPTIONs-SF data, and convergent and criterion validity were confirmed through correlations with the Situation Six questionnaire and the short form of the NEO adult personality assessment. A multi-group analysis was also conducted to check whether the factor structures of the two data sets collected by different methods were identical. The model fit of the factor analysis was acceptable, and convergent and criterion validity were verified. The factor structures of the two data sets did not differ by collection method, demonstrating the cultural validity of the CAPTION model. Finally, the significance of this study and directions for follow-up research are discussed.

        This study sought to ascertain the cultural validity of the CAPTION model as a situational taxonomy within the Korean cultural context. We assessed the CAPTIONs-SF scale for this purpose, targeting 431 adults aged between 25 and 39. Data collection involved two distinct methods, each set undergoing separate scale validation. The confirmatory factor analysis indicated an acceptable model fit. Convergent and criterion validity were established through correlations with the Situation Six questionnaire and the NEO-Adult-PAS-SF. Furthermore, the two data collection methods revealed no differences in factor structure. These findings confirm the applicability of the CAPTION model within the Korean cultural context. The study also discussed the significance of the findings and highlighted its limitations.
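
        For readers unfamiliar with the validation procedure, a confirmatory factor analysis of this kind can be sketched in Python with the semopy package (an assumption for illustration; the study does not state its software). The factor syntax, item names, and data file below are hypothetical.

            import pandas as pd
            from semopy import Model, calc_stats

            # Two of the seven CAPTION factors in lavaan-style measurement syntax;
            # item columns c1..c3, a1..a3 are hypothetical names.
            desc = """
            Complexity =~ c1 + c2 + c3
            Adversity  =~ a1 + a2 + a3
            """
            model = Model(desc)
            model.fit(pd.read_csv("captions_sf_items.csv"))  # hypothetical data file
            print(calc_stats(model))                         # fit indices (CFI, RMSEA, ...)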

      • KCI-registered

        Web Image Caption Extraction Using Positional Relation and Lexical Similarity

        Hyoung-Gyu Lee, Min-Jeong Kim, Gumwon Hong, Hae-Chang Rim · Korean Institute of Information Scientists and Engineers · 2009 · Journal of KIISE: Software and Applications Vol.36 No.4

        This paper proposes a method for extracting image captions from web documents that simultaneously considers the positional relation between an image and a caption and the lexical similarity between the caption and the main text. The positional relation represents how the caption is located relative to the image in terms of distance and direction, and the lexical similarity represents how lexically similar the caption describing the image is to the main text. Taking as the baseline a caption extraction method that uses only features treating the image and the caption independently, adding the proposed features improved both caption extraction precision and recall, and improved the caption extraction F-measure by about 28%.

        In this paper, we propose a new web image caption extraction method considering the positional relation between a caption and an image and the lexical similarity between a caption and the main text containing the caption. The positional relation between a caption and an image represents how the caption is located with respect to the distance and the direction of the corresponding image. The lexical similarity between a caption and the main text indicates how likely the main text is to generate the caption of the image. Compared with previous approaches that utilize only independent features of images and captions, the proposed approach improves caption extraction recall and precision, and improves the F-measure by about 28%, by including the additional features of positional relation and lexical similarity.
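
        A minimal sketch of the two feature families, assuming scikit-learn; bounding boxes are (x1, y1, x2, y2) tuples and all names are illustrative, not the authors' implementation.

            import math
            from sklearn.feature_extraction.text import TfidfVectorizer
            from sklearn.metrics.pairwise import cosine_similarity

            def center(box):
                x1, y1, x2, y2 = box
                return ((x1 + x2) / 2, (y1 + y2) / 2)

            def positional_features(image_box, candidate_box):
                # Distance and direction of the candidate text block from the image.
                (ix, iy), (cx, cy) = center(image_box), center(candidate_box)
                dx, dy = cx - ix, cy - iy
                return [math.hypot(dx, dy), math.atan2(dy, dx)]

            def lexical_similarity(candidate_text, main_text):
                # Cosine similarity of TF-IDF vectors as a lexical-similarity proxy.
                tfidf = TfidfVectorizer().fit_transform([candidate_text, main_text])
                return cosine_similarity(tfidf[0], tfidf[1])[0, 0]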

      • A Study on the Trend of Caption Communication of TV Video Contents

        Janghan Lim, Bomin Jeong · Korea Digital Design Council · 2007 · Korea Digital Design Council Conference Vol.2007 No.1

        The most remarkable feature of recent TV seems to be the active use of captions. Since some broadcasters began utilizing captions in the latter half of the 1990s, captions have come to carry a great deal of weight in most entertainment programs. Caption Communication means making communication with viewers smooth and interesting by utilizing such captions. Caption Communication is already so settled in everyday video culture that watching TV without captions feels unnatural. This likely results from the changes in technology and environment in the ubiquitous age: TV is often watched in public places where the sound is inaudible, so captions must be used. This study aims to identify the current state of Caption Communication and efficient methods for applying it.

      • KCI-registered

        Size-Independent Caption Extraction Based on Connected Components Using Neural Networks

        Je-Hee Jung, Tae Bok Yoon, Dong-Moon Kim, Jee-Hyong Lee · Korean Institute of Intelligent Systems · 2007 · Journal of Korean Institute of Intelligent Systems Vol.17 No.7

        Captions that appear in video contain information related to the video. To make use of this information, research on extracting captions from video has recently become active. Existing work operates correctly only on captions of a fixed height or stroke width. This paper proposes a size-independent caption extraction method applicable to captions above a certain size. First, images containing captions are collected and the patterns of caption connected components are analyzed with a neural network. The learned patterns are then used to extract captions from input images. The experimental images were collected from popular broadcasts such as news, documentaries, and show programs and contain captions of various sizes. The extraction results were analyzed by the proportion of captions among the identified connected components and the proportion of captions found among all captions. The results show that the proposed method can extract captions of various sizes.

        Captions which appear in images include information that relates to the images. In order to obtain the information carried by captions, methods for text extraction from images have been developed. However, most existing methods can be applied only to captions with a fixed height or stroke width. We propose a method which can be applied to various caption sizes. Our method is based on connected components: edge pixels are detected and grouped into connected components. We analyze the properties of connected components and build a neural network which discriminates the connected components that include captions from those that do not. Experimental data were collected from broadcast programs, such as news, documentaries, and show programs, which include captions of various heights. Experimental results are evaluated by two criteria: recall, the ratio of identified captions to all captions in the images, and precision, the ratio of true captions among the objects identified as captions. The experiment shows that the proposed method can efficiently extract captions of various sizes.
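
        The two evaluation criteria are simple to state in code. A sketch, assuming scikit-learn for the component classifier; the features and labels are placeholders rather than the paper's data.

            from sklearn.neural_network import MLPClassifier

            # Hypothetical training: per-component feature vectors and caption labels.
            # clf = MLPClassifier(hidden_layer_sizes=(16,)).fit(X_train, y_train)
            # pred = clf.predict(X_test)

            def recall_precision(pred, truth):
                # pred/truth: booleans per connected component (caption or not).
                tp = sum(p and t for p, t in zip(pred, truth))
                recall = tp / sum(truth)     # identified captions / all true captions
                precision = tp / sum(pred)   # true captions / components flagged as captions
                return recall, precision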

      • KCI-registered

        Enhancing Korean EFL Learners’ Vocabulary Learning and Listening Comprehension Through Video Captions

        강은영 · The Society for Teaching English through Media · 2019 · STEM Journal (영상영어교육) Vol.20 No.2

        This study examined the effects of captions on Korean EFL learners’ listening comprehension and vocabulary learning. Sixty-six students from two intact classes at a public high school were randomly assigned to either (i) the captioned (n = 33) or (ii) the uncaptioned (n = 33) viewing condition. Both groups watched two video clips twice. A multiple-choice comprehension test was administered immediately after the second viewing of each clip. After the participants finished watching the second video and completed the relevant comprehension test, they completed vocabulary tests. The results showed that the caption group performed significantly better on the listening comprehension test than the comparison group. Captions were also found to benefit vocabulary learning compared with no captions. Specifically, captions had positive effects on several aspects of vocabulary knowledge, including (i) word-form recognition, (ii) meaning recognition, and (iii) meaning recall. Watching captioned videos appears to supply textual support for what learners hear in the videos, consequently increasing their comprehension of the information presented. In addition, the imagery support provided by video input and the textual information from captions seem to enhance L2 vocabulary learning.
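
        The reported group difference corresponds to a standard two-sample comparison. An illustrative sketch with simulated scores; the t-test choice and the numbers are assumptions, not the paper's actual data or analysis.

            import numpy as np
            from scipy import stats

            rng = np.random.default_rng(0)
            captioned = rng.normal(7.5, 1.5, 33)     # hypothetical comprehension scores
            uncaptioned = rng.normal(6.5, 1.5, 33)   # hypothetical no-caption scores
            t, p = stats.ttest_ind(captioned, uncaptioned)  # independent-samples t-test
            print(f"t = {t:.2f}, p = {p:.3f}")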
