RISS (Research Information Sharing Service)

      • Image Semantic Description and Automatic Semantic Annotation

        Liang Meiyu, Du Junping, Jia Yingmin, Sun Zengqi  Institute of Control, Robotics and Systems (ICROS)  2010 ICROS International Conference Proceedings Vol.2010 No.10

        Semantic description and automatic semantic annotation of images, which contain rich content and intuitive expression, is a challenging research subject. It is a key technology for fast and effective image retrieval, a research focus of cross-media mining, and it has great application value in many fields. This paper studies image semantic description and automatic semantic annotation. By extracting SIFT visual features, we describe the image semantics, establish the association between local image visual features and semantic keywords, and finally realize the image-to-text feature mapping and automatic semantic annotation. Simulation results show that this method accomplishes automatic image semantic annotation efficiently and reaches high accuracy.
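
        The pipeline outlined above (extract local SIFT descriptors, associate them with semantic keywords learned from annotated training images, then map a new image to text labels) can be approximated as follows. This is only an illustrative sketch, assuming OpenCV for SIFT extraction and a naive nearest-descriptor keyword vote; the helper names and the (image_path, [keywords]) data layout are hypothetical and are not taken from the paper.

        # Illustrative sketch only: SIFT descriptors + nearest-descriptor keyword voting.
        import cv2
        import numpy as np

        def sift_descriptors(path):
            """Extract SIFT descriptors (N x 128, float32) from an image file."""
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
            return desc if desc is not None else np.empty((0, 128), np.float32)

        def build_index(train_items):
            """train_items: list of (image_path, [keywords]). Pool all training
            descriptors and remember the keywords each descriptor came from."""
            descs, keyword_lists = [], []
            for path, keywords in train_items:
                d = sift_descriptors(path)
                descs.append(d)
                keyword_lists.extend([keywords] * len(d))
            return np.vstack(descs), keyword_lists

        def annotate(test_path, descs, keyword_lists, top_k=5):
            """Vote for keywords via the nearest training descriptor of each
            test descriptor (brute-force L2 matching)."""
            votes = {}
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            for m in matcher.match(sift_descriptors(test_path), descs):
                for kw in keyword_lists[m.trainIdx]:
                    votes[kw] = votes.get(kw, 0) + 1
            return sorted(votes, key=votes.get, reverse=True)[:top_k]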

      • KCI-indexed

        KNN-based Image Annotation by Collectively Mining Visual and Semantic Similarities

        Qian Ji, Liyan Zhang, Zechao Li  Korean Society for Internet Information (KSII)  2017 KSII Transactions on Internet and Information Systems Vol.11 No.9

        The aim of image annotation is to determine labels that can accurately describe the semantic information of images. Many approaches have been proposed to automate the image annotation task while achieving good performance. However, in most cases, the semantic similarities of images are ignored. To this end, we propose a novel Visual-Semantic Nearest Neighbor (VS-KNN) method that collectively explores visual and semantic similarities for image annotation. First, for each label, visual nearest neighbors of a given test image are constructed from training images associated with this label. Second, each neighboring subset is determined by mining the semantic similarity and the visual similarity. Finally, the relevance between the images and labels is determined based on maximum a posteriori estimation. Extensive experiments were conducted on three widely used image datasets. The experimental results show the effectiveness of the proposed method in comparison with state-of-the-art methods.
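
        A minimal sketch of the label-wise visual nearest-neighbor scoring idea follows. The feature representation, the cosine similarity, and the averaged-similarity relevance score are assumptions made for illustration; the paper's semantic-similarity mining and maximum a posteriori estimation are not reproduced here.

        # Assumption-laden sketch: per-label visual nearest neighbours only.
        import numpy as np

        def annotate_vs_knn(test_feat, train_feats, train_labels, vocab, k=5, top=5):
            """
            test_feat    : (d,) visual feature of the test image
            train_feats  : (n, d) visual features of the training images
            train_labels : list of n sets of labels
            vocab        : iterable of all candidate labels
            """
            # Cosine similarity between the test image and every training image.
            a = test_feat / (np.linalg.norm(test_feat) + 1e-12)
            b = train_feats / (np.linalg.norm(train_feats, axis=1, keepdims=True) + 1e-12)
            sims = b @ a

            scores = {}
            for label in vocab:
                # Visual nearest neighbours restricted to images carrying this label.
                idx = [i for i, labs in enumerate(train_labels) if label in labs]
                if not idx:
                    continue
                nearest = sorted(idx, key=lambda i: sims[i], reverse=True)[:k]
                # Crude relevance: mean similarity over the label-specific neighbours.
                scores[label] = float(np.mean(sims[nearest]))
            return sorted(scores, key=scores.get, reverse=True)[:top]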

      • KCI-indexed

        Extension of Semantics-Based Image Annotation Using User-Defined Rules and Inference in a Mobile Environment

        서광원, 임동혁  Korea Multimedia Society  2018 Journal of Korea Multimedia Society Vol.21 No.2

        Since the amount of multimedia images has increased dramatically, it is important to search for semantically relevant images. Thus, several semantic image annotation methods using the RDF (Resource Description Framework) model in mobile environments have been introduced. Earlier studies on annotating images semantically focused on both image tags and context-aware information such as temporal and spatial data. However, in order to fully express the semantics of an image, we need additional annotations described in the RDF model. In this paper, we propose an annotation method that performs inference with RDFS entailment rules and user-defined rules. Our approach, implemented in the Moment system, shows that it can represent the semantics of an image more fully with additional annotation triples.
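
        The core idea above (start from a few RDF triples describing an image and grow the annotation set by applying RDFS entailment and user-defined rules) can be illustrated with rdflib. This is a toy sketch under an invented vocabulary; the hand-rolled subclass rule and the single user-defined rule stand in for the paper's full RDFS entailment rule set and the Moment system's actual schema.

        # Toy illustration only: the EX vocabulary, the "depicts"/"takenNear"
        # properties, and both rules are invented for this sketch.
        from rdflib import Graph, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/annotation#")

        g = Graph()
        g.bind("ex", EX)

        # Base annotations for one image, plus a small class hierarchy.
        g.add((EX.img1, RDF.type, EX.Image))
        g.add((EX.img1, EX.depicts, EX.Beagle))
        g.add((EX.img1, EX.takenNear, EX.HanRiver))
        g.add((EX.Beagle, RDFS.subClassOf, EX.Dog))
        g.add((EX.Dog, RDFS.subClassOf, EX.Animal))

        def subclass_rule(graph):
            """If ?img ex:depicts ?c and ?c rdfs:subClassOf ?super, add
            ?img ex:depicts ?super (a simplified RDFS-style entailment)."""
            added = True
            while added:
                added = False
                for img, _, cls in list(graph.triples((None, EX.depicts, None))):
                    for _, _, sup in list(graph.triples((cls, RDFS.subClassOf, None))):
                        if (img, EX.depicts, sup) not in graph:
                            graph.add((img, EX.depicts, sup))
                            added = True

        def user_rule_outdoor(graph):
            """User-defined rule: any image with a takenNear location also
            depicts an outdoor scene."""
            for img, _, _ in list(graph.triples((None, EX.takenNear, None))):
                graph.add((img, EX.depicts, EX.OutdoorScene))

        subclass_rule(g)
        user_rule_outdoor(g)
        for triple in g.triples((EX.img1, EX.depicts, None)):
            print(triple)  # now includes the Dog, Animal and OutdoorScene triples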
