RISS (Academic Research Information Service)

Large-Scale Text Similarity Computing with Spark

Xiaoan Bao, Shichao Dai, Na Zhang, Chenghai Yu. 보안공학연구지원센터, 2016. International Journal of Grid and Distributed Computing, Vol. 9, No. 4.

Text understanding is a hot research topic in Natural Language Processing and Information Retrieval, and it has received wide attention in recent years. In the era of big data, understanding text in large-scale datasets is a challenge. The earliest systems designed for these workloads, such as MapReduce, gave users a powerful but low-level procedural programming interface, so MapReduce does not compose well for larger text applications. Recently, Spark, an in-memory cluster-computing platform, has emerged as a popular framework for large-scale data processing and analytics; it provides a general-purpose, efficient cluster computing engine that is simpler for end users. In this work, we use the Vector Space Model (VSM) with TF-IDF weighting and feature hashing for feature extraction to solve the problem of large-scale text similarity computation on Spark. Experimental results on the 20 Newsgroups dataset show that Spark solves document similarity computation quickly. In addition, machine learning tasks such as document classification and clustering benefit from this approach.
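
A minimal PySpark sketch of the pipeline the abstract describes (an illustration under assumptions, not the authors' code): documents are tokenized, hashed into a fixed-size feature space with HashingTF (feature hashing), weighted with IDF to obtain TF-IDF vectors, and compared by cosine similarity. The toy corpus and column names are placeholders standing in for the 20 Newsgroups data.

  from pyspark.sql import SparkSession
  from pyspark.ml.feature import Tokenizer, HashingTF, IDF

  spark = SparkSession.builder.appName("TextSimilaritySketch").getOrCreate()

  # Toy corpus standing in for 20 Newsgroups documents.
  docs = spark.createDataFrame([
      (0, "spark is a cluster computing platform"),
      (1, "mapreduce is a low level programming interface"),
      (2, "spark provides an in memory cluster computing engine"),
  ], ["id", "text"])

  # Tokenize, hash terms into a fixed-size feature space, then weight by IDF.
  tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
  tf = HashingTF(inputCol="words", outputCol="tf", numFeatures=1 << 18).transform(tokens)
  tfidf = IDF(inputCol="tf", outputCol="tfidf").fit(tf).transform(tf)

  # Cosine similarity between two sparse TF-IDF vectors.
  def cosine(u, v):
      return float(u.dot(v)) / (u.norm(2) * v.norm(2))

  vectors = {row["id"]: row["tfidf"] for row in tfidf.select("id", "tfidf").collect()}
  print(cosine(vectors[0], vectors[2]))  # documents 0 and 2 share "spark" / "cluster computing"

  spark.stop()

In a full-scale run over 20 Newsgroups, the pairwise comparison would also be distributed (for example by normalizing the vectors and using a similarity join) rather than collected to the driver as in this toy example.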
