RISS Academic Research Information Service

      • A Tripartite Edge Histogram Scheme for a License Plate Recognition System

        Ching-Hao Lai 한국산학기술학회 2012 SmartCR Vol.2 No.1

        License plate recognition is usually divided into three stages: license plate location, character segmentation, and character recognition, of which character recognition is the most important. In recent years, many character recognition methods have been proposed. This paper presents a mapping-based character recognition scheme to improve the accuracy and efficiency of previous methods. The proposed method divides each character to be recognized into three parts, and three edge histograms (EH) are created, one for each partial character, to account for the eight different edge types featured in this kind of character recognition. Each edge histogram therefore carries eight features of one part of the character, and each character is recognized from three or fewer edge histograms. Accordingly, the proposed three-part edge histogram extraction method is called the tripartite edge histogram (TEH). Experimental results show an average recognition accuracy of 95.295% for the proposed scheme, with a recognition speed of 0.35 seconds per character.
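        The abstract leaves the exact edge definitions and partitioning open; below is a minimal sketch of the general idea, assuming a 2-D grayscale or binary character array, horizontal thirds, and an 8-bin gradient-orientation histogram standing in for the eight edge features (all function names and parameters are illustrative, not the paper's).

```python
import numpy as np

def tripartite_edge_histogram(char_img, bins=8):
    """Sketch of a TEH-style descriptor: split the character into three
    horizontal parts and build an 8-bin edge-orientation histogram per part.
    (The partitioning and edge definitions are assumptions, not the paper's.)"""
    img = char_img.astype(float)
    gy, gx = np.gradient(img)                      # simple edge responses
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # orientation in [-pi, pi]
    bin_idx = ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    parts = np.array_split(np.arange(img.shape[0]), 3)  # three row bands
    feats = []
    for rows in parts:
        h = np.zeros(bins)
        np.add.at(h, bin_idx[rows].ravel(), mag[rows].ravel())
        feats.append(h / (h.sum() + 1e-9))         # normalize each histogram
    return np.concatenate(feats)                   # 3 x 8 = 24-D descriptor

def recognize(char_img, templates):
    """Nearest-template matching on TEH descriptors; `templates` maps
    label -> reference image (hypothetical interface)."""
    q = tripartite_edge_histogram(char_img)
    return min(templates,
               key=lambda k: np.linalg.norm(q - tripartite_edge_histogram(templates[k])))
```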

      • KCI-indexed

        An Algorithm for Individual Character Segmentation and Recognition in Various Text String Images

        구근휘(Keunhwi Koo),최성후(SungHoo Choi),윤종필(Jong Pil Yun),최종현(JongHyun Choi),김상우(Sang Woo Kim) 대한전기학회 2009 전기학회논문지 Vol.58 No.4

        A character recognition system consists of four steps: text localization, text segmentation, character segmentation, and recognition. Character segmentation is very important and difficult because of noise, illumination, and other factors, so good segmentation performance is necessary for high recognition rates. Many character segmentation algorithms have been developed, and segmentation of touching or overlapping characters has recently attracted much research. However, most algorithms cannot be applied to the text regions of management numbers marked on slabs in steel-mill images, because those regions are irregular: characters touch as a result of strong illumination or nozzle trouble in the marking machine, and parts of characters are lost, which makes it difficult to achieve a high success rate in all cases. This paper describes a new character segmentation algorithm for recognizing slab management numbers in steel images. A key pre-processing step converts the gray image to a binary image without losing characters or merging them. In the binary image, non-touching characters are simply separated using a vertical projection profile. For touching characters, a combined profile is first used to find candidate boundary points, and the real character boundary is then decided by a recognition-based method. In the recognition step, noise is removed from each character image before the individual characters are recognized. The proposed algorithm is effective for character segmentation and recognition of various text regions on slabs in steel images.
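        As a rough illustration of the vertical-projection step described above (the touching-character and recognition-based boundary steps are omitted), here is a minimal sketch assuming a binary image with 1 = text pixel and 0 = background:

```python
import numpy as np

def segment_by_vertical_projection(binary_img, min_width=2):
    """Split non-touching characters by column-sum projection.
    binary_img: 2-D array, 1 = text pixel, 0 = background (assumed convention).
    Returns a list of (start_col, end_col) spans, one per character candidate."""
    profile = binary_img.sum(axis=0)           # number of text pixels per column
    is_text = profile > 0
    spans, start = [], None
    for col, flag in enumerate(is_text):
        if flag and start is None:
            start = col                        # entering a character run
        elif not flag and start is not None:
            if col - start >= min_width:
                spans.append((start, col))     # leaving a character run
            start = None
    if start is not None and len(is_text) - start >= min_width:
        spans.append((start, len(is_text)))
    return spans
```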

      • A Survey on Arabic Character Recognition

        보안공학연구지원센터(IJSIP) 보안공학연구지원센터 2015 International Journal of Signal Processing, Image Vol.8 No.2

        Off-line recognition of text plays a significant role in several applications, such as the automatic sorting of postal mail or the editing of old documents. It is the ability of the computer to distinguish characters and words. Automatic off-line recognition of text can be divided into the recognition of printed and of handwritten characters, and off-line Arabic handwriting recognition still faces great challenges. This paper provides a survey of Arabic character recognition systems, classified into the two character recognition categories, printed and handwritten. It also examines the literature on the most significant work in handwritten text recognition without segmentation and discusses algorithms that split words into characters.

      • KCI-indexed

        An Automatic Gas Meter Reading System Using Recognition of Character Strings of Interest

        이교혁(Kyohyuk Lee),김태연(Taeyeon Kim),김우주(Wooju Kim) 한국지능정보시스템학회 2020 지능정보연구 Vol.26 No.2

        In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gas meter reading function. The system captures a gas meter image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gas meter reading system only needs to extract the device ID and gas usage amount from gas meter images in order to bill users; character strings such as the device type, manufacturer, manufacturing date, and specifications are not valuable to the application. The application therefore has to analyze only the region of interest and the specific character types needed to extract valuable information. We adopted CNN (convolutional neural network) based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition that analyzes only the region of interest. The system uses three neural networks: the first is a convolutional neural network that detects the regions of the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of each region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential information into character strings by mapping feature vectors to characters through time-series analysis. In this work, the character strings of interest are the device ID, which consists of 12 Arabic-numeral characters, and the gas usage amount, which consists of 4-5 Arabic-numeral characters. All system components are implemented in the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gas meter image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (first in, first out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU. The slave process continuously polls the input queue for recognition requests; when requests are present, it converts the queued image into the device ID string, the gas usage amount string, and their position information, returns this information to an output queue, and switches back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gas meter images for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation and 4,135 images for testing. The 22,985 images were randomly split 8:2 into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with noise, reflex means images with light reflections in the gas meter region, and scale means images with a small object size due to long-distance capturing.
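        The master-slave FIFO flow described above can be illustrated with a minimal single-machine sketch using Python's standard queue and threading modules in place of the AWS components; the three-network recognition pipeline is stubbed out and all names are hypothetical:

```python
import queue
import threading

input_q, output_q = queue.Queue(), queue.Queue()   # FIFO request/result queues

def recognize_gas_meter(image):
    """Stub for the three-network pipeline (ROI detection -> CRNN feature
    extraction -> BiLSTM decoding). Returns hypothetical fields only."""
    return {"device_id": "000000000000", "usage": "0000", "source": image}

def master(images):
    for img in images:                 # master pushes reading requests (FIFO)
        input_q.put(img)
    input_q.put(None)                  # sentinel: no more work

def slave():
    while True:                        # slave polls the input queue for requests
        img = input_q.get()
        if img is None:
            break
        output_q.put(recognize_gas_meter(img))

if __name__ == "__main__":
    worker = threading.Thread(target=slave)
    worker.start()
    master(["img_%d" % i for i in range(3)])
    worker.join()
    while not output_q.empty():
        print(output_q.get())          # master delivers results back to clients
```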

      • KCI-indexed

        Low-Quality Banknote Serial Number Recognition Based on Deep Neural Network

        장운수,Kun Ha Suh,이의철 한국정보처리학회 2020 Journal of information processing systems Vol.16 No.1

        Recognition of banknote serial numbers is one of the important functions of an intelligent banknote counter and can be used for various purposes. However, previous character recognition methods are limited in use due to the font type of the banknote serial number, the variation caused by soiling, and recognition speed issues. In this paper, we propose an aspect-ratio-based character region segmentation and a convolutional neural network (CNN) based banknote serial number recognition method. To detect the character region, the character area is determined based on the aspect ratio of each character in the serial number candidate area after banknote area detection and de-skewing are performed. We then designed and compared four types of CNN models and determined the best model for serial number recognition. Experimental results showed a recognition accuracy of 99.85% per character. In addition, it was confirmed that recognition performance improves when data augmentation is performed. The banknotes used in the experiment are Indian rupees, which are badly soiled and use an unusual character font, so the method can be regarded as performing well. The recognition speed was also sufficient to run in real time on a device that counts 800 banknotes per minute.
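        The aspect-ratio rule is not spelled out in the abstract; the sketch below assumes the simplest version, in which each character's width is roughly a fixed fraction of the region height, so the de-skewed serial-number region can be cut into equal-width slots (the function name and ratio are illustrative):

```python
import numpy as np

def split_serial_region(region, char_aspect=0.5):
    """Cut a de-skewed serial-number region into character slices, assuming
    each character's width ~= char_aspect * region height (a hypothetical rule;
    the paper's exact criterion is not given in the abstract).
    region: 2-D grayscale array."""
    h, w = region.shape
    char_w = max(1, int(round(char_aspect * h)))       # expected character width
    n_chars = max(1, w // char_w)                      # number of slots that fit
    bounds = np.linspace(0, w, n_chars + 1).astype(int)
    return [region[:, bounds[i]:bounds[i + 1]] for i in range(n_chars)]
```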

      • KCI-indexed

        Character Recognition using Regional Structure

        Suk Won Yoo 국제문화기술진흥원 2019 International Journal of Advanced Culture Technology Vol.7 No.1

        With the advent of the Fourth Industrial Revolution, the need for office automation with automatic character recognition capabilities is increasing day by day. In this paper, we therefore study a character recognition algorithm that effectively recognizes new test characters by using training characters. The proposed algorithm computes the degree of similarity with which the structural regions of each training character match the corresponding regions of the test character. It has been confirmed that satisfactory results can be obtained by selecting, as the final recognition result for a given test character, the training character with the highest degree of similarity in the matching process.
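        The abstract does not define the structural regions; here is a minimal sketch of the matching idea, assuming binarized characters of equal size compared over a uniform grid of regions (the grid and all names are assumptions):

```python
import numpy as np

def regional_similarity(test_img, train_img, grid=(4, 4)):
    """Compare two binarized character images region by region and return the
    fraction of matching pixels, averaged over a grid of structural regions
    (the uniform grid split is an assumption; the paper's regions may differ)."""
    assert test_img.shape == train_img.shape
    rows = np.array_split(np.arange(test_img.shape[0]), grid[0])
    cols = np.array_split(np.arange(test_img.shape[1]), grid[1])
    scores = []
    for r in rows:
        for c in cols:
            a = test_img[np.ix_(r, c)]
            b = train_img[np.ix_(r, c)]
            scores.append((a == b).mean())     # per-region pixel agreement
    return float(np.mean(scores))

def classify(test_img, training_set):
    """Pick the training character with the highest regional similarity.
    training_set: dict label -> binarized image of the same size (assumed)."""
    return max(training_set,
               key=lambda k: regional_similarity(test_img, training_set[k]))
```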

      • KCI-indexed (candidate)

        Image Character Recognition Using the Mellin Transform Method and BPEJTC

        서춘원,고성원,이병선 한국조명전기설비학회 2003 조명·전기설비학회논문지 Vol.17 No.4

        To recognize object images that appear in the natural world in various forms, it is essential to extract distortion-invariant features that allow recognition regardless of an object's position, rotation, and scale; such features should respond identically to variations of the same image while remaining easy to separate and identify for different images. Much research has been devoted to obtaining such recognition characteristics, and methods based on the Mellin transform, which provides invariance to rotation and scale simultaneously, are widely used in image recognition [1][2][3]. This paper therefore presents a character feature extraction method for a character recognition system that obtains scale- and rotation-invariant features through the Mellin transform. For input images of the 26 English letters, features were extracted by centering each character with a centroid method and applying the Mellin transform with interpolation, and an examination of the dissimilarity of the extracted features showed a dissimilarity of about 50% or more between different characters. In addition, a BPEJTC (Binary Phase Extraction Joint Transform Correlator) system that uses the Mellin-transform features as reference images was implemented to recognize input characters under scaling, rotation, and translation, and a recognition rate of about 90% was obtained. These results demonstrate the feasibility of an image character recognition system that uses the proposed Mellin-transform character features together with a BPEJTC.
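        The core of the Mellin-transform approach is the log-polar resampling that turns rotation and scaling into shifts; a minimal sketch of that step is given below (bilinear interpolation about the centroid, as in the abstract), while the optical BPEJTC correlation stage is omitted. Sizes and names are illustrative:

```python
import numpy as np

def log_polar(img, out_shape=(64, 64)):
    """Resample an image onto a log-polar grid about its centroid, so that
    scaling and rotation of the input become shifts along the two output axes
    (the invariance exploited by Mellin-transform features). Bilinear
    interpolation; output size and centering are illustrative choices."""
    h, w = img.shape
    ys, xs = np.nonzero(img > 0)
    cy, cx = (ys.mean(), xs.mean()) if len(ys) else (h / 2.0, w / 2.0)  # centroid
    n_r, n_t = out_shape
    max_r = np.hypot(max(cy, h - cy), max(cx, w - cx))
    log_r = np.linspace(0, np.log(max_r), n_r)         # log-spaced radii
    theta = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
    rr = np.exp(log_r)[:, None]
    y = cy + rr * np.sin(theta)[None, :]
    x = cx + rr * np.cos(theta)[None, :]
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    dy = np.clip(y - y0, 0, 1)
    dx = np.clip(x - x0, 0, 1)
    return (img[y0, x0] * (1 - dy) * (1 - dx) + img[y0 + 1, x0] * dy * (1 - dx)
            + img[y0, x0 + 1] * (1 - dy) * dx + img[y0 + 1, x0 + 1] * dy * dx)
```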

      • KCI-indexed

        The Effectiveness of Visual Information through Morphological Analysis of Tire Brand Characters - Focusing on the Michelin and Kumho Tire Brand Characters -

        정영혜 ( Jeong Young Hye ),김준교 ( Kim Jun Kyo ) 한국디자인트렌드학회 2013 한국디자인포럼 Vol.40 No.-

        A brand character functions as an important element of brand identity that drives the promotion of a brand or product. The purpose of this research was therefore to determine to what extent the visual information in a brand character's form delivers the brand image to consumers, and which effective promotional conditions are used so that consumers can recognize that visual information. The research focused on "Ttoro" of Kumho Tire, a domestic tire brand character, and "Monsieur Bibendum" of Michelin, a foreign tire brand character. For the analysis of the characters' morphological information, an analysis method based on morphological recognition theory (形態再認理論) was used to study the visual information and perceived form of the brand characters, and a survey based on this morphological information was conducted to analyze the usefulness of the brand characters. The analysis shows that the visual information revealed by the overall morphological analysis of the brand characters keeps pace with the changes of the times, and that changes in consumer awareness and empathy accompany the image the characters give to the brand and product. Through this research, the positive effect of morphological recognition on consumers' brand image is demonstrated and the importance of brand characters is highlighted.

      • PCA-based Offline Handwritten Character Recognition System

        Munish Kumar,M. K. Jindal,R. K. Sharma 한국산학기술학회 2013 SmartCR Vol.3 No.5

        Principal component analysis (PCA) has been used widely in pattern recognition to reduce the dimensionality of data. In this paper, we explore using this technique to recognize offline handwritten Gurmukhi characters, and a system for offline handwritten Gurmukhi character recognition using PCA is proposed. The system first prepares a skeleton of the character so that meaningful feature information can be extracted. For classification, we used k-nearest neighbor, linear-SVM, polynomial-SVM, and RBF-SVM based approaches and combinations of these approaches. In this work, we collected 16,800 samples of isolated offline handwritten Gurmukhi characters, divided into three categories. In category 1 (5,600 samples), each Gurmukhi character was written 100 times by a single writer; in category 2 (5,600 samples), each Gurmukhi character was written 10 times by 10 different writers; and in category 3 (5,600 samples), each Gurmukhi character was written by 100 different writers. The set of the 35 basic akhars of Gurmukhi is considered here. A partitioning strategy for selecting the training and testing patterns is also explored. We used zoning, diagonal, directional, transition, intersection and open end point, parabola-curve-fitting-based, and power-curve-fitting-based feature extraction to build the feature set for a given character. The proposed system achieves recognition accuracies of 99.06% in category 1, 98.73% in category 2, and 78.30% in category 3.
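        As a rough sketch of the PCA-plus-k-NN part of such a pipeline (the SVM variants and the Gurmukhi-specific feature extraction are omitted), assuming feature vectors as rows and non-negative integer class labels:

```python
import numpy as np

def fit_pca(X, n_components=50):
    """PCA via SVD on mean-centred feature vectors (X: samples x features)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # principal axes as rows

def project(X, mean, axes):
    """Map feature vectors into the reduced PCA space."""
    return (X - mean) @ axes.T

def knn_predict(train_z, train_y, test_z, k=3):
    """Plain k-nearest-neighbour vote in the PCA-reduced space (one of the
    classifiers the paper combines). train_y: non-negative integer labels."""
    preds = []
    for z in test_z:
        idx = np.argsort(np.linalg.norm(train_z - z, axis=1))[:k]
        preds.append(np.bincount(train_y[idx]).argmax())
    return np.array(preds)
```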

      • KCI-indexed

        A Study on Character Recognition Using Wavelet Transform and Moments

        조민환(Meen-Hwan Cho) 한국컴퓨터정보학회 2010 韓國컴퓨터情報學會論文誌 Vol.15 No.10

        This paper approaches a wavelet-transform-based method of handwritten character recognition in which each character is separated into its smallest units, consonants and vowels, the moments of those graphemes are analyzed, and the resulting information is stored in the computer in advance so that documented handwritten characters can be stored and recognized. In this study, noise was removed from the acquired text images and the text was separated line by line; each line was then separated into individual characters, and each character into consonants and vowels. The separated graphemes were processed with histogram equalization, erosion, and mean filtering using CVIPtools, thinned using C++, and then dilated and resized so that all graphemes had the same stroke thickness and size. The normalized images were converted to binary images, a three-level wavelet transform reduced the amount of data to 1/64, and Hamming distances were then examined. The results show very high agreement among different samples of 'ㄱ' and among different samples of 'ㅅ', and very low agreement when the dissimilar characters 'ㄱ' and 'ㅅ' were compared. If Hamming distances are examined for a larger set of handwritten graphemes, the moments of the individual consonants and vowels can be distinguished, providing important information for handwritten character recognition. Among numerical Hangul recognition methods, the structural method is suitable for recognizing printed or handwritten characters because it deals well with distortion, so it was applied to separating graphemes and analyzing texture; recognition by texture analysis distinguishes consonants easily. Handwritten characters, however, tend to show lower recognition rates because of the difficulty of extracting starting points, the interconnection of graphemes, mis-recognition caused by strokes vanishing during thinning, and the complexity of character combinations; for some characters the separation process is more complicated, and separating the graphemes is sometimes impossible.
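        Here is a minimal sketch of the data-reduction and matching step described above, assuming binary grapheme images and using plain 2x2 low-pass averaging as a stand-in for the Haar LL band (three levels keep 1/64 of the pixels, matching the reduction cited in the abstract):

```python
import numpy as np

def haar_approx(img, levels=3):
    """Keep only a low-pass approximation of the image at each level: every
    level halves both dimensions, so three levels retain 1/64 of the data.
    Detail bands are simply discarded in this sketch."""
    out = img.astype(float)
    for _ in range(levels):
        h, w = out.shape
        out = out[:h - h % 2, :w - w % 2]                   # make dims even
        out = (out[0::2, 0::2] + out[1::2, 0::2]
               + out[0::2, 1::2] + out[1::2, 1::2]) / 4.0   # 2x2 averaging
    return out

def hamming_distance(a_img, b_img, thresh=0.5):
    """Binarize two reduced grapheme images and count differing pixels."""
    a = haar_approx(a_img) > thresh
    b = haar_approx(b_img) > thresh
    return int(np.count_nonzero(a != b))
```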
