RISS Academic Research Information Service

      • KCI-listed

        Implementation of an Accurate Pupil Tracking System Using BF-YOLOv3-tiny

        최건호,주재한,김석찬 한국멀티미디어학회 2024 멀티미디어학회논문지 Vol.27 No.3

        Fast and accurate pupil tracking, even in environments with limited computing resources, is critical for applications such as eye tracking and driver drowsiness warning systems. This paper proposes BF-YOLOv3-tiny for fast and accurate pupil tracking. Key improvements include a bi-directional fusion method that interconnects low-resolution and high-resolution feature maps, and anchor boxes selected by considering the distribution changes caused by data augmentation during the training process. In addition, a signal-processing technique that removes grid sensitivity and an IoU-based loss function are adopted when the model predicts bounding boxes. Data provided by the Department of Ophthalmology of Pusan National University Hospital were used to evaluate the proposed model, and the results were analyzed through comparative experiments with five lightweight networks. The proposed model achieves up to 98.0 AP50, 78.8 AP75, and 44.6 APT, outperforming the existing YOLOv3-tiny and the other lightweight networks. Lastly, when the best-performing model was implemented on an NVIDIA Jetson Nano, it achieved up to 100.0 AP50 and 26.2 FPS, demonstrating the feasibility of an accurate, real-time pupil tracking system even in an environment with limited computing resources.
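The IoU-based loss mentioned in the abstract builds on the standard intersection-over-union overlap measure between a predicted and a ground-truth box. A minimal sketch of that underlying computation for axis-aligned boxes in (x1, y1, x2, y2) form (the specific IoU loss variant the paper adopts is not stated here):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU-based loss is then typically some decreasing function of this value, e.g. 1 - IoU, so that better-overlapping predictions incur lower loss.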

      • KCI-listed

        An Approach to Neuro Sports Marketing Using Men's Pupil Size: Focused on Basketball Games

        고의석 ( Eui-suk Ko ),송기현 ( Ki-hyeon Song ),조수현 ( Soo-hyun Cho ),김종하 ( Jong-ha Kim ) 한국감성과학회 2017 감성과학 Vol.20 No.1

        The present study used eye-gaze tracking, one of the research techniques of neuromarketing. When the pupil size of male participants dilated beyond three sigma (0.135%), their interest and eye movements during observation were measured. Because "pupil dilation" is difficult to define from eye-gaze tracking data, and statistical analysis of previous studies indicates that the three-sigma range is meaningful, the three-sigma range was used as the operational definition. Pictures of basketball games were selected as visual stimuli, and an effective ratio of 90% of the total 7,200 data points was obtained; thus, 29 of the 34 participants were used in the test. Pupil size was calculated by applying the pupil's width and height to the formula: [Pupil size = Pupil width/2 × Pupil height/2 × π]. In conclusion, the billboard used for sports marketing had a negligible effect, because when participants, as game spectators, dilated their pupils beyond three sigma, their gaze frequency toward the basketball players and the surrounding environment was higher than toward the billboard. The study therefore suggests that new marketing strategies such as neuromarketing are required to increase utility.
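The elliptical area formula and the three-sigma dilation criterion above can be sketched as follows. Treating the dilation threshold as mean + 3·σ over a session's area samples, and the function names `pupil_area` and `dilation_events`, are assumptions for this illustration; the abstract does not spell out the exact statistic:

```python
import math

def pupil_area(width, height):
    """Elliptical pupil area: (width/2) * (height/2) * pi."""
    return (width / 2) * (height / 2) * math.pi

def dilation_events(areas, n_sigma=3.0):
    """Indices where the area exceeds mean + n_sigma * std, i.e. the
    operational definition of a 'pupil dilation' event."""
    mean = sum(areas) / len(areas)
    var = sum((a - mean) ** 2 for a in areas) / len(areas)
    std = math.sqrt(var)
    return [i for i, a in enumerate(areas) if a > mean + n_sigma * std]
```

With n_sigma = 3, a normally distributed area signal would exceed the threshold about 0.135% of the time, matching the percentage quoted in the abstract.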

      • KCI listing candidate

        Real-Time 3D Face Pose Discrimination Using Active IR Illumination

        박호식,배철수 한국정보통신학회 2004 한국정보통신학회논문지 Vol.8 No.3

        In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright, and we develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified in real time using the eigen eye feature space. In the experiments, discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
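The classification step ("classified using the eigen eye feature space") can be sketched as projecting a pupil-geometry feature vector onto eigenvectors learned offline and matching the nearest pose prototype. The nearest-prototype rule and the names `project` and `classify_pose` are illustrative assumptions; the abstract does not specify the classifier:

```python
def project(x, eigvecs, mean):
    """Project a pupil-geometry feature vector into the eigen feature space
    spanned by the given eigenvectors (mean-centered first)."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    return [sum(ci * vi for ci, vi in zip(centered, v)) for v in eigvecs]

def classify_pose(x, eigvecs, mean, prototypes):
    """Return the face-pose label whose eigenspace prototype is nearest
    (squared Euclidean distance) to the projected query vector."""
    y = project(x, eigvecs, mean)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return min(prototypes, key=lambda label: dist2(y, prototypes[label]))
```

Here `eigvecs` and `mean` would come from a PCA of training pupil-geometry features, and `prototypes` maps each pose label to its mean projection.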

      • SCOPUS, KCI-listed

        Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

        Kim, Jin-Woo The Korea Institute of Information and Communication 2010 Journal of information and communication convergen Vol.8 No.5

        Eye gaze as a form of input was primarily developed for users who are unable to use the usual interaction devices such as the keyboard and mouse; however, with increasing accuracy and decreasing cost of eye-gaze detection, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye-gaze system for gaze-enhanced user interfaces. We introduce two new algorithms for fast, sub-pixel-precise pupil center detection and 2D eye-gaze estimation by means of a deformable template matching methodology. First, we propose DAISMI (Deformable Angular Integral Search by Minimum Intensity), an algorithm that localizes the eyeball (iris outer boundary) in gray-scale eye-region images and finds the center of the pupil for use in the second stage. The eye regions are detected with Intel OpenCV AdaBoost Haar cascade classifiers, and the approximate eyeball size is assigned according to the eye-region size; the pupil center is then detected with DAISMI. Next, using the percentage of black pixels over the eyeball circle area, the image is converted into binary (black and white) for the final stage: the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Starting from the initial pupil center coordinates, DTBGE produces refined pupil center coordinates and estimates the final gaze direction and eyeball size. We performed extensive experiments, achieved very encouraging results, and discuss the effectiveness of the proposed method through several experimental results.
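The binarization step ("using the percentage of black pixels over the eyeball circle area") suggests choosing a gray-level threshold from a target black-pixel ratio. A minimal sketch under that assumption (the function name and the quantile-style threshold choice are illustrative, not taken from the paper):

```python
def binarize_by_black_ratio(pixels, black_ratio):
    """Choose a threshold so that roughly `black_ratio` of the gray-scale
    pixels fall below it, then binarize (True = black)."""
    ordered = sorted(pixels)
    # Index of the quantile matching the desired black-pixel ratio.
    k = max(0, min(len(ordered) - 1, int(black_ratio * len(ordered))))
    threshold = ordered[k]
    return [p < threshold for p in pixels]
```

Fixing the black-pixel ratio rather than the gray-level threshold makes the binarization less sensitive to overall illumination changes.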

      • KCI listing candidate

        Development of a Real-Time Video Surveillance System

        조현섭(Cho, Hyeon-Seob) 한국산학기술학회 2007 한국산학기술학회논문지 Vol.8 No.2

        Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera, and experiments show that the proposed method detects and tracks eyes efficiently under various lighting conditions.
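Mean shift tracking, one of the two components named above, iteratively moves an estimate toward the local density mode of candidate points. A one-dimensional sketch of the core iteration (real trackers run this in 2D over pixel similarity weights; this flat-kernel simplification is for illustration only):

```python
def mean_shift(points, start, bandwidth, iters=50):
    """1-D mean shift with a flat kernel: repeatedly move the estimate to
    the mean of the points within `bandwidth`, converging on a local mode."""
    x = start
    for _ in range(iters):
        near = [p for p in points if abs(p - x) <= bandwidth]
        if not near:
            break  # no support under the kernel; stop
        nx = sum(near) / len(near)
        if abs(nx - x) < 1e-6:
            break  # converged
        x = nx
    return x
```

In eye tracking the "points" are weighted pixel locations in the search window, so the iteration locks onto the densest (most eye-like) region near the previous frame's position.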

      • KCI-listed

        A Real-Time Vision-Based Eye-Tracker System for HMDs

        노은정(Eunjung Roh),홍진성(Jinsung Hong),방효충(Hyochoong Bang) 한국항공우주학회 2007 韓國航空宇宙學會誌 Vol.35 No.6

        In this paper, the development and testing of a real-time eye-tracker system are discussed. The system tracks a user's gaze point through eye movements by means of vision-based pupil detection, which has the advantage of measuring the positions of the user's eyes accurately without obstructing them. An infrared camera is used to acquire the user's pupil image, and an infrared LED is used to extract the pupil region, which is hard to extract with software alone, from the obtained image. We develop a pupil-tracking algorithm with a Kalman filter and grab the pupil images using a DSP (Digital Signal Processing) system for real-time image processing. The real-time eye-tracker system tracks the movements of the user's pupils and projects the gaze point onto the background image the user is viewing.
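The Kalman-filter pupil tracking described above can be sketched per image axis with a constant-velocity model; the noise parameters `q` and `r` and the one-filter-per-axis simplification are assumptions for illustration, not details from the paper:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one pupil coordinate.

    State is [position, velocity]; only the position is measured each frame.
    """

    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # estimate covariance
        self.q, self.r = q, r              # process / measurement noise

    def step(self, z, dt=1.0):
        # Predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F^T + qI
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with measurement z and H = [1, 0]
        s = p00 + self.r                   # innovation variance
        k0, k1 = p00 / s, p10 / s          # Kalman gain
        y = z - x0                         # innovation
        self.x = [x0 + k0 * y, x1 + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]                   # filtered position
```

One such filter per image axis yields a smoothed pupil trajectory, and the predict step gives a search location for the pupil in the next frame.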

      • KCI-listed

        Real-Time 3D Face Pose Discrimination

        김태우(Kim, Tae-Woo) 한국정보전자통신기술학회 2010 한국정보전자통신기술학회논문지 Vol.3 No.1

        In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright, and we develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified in real time using the eigen eye feature space. In the experiments, discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.

      • KCI listing candidate

        Implementation of a Drowsy-Driving Warning System through Improved Pupil Tracking Using Facial Feature Information

        정도영,홍기천 (사)디지털산업정보학회 2009 디지털산업정보학회논문지 Vol.5 No.2

        In this paper, a system that detects driver drowsiness has been implemented based on the automatic extraction and tracking of the pupils. The research also focuses on compensating for illumination and reducing the background noise that naturally exists in driving conditions. The system, based on Haar-like features, automatically extracts the driver's face and eye regions from the complex background. It then decides whether the driver is drowsy by recognizing the characteristics of the pupil areas and by detecting the pupils and their movements. The implemented system has been evaluated and verified for practical use in preventing drowsy driving.
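The drowsiness decision described above, which hinges on whether the pupils remain detectable across frames, can be approximated by a consecutive-closed-eyes rule; the 15-frame threshold and the function name below are hypothetical values for illustration, not taken from the paper:

```python
def drowsiness_alarm(eye_open_flags, closed_frames_threshold=15):
    """Raise an alarm when the pupils go undetected (eyes closed) for
    `closed_frames_threshold` consecutive frames."""
    run = 0
    for is_open in eye_open_flags:
        # Reset the run on any open-eye frame, otherwise extend it.
        run = 0 if is_open else run + 1
        if run >= closed_frames_threshold:
            return True
    return False
```

At 30 FPS, a 15-frame threshold corresponds to the eyes staying closed for half a second, which is the kind of sustained closure (rather than an ordinary blink) such systems are meant to flag.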

      • KCI-listed

        Comparison of Postoperative Results between Same-Day LASIK/LASEK and the Conventional Method

        김욱겸(Wook Kyum Kim),류익희(Ik Hee Ryu),이인식(In Sik Lee),김희선(Hee Sun Kim),김정섭(Jung Sub Kim),김진국(Jin Kuk Kim) 대한안과학회 2018 대한안과학회지 Vol.59 No.5

        Purpose: To evaluate the postoperative results of one day laser-assisted in-situ keratomileusis (LASIK) or laser-assisted sub-epithelial keratectomy (LASEK) procedures, which were performed on the same day as preoperative examinations, including fundus examinations after dilating the pupil. Methods: This study included 226 LASIK patients (226 eyes) and 201 LASEK patients (201 eyes) who underwent surgery with Visumax and EX500 from January to December in 2016. We divided the patients into two groups. The one-day surgery group (one-day group) underwent surgery on the same day as preoperative examinations, including dilated fundus examinations. The scheduled surgery group (scheduled group) underwent surgery on the scheduled day after the preoperative examinations. In the one-day group, the surgery was usually performed 2–5 hours after instillation of the pupil dilating eye drops. Results: Among LASIK patients, the one-day group included 109 patients and the scheduled group included 117 patients. The postoperative myopic errors were 0.06 ± 0.37 diopters (D) and 0.07 ± 0.36 D, respectively (p = 0.91). The postoperative astigmatism was -0.38 ± 0.24 D and -0.37 ± 0.24 D, respectively (p = 0.77). The postoperative uncorrected visual acuity was -0.05 ± 0.03 logMAR and -0.06 ± 0.03 logMAR, respectively (p = 0.13). Among LASEK patients, the one-day group included 107 patients and the scheduled group included 94 patients. The postoperative myopic error was 0.18 ± 0.52 D and 0.22 ± 0.54 D, respectively (p = 0.95). The postoperative astigmatism was -0.48 ± 0.30 D and -0.46 ± 0.29 D, respectively (p = 0.14). The postoperative uncorrected visual acuity was -0.05 ± 0.03 logMAR and -0.06 ± 0.03 logMAR, respectively (p = 0.33). Conclusions: The postoperative results of the one-day LASIK and LASEK patients, whose surgery was performed on the same day as the preoperative examinations, were not significantly different from those using the conventional method. 
J Korean Ophthalmol Soc 2018;59(5):410-418

      • SCOPUS, KCI-listed

        Visual Modeling and Content-based Processing for Video Data Storage and Delivery

        Hwang Jae-Jeong,Cho Sang-Gyu The Korea Institute of Information and Communication 2005 Journal of information and communication convergen Vol.3 No.1

        In this paper, we present a video rate control scheme for storage and delivery in which time-varying viewing interests are controlled by human gaze. To track the gaze, the pupil's movement is detected using a three-step process: detecting the face region, the eye region, and the pupil point. To control the bit rate, the quantization parameter (QP) is changed by considering the static parameters, the video-object priority derived from pupil tracking, the target PSNR, and the weighted distortion value of the coder. As a result, we achieved a human-interfaced visual model and a corresponding region-of-interest rate control system.
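The QP adjustment described above spends more bits on gazed regions by lowering their quantization parameter and raising it elsewhere. The mapping below is a hypothetical linear sketch for illustration only; the paper's actual control additionally weights the target PSNR and the coder's distortion value, which this omits. The 0–51 clamp matches the H.264 QP range:

```python
def roi_qp(base_qp, priority, qp_min=0, qp_max=51, max_offset=8):
    """Map a region priority in [0, 1] (1 = gazed region) to a QP:
    high priority lowers QP (finer quantization), low priority raises it."""
    offset = round(max_offset * (0.5 - priority) * 2)
    return max(qp_min, min(qp_max, base_qp + offset))
```

A priority of 0.5 leaves the base QP unchanged, so the scheme roughly conserves the overall bit budget while shifting quality toward the region of interest.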
