Adaptive Learning of Background Model for Crowd Scenes
Gwang-Gook Lee, Suhan Song, Ja-Young Yoon, Jae-Jun Kim, Whoi-Yul Kim. Korea Multimedia Society, 2008 International Conference of the Korea Multimedia Society, Vol. 2008, No. -
Background modeling is widely employed in many surveillance applications. However, previous background modeling methods often fail under crowded circumstances. This paper proposes a background modeling method that is robust to crowded situations. The proposed method is based on the Gaussian mixture background model proposed by Grimson [6]. By adopting a simple frame-level analysis, the proposed method adaptively controls the learning rate of the background model according to the crowdedness of the scene. As a result, the proposed method behaves like a static background model in crowded scenes but operates as the original Grimson model in normal scenes. Experiments showed that the proposed method increases the accuracy of background subtraction in crowded scenes by 14% compared to the original algorithm proposed by Grimson.
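The adaptive control described above can be illustrated with a minimal sketch. The abstract does not give the exact control law, so the foreground-ratio crowdedness measure, the threshold, and the scaling rule below are assumptions, and a per-pixel running average stands in for the full Gaussian-mixture update:

```python
import numpy as np

def adaptive_learning_rate(alpha_base, fg_ratio, threshold=0.3):
    """Hypothetical frame-level control: shrink the learning rate as the
    fraction of foreground pixels (a crowdedness proxy) grows."""
    if fg_ratio >= threshold:
        # Crowded scene: damp the update so the model stays near-static
        # and crowds are not absorbed into the background.
        return alpha_base * (1.0 - fg_ratio)
    # Normal scene: keep the original learning rate (Grimson-style update).
    return alpha_base

def update_background(bg, frame, alpha):
    # Running-average stand-in for the per-pixel mixture-model update.
    return (1.0 - alpha) * bg + alpha * frame

# Example: a crowded frame (50% foreground) yields a smaller update step.
alpha_normal = adaptive_learning_rate(0.01, fg_ratio=0.05)
alpha_crowded = adaptive_learning_rate(0.01, fg_ratio=0.50)
bg = update_background(np.zeros((2, 2)), np.full((2, 2), 100.0), alpha_crowded)
```

With this rule the model updates at the base rate in sparse scenes and slows almost to a standstill as the scene fills, matching the behavior the abstract describes.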
Gwang-Gook Lee (이광국), Jung Won Kang (강정원), Jae-Gon Kim (김재곤), Whoi-Yul Kim (김회율). Korean Institute of Broadcast and Media Engineers (한국방송·미디어공학회), 2006, Journal of Broadcast Engineering, Vol. 11, No. 1
Due to the rapid development of multimedia appliances, the growing amount of multimedia data calls for automatic video analysis techniques. In this paper, a method of ToC (table of contents) generation is proposed for educational video content. The proposed method consists of two parts: scene segmentation followed by scene annotation. First, the video sequence is divided into scenes by the proposed scene segmentation algorithm, which exploits the characteristics of educational video. Then each shot in a scene is annotated with its scene type, the presence of embedded captions, and the main speaker of the shot. The ToC generated by the proposed method represents the structure of a video as a hierarchy of scenes and shots and describes each scene and shot with the extracted features. Hence the generated ToC helps users grasp the content of a video at a glance and access a desired position in the video easily. Also, the automatically generated ToC can be further refined manually, which reduces the time required to produce a more detailed description of the video content. The experimental results showed that the proposed method can generate ToCs for educational video with high accuracy.
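The hierarchy of scenes and shots with per-shot annotations can be sketched as a simple data structure. The field names and the outline format below are illustrative assumptions, not the paper's actual representation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Shot:
    start_s: float            # shot start time in seconds
    end_s: float              # shot end time in seconds
    scene_type: str = ""      # annotated shot/scene type, e.g. "lecture"
    has_caption: bool = False # presence of an embedded caption
    speaker: str = ""         # main speaker of the shot

@dataclass
class Scene:
    title: str
    shots: List[Shot] = field(default_factory=list)

def toc_outline(scenes: List[Scene]) -> str:
    """Render the scene/shot hierarchy as a plain-text table of contents."""
    lines = []
    for i, scene in enumerate(scenes, 1):
        lines.append(f"{i}. {scene.title}")
        for j, shot in enumerate(scene.shots, 1):
            lines.append(f"   {i}.{j} [{shot.start_s:.0f}-{shot.end_s:.0f}s] {shot.scene_type}")
    return "\n".join(lines)

toc = toc_outline([Scene("Introduction", [Shot(0, 30, "lecture", False, "instructor")])])
```

A user (or an editing tool) could then jump to `start_s` of any entry, which is the "easy access to a desired position" the abstract describes.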
Gwang-Gook Lee (이광국), Ja-Young Yoon (윤자영), Jae-Jun Kim (김재준), Whoi-Yul Kim (김회율). Korean Institute of Broadcast and Media Engineers (한국방송·미디어공학회), 2009, Journal of Broadcast Engineering, Vol. 14, No. 4
This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of the real-world motion from the observed motion vectors. For this purpose, a pixel-to-meter conversion factor is introduced, which is calculated from the camera parameters. Also, the height information, which is lost through camera projection, is predicted statistically from simulation experiments. Compared to previous work on flow speed estimation, our method can be applied to various camera views because it separates the scene parameters explicitly. Experiments were performed on both simulated image sequences and real video. In the experiments on simulated videos, the proposed method estimated the flow speed with an average error of about 0.08 m/s. The proposed method also showed promising results on the real video.
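The pixel-to-meter conversion can be illustrated with a simplified pinhole-camera sketch. The abstract does not give the exact formula, so the assumption below of a known target depth and focal length in pixels is a hypothetical stand-in for the paper's full camera-parameter model (which also handles the statistically predicted height):

```python
def meters_per_pixel(depth_m: float, focal_px: float) -> float:
    """Pinhole model: real-world size of one pixel at the given depth.
    depth_m  -- distance from camera to the observed pedestrians (meters)
    focal_px -- focal length expressed in pixels
    """
    return depth_m / focal_px

def flow_speed_mps(motion_px_per_frame: float, fps: float,
                   depth_m: float, focal_px: float) -> float:
    """Convert an observed motion vector magnitude (pixels/frame)
    into a real-world speed (meters/second)."""
    return motion_px_per_frame * fps * meters_per_pixel(depth_m, focal_px)

# Example: 2 px/frame at 30 fps, pedestrians 10 m away, focal length 500 px.
speed = flow_speed_mps(2.0, fps=30.0, depth_m=10.0, focal_px=500.0)
```

Because the scene parameters (`depth_m`, `focal_px`) enter only through the conversion factor, the same motion estimator can be reused across camera views, which is the separation the abstract highlights.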