The Effects of Processing Speed and Processing Capacity on Story Comprehension According to Video Speed
윤서린,황령희,김수진,문지현,임동선 한국언어치료학회 2024 언어치료연구 Vol.33 No.2
Purpose: The purpose of this study was to examine differences in children's story comprehension according to video speed and comprehension question type, and to identify correlations between story comprehension, processing speed, and processing capacity. Methods: A total of 23 preschool children aged 4 to 6 participated in the study. The participants' parents completed a survey, while the children completed the digit span test as a processing capacity task, Rapid Automatized Naming (RAN) as a processing speed task, and a story comprehension task. For the story comprehension task, three episodes of 'Caillou' were selected based on the average number of syllables. The three videos were randomly assigned to three playback speeds, 0.75x, 1.0x, and 1.25x, and presented to the participants. Ten literal and inferential questions were asked after each video. Results: There was no significant difference in story comprehension according to video speed, but there was a significant difference according to question type. A significant correlation was also found between processing speed and the literal and inferential questions for the slow-speed video, and the inferential questions for the regular-speed video. However, no significant correlation was observed between processing capacity and performance at any video speed. Conclusion: The faster a child's information processing speed, the more advantaged the child is at story comprehension. However, fast videos impose an information processing load on all children regardless of processing ability, so videos should be presented at a speed appropriate to the child's developmental stage.
A Study on Real-Time Video Stylization Techniques Using Graphics Acceleration
이지형(JiHyung Lee),황치정(Chi Jung Hwang) 한국색채학회 2010 한국색채학회 논문집 Vol.24 No.4
With the advance of video technology and the spread of digital video content, demand has grown for applying various special effects to digital video by computer. Unlike conventional image-transform effects, non-photorealistic rendering (NPR) effects are not easy to apply to video. Stylizations such as cartoon or illustration effects have been difficult to use in professional video production because of slow processing, manual work caused by semi-automation, and jerkiness in the resulting video. To overcome these problems, we present a two-dimensional image-based NPR technique that automatically performs real-time stylization of images and videos. To apply NPR effects to a video, each frame image is extracted and transformed sequentially, based on the shading tone and abstraction obtained through image analysis. The transformed frame images are first assembled into a stylized video with a black-and-white illustration effect. Although black-and-white illustration preserves the salient features of the video, it loses the meaning of the imagery without color, so we extend it to color illustration. Moreover, thanks to a simple NPR structure designed around an implicit representation, the system supports temporal coherence. Finally, it is accelerated on NVIDIA GPU hardware with CUDA to achieve real-time performance. The proposed NPR stylization offers two advantages: distinctive video effects, and more effective delivery of information by emphasizing the salient features of the image. Because the approach is fully automatic and runs in real time, it can serve as a core technology in digital content production fields such as broadcasting, commercials, film, and games.
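The shading-tone step described above quantizes luminance into a small set of discrete tones. A minimal pure-Python sketch of such quantization on a grayscale frame (illustrative only; the paper's GPU/CUDA implementation and its actual tone-mapping parameters are not reproduced here):

```python
def shading_tone(frame, levels=4):
    """Quantize a grayscale frame (pixel values 0-255) into a few discrete
    shading tones -- a simplified stand-in for the paper's tone mapping."""
    step = 256 / levels
    return [[int(min(p // step, levels - 1) * (255 / (levels - 1)))
             for p in row] for row in frame]

# one-row "frame" spanning dark to bright
frame = [[0, 60, 130, 200, 255]]
print(shading_tone(frame, levels=4))  # each pixel snapped to one of 4 tones
```

In a real pipeline this per-pixel mapping would run per frame on the GPU, which is what makes the real-time constraint attainable.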
Development of a VPM for Avionics Systems Using the Avionics Digital Video Bus (ADVB)
전은선,반창봉,박준현 한국항공우주학회 2015 한국항공우주학회 학술발표회 논문집 Vol.2015 No.4
With the increase in video data from aircraft sensors and displays, a requirement has emerged for greater video bus bandwidth in avionics systems. The ARINC 818 Avionics Digital Video Bus (ADVB) standard features high bandwidth, low latency, and uncompressed digital video transmission, and is positioned to become the de facto standard for avionics video buses. The VPM (Video Processing Module) is a hardware module that adopts the ARINC 818 ADVB interface and provides both video bus bridge/gateway and video processing functionality. This paper presents the results of the development of the VPM.
A Method for Applying Wavefront Parallel Processing to Cube Map Video
홍석종(Seok Jong Hong),박광훈(Gwang Hoon Park) 한국방송·미디어공학회 2017 방송공학회논문지 Vol.22 No.3
360 VR video is stored in formats such as equirectangular or cube map projections. Although these formats have different characteristics, they share a resolution much higher than that of ordinary 2D video. Coding and decoding 360 VR video therefore takes far longer than for 2D video, so parallel processing techniques are essential. HEVC, the state-of-the-art 2D video codec, adopts Wavefront Parallel Processing (WPP) as its standard parallelization technology. This technique is optimized for 2D video and does not show optimal performance on 360 VR video, so a WPP method suited to such video is required. In this paper, we propose a WPP coding/decoding method that improves WPP performance on cube map format 360 VR video. The experiment was implemented in the HEVC reference software HM 12.0. The results show no significant PSNR loss compared with conventional WPP, while coding complexity is reduced by a further 15% to 20%. The proposed method is expected to be useful for future VR video codecs.
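The WPP dependency pattern mentioned above — each CTU row trailing the row above by two CTUs — can be illustrated with a small scheduling sketch (baseline 2D WPP only; the paper's cube-map-specific modifications are not reproduced):

```python
def wpp_schedule(rows, cols, lag=2):
    """Return, for each parallel time step, the CTU coordinates that can be
    processed concurrently under wavefront parallel processing. CTU (r, c)
    becomes ready at step r*lag + c, because each CTU row trails the row
    above by `lag` CTUs (2 in HEVC WPP)."""
    steps = {}
    for r in range(rows):
        for c in range(cols):
            steps.setdefault(r * lag + c, []).append((r, c))
    return [steps[t] for t in sorted(steps)]

# a tiny 3x4 CTU grid: the diagonal wavefront widens, then drains
for t, ctus in enumerate(wpp_schedule(3, 4)):
    print(t, ctus)
```

The schedule makes the trade-off visible: parallelism ramps up and down at the start and end of each frame, which is one reason WPP tuned for 2D layouts leaves performance on the table for projected 360 content.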
VIDEO TRAFFIC MODELING BASED ON GEO^Y/G/∞ INPUT PROCESSES
SANG HYUK KANG,BARA KIM 한국산업응용수학회 2008 Journal of the Korean Society for Industrial and Applied Mathematics Vol.12 No.3
With the growth of wireless video streaming applications, an efficient video traffic model that reflects modern high-compression techniques is more desirable than ever, because wireless channel bandwidths are limited and time-varying. We propose a modeling and analysis method for video traffic based on a class of stochastic processes that we call 'Geo^Y/G/∞ input processes'. We model video traffic by a Geo^Y/G/∞ input process with gamma-distributed batch sizes Y and a Weibull-like autocorrelation function. Using four real encoded, full-length video traces, including action movies, a drama, and an animation, we evaluate our model against the transformed M/G/∞ input process, one of the most recently proposed video models in the literature. Our Geo^Y/G/∞ model consistently provides conservative performance predictions, in terms of packet loss ratio, within acceptable error at the traffic loads of interest in practical multimedia streaming systems, while the transformed M/G/∞ model fails to do so. For real-time implementation of our model, we analyze G/D/1/K queueing systems with Geo^Y/G/∞ input to upper-bound the packet loss probabilities.
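A Geo^Y/G/∞ input process can be simulated directly: in each slot a batch arrives with some probability, the batch size Y is gamma-distributed, and each arrival stays active for a generally distributed number of slots. The sketch below uses illustrative parameters and service distribution, not the paper's fitted values:

```python
import math
import random

def geo_y_g_inf_trace(slots, p=0.3, batch_shape=2.0, batch_scale=1.5,
                      service=lambda: random.randint(1, 10)):
    """Generate a per-slot traffic trace from a Geo^Y/G/infinity input
    process: each slot, a batch arrives with probability p (the geometric
    part), the batch size Y ~ Gamma(shape, scale) rounded up, and each
    arrival remains active for a G-distributed number of slots. The trace
    is the count of active arrivals in each slot."""
    load = [0] * slots
    for t in range(slots):
        if random.random() < p:
            y = max(1, math.ceil(random.gammavariate(batch_shape, batch_scale)))
            for _ in range(y):
                duration = service()
                for u in range(t, min(slots, t + duration)):
                    load[u] += 1
    return load

random.seed(7)
trace = geo_y_g_inf_trace(200)
print(len(trace), max(trace))
```

Feeding such a synthetic trace into a finite-buffer queue model is how loss-ratio predictions like the paper's G/D/1/K analysis would be checked empirically.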
홍정현,김원진,정기석 大韓電子工學會 2012 電子工學會論文誌-SD (Semiconductor and devices) Vol.49 No.7
As high-resolution video services become widespread, research on high-speed video processing is active. In particular, data-level parallelization using multiple threads on multi-core systems has improved video decoder performance. Previously proposed parallelization methods improved decoding performance, but they either excluded entropy decoding or parallelized only the entropy decoding stage, so these partial approaches fall short of improving the whole decoding process. In this paper, we therefore propose an integrated video decoding parallelization method that considers parallel entropy decoding together with the other parallel decoding stages. We analyze each parallelization method, present optimizations, and evaluate their performance, and we propose an Integrated Parallelization (IP) method that optimizes performance according to the number of cores available to the decoding process. On an Intel i7 quad-core platform, we maximally parallelize entropy decoding on the physical cores and, using Hyper-Threading, assign up to twice as many threads as physical cores to the data-level parallelization. With multi-thread scheduling that accounts for the characteristics of the decoding stages, we parallelized the KTA 2.7 decoder and achieved up to a 70% performance improvement.
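The integrated idea — a sequential entropy-decoding stage feeding data-parallel reconstruction workers — can be sketched as a toy producer/consumer pipeline (the stages are placeholders, not a real H.264/KTA decoder):

```python
import queue
import threading

def decode_frames(n_frames, n_workers=4):
    """Toy model of integrated decoder parallelization: one thread runs
    entropy decoding (inherently sequential per frame) and feeds a queue;
    a pool of worker threads performs the data-parallel reconstruction
    stage (stand-in for IDCT, motion compensation, deblocking)."""
    q = queue.Queue()
    done = []
    lock = threading.Lock()

    def entropy_decoder():
        for f in range(n_frames):
            q.put(f)               # hand off an entropy-decoded "frame"
        for _ in range(n_workers):
            q.put(None)            # one poison pill per worker

    def reconstructor():
        while True:
            f = q.get()
            if f is None:
                return
            with lock:
                done.append(f)     # "reconstruct" the frame

    threads = [threading.Thread(target=entropy_decoder)]
    threads += [threading.Thread(target=reconstructor) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(done)

print(decode_frames(16) == list(range(16)))
```

The point of the structure is that the sequential stage never idles the workers: reconstruction of frame *n* overlaps entropy decoding of frame *n+1*, which is the overlap the paper's thread scheduling exploits.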
A Review of Coastal Environment Observation Techniques Using Video Monitoring
김태림,이광수,서경덕 한국해안해양공학회 1998 한국해안해양공학회 논문집 Vol.10 No.1
Video monitoring techniques and their applications to beaches are reviewed. Recent developments in video hardware and image processing have made it possible to measure shoreline changes, sandbar morphology, wave runup, swash motion, and so on using video cameras. In particular, quantitative information can be obtained from video imagery through image digitization, rectification, and image processing. Although video monitoring techniques have lower accuracy and provide only indirect information about the land and water surface, they allow measurements to be made at much lower cost and over longer periods than traditional measurement techniques.
조현태,윤경로,배효철,김민욱 한국반도체디스플레이기술학회 2014 반도체디스플레이기술학회지 Vol.13 No.1
Video from an unmanned aerial vehicle (UAV) is affected by the natural environment, especially wind, because the platform is lightweight, and the UAV's shaking motion makes the video shake. The objective of this paper is to produce stabilized video by removing the shakiness of video acquired by a UAV. The stabilizer estimates the camera's motion by computing the optical flow between two successive frames. The estimated motion contains intended movements as well as unintended shaking; the unintended movements are eliminated by a smoothing process. Experimental results show that the proposed method performs almost as well as other offline stabilizers. However, estimating the camera's motion, i.e., computing the optical flow, is a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be applied to video acquired by UAVs as well as to shaky video from non-professional users, and to other fields that require object tracking or accurate image analysis and representation.
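The smoothing step can be sketched in pure Python: accumulate the estimated per-frame motions into a camera trajectory, smooth the trajectory with a moving average, and derive per-frame corrections. The motion estimates are taken as given here; in the paper they come from optical flow:

```python
def smooth_trajectory(motions, window=5):
    """Given per-frame camera motion estimates (dx, dy), accumulate the
    camera trajectory, smooth it with a centered moving average so the
    intended motion is kept, and return the per-frame (cx, cy) correction
    offsets that cancel the unintended shake."""
    traj, x, y = [], 0.0, 0.0
    for dx, dy in motions:            # cumulative camera path
        x += dx
        y += dy
        traj.append((x, y))
    half = window // 2
    corrections = []
    for i, (tx, ty) in enumerate(traj):
        nbr = traj[max(0, i - half): i + half + 1]   # clipped window
        sx = sum(p[0] for p in nbr) / len(nbr)
        sy = sum(p[1] for p in nbr) / len(nbr)
        corrections.append((sx - tx, sy - ty))       # shift toward smooth path
    return corrections

# a steady pan (no shake) needs essentially no correction mid-sequence
print(smooth_trajectory([(1, 0)] * 9, window=3))
```

Each output frame would then be warped by its correction offset; intended pans survive the moving average while high-frequency jitter is subtracted out.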
Moving Object Detection with High Precision via Marked Watershed and Image Morphology
Qingqing Fu,Silin Xu,Aiping Wu 보안공학연구지원센터 2016 International Journal of Signal Processing, Image Processing and Pattern Recognition Vol.9 No.11
This paper presents a non-stationary (moving) object detection method that exploits time-varying spatial information in full-motion video. First, edge maps of the difference image between two adjacent frames and of the current frame are generated with the well-known Canny edge detector. Edge pixels of the difference image that lie within a small distance of edge pixels in the current frame determine the initial edge mask of the moving object. Horizontal and vertical filling, followed by morphological opening and closing, are applied to the initial edge mask to create the initial temporal segmentation mask. Morphological dilation and erosion are then used to obtain binary marker images of the foreground and background, which modify the multi-scale morphological gradient image of the current frame. Finally, the watershed algorithm is run on the modified gradients to locate the moving objects accurately in the spatial domain. Results on four video test beds involving fast and slow human motion show detection accuracies of 98% and 99%. The proposed technique eliminates the over-segmentation problem of the watershed algorithm and extracts visually distinct, contextually meaningful moving objects as they appear (or disappear) in video sequences.
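The morphological opening used to clean the initial edge mask can be sketched on a binary grid (square structuring element; a simplified pure-Python stand-in for the paper's pipeline):

```python
def erode(img, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square structuring element:
    a pixel survives only if its whole (clipped) neighborhood is foreground."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[y][x]
                      for y in range(max(0, r - k), min(h, r + k + 1))
                      for x in range(max(0, c - k), min(w, c + k + 1)))
             else 0 for c in range(w)] for r in range(h)]

def dilate(img, k=1):
    """Binary dilation: a pixel turns on if any neighbor is foreground."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y][x]
                      for y in range(max(0, r - k), min(h, r + k + 1))
                      for x in range(max(0, c - k), min(w, c + k + 1)))
             else 0 for c in range(w)] for r in range(h)]

def opening(img, k=1):
    """Opening (erosion then dilation) removes specks smaller than the
    structuring element -- the noise-cleaning applied to edge masks."""
    return dilate(erode(img, k), k)

# a 3x3 foreground block plus a one-pixel speck of noise
noisy = [[0, 1, 0, 0, 0],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1]]
print(opening(noisy))  # the isolated speck at (0, 1) is removed
```

The same erosion/dilation pair also yields the foreground/background marker images used to constrain the watershed and avoid over-segmentation.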
Multi-core Processor Based Parallel SVM for Video Surveillance Systems
김희곤(Kim, Hee-Gon),이성주(Lee, Sung-Ju),정용화(Chung, Yong-Wha),박대희(Park, Dai-Hee),이한성(Lee, Han-Sung) 한국정보보호학회 2011 정보보호학회논문지 Vol.21 No.6
Recent intelligent video surveillance systems call for more advanced technology for the analysis and recognition of video data. In particular, machine learning algorithms such as the Support Vector Machine (SVM) are used to recognize objects in video. Because SVM training demands a massive amount of computation, parallel processing is necessary to reduce the execution time effectively. In this paper, we propose a parallel processing method for SVM training on a multi-core processor. Results on a 4-core processor show that the proposed method reduces the execution time of sequential training by a factor of 2.5.
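The abstract does not detail the decomposition, but one simple parallel-SVM scheme — partition the training set, train a linear sub-SVM per core, and average the models — can be sketched as follows (threads stand in for cores; illustrative only, not necessarily the paper's method):

```python
from concurrent.futures import ThreadPoolExecutor

def train_linear_svm(data, epochs=200, lr=0.01, C=1.0):
    """Tiny linear SVM trained by hinge-loss sub-gradient descent on 2-D
    points. data: list of ((x1, x2), label) with label in {-1, +1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            w = [wi * (1 - lr / epochs) for wi in w]  # weak regularization
            if margin < 1:                            # hinge-loss violation
                w[0] += lr * C * y * x1
                w[1] += lr * C * y * x2
                b += lr * C * y
    return w, b

def parallel_svm(data, n_parts=4):
    """Split the training set across workers, train a sub-SVM on each
    chunk concurrently, then average the resulting linear models."""
    chunks = [data[i::n_parts] for i in range(n_parts)]
    with ThreadPoolExecutor(max_workers=n_parts) as ex:
        models = list(ex.map(train_linear_svm, chunks))
    w = [sum(m[0][i] for m in models) / n_parts for i in range(2)]
    b = sum(m[1] for m in models) / n_parts
    return w, b

# linearly separable toy data: label = sign(x1 - x2)
data = [((i, j), 1 if i > j else -1)
        for i in range(-3, 4) for j in range(-3, 4) if i != j]
w, b = parallel_svm(data)
pred = lambda p: 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else -1
print(pred((2, -1)), pred((-2, 1)))
```

Note that Python threads will not actually speed up this CPU-bound loop; the sketch only shows the data decomposition and model-combining structure that a native multi-core implementation would use.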