Implementation of an Autoencoder-Based Sound-to-Image Reconstruction Neural Network Model (오토 인코더 기반 소리-이미지 복원 신경망 모델 구현)
Ha Hyunwoo (하현우), Kim Sungbin (김성빈), Arda Senocak, Tae-Hyun Oh (오태현). The Institute of Electronics and Information Engineers (대한전자공학회), 2021 IEIE Conference Proceedings, Vol. 2021 No. 6
When given an ambient sound, humans can imagine a visual scene corresponding to that sound. In this paper, we study the task of reconstructing a visual scene from an ambient sound. We design and train a deep neural network on the AVE dataset to perform this task. During training, our model learns to generate an image embedding from an audio clip, which is then used to reconstruct an image. By leveraging a pre-trained image decoder, the model is able to reconstruct a high-resolution image on the training set. We evaluate our network qualitatively on seen and unseen datasets and visualize the audio embeddings.
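The pipeline the abstract describes (audio clip → image embedding → pre-trained image decoder → reconstructed image) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: the single-layer encoder/decoder, the layer sizes, and all function names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def audio_encoder(mel_spectrogram, W_enc):
    """Map a mel spectrogram (n_mels x n_frames) to an image embedding.

    A real model would use a deep audio network; here a temporal average
    pool followed by one linear projection stands in for it.
    """
    pooled = mel_spectrogram.mean(axis=1)   # average over time frames
    return np.tanh(W_enc @ pooled)          # project into embedding space

def image_decoder(embedding, W_dec, out_hw=(8, 8)):
    """Stand-in for the pre-trained image decoder: embedding -> image."""
    img = W_dec @ embedding
    return img.reshape(out_hw)

# Toy dimensions: 64 mel bands, 100 frames, 16-dim embedding, 8x8 "image".
n_mels, n_frames, embed_dim = 64, 100, 16
W_enc = rng.standard_normal((embed_dim, n_mels)) * 0.1
W_dec = rng.standard_normal((64, embed_dim)) * 0.1

mel = rng.standard_normal((n_mels, n_frames))   # placeholder audio features
z = audio_encoder(mel, W_enc)                   # z.shape == (16,)
img = image_decoder(z, W_dec)                   # img.shape == (8, 8)
```

In the paper's setup, only the audio encoder would be trained (to match embeddings the decoder already understands), which is why a frozen pre-trained decoder can yield high-resolution reconstructions.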
Computer Vision-Based Shadow Gesture Recognition for an Interactive Projection System (대화형 프로젝션 시스템을 위한 컴퓨터 비전 기반의 그림자 제스처 인식)
Ha Hyunwoo (하현우), Kim Daewoon (김대운), Ji Joonghyun (지중현), Yun Dongho (윤동호), Abid Hasan, Mamona Awan, Ko Kwanghee (고광희). Society for Computational Design and Engineering ((사)한국CDE학회), 2014 Korean CAD/CAM Conference Proceedings, Vol. 2014 No. 8
This paper proposes a vision-based shadow gesture recognition method for an interactive projection system. Gesture recognition operates on the screen image obtained from a web camera installed at the same position as the projector. The method isolates the shadow area by combining a binarized input image with a learning algorithm that separates the background. Regions of interest are set by labeling the separated shadow regions, and the hand shadow is then isolated using the convexity defects, convex hull, and moments of each region. To distinguish hand gestures, Hu's invariant moments are used. In addition, the multiscale retinex algorithm is applied to address the problem that the camera cannot recognize gestures in brightly lit environments. An optical flow algorithm is used to track the fingertip, and OpenGL is used to render the drawing result.
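Hu's invariant moments, used above to distinguish hand gestures, are combinations of normalized central moments that stay constant under translation and scaling of the shape. A minimal sketch of the first two Hu moments on a binary shadow mask (pure numpy; in practice `cv2.HuMoments` would be used, and all names here are illustrative):

```python
import numpy as np

def central_moment(mask, p, q):
    """Central moment mu_pq of a binary mask (1 = shadow pixel)."""
    ys, xs = np.nonzero(mask)
    xbar, ybar = xs.mean(), ys.mean()
    return ((xs - xbar) ** p * (ys - ybar) ** q).sum()

def hu_first_two(mask):
    """First two Hu invariant moments of a binary shape.

    eta_pq = mu_pq / mu_00^((p+q)/2 + 1) normalizes out scale;
    the combinations below are additionally rotation invariant.
    """
    mu00 = central_moment(mask, 0, 0)  # equals the pixel count
    def eta(p, q):
        return central_moment(mask, p, q) / mu00 ** ((p + q) / 2 + 1)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    h1 = e20 + e02
    h2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    return h1, h2

# A rectangle and a translated copy yield the same Hu moments,
# so a gesture is recognized wherever the hand shadow falls on screen.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[5:15, 5:25] = 1
shifted = np.zeros((64, 64), dtype=np.uint8)
shifted[30:40, 12:32] = 1
h1_a, h2_a = hu_first_two(mask)
h1_b, h2_b = hu_first_two(shifted)
```

Because the descriptor is position and scale invariant, the same hand gesture matches regardless of where on the projection screen the shadow appears or how close the hand is to the projector.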
Ha Hyunwoo (하현우), Ji Joonghyun (지중현), Yun Dongho (윤동호), Lee Junghyun (이정현), Ko Kwanghee (고광희). Society for Computational Design and Engineering ((사)한국CDE학회), 2014 Korean CAD/CAM Conference Proceedings, Vol. 2014 No. 2
This paper proposes a method for 3D augmented reality using an interactive shadow on a mobile (Android) platform. Existing human-interaction augmented reality technology suffers a number of limitations in scalability and speed when both rendering and image processing run on a single device such as a mobile phone or a PC. In the proposed method, the human-interaction image processing is performed on the PC and the virtual-object augmentation on the mobile device, which improves scalability and processing speed. In the experiment, the user's shadow is cast onto a white screen with a beam projector, and events that recognize the hand shadow are generated in response to hand movements tracked with a web camera. A marker is projected onto the screen to create the virtual object, and the smartphone's two cameras are used to view the marker's changes in 3D as the events occur.
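The split described above (shadow tracking on the PC, virtual-object rendering on the phone) implies some event protocol between the two devices. The paper does not specify one; the sketch below shows one plausible shape for such a message using a JSON payload, with all field names and the gesture vocabulary being assumptions for illustration.

```python
import json

def make_hand_event(fingertip_xy, gesture):
    """Serialize a hypothetical PC-side tracking result for the mobile renderer.

    fingertip_xy: (x, y) screen coordinates of the tracked fingertip shadow.
    gesture: a label such as "point" or "grab" produced by the recognizer.
    """
    return json.dumps({
        "type": "hand_event",
        "fingertip": {"x": fingertip_xy[0], "y": fingertip_xy[1]},
        "gesture": gesture,
    })

# The mobile side would receive this over a socket and update the
# augmented virtual object attached to the projected marker.
msg = make_hand_event((320, 240), "point")
parsed = json.loads(msg)
```

Keeping the message small and stateless like this is what lets the heavy per-frame vision work stay on the PC while the phone only reacts to discrete events.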