User Identity Authentication Based on the Combination of Mouse and Keyboard Behavior
Haitao Tang, Wang Mantao  Security Engineering Research Support Center (SERSC) 2016 International Journal of Security and Its Applications Vol.10 No.6
To improve the recognition rate of user identity authentication systems, an authentication method based on the combination of mouse and keyboard behavior is proposed. First, features are extracted from both the mouse and keyboard indicators; a support vector machine is then used to build the identity authenticator; finally, the method is tested through identification and authentication experiments on a number of users. The results show that this method improves the recognition rate of user identity authentication, greatly reduces the error rate and the rejection rate, and clearly outperforms the traditional method.
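The abstract's garbled phrase about "the use of support" most plausibly refers to a support vector machine (SVM). A minimal sketch of such a behavioral authenticator follows, as a rough illustration only; the feature set, data shapes, scikit-learn pipeline, and acceptance threshold are all assumptions, not details taken from the paper.

```python
# Minimal sketch of an SVM-based behavioral authenticator.
# Feature names, data shapes, and the threshold are illustrative
# assumptions, not details from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature vector per session: e.g. mean mouse speed,
# click interval, keystroke dwell time, key flight time, ...
rng = np.random.default_rng(0)
X_train = rng.random((200, 8))        # 200 sessions, 8 behavioral features
y_train = rng.integers(0, 2, 200)     # 1 = legitimate user, 0 = impostor

authenticator = make_pipeline(StandardScaler(),
                              SVC(kernel="rbf", probability=True))
authenticator.fit(X_train, y_train)

# Authenticate a new session: accept only if the model is confident.
session = rng.random((1, 8))
p_genuine = authenticator.predict_proba(session)[0, 1]
accepted = p_genuine > 0.5            # threshold trades false accepts vs. rejects
```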
A Frame Rate Up-Conversion Method Considering the Direction and Magnitude of Identical Motion Vectors
Jonggeun Park, Jechang Jeong  Korean Institute of Broadcast and Media Engineers 2015 Journal of Broadcast Engineering Vol.20 No.6
In this paper, a frame rate up-conversion (FRUC) algorithm that considers the direction and magnitude of identical motion vectors is proposed. Extended bilateral motion estimation (EBME) has higher complexity than bilateral motion estimation (BME). By using the average magnitude of the motion vectors in the x and y directions, each frame is classified as dynamic or static, which reduces the complexity of deciding whether to apply EBME. In addition, by comparing the direction and magnitude of identical motion vectors, the complexity of deciding whether to apply motion vector smoothing (MVS) is reduced. Experimental results show that the proposed algorithm is faster and achieves better peak signal-to-noise ratio (PSNR) than EBME.
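A sketch of the two decisions described above, under assumptions: a frame is labeled dynamic or static from the average motion-vector magnitude in each direction, and MVS is skipped when a block's motion vector is identical to all of its neighbors'. The threshold value, array layout, and function names are hypothetical, not taken from the paper.

```python
# Sketch of the dynamic/static frame decision and the identical-vector
# check. Threshold, array layout, and function names are assumptions.
import numpy as np

def classify_frame(mvs: np.ndarray, thresh: float = 1.0) -> str:
    """mvs: (N, 2) array of block motion vectors (x, y) for one frame."""
    mean_mag = np.mean(np.abs(mvs), axis=0)   # average |MV_x| and |MV_y|
    return "dynamic" if np.any(mean_mag > thresh) else "static"

def skip_smoothing(mv: np.ndarray, neighbors: np.ndarray) -> bool:
    """Skip motion vector smoothing (MVS) when a block's MV matches all
    of its neighbors in both direction and magnitude."""
    return bool(np.all(neighbors == mv))

rng = np.random.default_rng(0)
mvs = rng.integers(-4, 5, size=(99, 2))
if classify_frame(mvs) == "dynamic":
    pass  # run the costlier EBME refinement only for dynamic frames
```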
An Adaptation Method in Noise Mismatch Conditions for DNN-based Speech Enhancement
Xu Si-ying, Niu Tong, Qu Dan, Long Xing-yan  Korean Society for Internet Information 2018 KSII Transactions on Internet and Information Systems Vol.12 No.10
Deep learning based speech enhancement has shown considerable success. However, it still suffers performance degradation under mismatched conditions. In this paper, an adaptation method is proposed to improve performance under noise mismatch. First, we propose noise-aware training that supplies identity vectors (i-vectors) as parallel input features to adapt deep neural network (DNN) acoustic models to the target noise. Second, given a small amount of adaptation data, a noise-dependent DNN is obtained from a noise-independent DNN using L2 regularization, forcing the estimated masks to remain close to those of the unadapted model. Finally, experiments were carried out under different noise and SNR conditions; the proposed method achieves STOI gains of 0.1%-9.6% and provides consistent improvements in PESQ and segSNR over the baseline systems.
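A minimal sketch of the adaptation objective described above, assuming a PyTorch mask-estimation network: the loss combines the enhancement error on the target-noise data, an L2 penalty on parameter drift from the noise-independent model, and a term keeping the estimated masks close to the unadapted outputs. The layer sizes, the 100-dim i-vector, and the loss weights are assumptions, not values from the paper.

```python
# Sketch of the adaptation objective: task loss + L2 parameter drift
# penalty + closeness to the unadapted model's masks. All dimensions
# and weights are assumptions.
import copy
import torch
import torch.nn as nn

def adaptation_loss(adapted, frozen, feats, target_mask,
                    lam_w=1e-3, lam_m=0.1):
    mse = nn.MSELoss()
    pred = adapted(feats)
    with torch.no_grad():
        pred_unadapted = frozen(feats)      # masks of the seed model
    task = mse(pred, target_mask)           # fit the target-noise data
    drift = sum(((p - q) ** 2).sum()        # L2 penalty on parameter drift
                for p, q in zip(adapted.parameters(), frozen.parameters()))
    anchor = mse(pred, pred_unadapted)      # stay close to unadapted masks
    return task + lam_w * drift + lam_m * anchor

# Seed (noise-independent) DNN; input = 257 spectral bins + 100-dim i-vector.
seed = nn.Sequential(nn.Linear(357, 1024), nn.ReLU(),
                     nn.Linear(1024, 257), nn.Sigmoid())
adapted = copy.deepcopy(seed)
for p in seed.parameters():
    p.requires_grad_(False)

feats = torch.randn(32, 357)                # a small adaptation batch
target_mask = torch.rand(32, 257)
loss = adaptation_loss(adapted, seed, feats, target_mask)
loss.backward()
```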
Short-Utterance Speaker Verification Using PLDA Model Adaptation and Data Augmentation
Sung-Wook Yoon, Oh-Wook Kwon  Korean Society of Speech Sciences 2017 Phonetics and Speech Sciences Vol.9 No.2
Conventional speaker verification systems using time delay neural network, identity vector and probabilistic linear discriminant analysis (TDNN-Ivector-PLDA) are known to be very effective for verifying long-duration speech utterances. However, when test utterances are of short duration, the duration mismatch between enrollment and test utterances significantly degrades the performance of TDNN-Ivector-PLDA systems. To compensate for the i-vector mismatch between long and short utterances, this paper proposes probabilistic linear discriminant analysis (PLDA) model adaptation with augmented data. A PLDA model is trained on a vast amount of speech data, most of which have long duration. The PLDA model is then adapted with i-vectors obtained from short-utterance data augmented by vocal tract length perturbation (VTLP). In computer experiments using the NIST SRE 2008 database, the proposed method achieves significantly better performance than conventional TDNN-Ivector-PLDA systems when there is a duration mismatch between enrollment and test utterances.
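One common interpolation-style PLDA adaptation is sketched below under assumptions; it is not necessarily the paper's exact recipe. Statistics of the VTLP-augmented short-utterance i-vectors are blended with the long-utterance PLDA parameters; the interpolation weight, covariance split, and i-vector dimension are chosen for illustration.

```python
# Interpolation-style PLDA adaptation sketch (assumed recipe, not
# necessarily the paper's): blend short-utterance i-vector statistics
# with the long-utterance PLDA parameters.
import numpy as np

def adapt_plda(mu_long, B_long, W_long, ivecs_short, alpha=0.5):
    """mu: mean, B: between-speaker cov, W: within-speaker cov."""
    mu_s = ivecs_short.mean(axis=0)
    cov_s = np.cov(ivecs_short, rowvar=False)
    mu = alpha * mu_s + (1 - alpha) * mu_long
    # Split the short-utterance covariance between the two PLDA
    # covariances in the same proportion as in the long-utterance model.
    ratio = np.trace(B_long) / (np.trace(B_long) + np.trace(W_long))
    B = alpha * ratio * cov_s + (1 - alpha) * B_long
    W = alpha * (1 - ratio) * cov_s + (1 - alpha) * W_long
    return mu, B, W

d = 400                                  # typical i-vector dimension (assumption)
rng = np.random.default_rng(0)
ivecs = rng.standard_normal((500, d))    # i-vectors from VTLP-augmented short utterances
mu, B, W = adapt_plda(np.zeros(d), np.eye(d), np.eye(d), ivecs)
```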