- Abstract (Korean)
- Abstract (English)
- 1. Introduction
- 2. Related Work
- 3. Proposed Method
https://www.riss.kr/link?id=A107368246
Year: 2021 | Language: Korean | Type: Academic journal (KCI Excellent Registered) | Pages: 369-376 (8 pages)
References
1 Micaelli, P., "Zero-shot knowledge transfer via adversarial belief matching" 9551-9561, 2019
2 Micaelli, P., "Zero-shot knowledge transfer via adversarial belief matching" 9551-9561, 2019
3 Xue, J., "Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network" 6359-6363, 2014
4 Lee, S. H., "Self-supervised knowledge distillation using singular value decomposition" 339-354, 2018
5 Zagoruyko, S., "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer"
6 Kang, D., "Noscope: optimizing neural network queries over video at scale"
7 Howard, A. G., "Mobilenets: Efficient convolutional neural networks for mobile vision applications"
8 Han, S., "Mcdnn: An approximation-based execution framework for deep stream processing under resource constraints" 123-136, 2016
9 Han, S., "Learning both weights and connections for efficient neural network" 1135-1143, 2015
10 Krizhevsky, A., "Learning Multiple Layers of Features from Tiny Images" University of Toronto 2009
11 Radford, A., "Language models are unsupervised multitask learners" 1 (1): 9-, 2019
12 Yoo, J., "Knowledge extraction with no observable data" 2705-2714, 2019
13 Hinton, G., "Distilling the knowledge in a neural network"
14 He, K., "Deep residual learning for image recognition" 770-778, 2016
15 Fang, G., "Data-Free Adversarial Distillation"
16 Chen, W., "Compressing neural networks with the hashing trick" 2285-2294, 2015
17 Yim, J., "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning" 4133-4141, 2017
Journal History

Date | Event | Details
---|---|---
2021 | Evaluation scheduled | Subject to application for continued evaluation (registration maintained)
2016-01-01 | Evaluation | Selected as KCI Excellent Registered journal (continued evaluation)
2015-01-01 | Evaluation | Registered journal status maintained (registration maintained)
2002-01-01 | Evaluation | Journal consolidation (registration maintained)
Journal Citation Information

Base Year | WOS-KCI Integrated IF (2yr) | KCI IF (2yr) | KCI IF (3yr) | KCI IF (4yr) | KCI IF (5yr) | Centrality Index (3yr) | Immediacy Index
---|---|---|---|---|---|---|---
2016 | 0.19 | 0.19 | 0.19 | 0.2 | 0.18 | 0.373 | 0.07