Jaemin Park (박재민), Yongjun Park (박영준), The Institute of Electronics and Information Engineers (대한전자공학회), 2023 IEIE Conference, Vol.2023 No.11
Although significant advances in image super-resolution (SR) with convolutional neural networks (CNNs) have led to continuous performance improvement, state-of-the-art (SOTA) SR models face challenges when deployed in resource-constrained environments due to their complexity. Quantization, a common neural network optimization technique, is used to address these challenges. However, quantization for SR models should take their characteristics into account. To address this issue, we propose Frequency-Aware Multi-Bit Quantization (FAMBQ), which considers the frequency content of the input image and the layers to select an adequate bit-width for quantizing each layer. Frequency is a fundamental component of an image that helps improve SR performance, as shown in numerous recent works [1-2]. We apply our quantization approach to various widely used super-resolution architectures, including EDSR [3] and IMDN [4]. Experiments show that our framework achieves PSNR comparable to the full-precision original models and to SOTA low-bit-width quantization frameworks on four benchmark datasets.
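The abstract does not spell out how FAMBQ maps frequency content to bit-widths. A minimal sketch of the general idea, under assumed details (a high-frequency energy ratio computed via the 2D FFT as the frequency measure, hypothetical thresholds, and plain uniform symmetric quantization; the names `high_freq_ratio`, `select_bit_width`, and `quantize` are illustrative, not from the paper):

```python
import numpy as np

def high_freq_ratio(x: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band
    (an assumed frequency measure, not the paper's exact definition)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(x))) ** 2
    h, w = spec.shape
    ch, cw = h // 2, w // 2
    rh, rw = h // 8, w // 8  # low-frequency band: central region around DC
    low = spec[ch - rh:ch + rh, cw - rw:cw + rw].sum()
    total = spec.sum()
    return float(1.0 - low / total)

def select_bit_width(x, bit_choices=(2, 4, 8), thresholds=(0.1, 0.3)):
    """Map the frequency score to a bit-width: more high-frequency
    content gets more bits (thresholds here are hypothetical)."""
    r = high_freq_ratio(x)
    for t, b in zip(thresholds, bit_choices):
        if r < t:
            return b
    return bit_choices[-1]

def quantize(x, bits):
    """Uniform symmetric quantization to the selected bit-width."""
    qmax = 2 ** (bits - 1) - 1
    amax = np.abs(x).max()
    scale = amax / qmax if amax > 0 else 1.0
    return np.round(x / scale).clip(-qmax, qmax) * scale
```

A smooth patch (energy concentrated at DC) would then be quantized aggressively at 2 bits, while a noisy, texture-rich patch would keep 8 bits, which matches the stated intuition that high-frequency detail is what SR must preserve.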