RISS (Academic Research Information Service)

      • Frequency-splitting dynamic MRI reconstruction using multi-scale 3D convolutional sparse coding and automatic parameter selection

        Nguyen-Duc, Thanh; Quan, Tran Minh; Jeong, Won-Ki. Elsevier, 2019. Medical Image Analysis, Vol.53

        Abstract: In this paper, we propose a novel image reconstruction algorithm using multi-scale 3D convolutional sparse coding and a spectral decomposition technique for highly undersampled dynamic Magnetic Resonance Imaging (MRI) data. The proposed method recovers high-frequency information using a shared 3D convolution-based dictionary built progressively during the reconstruction process in an unsupervised manner, while low-frequency information is recovered using a total variation-based energy minimization method that leverages temporal coherence in dynamic MRI. Additionally, the proposed 3D dictionary is built across three different scales to adapt more efficiently to various feature sizes, and elastic net regularization is employed to promote a better approximation to the sparse input data. We also propose an automatic parameter selection technique based on a genetic algorithm to find optimal parameters for our numerical solver, a variant of the alternating direction method of multipliers (ADMM). We demonstrate the performance of our method by comparing it with state-of-the-art methods on 15 single-coil cardiac datasets, 7 single-coil DCE datasets, and a multi-coil brain MRI dataset at different sampling rates (12.5%, 25%, and 50%). The results show that our method significantly outperforms the other state-of-the-art methods in reconstruction quality with a comparable running time and is resilient to noise.

        Highlights:
        • A convolutional dictionary reconstructs the high-frequency component of MRI images well.
        • Temporal total variation reconstructs the low-frequency component of MRI images well.
        • A multi-scale dictionary improves MRI reconstruction quality.
        • Elastic net regularization works better than L1 or L2 regularization alone.
        • A genetic algorithm automatically finds optimal parameters for MRI reconstruction.
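One building block named in the abstract, elastic net regularization inside an ADMM-style solver, reduces to a simple proximal update: an L1 soft-threshold followed by an L2 scaling. The following numpy sketch shows only that operator, with illustrative parameter names (`lam1`, `lam2`); it is not the authors' reconstruction pipeline.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||z||_1: the shrinkage step that
    # appears in ADMM-based sparse coding solvers.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def elastic_net_prox(x, lam1, lam2):
    # Proximal operator of lam1*||z||_1 + (lam2/2)*||z||_2^2:
    # soft-threshold first, then scale down by (1 + lam2).
    return soft_threshold(x, lam1) / (1.0 + lam2)

# Small coefficients are zeroed out; large ones are shrunk toward zero.
codes = np.array([-0.2, 0.05, 1.5, -3.0, 0.8])
print(elastic_net_prox(codes, lam1=0.5, lam2=0.1))
```

In the full method this kind of update would be applied to the convolutional codes at each of the three dictionary scales.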


      • SCOPUS, KCI indexed

        Retinex-Based Reflectance Decomposition Using Convolutional Sparse Coding

        윤종수 (Jongsu Yoon), 최윤식 (Yoonsik Choe). The Korean Institute of Electrical Engineers, 2020. The Transactions of the Korean Institute of Electrical Engineers, Vol.69 No.3

        Color constancy is a feature of the human visual system: the perceived color of an object remains stable under varying illumination, even when the conditions for observing color change. The Retinex theory was designed around this color constancy, and physics-based Retinex algorithms have been widely used to decompose an image into the illumination and reflectance of the object. However, when an image contains many detailed regions or the illumination changes rapidly, the illumination and reflectance may not be decomposed properly, because the smoothness constraint on the illumination is violated. In this paper, we use a convolutional sparse coding model to represent the reflectance in more detail. As the experimental results show, this gives the reflectance component better visual quality than conventional methods. Consequently, we can decompose the Retinex-based illumination and reflectance more precisely and thus reduce the perception gap between humans and machines.
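The classical Retinex split that this paper refines can be illustrated without the convolutional sparse coding model: work in the log domain, take the smooth component as illumination, and keep the residual as reflectance. Below is a minimal numpy sketch in which a plain box filter stands in for the smoothness constraint; the paper's contribution replaces the reflectance model, not this overall split.

```python
import numpy as np

def retinex_decompose(image, kernel=15):
    # Single-scale Retinex-style split (not the paper's CSC model):
    # illumination = smooth part of the log image, reflectance = residual.
    log_i = np.log1p(image.astype(float))
    pad = kernel // 2
    padded = np.pad(log_i, pad, mode="edge")
    # Box-filter smoothing as a stand-in for the smoothness constraint.
    smooth = np.zeros_like(log_i)
    for dy in range(kernel):
        for dx in range(kernel):
            smooth += padded[dy:dy + log_i.shape[0], dx:dx + log_i.shape[1]]
    illumination = smooth / (kernel * kernel)
    reflectance = log_i - illumination
    return illumination, reflectance

# A perfectly flat image is pure illumination: the residual vanishes.
img = np.full((32, 32), 100.0)
L, R = retinex_decompose(img)
print(round(float(np.abs(R).max()), 6))  # → 0.0
```

Detail-rich regions are exactly where this smoothness-based split breaks down, which is the failure mode the abstract describes.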

      • Automatic Extraction of Abnormalities on Temporal CT Subtraction Images Using Sparse Coding and 3D-CNN

        Yuichiro Koizumi, Noriaki Miyake, Huimin Lu, Hyoungseop Kim, Seiichi Murakami, Takatoshi Aoki, Shoji Kido. Institute of Control, Robotics and Systems (ICROS), 2018. ICROS International Conference Proceedings, Vol.2018 No.10

        In recent years, the proportion of deaths from cancer has tended to increase in Japan; in particular, the number of deaths from lung cancer is rising. CT is effective for the early detection of lung cancer. However, there is concern that the improving performance of CT devices will increase the burden on doctors. Presenting a "second opinion" through a CAD system can reduce this burden. In this paper, we develop a CAD system for the automatic detection of lesion candidate regions, such as lung nodules or ground-glass opacity (GGO), in 3D CT images. Our proposed method consists of three steps. In the first step, lesion candidate regions are extracted using a temporal subtraction technique. In the second step, the extracted regions are reconstructed by sparse coding. In the final step, 3D convolutional neural network (3D-CNN) classification is performed on the reconstructed images. We applied our method to 51 cases and obtained a true positive rate (TP) of 79.81% and a false positive rate (FP) of 37.65%.
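The first step of the pipeline, temporal subtraction, can be sketched as a voxelwise difference of the current scan against the prior scan followed by a threshold. This is a toy numpy illustration: real systems deformably register the two scans first (omitted here), and `threshold` is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def temporal_subtraction_candidates(current, previous, threshold=30.0):
    # Subtract the prior scan from the current one and keep voxels whose
    # intensity grew markedly: these become lesion candidate regions.
    diff = current.astype(float) - previous.astype(float)
    return diff > threshold

prev_scan = np.zeros((4, 8, 8))
curr_scan = prev_scan.copy()
curr_scan[2, 3:5, 3:5] = 80.0          # a new, nodule-like bright region
mask = temporal_subtraction_candidates(curr_scan, prev_scan)
print(int(mask.sum()))  # → 4
```

In the described method, the regions this mask selects would then be reconstructed by sparse coding before 3D-CNN classification.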

      • A Deep Convolutional Neural Network-Based Rain Streak Removal Method Considering the Orientation and Strength of Rain Streaks

        최연수 (Yeonsu Choi), 박기태 (Gi-Tae Park), 손창환 (Chang-Hwan Son). Korea Institute of Information Technology (KIIT), 2018. Proceedings of KIIT Conference, Vol.2018 No.11

        Recently, autonomous cars, autonomous drones, and self-driving ship systems have been developed thanks to advances in artificial intelligence. However, computer vision algorithms such as pedestrian detection and image segmentation are significantly affected by weather conditions. When images are captured on a rainy day, rain streaks form in them and can negatively affect the feature extractors used in computer vision algorithms. This paper therefore proposes deep convolutional neural networks that consider the strength and orientation of rain streaks. More specifically, two sub-networks are trained for rain streak removal: one detects the strength and orientation of the rain streaks, and the other removes them via residual networks trained separately for each type of rain streak. Experimental results show that the proposed method removes rain streaks and preserves details more effectively than conventional methods, and quantitative image quality assessments confirm that its performance is superior.

      • KCI indexed

        A Deep Convolutional Neural Network-Based Rain Streak Removal Method Considering the Orientation and Strength of Rain Streaks

        최연수 (Yeonsu Choi), 손창환 (Chang-Hwan Son). Korea Institute of Information Technology (KIIT), 2019. Journal of KIIT, Vol.17 No.1

        Recently, autonomous cars, autonomous drones, and self-driving ship systems have been developed thanks to advances in artificial intelligence. However, computer vision algorithms such as pedestrian detection and image segmentation are significantly affected by weather conditions. When images are captured on a rainy day, rain streaks form in them and can negatively affect the feature extractors used in computer vision algorithms. This paper therefore proposes deep convolutional neural networks that consider the strength and orientation of rain streaks. More specifically, two sub-networks are trained for rain streak removal: one detects the strength and orientation of the rain streaks, and the other removes them via residual networks trained separately for each type of rain streak. Experimental results show that the proposed method removes rain streaks and preserves details more effectively than conventional methods, and quantitative image quality assessments confirm that its performance is superior.
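The residual formulation shared by the removal sub-networks, predicting the rain-streak layer and subtracting it from the input, can be sketched as follows. The predictor here is a hypothetical stand-in (a per-row median deviation that picks out thin bright vertical streaks), not the paper's trained residual CNN.

```python
import numpy as np

def remove_rain(rainy, residual_net):
    # Residual formulation: the network predicts the rain-streak layer,
    # which is subtracted from the input to recover the clean image.
    rain_layer = residual_net(rainy)
    return rainy - rain_layer

def toy_streak_predictor(img):
    # Hypothetical stand-in "network": bright deviations above each
    # row's median approximate sparse vertical rain streaks.
    med = np.median(img, axis=1, keepdims=True)
    return np.maximum(img - med, 0.0)

clean = np.full((16, 16), 0.5)
rainy = clean.copy()
rainy[:, 8] += 0.4                      # one synthetic vertical streak
derained = remove_rain(rainy, toy_streak_predictor)
print(bool(np.allclose(derained, clean)))  # → True
```

In the described method there is one trained residual network per rain-streak type (strength and orientation), selected by the detection sub-network.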
