3D Depth Reconstruction from Focal Stack and Depth Refinement


      https://www.riss.kr/link?id=T15825788



Multilingual Abstract

Three-dimensional (3D) depth recovery from two-dimensional images is a fundamental and challenging objective in computer vision, and is one of the most important prerequisites for many applications such as 3D measurement, robot localization and navigation, and self-driving. Depth-from-focus (DFF) is an important method for reconstructing 3D depth from focus information. Reconstructing depth in texture-less regions is a typical problem for conventional DFF. Furthermore, it is difficult for conventional DFF reconstruction techniques to preserve depth edges and fine details while maintaining spatial consistency. In this dissertation, we address these problems and propose a DFF depth recovery framework that is robust in texture-less regions and can reconstruct a depth image with clear edges and fine details.
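
As context for the conventional pipeline the dissertation builds on, the following is a minimal, illustrative sketch of basic depth-from-focus: a focus measure is evaluated per pixel for every slice of the focal stack, and the index of the best-focused slice is taken as a coarse depth estimate. The particular focus measure (a locally aggregated Laplacian response) and the function names are assumptions for illustration, not the operators used in the dissertation.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(focal_stack):
    """Conventional DFF: pick, per pixel, the slice with the highest focus measure.

    focal_stack : (N, H, W) grayscale focal stack, near-to-far focus order.
    Returns the index map of the best-focused slice (a coarse depth map).
    """
    # Focus measure per slice: locally aggregated absolute Laplacian response.
    responses = np.stack([uniform_filter(np.abs(laplace(s)), size=9)
                          for s in focal_stack])
    # Coarse depth = index of the slice with maximal focus response per pixel.
    return np.argmax(responses, axis=0)

# Usage with a synthetic 10-slice stack.
stack = np.random.rand(10, 64, 64)
coarse_depth = depth_from_focus(stack)
```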

The depth recovery framework proposed in this dissertation is composed of two processes: depth reconstruction and depth refinement. To recover an accurate 3D depth, we first formulate depth reconstruction as a maximum a posteriori (MAP) estimation problem that includes a matting Laplacian prior. The nonlocal principle is adopted when constructing the matting Laplacian matrix to preserve depth edges and fine details. Additionally, a depth-variance-based confidence measure, combined with the reliability measure of the focus measure, is proposed to maintain spatial smoothness, so that smooth regions of the initial depth receive high confidence values and the reconstructed depth draws more heavily on the initial depth there. Because the nonlocal principle breaks spatial consistency, the reconstructed depth image is spatially inconsistent and suffers from texture-copy artifacts. To smooth the noise and suppress the texture-copy artifacts introduced in the reconstructed depth image, we propose a closed-form edge-preserving depth refinement algorithm that formulates depth refinement as a MAP estimation problem using Markov random fields (MRFs). By incorporating pre-estimated depth edges and mutual-structure information into our energy function, together with a specially designed smoothness weight, the proposed refinement method can effectively suppress noise and texture-copy artifacts while preserving depth edges. Additionally, by constructing an undirected weighted graph that represents the energy function, a closed-form solution is obtained using the Laplacian matrix corresponding to the graph.
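
Both stages described above reduce, in spirit, to minimizing a quadratic energy with a confidence-weighted data term and a smoothness prior expressed through a graph Laplacian (a matting Laplacian for reconstruction, an MRF smoothness graph for refinement), which admits a closed-form solution as a sparse linear system. The sketch below illustrates that generic closed-form solve using a simple 4-neighbour grid Laplacian as a stand-in prior; the actual priors, confidence measure, and smoothness weights in the dissertation differ, and all names here are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def map_depth_solve(d0, confidence, L, lam=0.1):
    """Closed-form MAP estimate: argmin_d (d - d0)^T C (d - d0) + lam * d^T L d.

    d0         : (H, W) initial (noisy) depth map.
    confidence : (H, W) per-pixel confidence weights (data-term strength).
    L          : (H*W, H*W) sparse graph Laplacian encoding the prior.
    Setting the gradient to zero gives the linear system (C + lam * L) d = C d0.
    """
    C = sp.diags(confidence.ravel())
    A = C + lam * L
    b = C @ d0.ravel()
    return spsolve(A.tocsr(), b).reshape(d0.shape)

def grid_laplacian(h, w):
    """Combinatorial Laplacian of a 4-neighbour pixel grid (stand-in prior)."""
    idx = np.arange(h * w).reshape(h, w)
    rows, cols = [], []
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.extend([a.ravel(), b.ravel()])
        cols.extend([b.ravel(), a.ravel()])
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    W = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(h * w, h * w))
    return sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W

# Usage: refine a noisy 32x32 depth map with uniform confidence.
d0 = np.random.rand(32, 32)
conf = np.ones_like(d0)
d = map_depth_solve(d0, conf, grid_laplacian(32, 32), lam=0.5)
```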

The proposed framework presents a novel method of 3D depth recovery from a focal stack. The proposed algorithm shows superior depth recovery in texture-less regions owing to the effective variance-based confidence computation and the matting Laplacian prior. Additionally, the proposed reconstruction method can obtain a depth image with clear edges and fine details due to the adoption of the nonlocal principle in the construction of the matting Laplacian matrix. The proposed closed-form depth refinement approach demonstrates its ability to remove noise while preserving object structure through the use of common edges. Additionally, it can effectively suppress texture-copy artifacts by utilizing mutual-structure information. The proposed depth refinement provides a general approach to edge-preserving image smoothing, especially for depth-related refinement such as in stereo vision.

Both quantitative and qualitative experimental results show the superiority of the proposed method in terms of robustness in texture-less regions, accuracy, and the ability to preserve object structure while maintaining spatial smoothness.


Table of Contents

• Chapter 1 Introduction
• 1.1 Overview
• 1.2 Motivation
• 1.3 Contribution
• 1.4 Organization
• Chapter 2 Related Works
• 2.1 Overview
• 2.2 Principle of depth-from-focus
• 2.2.1 Focus measure operators
• 2.3 Depth-from-focus reconstruction
• 2.4 Edge-preserving image denoising
• Chapter 3 Depth-from-Focus Reconstruction using Nonlocal Matting Laplacian Prior
• 3.1 Overview
• 3.2 Image matting and matting Laplacian
• 3.3 Depth-from-focus
• 3.4 Depth reconstruction
• 3.4.1 Problem statement
• 3.4.2 Likelihood model
• 3.4.3 Nonlocal matting Laplacian prior model
• 3.5 Experimental results
• 3.5.1 Overview
• 3.5.2 Data configuration
• 3.5.3 Reconstruction results
• 3.5.4 Comparison between reconstruction using local and nonlocal matting Laplacian
• 3.5.5 Spatial consistency analysis
• 3.5.6 Parameter setting and analysis
• 3.6 Summary
• Chapter 4 Closed-form MRF-based Depth Refinement
• 4.1 Overview
• 4.2 Problem statement
• 4.3 Closed-form solution
• 4.4 Edge preservation
• 4.5 Texture-copy artifacts suppression
• 4.6 Experimental results
• 4.7 Summary
• Chapter 5 Evaluation
• 5.1 Overview
• 5.2 Evaluation metrics
• 5.3 Evaluation on synthetic datasets
• 5.4 Evaluation on real scene datasets
• 5.5 Limitations
• 5.6 Computational performances
• Chapter 6 Conclusion
• Bibliography

