Funding: supported by the National Key Scientific Instrument and Equipment Development Projects of China (41927805), the National Natural Science Foundation of China (61501417, 61976123), the Key Development Program for Basic Research of Shandong Province (ZR2020ZD44), and the Taishan Young Scholars Program of Shandong Province.
Abstract: Photometric stereo aims to reconstruct 3D geometry by recovering the dense surface orientation of a 3D object from multiple images captured under differing illumination. Traditional methods normally adopt simplified reflectance models to make the surface orientation computable. However, the complex reflectance of real surfaces greatly limits the applicability of such methods to real-world objects. While deep neural networks have been employed to handle non-Lambertian surfaces, these methods are subject to blurring and errors, especially in high-frequency regions (such as crinkles and edges), caused by spectral bias: neural networks favor low-frequency representations and thus exhibit a bias towards smooth functions. In this paper, we therefore propose a self-learning conditional network with multi-scale features for photometric stereo, avoiding blurred reconstruction in such regions. Our contributions include: (i) a multi-scale feature fusion architecture, which simultaneously preserves high-resolution representations and deep feature extraction, and (ii) an improved gradient-motivated conditionally parameterized convolution (GM-CondConv) in our photometric stereo network, which applies different combinations of convolution kernels to varying surfaces. Extensive experiments on public benchmark datasets show that our calibrated photometric stereo method outperforms the state of the art.
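The core idea behind a conditionally parameterized convolution, which GM-CondConv builds on, is that the layer holds a bank of K candidate kernels and mixes them with input-dependent routing weights, so different surfaces effectively see different filters. The sketch below is a minimal NumPy illustration of that generic CondConv mechanism, not the paper's GM-CondConv itself; the routing scheme (global average pooling, a linear map, and a sigmoid) and all shapes are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cond_conv2d(x, kernels, routing_w, routing_b):
    """Conditionally parameterized 3x3 convolution (generic CondConv-style sketch).

    x          : (C_in, H, W) input feature map
    kernels    : (K, C_out, C_in, 3, 3) bank of K candidate kernels
    routing_w  : (K, C_in) routing weights; routing_b : (K,) routing biases
    (Routing via pooling + linear + sigmoid is an assumed, illustrative choice.)
    """
    # Routing: global average pooling -> linear -> sigmoid, one weight per kernel.
    pooled = x.mean(axis=(1, 2))                     # (C_in,)
    alpha = sigmoid(routing_w @ pooled + routing_b)  # (K,)

    # Mix the K candidate kernels into one input-dependent kernel.
    w = np.tensordot(alpha, kernels, axes=1)         # (C_out, C_in, 3, 3)

    # Plain valid 3x3 convolution (cross-correlation) with the mixed kernel.
    c_out, c_in, kh, kw = w.shape
    h_out, w_out = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.zeros((c_out, h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[:, i:i + kh, j:j + kw]         # (C_in, 3, 3)
            out[:, i, j] = (w * patch).sum(axis=(1, 2, 3))
    return out
```

Because the mixing happens before the convolution, the cost per pixel is that of a single convolution regardless of K; only the small routing network and the kernel mix scale with the number of candidate kernels.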