Abstract
Objective: In virtual viewpoint synthesis with neural radiance fields, too few input views or inconsistent view colors produce outlier sparse depth values. To solve this problem, we propose supervising neural radiance field viewpoint synthesis with dense depth values from a depth estimation network. Method: First, structure from motion is applied to the input views to obtain sparse depth values. Next, the RGB views are fed into the NeW CRFs (neural window fully-connected CRFs for monocular depth estimation) network to obtain predicted depth values, and the standard deviation between the predicted and sparse depth values is calculated. Finally, the predicted depth values and the computed standard deviation are used to supervise the training of the neural radiance fields. Result: Experiments on the NeRF Real dataset compared the proposed method with other algorithms. In few-view synthesis experiments, our method outperforms both the RGB-supervised NeRF (neural radiance fields) method and the sparse-depth-supervised method in image quality and effect: peak signal-to-noise ratio improves by 24% over the NeRF method and by 19.8% over the sparse-depth-supervised method, and structural similarity improves by 36% over the NeRF method and by 16.6% over the sparse-depth-supervised method. To verify data efficiency, we also compared the peak signal-to-noise ratio reached after the same number of iterations; relative to the NeRF method, data efficiency improves markedly as well. Conclusion: The experimental results show that the proposed method of supervising neural radiance field viewpoint synthesis with dense depth values from a depth estimation network solves the problem of outlier sparse depth values caused by too few views or inconsistent view colors.
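To make the supervision idea concrete, the following is a minimal PyTorch sketch of how the dense depth predicted by NeW CRFs and the standard deviation computed against the sparse SfM depth could drive a depth loss alongside NeRF's color loss. All names (`residual_std`, `depth_loss`, `total_loss`, `sigma`, `depth_weight`) are hypothetical, and the Gaussian-style weighting is our assumption rather than the paper's exact formulation.

```python
import torch

def residual_std(pred_depth, sparse_depth):
    """Standard deviation between the predicted dense depth map and the
    sparse SfM depth map, taken over pixels where SfM produced a value
    (the sparse map is assumed to be zero elsewhere)."""
    mask = sparse_depth > 0
    return (pred_depth[mask] - sparse_depth[mask]).std()

def depth_loss(rendered_depth, pred_depth, sigma):
    """Depth term: penalize NeRF's rendered ray depths for deviating from
    the dense predicted depths, down-weighted by the uncertainty sigma.
    Gaussian-style weighting is an assumption, not the paper's exact form."""
    return ((rendered_depth - pred_depth) ** 2 / (2.0 * sigma ** 2)).mean()

def total_loss(rendered_rgb, gt_rgb, rendered_depth, pred_depth, sigma,
               depth_weight=0.1):
    """Hypothetical total objective: standard NeRF color loss plus the
    uncertainty-weighted depth term; depth_weight is an assumed constant."""
    color_loss = ((rendered_rgb - gt_rgb) ** 2).mean()
    return color_loss + depth_weight * depth_loss(rendered_depth,
                                                  pred_depth, sigma)
```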
Objective: Viewpoint synthesis techniques are widely applied in computer graphics and computer vision. Depending on whether they rely on geometric information, virtual viewpoint synthesis methods can be classified into two categories: image-based rendering and model-based rendering. 1) Image-based rendering typically uses input from camera arrays or light-field cameras to achieve high-quality rendering without reconstructing the geometry of the scene. Among image-based methods, depth-map-based rendering is currently a popular research topic in virtual viewpoint rendering. However, this technology is prone to depth errors, which lead to holes and artifacts in the generated virtual viewpoint image, and obtaining precise depth information for real-world scenes is difficult in practice. 2) Model-based rendering builds a 3D geometric model of the real-world scene and synthesizes virtual viewpoint images through projection transformation, clipping, hidden-surface removal, and texture mapping; however, the difficulty of quickly modeling real-world scenes is a significant disadvantage of this approach. With the emergence of neural rendering, the neural radiance fields technique represents the 3D scene with a neural network and combines it with volume rendering for viewpoint synthesis, producing photo-realistic results. However, this approach relies heavily on view appearance and requires a substantial number of input views for modeling; as a result, it may explain the training images perfectly yet generalize poorly to novel test views. Depth information can be introduced as supervision to reduce the dependence of neural radiance fields on view appearance, but structure from motion produces sparse depth values that are inaccurate and contain outliers when the number of input views is limited. Therefore, this study proposes a virtual viewpoint synthesis algorithm that supervises the neural radiance fields with dense depth values obtained from a depth estimation network and introduces an embedding vector into the fitting function of the neural radiance fields to improve virtual viewpoint image quality.

Method: First, the intrinsic and extrinsic camera matrices were calibrated for the input views. The 3D point cloud in the world coordinate system was converted into the camera coordinate system by using the extrinsic matrix, and the converted points were then projected onto the image plane by using the intrinsic matrix to obtain the sparse depth values. Next, the RGB views were fed into the NeW CRFs (neural window fully-connected CRFs) network to obtain estimated depth values, and the standard deviation between the estimated and sparse depth values was calculated. The NeW CRFs network uses an FC-CRFs module built on a multi-head attention mechanism as the decoder and a vision Transformer as the encoder, forming a U-shaped encoder-decoder structure for depth estimation. Finally, the training of the neural radiance fields was supervised with the estimated depth values and the computed standard deviations.
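The sparse-depth step described above (world points into the camera frame via the extrinsic matrix, then onto the image plane via the intrinsic matrix) can be sketched as follows. This is a generic pinhole-camera projection under OpenCV-style conventions, not code from the paper; `sparse_depth_from_sfm` and its arguments are hypothetical names.

```python
import numpy as np

def sparse_depth_from_sfm(points_world, R, t, K, height, width):
    """Project SfM 3D points into one view to obtain its sparse depth map.

    points_world: (N, 3) world-frame 3D points from structure from motion.
    R, t:         extrinsics mapping world to camera coordinates.
    K:            (3, 3) intrinsic matrix.
    Returns a (height, width) map that is zero wherever no point projects.
    """
    # World frame -> camera frame: x_cam = R @ x_world + t.
    points_cam = points_world @ R.T + t
    z = points_cam[:, 2]
    front = z > 0                          # keep points in front of the camera
    points_cam, z = points_cam[front], z[front]

    # Camera frame -> pixel coordinates via the intrinsic matrix.
    uvw = points_cam @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.zeros((height, width), dtype=np.float32)
    depth[v[inside], u[inside]] = z[inside]  # sparse: most pixels remain zero
    return depth
```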
The training process began by emitting camera rays on the input views to determine the sampling locations and the sampling-point parameterization scheme. The re-parameterized sample-point locations were then fed into the network for fitting, and the network output volume density and color values, from which the rendered color and rendered depth values were computed with the volume rendering technique. Training was supervised by the color loss between the rendered and true color values and by the depth loss between the predicted and rendered depth values.

Result: Experiments were conducted on the NeRF Real dataset, which comprises eight real-world scenes captured by forward-facing cameras. The proposed method was compared with other algorithms, including the neural radiance field (NeRF) method that uses only RGB supervision and the method that employs sparse depth supervision, in terms of peak signal-to-noise ratio, structural similarity index, and learned perceptual image patch similarity. In experiments with a limited number of views, the proposed method surpassed both the RGB-only NeRF method and the sparse-depth-supervised method in image quality and effectiveness. Specifically, it achieved a 24% improvement in peak signal-to-noise ratio over the NeRF method and a 19.8% improvement over the sparse-depth-supervised method, as well as a 36% improvement in structural similarity index over the NeRF method and a 16.6% improvement over the sparse-depth-supervised method. The data efficiency of the algorithm was evaluated by comparing the peak signal-to-noise ratio reached after the same number of iterations, where the proposed method again demonstrated a significant improvement over the NeRF method.

Conclusion: In this study, we proposed a method for synthesizing virtual viewpoint images with neural radiance fields supervised by dense depth values. The method uses the dense depth values output by the depth estimation network to supervise the training of the neural radiance fields and introduces an embedding vector into the fitting function during training. The experiments demonstrate that our approach effectively addresses the problem of outlier sparse depth values caused by insufficient views or inconsistent view colors and achieves high-quality synthesized images, particularly when the number of input views is limited.
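As a reference for the volume-rendering step described in the Method (per-sample densities and colors along each ray composited into a rendered color and a rendered depth), here is a minimal sketch of the standard NeRF compositing equations; `volume_render` and its tensor shapes are our illustrative choices, not the paper's code.

```python
import torch

def volume_render(sigmas, rgbs, t_vals):
    """Composite per-sample densities and colors along each ray into a
    rendered pixel color and an expected (rendered) depth, as in NeRF.

    sigmas: (R, S)    volume density at S samples along each of R rays.
    rgbs:   (R, S, 3) color at each sample.
    t_vals: (R, S)    distance of each sample along its ray.
    """
    # Interval lengths between adjacent samples; pad the last interval.
    deltas = t_vals[:, 1:] - t_vals[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)

    # Per-interval opacity and accumulated transmittance along the ray.
    alphas = 1.0 - torch.exp(-torch.relu(sigmas) * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10],
                  dim=-1), dim=-1)[:, :-1]
    weights = alphas * trans                                  # (R, S)

    rendered_rgb = (weights.unsqueeze(-1) * rgbs).sum(dim=1)  # (R, 3)
    rendered_depth = (weights * t_vals).sum(dim=1)            # (R,)
    return rendered_rgb, rendered_depth
```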
Authors
Liu Xiaonan (刘晓楠)
Chen Chunyi (陈纯毅)
Hu Xiaojuan (胡小娟)
Yu Haiyang (于海洋)
Liu Xiaonan; Chen Chunyi; Hu Xiaojuan; Yu Haiyang (School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China)
Source
《中国图象图形学报》
CSCD
Peking University Core Journals (北大核心)
2024, No. 7, pp. 2035-2045 (11 pages)
Journal of Image and Graphics
Funding
National Natural Science Foundation of China (U19A2063)
Science and Technology Development Program of Jilin Province (20230201080GX).
Keywords
viewpoint synthesis
neural radiance field (NeRF)
depth supervision
depth estimation
volume rendering