Journal Articles
18 articles found
1. Stepwise approach for view synthesis (cited by 1)
Authors: CHAI Deng-feng, PENG Qun-sheng. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2007, No. 8, pp. 1218-1226.
This paper presents techniques for synthesizing a novel view for a virtual viewpoint from two given views captured at different viewpoints, achieving both high quality and high efficiency. The process consists of three passes. The first pass recovers the depth map. We formulate this as pixel labelling and propose a bisection approach to solve it: the labelling is accomplished in log2 n steps (n is the number of depth levels), each of which involves a single graph cut computation. The second pass detects occluded pixels and reasons about their depth. It fits a foreground depth curve and a background depth curve using the depth of nearby foreground and background pixels, and then distinguishes foreground from background pixels by minimizing a global energy, which involves only one graph cut computation. The third pass finds, for each pixel in the novel view, the corresponding pixels in the input views and computes its color. The whole process involves only a small number of graph cut computations and is therefore efficient. Moreover, visual artifacts in the synthesized view are removed successfully by correcting the depth of the occluded pixels. Experimental results demonstrate that the proposed techniques achieve both high quality and high efficiency.
Keywords: view synthesis; occlusion; graph cut
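The bisection scheme in this abstract — one binary decision per step, so n depth levels need only log2 n labelings — can be sketched as follows. The paper solves each binary step with a graph cut (adding a smoothness term); this sketch instead decides each pixel independently by comparing the best matching cost in the lower and upper halves of the remaining interval, and the cost volume is a hypothetical stand-in.

```python
import numpy as np

def bisection_depth(cost):
    """Assign each pixel one of n depth levels in ceil(log2 n) bisection steps.

    cost[y, x, d] is a photo-consistency cost volume (hypothetical input).
    Each step halves the candidate interval per pixel; the paper makes this
    binary decision with a graph cut, while this sketch decides per pixel.
    """
    H, W, n = cost.shape
    lo = np.zeros((H, W), dtype=int)    # interval lower bound (inclusive)
    hi = np.full((H, W), n, dtype=int)  # interval upper bound (exclusive)
    while np.any(hi - lo > 1):
        mid = (lo + hi) // 2
        for y in range(H):
            for x in range(W):
                if hi[y, x] - lo[y, x] <= 1:
                    continue  # this pixel's depth is already decided
                m = mid[y, x]
                # keep the half whose best cost is smaller
                if cost[y, x, m:hi[y, x]].min() < cost[y, x, lo[y, x]:m].min():
                    lo[y, x] = m
                else:
                    hi[y, x] = m
    return lo  # per-pixel depth level
```

Because the kept half always contains the interval's minimum cost, the result matches a per-pixel winner-take-all over all n levels.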
2. Intermediate view synthesis from stereoscopic images (cited by 1)
Authors: Lü Chaohui, An Ping, Zhang Zhaoyang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2005, No. 2, pp. 279-283.
A new method is proposed for synthesizing intermediate views from a pair of stereoscopic images. To synthesize high-quality intermediate views, block matching is combined with a simplified multi-window technique and dynamic programming during disparity estimation. Occlusion detection is then performed to locate occluded regions, and their disparities are compensated. After projecting the left-to-right and right-to-left disparities onto the intermediate image, the intermediate view is synthesized with the occluded regions taken into account. Experimental results show that the method obtains intermediate views of high quality.
Keywords: intermediate view synthesis; disparity estimation; dynamic programming; occlusion detection
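The block-matching core of the disparity estimation step can be sketched as a plain winner-take-all SAD search; the multi-window technique and dynamic programming refinement from the paper are omitted here, so this is only a toy baseline.

```python
import numpy as np

def block_match(left, right, max_disp, block=3):
    """Winner-take-all SAD block matching (a toy baseline; the paper adds a
    simplified multi-window technique and dynamic programming on top)."""
    H, W = left.shape
    r = block // 2
    disp = np.zeros((H, W), dtype=int)
    for y in range(r, H - r):
        for x in range(r, W - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            best_sad, best_d = np.inf, 0
            # left pixel x corresponds to right pixel x - d
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left shifted by 3 pixels, the interior of the recovered disparity map is uniformly 3.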
3. Delaunay triangulation and image dense matching in view synthesis
Authors: Shen Peiyi, Wang Wei, Wu Chengke. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 1999, No. 3, pp. 45-49.
A new view synthesis method based on Delaunay triangulation is proposed. First, the Delaunay triangulation of the two reference images is computed. Second, image points are matched using the epipolar geometry constraint. Finally, the third view is constructed by transferring pixels under the trilinear constraint. The method avoids the classic, time-consuming dense matching technique and takes advantage of Delaunay triangulation, so it not only saves computation time but also enhances the quality of the synthesized view. The method can be used directly in video coding, image compression, and virtual reality.
Keywords: view synthesis; Delaunay triangulation; image matching; pixel transferring
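The first step of the method, triangulating sparse points in a reference image, can be sketched with SciPy; the points below are synthetic stand-ins for real matched features.

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate sparse feature points in a reference image (step 1 above).
# These coordinates are synthetic stand-ins for matched feature points.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
tri = Delaunay(points)
# tri.simplices holds one row of three vertex indices per triangle; warping
# whole triangles, rather than matching every pixel densely, is what saves
# computation in the synthesis step.
print(len(tri.simplices))
```

For the square-plus-center point set above, the triangulation consists of four triangles fanning out from the center point.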
4. View synthesis based on the serial images from camera lengthways motion
Authors: Zhang Jing, Wang Changshun, Liao Wuling, Ou Zongying, Hua Shungang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2006, No. 2, pp. 284-289.
For pre-acquired serial images from lengthways camera motion, a view synthesis algorithm based on the epipolar geometry constraint is proposed. It exploits the global matching and order-preserving properties of the epipolar line, together with Fourier transform and dynamic programming matching theory, to truly synthesize the destination image at the current viewpoint. By combining the Fourier transform, the epipolar geometry constraint, and dynamic programming matching, the circumference distortion problem of conventional view synthesis approaches is effectively avoided. The detailed implementation steps of the algorithm are given, and running instances are presented to illustrate the results.
Keywords: image-based rendering; view synthesis; epipolar geometry constraint; Fourier transform; dynamic programming matching
5. 3DV quality model based depth maps for view synthesis in FTV system
Authors: Zhang Qiuwen, An Ping, Zhang Yan, Zhang Zhaoyang, Wang Yuanqing. Journal of Shanghai University (English Edition) (CAS), 2011, No. 4, pp. 335-341.
Depth maps are used to synthesize virtual views in free-viewpoint television (FTV) systems. When depth maps are derived using existing depth estimation methods, depth distortions cause undesirable artifacts in the synthesized views. To solve this problem, a depth-map-based 3D video quality model (D-3DV) for virtual view synthesis and depth map coding in FTV applications is proposed. First, the relationships between distortions in the coded depth map and the rendered view are derived. Then, a precise 3DV quality model based on depth characteristics is developed for the synthesized virtual views. Finally, based on the D-3DV model, multilateral filtering is applied as a pre-processing filter to reduce rendering artifacts. Experimental results evaluated by objective and subjective methods indicate that the proposed D-3DV model reduces the bit-rate of depth coding and achieves better rendering quality.
Keywords: free-viewpoint television (FTV); depth-map-based 3D video quality model (D-3DV); view synthesis
6. Adaptive luminance adjustment and neighborhood spreading strength information based view synthesis
Authors: Zhizhong Fu, Xue Wang, Yuan Li, Xiaohui Yang, Jin Xu. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2016, No. 3, pp. 721-729.
View synthesis is an important building block in three-dimensional (3D) video processing and communications. Based on one or several views, view synthesis creates other views for the purpose of view prediction (for compression) or view rendering (for multiview display). The quality of view synthesis depends on how the occlusion area is filled as well as how the pixels are created; consequently, luminance adjustment and hole filling are two key issues. In this paper, two views are used to produce an arbitrary virtual synthesized view. One view is merged into the other using a local luminance adjustment method, with the adjustment coefficient computed over a local neighborhood region. Moreover, a maximum neighborhood spreading strength hole filling method is presented to handle micro texture structure during filling. For each pixel on the hole boundary, its neighborhood pixels along the maximum spreading strength direction are selected as candidates, and among them the pixel with the maximum spreading strength is used to fill the hole from boundary to center. If disoccluded pixels remain after one scan, the filling process is repeated until all hole pixels are filled. Simulation results show that the proposed method is efficient and robust and achieves high performance in both subjective and objective evaluations.
Keywords: view synthesis; three-dimensional (3D); multiview; local luminance correction; hole filling; maximum spreading strength
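The boundary-to-center filling loop with repeated scans can be sketched as below. The paper picks the single neighbor along the maximum spreading strength direction; this sketch averages the known 4-neighbors instead, as a simplified stand-in.

```python
import numpy as np

def fill_holes_inward(img, hole):
    """Fill hole pixels from boundary to center by repeated scans.

    Each pass assigns every hole pixel that touches a known pixel the mean
    of its known 4-neighbors (a stand-in for the paper's maximum spreading
    strength selection), then the pass repeats until no holes remain.
    """
    img = img.astype(float).copy()
    hole = hole.copy()
    H, W = hole.shape
    while hole.any():
        newly = []
        for y in range(H):
            for x in range(W):
                if not hole[y, x]:
                    continue
                vals = [img[ny, nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < H and 0 <= nx < W and not hole[ny, nx]]
                if vals:  # on the current hole boundary
                    newly.append((y, x, sum(vals) / len(vals)))
        if not newly:
            break  # no known pixels reachable; avoid an infinite loop
        for y, x, v in newly:  # commit after the scan: boundary -> center
            img[y, x] = v
            hole[y, x] = False
    return img
```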
7. SG-NeRF: Sparse-Input Generalized Neural Radiance Fields for Novel View Synthesis
Authors: Kuo Xu, Jie Li, Zhen-Qiang Li, Yang-Jie Cao. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, No. 4, pp. 785-797.
Traditional neural radiance fields for rendering novel views require dense input images and per-scene optimization, which limits their practical applications. We propose a generalization method, SG-NeRF (Sparse-Input Generalized Neural Radiance Fields), that infers scenes from input images and performs high-quality rendering without per-scene optimization. First, we construct an improved multi-view stereo structure based on convolutional attention and a multi-level fusion mechanism to obtain the geometric and appearance features of the scene from the sparse input images; these features are then aggregated by multi-head attention as the input to the neural radiance fields. This strategy of using neural radiance fields to decode scene features, instead of mapping positions and orientations, enables our method to perform cross-scene training and inference, allowing neural radiance fields to generalize to novel view synthesis on unseen scenes. We tested the generalization ability on the DTU dataset, and our PSNR (peak signal-to-noise ratio) improved by 3.14 compared with the baseline method under the same input conditions. In addition, if dense input views are available for a scene, the average PSNR can be improved by a further 1.04 through brief refinement training, yielding higher-quality rendering.
Keywords: Neural Radiance Fields (NeRF); multi-view stereo (MVS); novel view synthesis (NVS)
8. STATE: Learning structure and texture representations for novel view synthesis
Authors: Xinyi Jing, Qiao Feng, Yu-Kun Lai, Jinsong Zhang, Yuanqiang Yu, Kun Li. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 4, pp. 767-786.
Novel viewpoint image synthesis is very challenging, especially from sparse views, due to large changes in viewpoint and occlusion. Existing image-based methods fail to generate reasonable results for invisible regions, while geometry-based methods have difficulty synthesizing detailed textures. In this paper, we propose STATE, an end-to-end deep neural network for sparse view synthesis that learns structure and texture representations. Structure is encoded as a hybrid feature field to predict reasonable structures for invisible regions while maintaining original structures for visible regions, and texture is encoded as a deformed feature map to preserve detailed textures. We propose a hierarchical fusion scheme with intra-branch and inter-branch aggregation, in which spatio-view attention allows multi-view fusion at the feature level, adaptively selecting important information by regressing pixel-wise or voxel-wise confidence maps. By decoding the aggregated features, STATE generates realistic images with reasonable structures and detailed textures. Experimental results demonstrate that our method achieves qualitatively and quantitatively better results than state-of-the-art methods. Our method also enables texture and structure editing applications, benefiting from the implicit disentanglement of structure and texture. Our code is available at http://cic.tju.edu.cn/faculty/likun/projects/STATE.
Keywords: novel view synthesis; sparse views; spatio-view attention; structure representation; texture representation
9. ReLoc: Indoor Visual Localization with Hierarchical Sitemap and View Synthesis (cited by 1)
Authors: Hui-Xuan Wang, Jing-Liang Peng, Shi-Yi Lu, Xin Cao, Xue-Ying Qin, Chang-He Tu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, No. 3, pp. 494-507.
Indoor visual localization, i.e., 6-degree-of-freedom camera pose estimation for a query image with respect to a known scene, is gaining increased attention, driven by the rapid progress of applications such as robotics and augmented reality. However, drastic visual discrepancies between an onsite query image and prerecorded indoor images pose a significant challenge for visual localization. In this paper, based on the key observation that planar surfaces such as floors and walls are ubiquitous in indoor scenes, we propose a novel system incorporating geometric information to address the issues of using only pixelated images. Through the system implementation, we contribute a hierarchical structure consisting of pre-scanned images and point clouds, as well as a distilled representation of the planar-element layout extracted from the original dataset. A view synthesis procedure is designed to generate synthetic images that complement the sparsely sampled dataset. Moreover, a global image descriptor based on image statistics, called block mean, variance, and color (BMVC), is employed alongside a traditional convolutional neural network (CNN) descriptor to speed up candidate pose identification. Experimental results on a popular benchmark demonstrate that the proposed method outperforms state-of-the-art approaches in terms of visual localization validity and accuracy.
Keywords: visual localization; planar surface; statistic information; view synthesis
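A block-statistics global descriptor in the spirit of BMVC can be sketched by concatenating per-block channel means and variances; the exact descriptor in the paper may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def bmvc(img, grid=4):
    """Toy global descriptor in the spirit of BMVC (block mean/variance/color):
    split the image into a grid x grid layout and concatenate each block's
    per-channel mean and variance. The paper's exact formulation may differ."""
    H, W, C = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = img[i * H // grid:(i + 1) * H // grid,
                        j * W // grid:(j + 1) * W // grid]
            feats.extend(block.mean(axis=(0, 1)))  # per-channel block means
            feats.extend(block.var(axis=(0, 1)))   # per-channel block variances
    return np.array(feats)
```

Comparing such compact descriptors is far cheaper than dense feature matching, which is why they help speed up candidate pose identification.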
10. Joint view synthesis and disparity refinement for stereo matching
Authors: Gaochang Wu, Yipeng Li, Yuanhao Huang, Yebin Liu. Frontiers of Computer Science (SCIE, EI, CSCD), 2019, No. 6, pp. 1337-1352.
Typical stereo algorithms treat disparity estimation and view synthesis as two sequential procedures. In this paper, we consider stereo matching and view synthesis as two complementary components and present a novel iterative refinement model for joint view synthesis and disparity refinement. To achieve mutual promotion between view synthesis and disparity refinement, we apply two key strategies: disparity map fusion and disparity-assisted plane sweep-based rendering (DAPSR). On the one hand, the disparity map fusion strategy generates a disparity map from the synthesized view and the input views, and is able to detect and counteract disparity errors caused by potential artifacts in the synthesized view. On the other hand, DAPSR is used for view synthesis and updating, and is able to weaken the interpolation errors caused by outliers in the disparity maps. Experiments on Middlebury benchmarks demonstrate that, by introducing the synthesized view, disparity errors due to large occluded regions and large baselines are eliminated effectively and synthesis quality is greatly improved.
Keywords: stereo matching; view synthesis; disparity refinement
11. View interpolation networks for reproducing the material appearance of specular objects
Authors: Chihiro Hoshizawa, Takashi Komuro. Virtual Reality & Intelligent Hardware, 2023, No. 1, pp. 1-10.
Background: In this study, we propose view interpolation networks that reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important for reproducing the material appearance of a real object. Method: We used an original and a modified version of U-Net for image transformation. The networks were trained to generate images from the intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training data formats. Result: We found that inputting the coordinates of the viewpoints together with the four camera images, and using images from random viewpoints as the training data, produces the best results.
Keywords: view synthesis; image transformation network; reflectance reproduction; material appearance; U-Net
12. Real-time distance field acceleration based free-viewpoint video synthesis for large sports fields
Authors: Yanran Dai, Jing Li, Yuqi Jiang, Haidong Qin, Bang Liang, Shikuan Hong, Haozhe Pan, Tao Yang. Computational Visual Media (SCIE, EI, CSCD), 2024, No. 2, pp. 331-353.
Free-viewpoint video allows the user to view objects from any virtual perspective, creating an immersive visual experience. This technology enhances the interactivity and freedom of multimedia performances. However, many free-viewpoint video synthesis methods hardly satisfy the requirement of working in real time with high precision, particularly for sports fields with large areas and numerous moving objects. To address these issues, we propose a free-viewpoint video synthesis method based on distance field acceleration. The central idea is to fuse multi-view distance field information and use it to adaptively adjust the search step size. Adaptive step-size search is used in two ways: for fast estimation of multi-object three-dimensional surfaces, and for synthetic view rendering based on global occlusion judgment. We have implemented our ideas using parallel computing for interactive display, using the CUDA and OpenGL frameworks, and have used real-world and simulated experimental datasets for evaluation. The results show that the proposed method can render free-viewpoint videos with multiple objects on large sports fields at 25 fps. Furthermore, the visual quality of our synthetic novel viewpoint images exceeds that of state-of-the-art neural-rendering-based methods.
Keywords: free-viewpoint video; view synthesis; camera array; distance field; sports video
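The core acceleration idea, using the distance field value itself as a safe adaptive step along a ray, is the classic sphere-tracing loop. Below is a minimal sketch with a hypothetical SDF callable; the paper's multi-view field fusion and GPU parallelization are omitted.

```python
def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-3):
    """March a ray against a signed distance field with adaptive steps.

    The field value at the current point is a guaranteed free-space radius,
    so it is always a safe step size: large steps in empty space, small
    steps near surfaces. 'sdf' is any callable point -> distance (a
    hypothetical stand-in for the paper's fused multi-view field).
    """
    t = 0.0
    for _ in range(max_steps):
        p = [o + t * d for o, d in zip(origin, direction)]
        dist = sdf(p)
        if dist < eps:
            return t      # surface hit at ray parameter t
        t += dist         # adaptive step: exactly as far as is known free
    return None           # no hit within max_steps
```

For example, a unit sphere centered at (0, 0, 5) is hit by a +z ray from the origin at t = 4, reached here in a single adaptive step.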
13. A Survey on Multiview Video Synthesis and Editing (cited by 1)
Authors: Shaoping Lu, Taijiang Mu, Songhai Zhang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2016, No. 6, pp. 678-695.
Multiview video can provide more immersive perception than traditional single 2D video. It enables both interactive free-navigation applications and high-end autostereoscopic displays on which multiple users can perceive genuine 3D content without glasses. The multiview format also comprises much more visual information than classical 2D or stereo 3D content, which makes it possible to perform various interesting editing operations at both the pixel level and the object level. This survey provides a comprehensive review of existing multiview video synthesis and editing algorithms and applications. For each topic, the related technologies in classical 2D image and video processing are reviewed. We then discuss recent advanced techniques for multiview video virtual view synthesis and various interactive editing applications. Given the ongoing progress in multiview video synthesis and editing, we foresee that more and more immersive 3D video applications will appear in the future.
Keywords: multiview video; view synthesis; video editing; color correction; survey
14. Robust Local Light Field Synthesis via Occlusion-aware Sampling and Deep Visual Feature Fusion
Authors: Wenpeng Xing, Jie Chen, Yike Guo. Machine Intelligence Research (EI, CSCD), 2023, No. 3, pp. 408-420.
Novel view synthesis has recently attracted tremendous research attention for its applications in virtual reality and immersive telepresence. Rendering a locally immersive light field (LF) from arbitrary large-baseline RGB references is a challenging problem that lacks efficient solutions among existing novel view synthesis techniques. In this work, we aim at truthfully rendering local immersive novel views/LF images based on large-baseline LF captures and a single RGB image in the target view. To fully exploit the precious information in the source LF captures, we propose a novel occlusion-aware source sampler (OSS) module which efficiently transfers the pixels of source views into the target view's frustum in an occlusion-aware manner. An attention-based deep visual fusion module is proposed to fuse the revealed occluded background content with a preliminary LF into a final refined LF. The proposed source sampling and fusion mechanism not only provides information for occluded regions from varying observation angles, but also effectively enhances visual rendering quality. Experimental results show that our proposed method renders high-quality LF images/novel views with sparse RGB references and outperforms state-of-the-art LF rendering and novel view synthesis methods.
Keywords: novel view synthesis; light field (LF) imaging; multi-view stereo; occlusion sampling; deep visual feature (DVF) fusion
15. Recent advances in 3D Gaussian splatting
Authors: Tong Wu, Yu-Jie Yuan, Ling-Xiao Zhang, Jie Yang, Yan-Pei Cao, Ling-Qi Yan, Lin Gao. Computational Visual Media (SCIE, EI, CSCD), 2024, No. 4, pp. 613-642.
The emergence of 3D Gaussian splatting (3DGS) has greatly accelerated rendering in novel view synthesis. Unlike neural implicit representations such as neural radiance fields (NeRFs), which represent a 3D scene with position- and viewpoint-conditioned neural networks, 3D Gaussian splatting models the scene with a set of Gaussian ellipsoids, so that efficient rendering can be accomplished by rasterizing the ellipsoids into images. Apart from fast rendering, the explicit representation of 3D Gaussian splatting also facilitates downstream tasks such as dynamic reconstruction, geometry editing, and physical simulation. Considering the rapid changes and growing number of works in this field, we present a literature review of recent 3D Gaussian splatting methods, which can be roughly classified by functionality into 3D reconstruction, 3D editing, and other downstream applications. Traditional point-based rendering methods and the rendering formulation of 3D Gaussian splatting are also covered to aid understanding of this technique. This survey aims to help beginners get started quickly in this field and to provide experienced researchers with a comprehensive overview, with the goal of stimulating future development of the 3D Gaussian splatting representation.
Keywords: 3D Gaussian splatting (3DGS); radiance field; novel view synthesis; 3D editing; scene generation
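The rendering formulation mentioned here reduces, per pixel, to front-to-back alpha compositing of depth-sorted splats: C = sum_i c_i a_i prod_{j<i} (1 - a_j). A one-pixel sketch (Gaussian evaluation, projection, and sorting omitted):

```python
def composite(colors, alphas):
    """Front-to-back alpha compositing for one pixel over depth-sorted
    splats: C = sum_i c_i * a_i * prod_{j<i} (1 - a_j).

    colors/alphas are per-splat values already sorted near-to-far; in 3DGS
    each alpha comes from evaluating a projected 2D Gaussian at the pixel.
    """
    color, transmittance = 0.0, 1.0
    for c, a in zip(colors, alphas):
        color += c * a * transmittance
        transmittance *= 1.0 - a  # light remaining after this splat
    return color
```

Once a splat with alpha 1 is reached, transmittance drops to zero and everything behind it is occluded, which is how the formula encodes visibility.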
16. Homography-guided stereo matching for wide-baseline image interpolation (cited by 1)
Authors: Yuan Chang, Congyi Zhang, Yisong Chen, Guoping Wang. Computational Visual Media (SCIE, EI, CSCD), 2022, No. 1, pp. 119-133.
Image interpolation has a wide range of applications, such as frame-rate up-conversion and free-viewpoint TV. Despite significant progress, it remains an open challenge, especially for image pairs with large displacements. In this paper, we first propose a novel optimization algorithm for motion estimation which combines the advantages of global optimization and a local parametric transformation model. We perform optimization over dynamic label sets, which are modified after each iteration using a prior of piecewise consistency to avoid local minima. We then apply it to an image interpolation framework that includes occlusion handling and intermediate image interpolation. We validate the performance of our algorithm experimentally and show that our approach achieves state-of-the-art performance.
Keywords: image interpolation; view synthesis; homography propagation; belief propagation
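The local parametric transformation referred to in the title is a planar homography. Applying one to image points is a small exercise in homogeneous coordinates:

```python
import numpy as np

def apply_homography(H, pts):
    """Map N 2D points (N x 2) through a 3x3 homography using homogeneous
    coordinates, the local parametric model referred to above."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale
```

The identity matrix leaves points unchanged, and a homography whose last column holds (tx, ty, 1) is a pure translation.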
17. A Depth Video Coding In-Loop Median Filter Based on Joint Weighted Sparse Representation
Authors: Lü Haitao, Yin Cao, Cui Zongmin, Hu Jinhui. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2016, No. 4, pp. 351-357.
Existing depth video coding algorithms are generally based on in-loop depth filters whose performance is unstable and easily affected by outliers. In this paper, we design a joint weighted sparse representation-based median filter as the in-loop filter in a depth video codec. It constructs a depth candidate set containing relevant neighboring depth pixels based on depth- and intensity-similarity weighted sparse coding; the median operation is then performed on this set to select a neighboring depth pixel as the filtering result. The experimental results indicate that the depth bitrate is reduced by about 9% compared with the anchor method, confirming that the proposed method is more effective in reducing the depth bitrate required for a given synthesis quality level.
Keywords: depth video coding; virtual view synthesis; joint weighted sparse representation
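The candidate-set-then-median structure of the filter can be sketched as below. The paper derives the candidate weights from joint sparse coding of depth and intensity; here any precomputed weight map stands in for them, which is an assumption for illustration only.

```python
import numpy as np

def candidate_set_median(depth, y, x, weight, k=5):
    """Median over a weighted candidate set, sketching the in-loop filter
    idea: the 3x3 neighbors with the k largest similarity weights form the
    candidate set, and the median of their depths is the filtered value.
    'weight' is any (H, W) similarity map (stand-in for sparse-coding
    weights in the paper)."""
    ys, xs = np.mgrid[max(0, y - 1):min(depth.shape[0], y + 2),
                      max(0, x - 1):min(depth.shape[1], x + 2)]
    cand_d = depth[ys, xs].ravel()
    cand_w = weight[ys, xs].ravel()
    top = np.argsort(cand_w)[-k:]  # indices of the k most similar neighbors
    return float(np.median(cand_d[top]))
```

Taking a median over a similarity-selected set, rather than the raw neighborhood, is what makes such filters robust to outlier depth values.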
18. TSR: algorithm of image hole-filling based on three-step repairing (cited by 1)
Authors: Li Fucheng, Deng Junyong, Zhu Yun, Luo Jiaying, Ren Han. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2022, No. 5, pp. 83-91.
To solve the hole-filling mismatch problem in virtual view synthesis, a three-step repairing (TSR) algorithm is proposed. First, the image with marked holes is decomposed by the non-subsampled shearlet transform (NSST), which generates high-/low-frequency sub-images at different resolutions. Then the improved Criminisi algorithm is used to repair the texture information in the high-frequency sub-images, while the improved curvature-driven diffusion (CDD) algorithm is used to repair the low-frequency sub-images containing the image structure information. Finally, the repaired high-frequency and low-frequency sub-images are combined into the final image through the inverse NSST. Experiments show that the peak signal-to-noise ratio (PSNR) of the TSR algorithm is improved by an average of 2-3 dB and 1-2 dB compared with the Criminisi algorithm and the nearest neighbor interpolation (NNI) algorithm, respectively.
Keywords: virtual viewpoint synthesis; hole-filling; three-step repairing (TSR); Criminisi algorithm; curvature-driven diffusion (CDD) algorithm
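Several entries above report quality as PSNR gains in dB. The metric itself is a one-liner worth pinning down:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE),
    the quality metric quoted in the abstracts above."""
    diff = np.asarray(reference, float) - np.asarray(test, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 10 gray levels against an 8-bit peak of 255 gives 10 * log10(650.25), roughly 28.1 dB.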