Abstract
Mixed reality systems can provide virtual-real fusion scenes in which virtual information is superimposed on the real environment in real time, and they have broad application prospects in education and training, cultural heritage protection, military simulation, equipment manufacturing, surgical medicine, and exhibitions. A mixed reality system first builds a virtual camera model from calibration data, then renders virtual content in real time according to the head tracking results and the virtual camera position, and superimposes the content on the real environment. Users perceive depth information through the graphical cues rendered in the virtual-real fusion scene and the rendering characteristics of the virtual objects. However, the visual laws and perception theories available to guide the rendering of virtual-real fusion scenes are scarce, the absolute depth information that graphical cues can provide is missing, and the rendering dimensions and characteristic indicators of virtual objects are insufficient. This paper analyzes the visual laws relevant to the rendering of virtual-real fusion scenes, surveys the rendering of graphical cues and virtual objects in such scenes from the perspective of user perception, and discusses the research trends and focuses of depth perception in virtual-real fusion scenes.
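As a rough illustration of the calibration-and-tracking pipeline described above, the following minimal Python sketch (not taken from the paper; the function names `intrinsic_matrix` and `project` and all parameter values are assumptions made for illustration) builds a pinhole virtual camera from calibration data and projects a virtual anchor point under a tracked head pose to find where the overlay should be drawn.

```python
# Minimal sketch of a head-tracked virtual camera overlay; values are illustrative only.
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Virtual camera intrinsics assembled from (hypothetical) calibration data."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(point_world, K, R, t):
    """Project a virtual point (world frame) into the view given the tracked head pose."""
    p_cam = R @ point_world + t          # world -> camera (head) frame
    if p_cam[2] <= 0:                    # behind the viewer, nothing to draw
        return None
    uv = K @ (p_cam / p_cam[2])          # perspective division, then intrinsics
    return uv[:2]                        # pixel position where the overlay is drawn

# Per-frame update driven by the head tracker (identity pose used here as a stand-in).
K = intrinsic_matrix(fx=1400.0, fy=1400.0, cx=960.0, cy=540.0)
R, t = np.eye(3), np.zeros(3)               # head pose reported by the tracker
virtual_anchor = np.array([0.2, 0.0, 1.5])  # a virtual object 1.5 m in front of the user
print(project(virtual_anchor, K, R, t))     # screen location for the virtual content
```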
Mixed reality systems can provide a virtual-real fusion environment in which virtual objects are added to the real world in real time. Mixed reality systems have been widely used in education, training, heritage preservation, military simulation, equipment manufacturing, surgery, and exhibition. A mixed reality system uses calibration data to build a virtual camera model and then draws virtual content in real time based on the head tracking data and the position of the virtual camera. Finally, the virtual content is superimposed on the real environment. The user perceives the depth information of virtual objects from the combination of graphical cues and virtual object rendering features in the virtual-real fusion environment. When the user observes the virtual-real fusion scene presented by a mixed reality system, the following processes are involved: 1) Different distance information is converted into respective distance signals. The key factor in this process is the presentation of the virtual-real fusion scene through rendering technology; the user judges distance on the basis of the inherent characteristics of the virtual objects. 2) The user recognizes other visual stimulus variables in the scene and converts the respective distance signals into a final distance signal. The key factor in this process is the provision of depth information cues in the virtual-real fusion scene; the user needs depth cues to determine the position of an object. 3) The user determines the distance relationships between objects in the scene and converts the final distance signal into the corresponding indicated distance. The key factor in this process is the visual law of the human eye when viewing the virtual-real scene.

However, several problems remain: the visual principles and perception theories that can guide the rendering of virtual-real fusion scenes are lacking, the absolute depth information that graphical cues can provide is missing, and the rendering features of virtual objects are insufficient. Research on the visual laws and perception theories that can guide the rendering of virtual-real scenes is limited. The visual model and perception laws of the human eye when viewing virtual-real fusion scenes should be studied to form effective application guidance, apply visual laws effectively in the design and development of virtual-real fusion scenes, and increase the accuracy of depth perception. Improving the rendering effect of mixed reality applications increases their interactive efficiency and user experience. The absolute depth information that graphical cues can provide in the virtual-real fusion scene is missing. Graphical cues that can provide effective absolute depth information should be generated, the characteristics of different graphical cues should be extracted, and their effects on depth perception should be quantified to help users perceive the depth of the target object. This approach improves user performance in depth perception and provides a basis for the rendering of virtual-real scenes. The rendering dimensions and characteristic indicators of virtual objects in virtual-real fusion scenes are insufficient. Reasonable parameter indicators and effective object rendering methods should be studied, interaction models among different features should be built, and the role of different virtual object rendering characteristics in depth perception should be clarified to determine the characteristics that play a major role in the rendering of virtual objects in virtual-real scenes. These studies can provide a basis for rendering the fusion scene.

This paper analyzes the visual principles involved in rendering virtual-real fusion environments, then reviews the rendering of graphical cues and virtual objects in virtual-real fusion scenes, and finally discusses the research trends of depth perception in virtual-real fusion scenes. When viewing virtual-real scenes, humans perceive objects in the scene through the visual system. The visual function factors related to the perception mechanism and the guiding effect of visual laws on depth perception should be studied to optimize the rendering of virtual-real scenes. With the development and application of perception technology in mixed reality, many researchers have in recent years studied ground contact theory, the anisotropy of human eye perception, and the distribution of human eye gaze points in depth perception. The background environment and virtual objects in the virtual-real fusion scene can provide users with depth information cues. Most existing studies focus on adding various depth cues to the virtual-real fusion scene and explore, through experiments, the relationship between the additional depth information and depth perception in the scene. With the rapid development of computer graphics, an increasing number of graphics techniques have been applied to the creation of virtual-real fusion scenes to enhance the depth position cues of virtual objects, including linear perspective, graphical techniques for indicating position information, and graphics techniques for creating X-ray vision. The virtual objects presented by the mixed reality system are an important part of the virtual-real fusion environment. To study the role that the inherent characteristics of virtual objects play in depth perception in virtual-real fusion scenes, researchers have in recent years carried out extensive quantitative experimental studies on the size, color, brightness, transparency, texture, and surface lighting of virtual objects. These rendering-based virtual object characteristics were derived from 17th-century painting techniques, but they differ from traditional painting depth cues.
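To make the surveyed rendering characteristics concrete, the following minimal Python sketch (illustrative only; the mapping function `depth_cue_attributes` and its constants are assumptions, not results from the surveyed studies) shows how attributes such as size, transparency, and brightness could be modulated with distance so that a virtual object's appearance carries depth information.

```python
# Illustrative mapping from distance to rendering attributes used as depth cues.
import numpy as np

def depth_cue_attributes(distance_m, base_size=0.3, near=0.5, far=10.0):
    """Map a virtual object's distance to hypothetical rendering parameters."""
    d = float(np.clip(distance_m, near, far))
    scale = base_size / d              # linear-perspective size cue: farther -> smaller
    t = (d - near) / (far - near)      # normalized depth in [0, 1]
    alpha = 1.0 - 0.6 * t              # aerial-perspective-like fading with distance
    brightness = 1.0 - 0.4 * t         # dimmer when farther from the viewer
    return {"scale": scale, "alpha": alpha, "brightness": brightness}

for d in (0.5, 2.0, 8.0):
    print(d, depth_cue_attributes(d))  # inspect how the cues change with distance
```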
Authors
Ping Jiamin
Liu Yue
Weng Dongdong
Ping Jiamin; Liu Yue; Weng Dongdong (Beijing Engineering Research Center of Mixed Reality and Advanced Display, Beijing 100081, China; School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Advanced Innovation Center for Future Visual Entertainment, Beijing Film Academy, Beijing 100088, China)
Source
《中国图象图形学报》
CSCD
Peking University Core Journal
2021, No. 6, pp. 1503-1520 (18 pages)
Journal of Image and Graphics
Funding
National Key Research and Development Program of China (2018YFB1403901)
National Natural Science Foundation of China (61960206007)
Key-Area Research and Development Program of Guangdong Province (2019B010149001)
111 Project (B18005).
Keywords
real and virtual fusion environment
scene rendering
depth perception
mixed reality
visual law
depth cues
perceptual matching