Funding: Supported by the SafePASS project, which has received funding from the European Union's Horizon 2020 Research and Innovation programme (815146).
Abstract: Background This work aims to provide an overview of the use of Mixed Reality (MR) technology in the maritime industry for training purposes. Current training procedures cover a broad range of procedural operations for Life-Saving Appliances (LSA) lifeboats; however, several gaps and limitations have been identified in practical training that can be addressed through the use of MR. Augmented, Virtual and Mixed Reality applications are already used in various fields of the maritime industry, but their full potential has not yet been exploited. The SafePASS project aims to exploit the advantages of MR in maritime training by introducing an application focused on the use and maintenance of LSA lifeboats. Methods An MR Training application is proposed that supports the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of the training tool, which trains crew members in handling lifeboats; the training evaluation tool, which allows trainers to assess the performance of trainees; and the maintenance tool, which supports crew members in performing maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions of the trainee with the equipment. Results The application has been tested and validated both in a lab environment and on a real LSA lifeboat, resulting in an improved experience for the users, who provided feedback and recommendations for further development. The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized the added value of the application and suggested potential future exploitation areas. Conclusions The MR Training application has been evaluated as very promising in providing a user-friendly training environment that can support crew members in LSA lifeboat operation and maintenance, while remaining subject to improvement and further expansion.
Funding: Supported in part by the Major Fundamental Research Program of the Natural Science Foundation of Shandong Province under Grant ZR2019ZD05; the Joint Fund for Smart Computing of the Shandong Natural Science Foundation under Grant ZR2020LZH013; the Open Project of the State Key Laboratory of Computer Architecture under Grant CARCHA202002; and the Human Video Matting Project of Hisense Co., Ltd. under Grant QD1170020023.
Abstract: The mixed reality conference system proposed in this paper is a robust, real-time video conference application that compensates for the limited interaction and lack of immersion and realism of traditional video conferencing, realizing the entire holographic video-conference pipeline from client to cloud and back to the client. This paper focuses on designing and implementing a video conference system based on AI segmentation technology and mixed reality. Several components of the mixed reality conference system are discussed, including data collection, data transmission, processing, and mixed reality presentation. The data layer is mainly used for data collection, integration, and video and audio codecs. The network layer uses WebRTC to realize peer-to-peer data communication. The data processing layer is the core part of the system, mainly responsible for human video matting and human-computer interaction, which is the key to realizing mixed reality conferences and improving the interactive experience. The presentation layer includes the login interface of the mixed reality conference system, the presentation of real-time matting of human subjects, and the presented objects. With the mixed reality conference system, conference participants in different places can see each other in real time in their own mixed reality scene and share presentation content and 3D models based on mixed reality technology for a more interactive and immersive experience.
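The network layer above relies on WebRTC for peer-to-peer transport. As a minimal sketch of that signalling step only, the snippet below creates an SDP offer with the Python aiortc library; aiortc, the media file, and the send_to_peer / wait_for_answer signalling helpers are assumptions for illustration, not components named by the paper.

```python
# Minimal WebRTC offer/answer sketch, assuming the aiortc library.
# The signalling transport (send_to_peer / wait_for_answer) is hypothetical.
import asyncio
from aiortc import RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.media import MediaPlayer

async def start_call(send_to_peer, wait_for_answer):
    pc = RTCPeerConnection()

    # Attach a local media source (placeholder file standing in for a camera feed).
    player = MediaPlayer("demo.mp4")
    pc.addTrack(player.video)

    # Create the SDP offer and push it to the peer over an out-of-band channel.
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    await send_to_peer(pc.localDescription.sdp)

    # Apply the remote answer once the peer replies.
    answer_sdp = await wait_for_answer()
    await pc.setRemoteDescription(RTCSessionDescription(sdp=answer_sdp, type="answer"))
    return pc
```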
Funding: Supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2021RIS-004).
Abstract: A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict potential tasks or the support of non-owners, thus increasing the working time because of waiting to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce conflicts and working time. We classify shared object manipulation in mixed reality into detailed goals and tasks. Then, we model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts the concurrency control mechanism for shared object manipulation according to the goal–task–ownership model. With the proposed concurrency control scheme, users can hold shared objects and move and rotate them together in a mixed reality environment similar to real industrial sites. Additionally, the system uses a Microsoft HoloLens and Myo sensors to recognize user input and presents the results in the mixed reality environment. The proposed method is applied to the installation of an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduced the number of conflicts, the waiting time, and the total working time.
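The core idea is that the locking policy changes with the current goal, task, and ownership state. The fragment below is an illustrative sketch of that idea, not the authors' code; the task names and the policy table are invented for the example.

```python
# Illustrative adaptive concurrency control keyed on task type and ownership.
# Task names and the policy table are hypothetical, not from the paper.
from dataclasses import dataclass, field
from enum import Enum

class Policy(Enum):
    EXCLUSIVE = "exclusive"   # only the owner may manipulate the object
    SHARED = "shared"         # co-manipulation (move/rotate together) allowed

# Hypothetical mapping from task type to the concurrency policy it tolerates.
POLICY_BY_TASK = {
    "align_bracket": Policy.EXCLUSIVE,
    "carry_unit": Policy.SHARED,
    "rotate_unit": Policy.SHARED,
}

@dataclass
class SharedObject:
    name: str
    owner: str | None = None
    co_workers: set[str] = field(default_factory=set)

    def request(self, user: str, task: str) -> bool:
        policy = POLICY_BY_TASK.get(task, Policy.EXCLUSIVE)
        if self.owner is None:
            self.owner = user
            return True
        if policy is Policy.SHARED:
            self.co_workers.add(user)   # non-owner joins instead of waiting
            return True
        return False                    # exclusive task: non-owner must wait

unit = SharedObject("air_conditioner_unit")
assert unit.request("worker_a", "carry_unit")
assert unit.request("worker_b", "carry_unit")   # granted, no conflict or waiting
```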
Funding: Supported by the National Key R&D Program of China (2018YFB2100601) and the National Natural Science Foundation of China (61872024).
Abstract: Background Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic and to help users understand the video content and the temporal–spatial correlation between them, reducing the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, heavy bandwidth usage is another critical factor that affects the scalability of video fusion systems. Methods Our framework proposes a fusion method for dynamically projecting video images onto 3D models as textures. Results Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework is browser-based; therefore, users can try the MR video fusion system on a laptop or tablet without installing any additional plug-ins or applications.
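Projecting a video frame onto scene geometry as a texture amounts to projective texture mapping: each 3D point is transformed by the camera's view-projection matrices and remapped to texture coordinates. The NumPy sketch below illustrates that mapping under assumed pinhole-camera conventions; it is an illustration of the general technique, not the paper's renderer.

```python
# Projective texture mapping sketch: world-space points -> video-frame UVs.
# Camera matrices and the clip-space convention are assumptions for illustration.
import numpy as np

def project_to_uv(points_world: np.ndarray, view: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """points_world: (N, 3); view, proj: 4x4 matrices. Returns (N, 2) UVs in [0, 1]."""
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])   # homogeneous coordinates
    clip = homo @ (proj @ view).T                        # to clip space
    ndc = clip[:, :3] / clip[:, 3:4]                     # perspective divide -> [-1, 1]
    return ndc[:, :2] * 0.5 + 0.5                        # remap to texture space [0, 1]

# Example: two points 3 m in front of an identity-view camera and a toy projection.
proj = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, -1, -0.2],
                 [0, 0, -1, 0]], dtype=float)
pts = np.array([[0.0, 0.0, -3.0], [0.5, 0.5, -3.0]])
print(project_to_uv(pts, np.eye(4), proj))               # first point lands at (0.5, 0.5)
```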
Funding: Supported by the National Defence Basic Research Foundation of China (Grant No. B1420060173) and the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2006AA04Z138).
Abstract: Due to the narrowness of space and the complexity of structure, the assembly of the aircraft cabin has become one of the major bottlenecks in the whole manufacturing process. To address this, the different stages of the aircraft lifecycle must be considered from the beginning of the design process, including trial manufacture, assembly, maintenance, recycling and disposal of the product. Recently, thanks to the development of virtual reality and augmented reality, low-cost and fast solutions have been found for product assembly. This paper presents a mixed reality-based interactive technology for aircraft cabin assembly, which can enhance the efficiency of assembly in a virtual environment in terms of vision, information and operation. In the mixed reality-based assembly environment, the physical scene is captured by a camera and then reconstructed by a computer. The virtual parts, the visual assembly features, the navigation information, the physical parts and the physical assembly environment are mixed and presented in the same assembly scene. The mixed or augmented information serves as detailed assembly instructions in the mixed reality-based assembly environment. Constraint proxies and their match rules help to reconstruct and visualize the constraint relationships among parts and to avoid complex constraint-matching calculations. Finally, a desktop prototype system for virtual assembly has been built to assist assembly verification and training with a virtual hand.
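The "constraint proxy" idea replaces full geometric constraint solving with lightweight proxies plus simple match rules between parts. Below is a hypothetical sketch of what such a proxy and rule check could look like; the proxy type, tolerances and example parts are invented for illustration and are not taken from the paper.

```python
# Hypothetical constraint-proxy sketch: axis proxies plus a simple match rule,
# standing in for full constraint solving between assembly parts.
import numpy as np
from dataclasses import dataclass

@dataclass
class AxisProxy:
    """Lightweight stand-in for a part's mating axis (origin + unit direction)."""
    origin: np.ndarray
    direction: np.ndarray

def axes_match(a: AxisProxy, b: AxisProxy,
               ang_tol_deg: float = 2.0, dist_tol: float = 0.001) -> bool:
    """Match rule: axes are (anti)parallel within ang_tol and nearly coincident within dist_tol."""
    cos_ang = abs(float(np.dot(a.direction, b.direction)))
    if cos_ang < np.cos(np.radians(ang_tol_deg)):
        return False
    offset = b.origin - a.origin
    lateral = offset - np.dot(offset, a.direction) * a.direction  # off-axis component
    return float(np.linalg.norm(lateral)) <= dist_tol

bolt = AxisProxy(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
hole = AxisProxy(np.array([0.0005, 0.0, 0.05]), np.array([0.0, 0.0, -1.0]))
print(axes_match(bolt, hole))   # True: the virtual bolt can snap into the hole
```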
Abstract: The development of digital intelligent diagnostic and treatment technology has opened countless new opportunities for liver surgery, from the era of digital anatomy to a new era of digital diagnostics, virtual surgery simulation, and the use of the created scenarios in real-time surgery through mixed reality. In this article, we describe our experience in developing dedicated three-dimensional visualization and reconstruction software for surgeons, to be used in advanced liver surgery and living donor liver transplantation. Furthermore, we share the recent developments in the field by explaining the extension of the software from virtual reality to augmented reality and mixed reality.
Abstract: BACKGROUND As a new digital holographic imaging technology, mixed reality (MR) technology has unique advantages in determining the liver anatomy and the location of tumor lesions. With the popularization of 5G communication technology, MR shows great potential in preoperative planning and intraoperative navigation, making hepatectomy more accurate and safer. AIM To evaluate the application value of MR technology in hepatectomy for hepatocellular carcinoma (HCC). METHODS The clinical data of 95 patients who underwent open hepatectomy for HCC between June 2018 and October 2020 at our hospital were analyzed retrospectively. We selected 95 patients with HCC according to the inclusion and exclusion criteria. In 38 patients, hepatectomy was assisted by MR (Group A), and an additional 57 patients underwent traditional hepatectomy without MR (Group B). The perioperative outcomes of the two groups were collected and compared to evaluate the application value of MR in hepatectomy for patients with HCC. RESULTS We summarize the technical process of MR-assisted hepatectomy in the treatment of HCC. Compared to traditional hepatectomy in Group B, MR-assisted hepatectomy in Group A yielded a shorter operation time (202.86±46.02 min vs 229.52±57.13 min, P=0.003), less bleeding volume (329.29±97.31 mL vs 398.23±159.61 mL, P=0.028), and a shorter portal vein occlusion time (17.71±4.16 min vs 21.58±5.24 min, P=0.019). Group A had lower alanine aminotransferase and higher albumin values on the third day after the operation (119.74±29.08 U/L vs 135.53±36.68 U/L, P=0.029 and 33.60±3.21 g/L vs 31.80±3.51 g/L, P=0.014, respectively). The total postoperative complications and hospitalization days in Group A were significantly less than those in Group B [14 (37.84%) vs 35 (60.34%), P=0.032 and 12.05±4.04 d vs 13.78±4.13 d, P=0.049, respectively]. CONCLUSION MR has application value in three-dimensional visualization of the liver, surgical planning, and intraoperative navigation during hepatectomy, and it significantly improves the perioperative outcomes of hepatectomy for HCC.
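The between-group results above are the kind of comparison usually reported as mean ± SD with a two-sample test. Purely as an illustration of how such a comparison is computed from summary statistics, and not as an attempt to reproduce the paper's exact P values (which depend on the specific test the authors used), one could write:

```python
# Two-sample t-test from summary statistics (mean, SD, n), illustrating the kind
# of comparison reported above; not a reproduction of the paper's analysis.
from scipy.stats import ttest_ind_from_stats

# Operation time: Group A (MR-assisted, n=38) vs Group B (traditional, n=57).
t_stat, p_value = ttest_ind_from_stats(
    mean1=202.86, std1=46.02, nobs1=38,
    mean2=229.52, std2=57.13, nobs2=57,
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```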
Abstract: In the modern era, preoperative planning is substantially facilitated by artificial reality technologies, which permit a better understanding of patient anatomy, thus increasing the safety and accuracy of surgical interventions. In the field of orthopedic surgery, the increase in safety and accuracy improves treatment quality and orthopedic patient outcomes. Artificial reality technologies, which include virtual reality (VR), augmented reality (AR), and mixed reality (MR), use digital images obtained from computed tomography or magnetic resonance imaging. VR replaces the user's physical environment with one that is computer generated. AR and MR have been defined as technologies that permit fusing the physical with the virtual environment, enabling the user to interact with both physical and virtual objects. MR has been defined as a technology that, in contrast to AR, enables users to visualize the depth and perspective of the virtual models. We aimed to shed light on the role that MR can play in the visualization of orthopedic surgical anatomy. The literature suggests that MR could be a valuable tool in the orthopedic surgeon's hands for visualization of the anatomy. However, we remark that confusion exists in the literature concerning the characteristics of MR. Thus, a clearer description of MR is needed in orthopedic research so that the potential of this technology can be more deeply understood.
Abstract: To improve and develop education systems, the communication between instructors and learners in a class during the learning process is of utmost importance. Currently, presentations of 3D models using mixed reality (MR) technology can be used to avoid the misinterpretations that arise from oral and 2D model presentations. As an independent concept and set of applications, MR combines the strengths of both virtual reality (VR) and augmented reality (AR). This work aims to present descriptions of MR systems, including their devices, applications, and related literature, and proposes computer vision tracking using the AR Toolkit tracking library. The focus of this work is on creating 3D models and implementing them in Unity 3D using the Vuforia SDK to develop VR and AR applications for architectural presentations.
Funding: This research was supported by MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2018-0-01431) and the High-Potential Individuals Global Training Program (2019-0-01611), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: There have been numerous works proposed to merge augmented reality/mixed reality (AR/MR) and the Internet of Things (IoT) in various ways. However, they have focused on their specific target applications and have limited interoperability or reusability when applied to different domains or when other devices are added to the system. This paper proposes a novel architecture for a convergence platform for AR/MR and IoT systems and services. The proposed architecture adopts the oneM2M IoT standard as the basic framework that converges AR/MR and IoT systems and enables the development of application services for general-purpose environments without being tied to specific systems, domains, or device manufacturers. We implement the proposed architecture using the open-source oneM2M-based IoT server and device platforms released by the Open Alliance for IoT Standards (OCEAN) and Microsoft HoloLens as the MR device platform. We also suggest and demonstrate practical use cases and discuss the advantages of the proposed architecture.
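oneM2M exposes its resource tree through standard HTTP bindings, which is what lets AR/MR clients and IoT devices interoperate in such a platform. The sketch below posts a contentInstance to an assumed Mobius-style oneM2M server; the host, resource path and originator ID are hypothetical, and the header and field names follow the oneM2M HTTP binding as the editor understands it, not anything quoted from the paper.

```python
# Sketch: pushing sensor data to a oneM2M CSE as a contentInstance over HTTP.
# Host, credentials and resource path are hypothetical; header names follow the
# oneM2M HTTP binding (X-M2M-Origin, X-M2M-RI; ty=4 marks a contentInstance).
import json
import uuid
import requests

ONEM2M_CSE = "http://192.168.0.10:7579/Mobius"          # hypothetical CSE base URL
CONTAINER = f"{ONEM2M_CSE}/ar_demo/temperature"          # hypothetical container path

def push_reading(value: float) -> int:
    headers = {
        "X-M2M-Origin": "SarDevice01",                   # hypothetical originator ID
        "X-M2M-RI": str(uuid.uuid4()),                   # unique request identifier
        "Content-Type": "application/json;ty=4",          # ty=4 -> contentInstance
        "Accept": "application/json",
    }
    body = {"m2m:cin": {"con": json.dumps({"temp": value})}}
    resp = requests.post(CONTAINER, headers=headers, data=json.dumps(body), timeout=5)
    return resp.status_code                               # 201 expected on success

if __name__ == "__main__":
    print(push_reading(23.5))
```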
Abstract: Traditional teaching and learning about industrial robots uses abstract instructions that are difficult for students to understand. Meanwhile, there are safety issues associated with the use of practical training equipment. To address these problems, this paper developed an instructional system based on mixed-reality (MR) technology for teaching about industrial robots. The Siasun T6A-series robots were taken as a case study, and the Microsoft MR device HoloLens 2 was used as the instructional platform. First, the parameters of the robots were analyzed based on their structural drawings. Then, the robot modules were decomposed, and 1:1 three-dimensional (3D) digital reproductions were created in Maya. Next, a library of digital models of the robot components was established, and a 3D spatial operation interface for the virtual instructional system was created in Unity. Subsequently, a C# code framework was established to satisfy the requirements of the interactive functions and data transmission, and the data were saved in JSON format. In this way, a key technique facilitating the understanding of spatial structures, together with a variety of human-machine interactions, was realized. Finally, an instructional system based on HoloLens 2 was established for understanding the structures and principles of robots. The results showed that the instructional system developed in this study provides realistic 3D visualizations and a natural, efficient approach to human-machine interaction. This system can effectively improve the efficiency of knowledge transfer and students' motivation to learn.
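The abstract mentions a C# framework whose state is saved as JSON. As a language-neutral illustration of what such an interchange record might look like, a joint-state snapshot could be serialized as below; the field names are invented for the example and are not taken from the paper or from Siasun.

```python
# Illustration of a JSON snapshot for a 6-axis robot's teaching state.
# Field names are hypothetical; they are not taken from the paper or Siasun.
import json
from dataclasses import dataclass, asdict

@dataclass
class RobotSnapshot:
    robot_model: str
    joint_angles_deg: list[float]   # one angle per axis, J1..J6
    tool_attached: str
    lesson_step: int

snapshot = RobotSnapshot(
    robot_model="T6A",
    joint_angles_deg=[0.0, -45.0, 90.0, 0.0, 45.0, 0.0],
    tool_attached="gripper",
    lesson_step=3,
)

payload = json.dumps(asdict(snapshot), indent=2)
print(payload)                                     # what an MR client would persist/transmit
restored = RobotSnapshot(**json.loads(payload))    # round-trip back into an object
```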
Abstract: Nowadays, urban design faces complex demands. It has become a necessity to negotiate between stakeholder objectives, the expectations of citizens and the demands of planning. It is desirable to involve the stakeholders and citizens from an early stage in the planning process to enable their different viewpoints to be successfully expressed and comprehended. Therefore, the basic aim of the study was to determine how an MR (mixed reality) application can be designed to encourage and improve communication on urban design among stakeholders and citizens. In this paper, we discuss new approaches to visualizing urban building and environment alternatives for different stakeholders and to providing them with tools to explore different approaches to urban planning, in order to support citizens' participation in urban planning with augmented and mixed reality. The major finding of the study concerns learning "how these participatory technologies may help build a community of practice around an urban project". Throughout the different experiences, we can work towards the development of a methodology for using mixed reality as a simulation tool to enhance collaborative interaction in a real Egyptian project. Thus, we can determine a number of recommendations for dealing with new participatory design tools in urban planning projects.
Abstract: To study the recall accuracy of offensive and defensive situations, including one's own movements as an elite athlete or novice, a novel experimental system was developed in which defensive actions were performed by the subject against a CG (Computer Graphics) player who presented predetermined offensive actions. Both the CG player's movements and the subject's movements were reproduced in a video using mixed reality technology for the recall examination. The system was also designed to rearrange the natural sequence of image frames, producing a video in which the time relation between offense and defense was falsified. The timing displacement in the false video was of two kinds: delayed relative to the truth or advanced relative to the truth. Using this two-video, true/false imagery method, the subject was asked to select the true video by recall; thus it became possible to examine recall accuracy quantitatively by controlling the timing displacement. Results of the experiment using this system revealed that the karate expert possessed a more developed perceptual skill than the novice in recognizing the time relation between the opponent's movement and their own movement. It was further identified that both the expert and the novice recognized delayed displacement more accurately than advanced displacement.
Abstract: In this study, we develop a mixed reality game system to investigate the characteristics of the judgments of individual players in an evacuation process. The characteristics of the players' judgments inferred from their performance in the game are then incorporated into a multi-agent simulation as rules. The behavior of evacuees is evaluated in approximations of real situations using the agent simulation, which incorporates the different judgments of the evacuees. Using the results of the simulation, effective methods are discussed for achieving the escape of the evacuees within a short time.
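Incorporating observed player judgments into a multi-agent model typically means encoding them as per-agent decision rules. The toy loop below illustrates that structure under invented rules (e.g., a probability of following the crowd versus heading for the nearest exit); it is not the authors' simulator.

```python
# Toy multi-agent evacuation loop: per-agent judgment rules drawn from
# hypothetical observed behavior (follow the crowd vs. head to the nearest exit).
import random

EXITS = [0.0, 100.0]                      # positions of two exits on a 1-D corridor

def step(position: float, crowd_mean: float, follow_crowd_prob: float) -> float:
    """One decision step for a single evacuee."""
    if random.random() < follow_crowd_prob:
        target = crowd_mean                                    # judgment: follow the crowd
    else:
        target = min(EXITS, key=lambda e: abs(e - position))   # judgment: nearest exit
    return position + 1.5 * (1 if target > position else -1)   # walk 1.5 m per step

def simulate(n_agents: int = 50, follow_crowd_prob: float = 0.3, max_steps: int = 500) -> int:
    positions = [random.uniform(10, 90) for _ in range(n_agents)]
    evacuated = [False] * n_agents
    for t in range(1, max_steps + 1):
        active = [p for p, done in zip(positions, evacuated) if not done]
        if not active:
            return t - 1                                       # steps needed to clear the room
        mean_pos = sum(active) / len(active)
        for i, p in enumerate(positions):
            if evacuated[i]:
                continue
            positions[i] = step(p, mean_pos, follow_crowd_prob)
            if min(abs(positions[i] - e) for e in EXITS) < 2.0:
                evacuated[i] = True                            # agent absorbed at the exit
    return max_steps

print(simulate())
```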
Abstract: A precise knowledge of the intra-parenchymal vascular and biliary architecture and of the location of lesions in relation to the complex anatomy is indispensable for performing liver surgery. Therefore, virtual three-dimensional (3D) reconstruction models from computed tomography/magnetic resonance imaging scans of the liver might be helpful for visualization. Augmented reality, mixed reality and 3D navigation could transfer such 3D image data directly into the operating theater to support the surgeon. This review examines the literature on the clinical and intraoperative use of these image guidance techniques in liver surgery and provides the reader with the opportunity to learn about them. Augmented reality and mixed reality have been shown to be feasible for use in open and minimally invasive liver surgery. 3D navigation facilitated the targeting of intraparenchymal lesions. The existing data are limited to small cohorts and descriptions of technical details, e.g., the accordance between the virtual 3D model and the real liver anatomy. Randomized controlled trials regarding clinical data or oncological outcomes are not available. Up to now, there is no intraoperative application of artificial intelligence in liver surgery. The usability of all these sophisticated image guidance tools has still not reached the grade of immersion that would be necessary for widespread use in the daily surgical routine. Although there are many challenges, augmented reality, mixed reality, 3D navigation and artificial intelligence are emerging fields in hepato-biliary surgery.
Funding: National Key Research and Development Program (2016YFC0106500800); National Major Scientific Instruments and Equipment Development Project of the National Natural Science Foundation of China (81627805); National Natural Science Foundation of China-Guangdong Joint Fund Key Program (U1401254); National Natural Science Foundation of China Mathematics Tianyuan Foundation (12026602); Guangdong Provincial Natural Science Foundation Team Project (6200171); Guangdong Provincial Health Appropriate Technology Promotion Project (20230319214525105, 20230322152307666).
Abstract: Augmented- and mixed-reality technologies have pioneered the realization of real-time fusion and interactive projection for laparoscopic surgeries. Indocyanine green fluorescence imaging technology has enabled anatomical, functional, and radical hepatectomy through tumor identification and localization of the target hepatic segments, driving a transformative shift in the management of hepatic surgical diseases, away from traditional, empirical diagnostic and treatment approaches and toward digital, intelligent ones. The Hepatic Surgery Group of the Surgery Branch of the Chinese Medical Association, the Digital Medicine Branch of the Chinese Medical Association, the Digital Intelligent Surgery Committee of the Chinese Society of Research Hospitals, and the Liver Cancer Committee of the Chinese Medical Doctor Association organized the relevant experts in China to formulate this consensus. This consensus provides a comprehensive outline of the principles, advantages, processes, and key considerations associated with the application of augmented-reality and mixed-reality technology combined with indocyanine green fluorescence imaging technology for hepatic segmental and subsegmental resection. The purpose is to streamline and standardize the application of these technologies.
Funding: Supported by the Marsden Fund Council, managed by the Royal Society of New Zealand, under Grant Nos. MFP-20-VUW-180 and UOO1724; by Zhejiang Province Public Welfare Technology Application Research under Grant No. LGG22F020009; and by the Key Lab of Film and TV Media Technology of Zhejiang Province of China under Grant No. 2020E10015.
Abstract: Mixed reality technologies provide real-time and immersive experiences, which bring tremendous opportunities in entertainment, education, and enriched experiences that are not directly accessible owing to safety or cost. Research in this field has been in the spotlight in the last few years as the metaverse went viral. The recently emerging omnidirectional video streams, i.e., 360° videos, provide an affordable way to capture and present dynamic real-world scenes. In the last decade, fueled by the rapid development of artificial intelligence and computational photography technologies, research interest in mixed reality systems that use 360° videos for richer and more realistic experiences has increased dramatically, aiming to unlock the true potential of the metaverse. In this survey, we cover recent research aimed at addressing these issues in 360° image and video processing technologies and applications for mixed reality. The survey summarizes the contributions of recent research and describes potential future research directions for 360° media in the field of mixed reality.
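Most 360° processing of the kind surveyed above starts from the equirectangular projection, in which pixel coordinates map directly to viewing directions on the sphere. The standard mapping, shown below with NumPy, is textbook background rather than something specific to the survey; the axis convention is an assumption.

```python
# Standard equirectangular-pixel -> unit-viewing-direction mapping for 360° images.
# Convention assumed: u grows with longitude (left to right), v from top to bottom.
import numpy as np

def pixel_to_direction(u: np.ndarray, v: np.ndarray, width: int, height: int) -> np.ndarray:
    """u, v: pixel coordinates (equal-shaped arrays). Returns unit vectors of shape (..., 3)."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi       # latitude in (-pi/2, pi/2)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

# The centre pixel of a 3840x1920 equirectangular frame looks straight ahead (+z).
print(pixel_to_direction(np.array([1919.5]), np.array([959.5]), 3840, 1920))
```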
Abstract: Helmet Mounted Displays (HMDs), such as those used for Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and Smart Glasses, have the potential to revolutionize the way we live our private and professional lives: communicating, working, teaching and learning, shopping and getting entertained. Such HMD devices have to satisfy draconian requirements in weight, size, form factor, power, compute, wireless communication and, of course, display, imaging and sensing performance. We review in this paper the various optical technologies and architectures that have been developed in the past 10 years to provide adequate solutions for the drastic requirements of consumer HMDs, a market that has yet to mature in the coming years, unlike the existing enterprise and defense markets, which have already adopted VR and AR headsets as practical tools to greatly improve effectiveness and productivity. We focus our attention specifically on the optical combiner, a crucial element in Optical See-Through (OST) HMDs that combines the see-through scene with a world-locked digital image. As the technological platform, we chose optical waveguide combiners, although considerable effort is also dedicated today to free-space combiners. Flat and thin optics, such as micro-optics, holographic and diffractive elements, metasurfaces and other nanostructured optical elements, are key building blocks for achieving the target form factor.
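For the waveguide combiners discussed above, the governing relation is the textbook diffraction-grating equation: the in-coupling grating must diffract light beyond the total-internal-reflection (TIR) limit of the substrate so that the image is trapped in the guide. This is standard optics background, not a formula quoted from the paper; symbols are n for the substrate index, Λ for the grating period, m for the diffraction order, λ for the wavelength, and θ_i / θ_d for the incident and diffracted angles.

```latex
% In-coupling grating equation and TIR condition for a waveguide combiner
% (standard optics background, not taken from the reviewed paper).
\[
  n \sin\theta_d \;=\; \sin\theta_i \;+\; m\,\frac{\lambda}{\Lambda},
  \qquad
  \text{TIR requires}\quad \sin\theta_d \;>\; \frac{1}{n}
  \;\;\Longleftrightarrow\;\; \theta_d \;>\; \theta_c = \arcsin\!\frac{1}{n}.
\]
```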
Abstract: Code-Bothy examines traditional bricklaying using mixed reality technology. Digital design demands a re-examination of how we make. The digital and the manual should not be considered as autonomous but as parts of something more reciprocal. One can engage with digital modelling software or can reject all digital tools and make and design by hand, but can we work in between? In the context of mixed-reality fabrication, the real and virtual worlds come together to create a hybrid environment where physical and digital objects are visualised simultaneously and interact with one another in real time. The hybridity of the two is compelling because the digital is often perceived as the future/emergent and the manual as the past/obsolescent. The practice of being digital and manual is, on the one hand, procedural and systematic and, on the other, textural and indexical. Working digitally and manually is about exploring areas in design and making: manual production and digital input can work together to allow for the conservation of crafts, while digital fabrication can be advanced with the help of manual craftsmanship.
Funding: Supported by the National Key Research and Development Program of China (2020YFF0305304).
Abstract: To study the role of the new technological concept of shared experiences in the digital interactive experience of cultural heritage, and to apply it to this field to solve its current problems, we started from the mixed reality (MR) technology on which shared experiences rely: proper software and hardware platforms were investigated and selected, a universal shared-experiences solution was designed, and an experimental project based on the proposed solution was built to verify its feasibility. In the end, a proven and workable shared-experiences solution was obtained. This solution includes a proposed MR spatial alignment method and integrates the existing MR content production process with standard network synchronization functions. Furthermore, it is concluded that the introduction and reasonable use of new technologies can help the development of the digital interactive experience of cultural heritage. The shared-experiences solution balances investment in the exhibition, the display effect, and the user experience. It can speed up the promotion of cultural heritage and bring the vitality of MR technology to relevant projects.
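Spatial alignment between two MR devices in a shared experience usually comes down to expressing both devices' poses relative to one common physical anchor. The sketch below composes the required rigid transforms with NumPy; the anchor-based approach and the matrix names are assumptions for illustration, not the solution's actual implementation.

```python
# Sketch of anchor-based spatial alignment between two MR devices.
# Each device tracks the same physical anchor; composing the 4x4 rigid transforms
# maps content from device A's world frame into device B's world frame.
import numpy as np

def inverse_rigid(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 rigid transform (rotation + translation)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def alignment_a_to_b(anchor_in_a: np.ndarray, anchor_in_b: np.ndarray) -> np.ndarray:
    """Transform taking points in device A's world frame to device B's world frame."""
    return anchor_in_b @ inverse_rigid(anchor_in_a)

# Toy example: the anchor sits 1 m ahead of device A and 2 m to the right of device B.
anchor_in_a = np.eye(4); anchor_in_a[:3, 3] = [0.0, 0.0, 1.0]
anchor_in_b = np.eye(4); anchor_in_b[:3, 3] = [2.0, 0.0, 0.0]
T_ab = alignment_a_to_b(anchor_in_a, anchor_in_b)
point_in_a = np.array([0.0, 0.0, 1.0, 1.0])       # a point at the anchor, seen from A
print(T_ab @ point_in_a)                          # -> [2, 0, 0, 1]: same point, seen from B
```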