Journal Articles
733 articles found
1. Virtual reality tools for training in gastrointestinal endoscopy: A systematic review
Authors: Tuấn Quang Dương, Jonathan Soldera. Artificial Intelligence in Gastrointestinal Endoscopy, 2024, Issue 2, pp. 41-54 (14 pages)
BACKGROUND: Virtual reality (VR) has emerged as an innovative technology in endoscopy training, providing a simulated environment that closely resembles real-life scenarios and offering trainees a valuable platform to acquire and enhance their endoscopic skills. This systematic review critically evaluates the effectiveness and feasibility of VR-based training compared to traditional methods. AIM: To evaluate the effectiveness and feasibility of VR-based training compared with traditional methods. By examining the current state of the field, this review seeks to identify gaps, challenges, and opportunities for further research and implementation of VR in endoscopic training. METHODS: This systematic review follows the reporting guidelines of the PRISMA statement. A comprehensive search command was designed and run in September 2023 to identify relevant studies in electronic databases including PubMed, Scopus, Cochrane, and Google Scholar. The results were systematically reviewed. RESULTS: Sixteen articles were included in the final analysis, with 523 participants in total. Five studies covered both upper endoscopy and colonoscopy training, two upper endoscopy only, eight colonoscopy only, and one sigmoidoscopy only. The Gastrointestinal Mentor virtual endoscopy simulator was commonly used. Fifteen studies reported positive results, indicating that VR-based training was feasible and acceptable for endoscopy learners. VR technology helped trainees improve their endoscope-handling skills, reducing procedure time and increasing technical accuracy, both in VR scenarios and with real patients. Some studies showed that patient discomfort decreased significantly, whereas others found no significant differences in patient discomfort and pain scores between the VR group and other groups. CONCLUSION: VR training is effective for endoscopy training, and several well-designed randomized controlled trials with large sample sizes demonstrate the potential of this innovative tool. Thus, VR should be more widely adopted in endoscopy training. Furthermore, combining VR training with conventional methods could be a promising approach.
Keywords: Virtual reality, Gastrointestinal endoscopy, Systematic review, Virtual reality training, SIMULATION
2. Leveraging Augmented Reality, Semantic-Segmentation, and VANETs for Enhanced Driver's Safety Assistance
Authors: Sitara Afzal, Imran Ullah Khan, Irfan Mehmood, Jong Weon Lee. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1443-1460 (18 pages)
Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility of vehicles ahead can make it challenging for drivers to assess the safety of overtaking maneuvers, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field-of-view of a filter and for controlling the resolution of feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage advanced technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the vehicle ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system employing a windshield display in the rear car together with cameras in both cars. A server within the rear car segments the front car, and the segmented portion displays the video stream from the front car. Our see-through system improves the driver's field of vision and helps them change lanes, pass a large vehicle blocking their view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset for semantic segmentation. This transparency technique informs the driver about the concealed traffic situation obscured by the front vehicle. We achieved an F1-score of 97.1%. The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios, including technical, regulatory, and user acceptance factors.
Keywords: Overtaking safety, augmented reality, VANET, V2V, deep learning
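The atrous convolution this abstract mentions can be illustrated with a toy 1-D sketch (my own illustration, not the paper's code): a dilation factor d spaces the filter taps d samples apart, enlarging the filter's field of view without adding parameters.

```python
def atrous_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D atrous (dilated) convolution, pure Python.

    With dilation=1 this is an ordinary convolution; larger dilation
    inserts (dilation - 1) gaps between kernel taps.
    """
    span = (len(kernel) - 1) * dilation  # distance covered by the dilated kernel
    return [
        sum(kernel[k] * signal[i + k * dilation] for k in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

# A difference filter [1, 0, -1] applied at two dilation rates:
dense = atrous_conv1d([1, 2, 3, 4, 5, 6], [1, 0, -1], dilation=1)
wide = atrous_conv1d([1, 2, 3, 4, 5, 6], [1, 0, -1], dilation=2)
```

With dilation 2, each output compares samples four positions apart instead of two, which is exactly the field-of-view enlargement the abstract refers to.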
3. Chemical simulation teaching system based on virtual reality and gesture interaction
Authors: Dengzhen LU, Hengyi LI, Boyu QIU, Siyuan LIU, Shuhan QI. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 2, pp. 148-168 (21 pages)
Background: Most existing chemical experiment teaching systems lack solid immersive experiences, making it difficult to engage students. To address these challenges, we propose a chemical simulation teaching system based on virtual reality and gesture interaction. Methods: The parameters of the models were obtained through actual investigation, whereby Blender and 3DS MAX were used to model and import these parameters into a physics engine. By establishing an interface between the physics engine, gesture-interaction hardware, and a virtual reality (VR) helmet, a highly realistic chemical experiment environment was created. Using code script logic, particle systems, and other systems, chemical phenomena were simulated. Furthermore, we created an online teaching platform using streaming media and databases to address the problems of distance teaching. Results: The proposed system was evaluated against two mainstream products on the market. In the experiments, the proposed system outperformed the other products in terms of fidelity and practicality. Conclusions: The proposed system, which offers realistic simulation and practicability, can help improve high-school experimental chemistry education.
Keywords: Chemical experiment simulation, Gesture interaction, Virtual reality, Model establishment, Process control, Streaming media, DATABASE
4. Towards engineering a portable platform for laparoscopic pre-training in virtual reality with haptic feedback
Authors: Hans-Georg ENKLER, Wolfgang KUNERT, Stefan PFEFFER, Kai-Jonas BOCK, Steffen AXT, Jonas JOHANNINK, Christoph REICH. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 2, pp. 83-99 (17 pages)
Background: Laparoscopic surgery is a surgical technique in which special instruments are inserted through small incisions in the body. For some time, efforts have been made to improve surgical pre-training through practical exercises on abstracted and reduced models. Methods: The authors strive for a portable, easy-to-use, and cost-effective virtual reality (VR)-based laparoscopic pre-training platform, and therefore address the question of how such a system must be designed to achieve the quality of today's gold standard, which uses real tissue specimens. Current VR controllers are limited regarding haptic feedback. Since haptic feedback is necessary, or at least beneficial, for laparoscopic surgery training, the platform to be developed consists of a newly designed prototype laparoscopic VR controller with haptic feedback, a commercially available head-mounted display, a VR environment for simulating laparoscopic surgery, and a training concept. Results: To take full advantage of benefits such as repeatability and cost-effectiveness of VR-based training, the system shall not require a tissue sample for haptic feedback; feedback is instead calculated and visually displayed to the user in the VR environment. On the prototype controller, a first axis was provided with perceptible feedback for test purposes. Two of the prototype VR controllers can be combined to simulate a typical two-handed use case, e.g., laparoscopic suturing. A Unity-based VR prototype allows the execution of simple standard pre-trainings. Conclusions: The first prototype enables full operation of a virtual laparoscopic instrument in VR. In addition, the simulation can compute simple interaction forces. Major challenges lie in realistic real-time tissue simulation and the calculation of forces for haptic feedback. Mechanical weaknesses were identified in the first hardware prototype and will be improved in subsequent versions. All degrees of freedom of the controller are to be provided with haptic feedback. To make forces tangible in the simulation, characteristic values need to be determined using real tissue samples. The system has yet to be validated by cross-comparing real and VR haptics with surgeons.
Keywords: Laparoscopic surgery, Training, Virtual reality, CONTROLLER, Haptic feedback, Kinesthetic skills
5. Large-scale spatial data visualization method based on augmented reality
Authors: Xiaoning QIAO, Wenming XIE, Xiaodong PENG, Guangyun LI, Dalin LI, Yingyi GUO, Jingyi REN. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 2, pp. 132-147 (16 pages)
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization of detection data with complex types, long time spans, and uneven spatial distributions were achieved. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
Keywords: Large-scale spatial data analysis, Visual analysis technology, Augmented reality, 3D reconstruction, Space environment
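The "tone-mapping method based on statistical histogram equalization" described above can be sketched in miniature (an illustrative stand-in, not the authors' implementation): bin the attribute values, build the cumulative distribution, and remap each value to its CDF position so the output spreads over [0, 1].

```python
def equalize(values, bins=256):
    """Histogram-equalization tone mapping of a list of scalar attribute values.

    Returns each value remapped to its empirical CDF position in (0, 1].
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant input
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    cdf, total = [], 0
    for c in counts:  # cumulative histogram, normalized to (0, 1]
        total += c
        cdf.append(total / len(values))
    return [cdf[min(int((v - lo) / width), bins - 1)] for v in values]

# A heavily skewed attribute becomes evenly spread after equalization:
mapped = equalize([1, 2, 3, 100])
```

The design point is that outliers (here, 100) no longer compress the rest of the range into a narrow band of the color map, which matches the motivation given in the abstract.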
6. Single-center experience with the Knee+™ augmented reality navigation system in primary total knee arthroplasty
Authors: Evangelos Sakellariou, Panagiotis Alevrogiannis, Fani Alevrogianni, Athanasios Galanis, Michail Vavourakis, Panagiotis Karampinas, Panagiotis Gavriil, John Vlamis, Stavros Alevrogiannis. World Journal of Orthopedics, 2024, Issue 3, pp. 247-256 (10 pages)
BACKGROUND: Computer-assisted systems have attracted increasing interest in orthopaedic surgery over recent years, as they enhance precision compared to conventional hardware. The expansion of computer assistance is evolving with the employment of augmented reality. Yet the accuracy of augmented reality navigation systems has not been determined. AIM: To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) utilizing an augmented reality navigation system, and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon. METHODS: From May 2021 to December 2021, 30 patients, 25 women and five men, underwent a primary unilateral TKA. Revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, utilizing three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic exam protocol was executed to evaluate the implants' position, orientation, and coronal plane alignment. We recorded measurements in three stages for femoral varus and flexion, and tibial varus and posterior slope: first, the expected values from the augmented reality system were documented; then we calculated the same values after each cut; and finally the same measurements were recorded radiologically after the operations. For statistical analysis, Lin's concordance correlation coefficient was estimated, and the Wilcoxon signed-rank test was performed where needed. RESULTS: A statistically significant difference between mean expected values and radiographic measurements was observed for femoral flexion only (Z score = 2.67, P value = 0.01). Nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). In terms of discrepancies between expected values and intraoperatively controlled measurements, a statistically significant difference in tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION: The results indicate satisfactory postoperative coronal alignment without outliers across all three implants utilized. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment. However, further research is required to evaluate their efficacy and potential.
Keywords: Augmented reality, ORTHOPEDICS, Total knee arthroplasty, ROBOTICS, KNEE, NAVIGATION
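Lin's concordance correlation coefficient used in this study measures agreement between paired measurements, here planned (AR) versus radiographic angles. A minimal stdlib-only sketch on made-up angle pairs (not the study's data):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired samples.

    ccc = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2), using population
    (1/n) variances and covariance as in Lin's 1989 formulation.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return (2 * sxy) / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical planned vs. postoperative varus angles in degrees:
planned = [3.0, 5.0, 2.0, 4.0, 6.0]
measured = [3.5, 4.5, 2.5, 4.0, 5.5]
ccc = lins_ccc(planned, measured)
```

Unlike Pearson's correlation, the (mean_x - mean_y)^2 term penalizes systematic bias between the two measurement methods, which is why it suits agreement studies like this one.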
7. Personalized assessment and training of neurosurgical skills in virtual reality: An interpretable machine learning approach
Authors: Fei LI, Zhibao QIN, Kai QIAN, Shaojun LIANG, Chengli LI, Yonghang TAI. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 1, pp. 17-29 (13 pages)
Background: Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants; however, their limited interpretability restricts the personalization of training for individual participants. Methods: Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict skill level; the support vector machine performed best, with an accuracy of 92.41% and an area-under-curve value of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to each participant's skill level. Results: This study demonstrates the effectiveness of machine learning in differentiated evaluation and training of virtual reality neurosurgical performance. The use of Shapley values enables targeted training by identifying deficiencies in individual skills. Conclusions: This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlights the potential of explanatory models in skills training.
Keywords: Machine learning, NEUROSURGERY, Shapley values, Virtual reality, Human-robot interaction
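The Shapley-value interpretation step can be illustrated with an exact toy computation. The feature names and the subset-accuracy table below are invented for illustration (the study applies Shapley values to its trained SVM): each metric's Shapley value is its average marginal contribution to model accuracy over all orderings of the features.

```python
from itertools import permutations

# Hypothetical accuracy v(S) of a skill classifier trained on each subset S
# of three simulator metrics; names and numbers are illustrative only.
FEATURES = ("path_length", "tremor", "grip_force")
V = {
    frozenset(): 0.33,
    frozenset({"path_length"}): 0.60,
    frozenset({"tremor"}): 0.50,
    frozenset({"grip_force"}): 0.45,
    frozenset({"path_length", "tremor"}): 0.75,
    frozenset({"path_length", "grip_force"}): 0.70,
    frozenset({"tremor", "grip_force"}): 0.60,
    frozenset(FEATURES): 0.85,
}

def shapley_values(features, v):
    """Exact Shapley values: average marginal contribution over all orderings."""
    shap = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        seen = frozenset()
        for f in order:
            shap[f] += v[seen | {f}] - v[seen]  # marginal gain of adding f
            seen = seen | {f}
    return {f: s / len(orders) for f, s in shap.items()}

phi = shapley_values(FEATURES, V)
```

The efficiency property (the values sum to the full model's gain over the baseline) is what lets the study split a participant's predicted skill level into per-metric contributions and target the weakest one in training.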
8. Academic Stress Assessment Using Virtual Reality as an Educational Tool in Spine Surgery
Authors: Diana Chávez Lizárraga, Jesús Alberto Pérez Contreras, Fernando Alvarado Gómez, Evelyn Quintero Medina, Emmanuel Cantú Chávez, Iván Ulises Sámano López, Ana Sofía Peña Blesa. Open Journal of Modern Neurosurgery, 2024, Issue 2, pp. 114-123 (10 pages)
Introduction: The evaluation of academic stress in medical students and residents is a topic of significant interest, given the considerable challenges they face during their learning process with traditional teaching methods. The use of technologies like virtual reality presents an opportunity to enhance their skills through simulation and training. The main objective of this study is to qualitatively assess the stress levels experienced by medical students and residents when virtual reality is integrated into their current learning methods, aiming to improve their ability to manage stressors in their practice. Material and Methods: A questionnaire was administered to 12 medical students and 12 traumatology and orthopedics residents to evaluate levels of academic stress using the SISCO inventory. Stress levels were calculated by transforming average values into percentages, with the following criteria: 0 to 33% for Mild Stress, 34 to 66% for Moderate Stress, and 67 to 100% for Deep Stress. A virtual reality class focused on spine surgery was then provided; both medical students and residents were trained using the Non Nocere SharpSurgeon software platform and Oculus Quest 2 virtual reality glasses. At the end of the session, a second questionnaire related to the virtual reality practice was administered with the same evaluation criteria, and a comparative analysis was carried out. Results: 12 undergraduate students from Hospital Angeles Mexico, CDMX, and 12 traumatology and orthopedics residents at Hospital Santa Fe, Bogotá, were evaluated. The students in CDMX reported an average qualitative stress of 28.50% during habitual practice, which decreased to an average of 14.67% after virtual reality practice. Residents in Bogotá experienced an average qualitative stress of 30.50% with their current learning methods, which fell to an average of 13.92% after using virtual reality. These findings indicate that the use of virtual reality has a positive impact on qualitatively reducing stress levels. Conclusions: The use of virtual reality as a learning method for medical students and residents qualitatively improves stress levels. Further studies are required to define the potential uses of virtual reality to improve learning methods and emotional state in medical students and residents, and quantitative assessment is needed to validate the training as a certified learning method.
Keywords: Virtual reality, Academic Stress, Learning Strategies, Spine Surgery, Training
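The study's SISCO banding rule (0-33% mild, 34-66% moderate, 67-100% deep) maps directly to a small helper. One assumption is flagged in the code: the abstract leaves fractional values between 33% and 34% unspecified, so values up to 33% inclusive are treated as mild here.

```python
def stress_band(percent: float) -> str:
    """Map a SISCO-style stress percentage to the bands used in the study.

    Assumption: fractional values just above 33% (or 66%) fall into the
    next band, since the abstract only states integer range endpoints.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    if percent <= 33:
        return "Mild Stress"
    if percent <= 66:
        return "Moderate Stress"
    return "Deep Stress"
```

Applied to the reported averages, both the students' pre-VR (28.50%) and post-VR (14.67%) values fall in the mild band, so the reported improvement is a within-band reduction rather than a band change.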
9. Research and Application of Caideng Model Rendering Technology for Virtual Reality
Authors: Xuefeng Wang, Yadong Wu, Yan Luo, Dan Luo. Journal of Computer and Communications, 2024, Issue 4, pp. 95-110 (16 pages)
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR. To address the problem that the lighting effects of Caideng (Chinese festive lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting effects of Caideng scenes to design an optimized lighting-model algorithm that fuses the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effects of Caideng models in a virtual environment, and image optimization processing methods enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and provides a good immersive experience.
Keywords: Virtual reality, Caideng Model, Lighting Model, Point Light, Rendering
10. Training on LSA lifeboat operation using Mixed Reality
Authors: Spyridon Nektarios BOLIERAKIS, Margarita KOSTOVASILI, Lazaros KARAGIANNIDIS, Angelos AMDITIS. Virtual Reality & Intelligent Hardware, 2023, Issue 3, pp. 201-212 (12 pages)
Background: This work provides an overview of the use of Mixed Reality (MR) technology in the maritime industry for training purposes. Current training procedures cover a broad range of procedural operations for Life-Saving Appliance (LSA) lifeboats; however, several gaps and limitations have been identified in practical training that can be addressed through the use of MR. Augmented, virtual, and mixed reality applications are already used in various fields of the maritime industry, but their full potential has not yet been exploited. The SafePASS project aims to exploit the advantages of MR in maritime training by introducing an application focused on the use and maintenance of LSA lifeboats. Methods: An MR training application is proposed that supports the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of a training tool that trains crew members in handling lifeboats, a training evaluation tool that allows trainers to assess the performance of trainees, and a maintenance tool that supports crew members in performing maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions of the trainee with the equipment. Results: The application has been tested and validated both in a lab environment and using a real LSA lifeboat, resulting in an improved experience for the users, who provided feedback and recommendations for further development. The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized the added value of the application and suggested potential future exploitation areas. Conclusions: The MR training application has been evaluated as very promising in providing a user-friendly training environment that can support crew members in LSA lifeboat operation and maintenance, while remaining subject to improvement and further expansion.
Keywords: Augmented reality, Mixed reality, Maritime training, Cruise industry, Lifeboat operation, Lifeboat maintenance, H2020 research project
11. Adaptive navigation assistance based on eye movement features in virtual reality (Cited 1)
Authors: Song ZHAO, Shiwei CHENG. Virtual Reality & Intelligent Hardware, 2023, Issue 3, pp. 232-248 (17 pages)
Background: Navigation assistance is essential for users roaming virtual reality scenes; however, the traditional navigation method requires users to manually request a map for viewing, which leads to low immersion and poor user experience. Methods: To address this issue, we first collected data on users who required navigation assistance in a virtual reality environment, including various eye movement features such as gaze fixation, pupil size, and gaze angle. Subsequently, we used the boosting-based XGBoost algorithm to train a prediction model, and finally used it to predict whether users require navigation assistance in a roaming task. Results: On evaluation, the accuracy, precision, recall, and F1-score of our model all reached approximately 95%. In addition, by applying the model to a virtual reality scene, an adaptive navigation assistance system based on the user's real-time eye movement data was implemented. Conclusions: Compared with traditional navigation assistance methods, our new adaptive method enables the user to be more immersed and effective while roaming in a virtual reality (VR) environment.
Keywords: Eye movement, NAVIGATION, Human-computer interaction, Virtual reality, Eye tracking
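The prediction pipeline, boosted decision trees over eye movement features, can be sketched with a stdlib-only stand-in: hand-rolled AdaBoost over decision stumps instead of the paper's XGBoost, trained on synthetic, cleanly separable data. The feature layout (fixation duration, pupil size, gaze-angle variance) and all numbers are assumptions for illustration.

```python
import math
import random

# Synthetic samples: [fixation_duration, pupil_size, gaze_angle_var], label
# +1 = needs navigation help, -1 = does not. Illustrative data only.
random.seed(7)
def sample(needs_help):
    base = (0.9, 4.5, 30.0) if needs_help else (0.3, 3.0, 10.0)
    return [b + random.gauss(0, 0.05 * b) for b in base], 1 if needs_help else -1

data = [sample(i % 2 == 0) for i in range(40)]
X, y = [x for x, _ in data], [t for _, t in data]

def train_stump(X, y, w):
    """Pick the (feature, threshold, sign) stump with least weighted error."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[f] >= thr else -sign for x in X]
                err = sum(wi for wi, p, t in zip(w, pred, y) if p != t)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, thr, sign = train_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # stump weight
        ensemble.append((alpha, f, thr, sign))
        # Re-weight samples: misclassified points gain weight.
        w = [wi * math.exp(-alpha * t * (sign if x[f] >= thr else -sign))
             for wi, x, t in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x[f] >= thr else -s) for a, f, thr, s in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(X, y)
acc = sum(predict(model, x) == t for x, t in zip(X, y)) / len(y)
```

Gradient boosting as in XGBoost differs in how each round's residual target is formed, but the core idea shown here, an additive ensemble of weak tree learners over the eye-feature vector, is the same.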
12. Technical requirements and usage techniques for photogrammetry: the Reality Capture software as an example (Cited 1)
Author: 周刊. 印刷杂志 (Printing Journal), 2023, Issue 1, pp. 46-48 (3 pages)
I. Introduction: Reality Capture is a photogrammetry software package that synthesizes three-dimensional models. From multiple sets of images taken from multiple angles, the software automatically identifies matching feature points across the images and precisely computes a three-dimensional image. Reality Capture is already used in many industries, and certain requirements and techniques can greatly shorten modeling time and improve the quality of the resulting 3D models. This article analyzes how to acquire suitable images and discusses modeling techniques.
Keywords: Reality Capture, photography, measurement, image processing
13. Visualization of real-time displacement time history superimposed with dynamic experiments using wireless smart sensors and augmented reality
Authors: Marlon Aguero, Derek Doyle, David Mascarenas, Fernando Moreu. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2023, Issue 3, pp. 573-588 (16 pages)
Wireless smart sensors (WSS) process field data and inform inspectors about infrastructure health and safety. In bridge engineering, inspectors need reliable data about changes in displacements under loads to make correct decisions about repairs and replacements. Access to displacement information in the field and in real time remains a challenge, as inspectors do not see the data in real time: displacement data from WSS in the field undergoes additional processing and is viewed at a different location. If inspectors were able to see structural displacements in real time at the locations of interest, they could conduct additional observations, creating a new, information-based decision-making reality in the field. This paper develops a new, human-centered interface that provides inspectors with real-time access to actionable structural data during inspection and monitoring, enhanced by augmented reality (AR). It summarizes and evaluates the development and validation of the new human-infrastructure interface in laboratory experiments. The experiments demonstrate that the interface, which processes all calculations on the AR device, accurately estimates dynamic displacements in comparison with a laser reference. Using this new AR interface tool, inspectors can observe and compare displacement data, share it across space and time, visualize displacements in time history, and understand structural deflection more accurately through a displacement time-history visualization.
Keywords: wireless smart sensor, monitoring, augmented reality, DISPLACEMENT, ACCELERATION, human-infrastructure interface
14. Defect inspection of indoor components in buildings using deep learning object detection and augmented reality
Authors: Shun-Hsiang Hsu, Ho-Tin Hung, Yu-Qi Lin, Chia-Ming Chang. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2023, Issue 1, pp. 41-54 (14 pages)
Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, implementations of visual inspection are substantially time-consuming, labor-intensive, and error-prone because useful auxiliary tools that can instantly highlight defects or damage locations in images are not available. Therefore, an advanced building inspection framework is developed and implemented with augmented reality (AR) and real-time damage detection in this study. In this framework, engineers walk around and film every corner of the building interior to generate the three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and the locations of detected defects are marked in this 3D environment. Defect areas can be measured with centimeter-level accuracy using the light detection and ranging (LiDAR) sensor on the device. All required damage information, including defect positions and sizes, is collected at once and can be rendered in 2D and 3D views. Finally, visual inspection can be conducted efficiently, and the previously generated environment can be reloaded to re-localize existing defect marks for future maintenance and change observation. Moreover, the proposed framework is implemented and verified in an underground parking lot of a building to detect and quantify surface defects on concrete components. As the results show, conventional building inspection is significantly improved with the aid of the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
Keywords: visual inspection, damage detection, augmented reality, damage quantification, deep learning
15. Nanomaterial-based flexible sensors for metaverse and virtual reality applications
Authors: Jianfei Wang, Jiao Suo, Zhengxun Song, Wen Jung Li, Zuobin Wang. International Journal of Extreme Manufacturing (SCIE, EI, CAS, CSCD), 2023, Issue 3, pp. 407-439 (33 pages)
Nanomaterial-based flexible sensors (NMFSs) can be tightly attached to the human skin or integrated with clothing to monitor human physiological information, provide medical data, or explore metaverse spaces. Nanomaterials have been widely incorporated into flexible sensors due to their facile processing, material compatibility, and unique properties. This review highlights recent advancements in NMFSs involving various nanomaterial frameworks such as nanoparticles, nanowires, and nanofilms. Different triggering interaction interfaces between NMFSs and metaverse/virtual reality (VR) applications, e.g., skin-mechanics-triggered, temperature-triggered, magnetically triggered, and neural-triggered interfaces, are discussed. In the context of interfacing the physical and virtual worlds, machine learning (ML) has emerged as a promising tool for processing sensor data to control avatars in metaverse/VR worlds, and many ML algorithms have been proposed for virtual interaction technologies. This paper discusses the advantages, disadvantages, and prospects of NMFSs in metaverse/VR applications.
Keywords: flexible sensors; metaverse; virtual reality; human-computer interaction; machine learning
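The ML step the review describes maps raw sensor signals to avatar commands. A toy sketch of that mapping with a nearest-neighbour classifier; the five "strain channels" and gesture labels are synthetic placeholders, not data or algorithms from the review:

```python
# Sketch: classifying flexible-sensor readings (e.g. five finger-strain
# channels from a hypothetical data glove) into hand gestures that could
# drive an avatar in a VR/metaverse scene.

import math

def classify(sample, train):
    """Return the label of the training sample nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda t: dist(sample, t[0]))[1]

# (five normalised strain-channel readings, gesture label)
train = [
    ((0.9, 0.9, 0.9, 0.9, 0.9), "fist"),
    ((0.1, 0.1, 0.1, 0.1, 0.1), "open_hand"),
    ((0.1, 0.9, 0.9, 0.9, 0.9), "thumbs_up"),
]

gesture = classify((0.85, 0.95, 0.88, 0.90, 0.92), train)  # -> "fist"
```

Real systems in the review use far richer models (CNNs, RNNs), but the interface contract is the same: a vector of sensor channels in, a discrete interaction command out.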
Video Conference System in Mixed Reality Using a Hololens
16
Authors: Baolin Sun, Xuesong Gao, Weiqiang Chen, Qihao Sun, Xiaoxiao Cui, Hao Guo, Cishahayo Remesha Kevin, Shuaishuai Liu, Zhi Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 1, pp. 383-403.
The mixed reality conference system proposed in this paper is a robust, real-time video conference application that makes up for the simple interaction and lack of immersion and realism of traditional video conferencing, realizing the entire process of a holographic video conference from client to cloud to client. This paper focuses on designing and implementing a video conference system based on AI segmentation technology and mixed reality. Several components of the mixed reality conference system are discussed, including data collection, data transmission, processing, and mixed reality presentation. The data layer is mainly used for data collection, integration, and video and audio codecs. The network layer uses WebRTC to realize peer-to-peer data communication. The data processing layer is the core part of the system, mainly for human video matting and human-computer interaction, which is the key to realizing mixed reality conferences and improving the interactive experience. The presentation layer includes the login interface of the mixed reality conference system, the presentation of real-time matting of human subjects, and the presented objects. With the mixed reality conference system, conference participants in different places can see each other in real time in their mixed reality scene and share presentation content and 3D models based on mixed reality technology for a more interactive and immersive experience.
Keywords: mixed reality; AI segmentation; hologram; video conference; WebRTC
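The human-matting step above ultimately feeds a standard compositing operation: blending the segmented person into the virtual scene with a per-pixel alpha matte. A minimal sketch of that blend on single RGB pixels; the paper's actual segmentation network is not reproduced here:

```python
# Sketch: the core compositing step behind a matting pipeline -- blending a
# segmented foreground pixel into a virtual-scene background pixel using an
# alpha value produced by the segmentation model (1.0 = person, 0.0 = scene).

def composite(fg, bg, alpha):
    """Alpha-blend one RGB pixel: out = alpha*fg + (1 - alpha)*bg."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

# A fully opaque foreground pixel replaces the background entirely,
# while a half-transparent edge pixel (hair, motion blur) blends the two.
opaque = composite((255, 0, 0), (0, 0, 255), 1.0)
edge = composite((255, 0, 0), (0, 0, 255), 0.5)
```

Applied per pixel over the whole frame (in practice vectorized on the GPU), this is what lets a remote participant appear "holographically" inside another user's mixed reality scene.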
Three-dimensional automatic artificial intelligence driven augmented-reality selective biopsy during nerve-sparing robot-assisted radical prostatectomy:A feasibility and accuracy study
17
Authors: Enrico Checcucci, Alberto Piana, Gabriele Volpi, Pietro Piazzolla, Daniele Amparore, Sabrina De Cillis, Federico Piramide, Cecilia Gatti, Ilaria Stura, Enrico Bollito, Federica Massa, Michele Di Dio, Cristian Fiori, Francesco Porpiglia. Asian Journal of Urology (CSCD), 2023, No. 4, pp. 407-415.
Objective: To evaluate the accuracy of our new three-dimensional (3D) automatic augmented reality (AAR) system guided by artificial intelligence in identifying the tumour's location at the level of the preserved neurovascular bundle (NVB) at the end of the extirpative phase of nerve-sparing robot-assisted radical prostatectomy. Methods: In this prospective study, we enrolled patients with prostate cancer (clinical stages cT1c-3, cN0, and cM0) with a positive index lesion at target biopsy, suspicious for capsular contact or extracapsular extension at preoperative multiparametric magnetic resonance imaging. Patients underwent robot-assisted radical prostatectomy at San Luigi Gonzaga Hospital (Orbassano, Turin, Italy) from December 2020 to December 2021. At the end of the extirpative phase, thanks to our new AI-driven AAR system, the virtual prostate 3D model allowed the tumour's location to be identified at the level of the preserved NVB and a selective excisional biopsy to be performed, sparing the remaining portion of the bundle. Perioperative and postoperative data were evaluated, especially focusing on positive surgical margin (PSM) rates, potency, continence recovery, and biochemical recurrence. Results: Thirty-four patients were enrolled. In 15 (44.1%) cases, the target lesion was in contact with the prostatic capsule at multiparametric magnetic resonance imaging (Wheeler grade L2), while in 19 (55.9%) cases extracapsular extension was detected (Wheeler grade L3). 3D AAR guided biopsies were negative in all pathological tumour stage 2 (pT2) patients, while they revealed the presence of cancer in 14 cases in the pT3 cohort (14/16; 87.5%). PSM rates were 0% and 7.1% in the pathological stages pT2 and pT3 (<3 mm, Gleason score 3), respectively. Conclusion: With the proposed 3D AAR system, it is possible to correctly identify the lesion's location on the NVB in 87.5% of pT3 patients and perform a 3D-guided tailored nerve-sparing procedure even in locally advanced disease, without compromising oncological safety in terms of PSM rates.
Keywords: prostate cancer; augmented reality; artificial intelligence; robotics; radical prostatectomy
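The headline figures in the abstract are simple proportions of the reported counts; a quick arithmetic check, using only numbers stated above:

```python
# Sketch: reproducing the reported detection rate from the abstract's counts.
# 14 of 16 pT3-cohort biopsies revealed cancer; 34 patients were enrolled,
# of whom 15 had capsular contact (grade L2) and 19 extracapsular extension (L3).
pt3_positive, pt3_total = 14, 16
detection_rate_pct = 100 * pt3_positive / pt3_total   # 87.5

l2_pct = round(100 * 15 / 34, 1)   # 44.1
l3_pct = round(100 * 19 / 34, 1)   # 55.9
```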
Real Objects Understanding Using 3D Haptic Virtual Reality for E-Learning Education
18
Authors: Samia Allaoua Chelloug, Hamid Ashfaq, Suliman A. Alsuhibany, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Ahmad Jalal, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 1607-1624.
In the past two decades, there has been a lot of work on computer vision technology, incorporating many tasks from basic filtering to image classification. The major research areas of this field include object detection and object recognition. Moreover, wireless communication technologies are now widely adopted, and they have changed the way education is delivered. There have been several phases of change to the traditional system. Perceiving three dimensions (3D) from a two-dimensional (2D) image is a demanding task: humans can perceive depth easily, but constructing 3D models manually in software is time-consuming. First, the blackboard has been replaced by projectors and other digital screens so that people can understand concepts better through visualization. Second, computer labs in schools are now more common than ever. Third, online classes have become a reality. However, transferring to online education or e-learning is not without challenges. Therefore, we propose a method for improving the efficiency of e-learning. Our proposed system consists of two-and-a-half-dimensional (2.5D) feature extraction using machine learning and image processing. These features are then utilized to generate a 3D mesh using an ellipsoidal deformation method, after which 3D bounding box estimation is applied. Our results show that there is a need to move to 3D virtual reality (VR) with haptic sensors in the field of e-learning for a better understanding of real-world objects. Thus, people will have more information compared with traditional or simple online education tools. We compare our results with the ShapeNet dataset to check the accuracy of our proposed method. Our proposed system achieved an accuracy of 90.77% on the plane class, 85.72% on the chair class, and 72.14% on the car class; the mean accuracy of our method is 70.89%.
Keywords: artificial intelligence; e-learning; online education system; computer vision; virtual reality; 3D haptic
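The "ellipsoidal deformation" idea above can be illustrated by starting from points on a unit sphere and scaling them anisotropically into an ellipsoid whose semi-axes come from the estimated 2.5D features. A minimal sketch; the sampling grid and axis values are hypothetical, not the paper's actual method:

```python
# Sketch: generating ellipsoid surface points by deforming a unit sphere.
# Semi-axes (a, b, c) would come from estimated 2.5D object dimensions.

import math

def ellipsoid_points(a, b, c, n_theta=4, n_phi=8):
    """Sample an (a, b, c) ellipsoid via a latitude/longitude grid on the unit sphere."""
    pts = []
    for i in range(1, n_theta):                 # skip the two poles
        theta = math.pi * i / n_theta           # polar angle
        for j in range(n_phi):
            phi = 2 * math.pi * j / n_phi       # azimuthal angle
            x = math.sin(theta) * math.cos(phi)
            y = math.sin(theta) * math.sin(phi)
            z = math.cos(theta)
            pts.append((a * x, b * y, c * z))   # anisotropic scaling = deformation
    return pts

pts = ellipsoid_points(2.0, 1.0, 0.5)
# Every sampled point satisfies (x/a)^2 + (y/b)^2 + (z/c)^2 = 1.
```

In a full pipeline these vertices would be connected into triangles and further deformed per-vertex toward the object silhouette; only the initial ellipsoid step is shown here.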
Adaptive Consistent Management to Prevent System Collapse on Shared Object Manipulation in Mixed Reality
19
Authors: Jun Lee, Hyun Kwon. Computers, Materials & Continua (SCIE, EI), 2023, No. 4, pp. 2025-2042.
A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict the potential tasks or support of non-owners, thus increasing the working time because of waiting to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce conflicts and working time. We classify shared object manipulation in mixed reality into detailed goals and tasks. Then, we model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts the concurrency control mechanism for shared object manipulation according to the goal-task-ownership model. With the proposed concurrency control scheme, users can hold shared objects and move and rotate them together in a mixed reality environment similar to real industrial sites. Additionally, the system uses the MS HoloLens and Myo sensors to recognize user inputs and presents results in the mixed reality environment. The proposed method is applied to installing an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduced the number of conflicts, waiting time, and total working time.
Keywords: mixed reality; upper body motion recognition; shared object manipulation; adaptive task concurrency control
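The adaptive idea above, granting exclusive ownership only when tasks actually conflict, can be sketched with a small compatibility table. The task taxonomy below is a simplified stand-in for the paper's goal-task-ownership model, not its actual rules:

```python
# Sketch: an adaptive lock on a shared object. Compatible task pairs
# (e.g. two users jointly holding an object) proceed concurrently;
# conflicting pairs (e.g. two simultaneous moves) are denied.

class SharedObject:
    # unordered task pairs that may run concurrently on the same object
    COMPATIBLE = {("hold", "hold"), ("hold", "move"), ("move", "hold")}

    def __init__(self):
        self.active = {}                 # user -> task currently granted

    def request(self, user, task):
        for other_task in self.active.values():
            if (other_task, task) not in self.COMPATIBLE:
                return False             # conflict: deny, requester must wait
        self.active[user] = task
        return True

    def release(self, user):
        self.active.pop(user, None)

obj = SharedObject()
granted_hold = obj.request("alice", "hold")   # granted: object was free
granted_move = obj.request("bob", "move")     # granted: compatible with "hold"
denied_move = obj.request("carol", "move")    # denied: "move"+"move" conflict
```

A conventional exclusive lock would have denied Bob as well; the compatibility table is what recovers the concurrency and, in the paper's experiments, the reduced waiting time.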
Spatial Multi-Presence System to Increase Security Awareness for Remote Collaboration in an Extended Reality Environment
20
Authors: Jun Lee, Hyun Kwon. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 369-384.
Enhancing participants' sense of presence is an important issue for security awareness in remote collaboration in extended reality. However, conventional methods are insufficient for staying aware of remote situations and for searching and controlling remote workspaces. This study proposes a spatial multi-presence system that simultaneously provides multiple spaces while allowing users to rapidly explore them as they perform collaborative work in an extended reality environment. The proposed system provides methods for arranging and manipulating remote and personal spaces by creating an annular screen that is invisible to the user. The user can freely arrange remote participants and their workspaces on the annular screen. Because users can simultaneously view the various spaces arranged on the annular screen, they can carry out collaborative work while feeling the presence of multiple spaces at once, and can be fully immersed in a specific space. In addition, the personal spaces where users work can also be arranged through the annular screen. According to the performance evaluations, users participating in remote collaborative work can visualize the spaces of multiple users simultaneously and feel their presence, thereby increasing their understanding of the spaces. Moreover, the proposed system reduces the time needed to perform tasks and to become aware of emergencies in remote workspaces.
Keywords: extended reality; spatial presence; multi-presence; security awareness
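The annular-screen arrangement above reduces to placing N workspace panels at equal angles on a ring around the user. A minimal geometric sketch; the radius and the equal-spacing policy are assumptions for illustration, not the paper's layout algorithm:

```python
# Sketch: evenly arranging n workspace panels on an annular (ring-shaped)
# screen centred on the user, returning each panel's position on the ring.

import math

def annular_layout(n_spaces, radius=3.0):
    """Place n_spaces at equal angles on a ring of the given radius (user at origin)."""
    slots = []
    for k in range(n_spaces):
        angle = 2 * math.pi * k / n_spaces
        x = radius * math.cos(angle)
        z = radius * math.sin(angle)
        slots.append((x, z, angle))    # each panel faces inward toward the user
    return slots

slots = annular_layout(4)
# Four panels at 90-degree intervals, all 3 m from the user.
```

Because every panel sits at the same distance, turning in place is enough to bring any remote space into view, which is the mechanism behind the "rapid exploration" the abstract describes.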