Funding: Supported in part by the Major Project for New Generation of AI (2018AAA0100400), the National Natural Science Foundation of China (61836014, U21B2042, 62072457, 62006231), and the InnoHK Program.
Abstract: Monocular 3D object detection is challenging due to the lack of accurate depth information. Some methods estimate pixel-wise depth maps with off-the-shelf depth estimators and then use them as an additional input to augment the RGB images. Depth-based methods either convert the estimated depth maps to pseudo-LiDAR and apply LiDAR-based object detectors, or focus on fusion learning between image and depth. However, they show limited performance and efficiency because of depth inaccuracy and complex convolutional fusion schemes. Different from these approaches, our proposed depth-guided vision transformer with normalizing flows (NF-DVT) network uses normalizing flows to build priors on the depth maps and thereby obtain more accurate depth information. We then develop a novel Swin-Transformer-based backbone with a fusion module that processes RGB image patches and depth map patches in two separate branches and fuses them with cross-attention so that the branches exchange information with each other. Furthermore, with the help of pixel-wise relative depth values in the depth maps, we develop new relative position embeddings in the cross-attention mechanism to capture more accurate sequence ordering of the input tokens. Our method is the first Swin-Transformer-based backbone architecture for monocular 3D object detection. Experimental results on the KITTI and the challenging Waymo Open datasets show the effectiveness of our proposed method and its superior performance over previous counterparts.
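The abstract above describes cross-attention between an RGB branch and a depth branch but gives no implementation detail; the following is a minimal PyTorch sketch of what such a bidirectional cross-attention fusion step could look like. All module names, dimensions, and the residual/normalization choices are assumptions for illustration, and the paper's depth-guided relative position embedding is not reproduced here.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal sketch: two token streams (RGB patches, depth patches)
    exchange information via cross-attention. Dimensions and layer
    choices are illustrative assumptions, not the paper's code."""

    def __init__(self, dim: int = 96, num_heads: int = 4):
        super().__init__()
        # RGB tokens attend to depth tokens, and vice versa.
        self.rgb_to_depth = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.depth_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_depth = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, depth_tokens):
        # rgb_tokens, depth_tokens: (batch, num_patches, dim)
        # Query comes from one branch, key/value from the other.
        rgb_fused, _ = self.rgb_to_depth(query=rgb_tokens,
                                         key=depth_tokens,
                                         value=depth_tokens)
        depth_fused, _ = self.depth_to_rgb(query=depth_tokens,
                                           key=rgb_tokens,
                                           value=rgb_tokens)
        # Residual connections keep each branch's original content.
        rgb_out = self.norm_rgb(rgb_tokens + rgb_fused)
        depth_out = self.norm_depth(depth_tokens + depth_fused)
        return rgb_out, depth_out


if __name__ == "__main__":
    fusion = CrossAttentionFusion(dim=96, num_heads=4)
    rgb = torch.randn(2, 49, 96)    # e.g. a 7x7 patch grid
    depth = torch.randn(2, 49, 96)
    r, d = fusion(rgb, depth)
    print(r.shape, d.shape)         # torch.Size([2, 49, 96]) each
```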
Funding: National Natural Science Foundation of China (Grant No. 62101138), Shandong Natural Science Foundation (Grant No. ZR2021QD148), Guangdong Natural Science Foundation (Grant No. 2022A1515012573), and Guangzhou Basic and Applied Basic Research Project (Grant No. 202102020701), which provided funds for publishing this paper.
Abstract: As positioning sensors, edge computation power, and communication technologies continue to develop, a moving agent can now sense its surroundings and communicate with other agents. By receiving spatial information from both its environment and other agents, an agent can use various methods and sensor types to localize itself. With its high flexibility and robustness, collaborative positioning has become a widely used method in both military and civilian applications. This paper introduces the fundamental concepts and applications of collaborative positioning and reviews recent progress in the field based on cameras, LiDAR (Light Detection and Ranging), wireless sensors, and their integration. The paper compares current methods with respect to their sensor type, summarizes their main paradigms, and analyzes their evaluation experiments. Finally, the paper discusses the main challenges and open issues that require further research.
Funding: Supported by the Innovation and Entrepreneurship Project for College Students of the First Affiliated Hospital of Guangxi Medical University in 2022 and the Development and Application of Appropriate Medical and Health Technologies in Guangxi (No. S2021093).
Abstract: AIM: To investigate the frequency and associated factors of accommodation and non-strabismic binocular vision dysfunction among medical university students. METHODS: Totally 158 student volunteers underwent routine vision examination in the optometry clinic of Guangxi Medical University. Their data were used to identify the different types of accommodation and non-strabismic binocular vision dysfunction and to determine their frequency. Correlation analysis and logistic regression were used to examine the factors associated with these abnormalities. RESULTS: Overall, 36.71% of the subjects had accommodation and non-strabismic binocular vision issues, with 8.86% attributed to accommodation dysfunction and 27.85% to binocular abnormalities. Convergence insufficiency (CI) was the most common abnormality, accounting for 13.29%. Subjects with these abnormalities experienced higher levels of eyestrain (χ2=69.518, P<0.001). Linear correlations were observed between the difference in binocular spherical equivalent (SE) and the index of horizontal esotropia at distance (r=0.231, P=0.004) and the asthenopia survey scale (ASS) score (r=0.346, P<0.001). Furthermore, the right eye's SE was inversely correlated with the convergence of positive and negative fusion images at close range (r=-0.321, P<0.001), the convergence of negative fusion images at close range (r=-0.294, P<0.001), the vergence facility (VF; r=-0.234, P=0.003), and the set of negative fusion images at far range (r=-0.237, P=0.003). Logistic regression analysis indicated that gender, age, and the difference in right and binocular SE did not influence the emergence of these abnormalities. CONCLUSION: Binocular vision abnormalities are more prevalent than accommodation dysfunction, with CI being the most frequent type. Greater binocular refractive disparity leads to more severe eyestrain symptoms.
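For readers who want to reproduce this style of analysis, the sketch below runs a chi-square test, Pearson correlations, and a logistic regression in Python. The variables and values are synthetic placeholders, not the study's data; only the general workflow is illustrated.

```python
import numpy as np
from scipy.stats import chi2_contingency, pearsonr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 158  # sample size matching the study; all values below are synthetic

# Synthetic placeholder variables (not the study's dataset).
se_right_eye = rng.normal(-2.0, 1.5, n)          # spherical equivalent, right eye
se_difference = np.abs(rng.normal(0.0, 0.5, n))  # binocular SE difference
ass_score = rng.normal(20.0, 5.0, n)             # asthenopia survey scale score
has_dysfunction = rng.integers(0, 2, n)          # 1 = accommodation/BV dysfunction
eyestrain = rng.integers(0, 2, n)                # 1 = reports eyestrain

# Chi-square test: dysfunction vs. eyestrain (2x2 contingency table).
table = np.array([[np.sum((has_dysfunction == a) & (eyestrain == b))
                   for b in (0, 1)] for a in (0, 1)])
chi2, p_chi, _, _ = chi2_contingency(table)

# Pearson correlation: SE difference vs. eyestrain score.
r, p_r = pearsonr(se_difference, ass_score)

# Logistic regression: which factors predict dysfunction?
X = np.column_stack([se_right_eye, se_difference])
model = LogisticRegression().fit(X, has_dysfunction)

print(f"chi2={chi2:.3f} (p={p_chi:.3f}), r={r:.3f} (p={p_r:.3f})")
print("logistic coefficients:", model.coef_)
```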
Abstract: To address the problems in current remote-sensing crop classification research that deep learning models undersample spectral-temporal and spatial information features, and that crop extraction still suffers from blurred boundaries, omissions, and false extractions, a deep learning model named Vision Transformer-long short term memory (ViTL) is proposed. The ViTL model integrates three key modules: dual-path Vision Transformer feature extraction, spatiotemporal feature fusion, and long short-term memory (LSTM) temporal classification. The dual-path Vision Transformer feature extraction module captures the spatiotemporal feature correlations of the imagery, with one path extracting spatial classification features and the other extracting temporal change features; the spatiotemporal feature fusion module cross-fuses the multi-temporal feature information; and the LSTM temporal classification module captures multi-temporal dependencies and outputs the classification. Using remote-sensing theory and methods based on multi-temporal satellite imagery, crop information was extracted for Nehe City, Qiqihar, Heilongjiang Province. The results show that the ViTL model performs well, achieving an overall accuracy (OA) of 0.8676, a mean intersection over union (MIoU) of 0.6987, and an F1 score of 0.8175. Compared with other widely used deep learning methods, including the three-dimensional convolutional neural network (3-D CNN), the two-dimensional convolutional neural network (2-D CNN), and the long short-term memory network (LSTM), the F1 score of the ViTL model is 9%-12% higher, showing a significant advantage. The ViTL model overcomes the undersampling of temporal and spatial information features in crop classification from multi-temporal remote-sensing imagery and provides a new approach for accurate and efficient crop classification.
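As a rough illustration of the dual-branch-plus-LSTM idea, the following PyTorch sketch feeds per-date patch features through a simplified transformer encoder and then an LSTM temporal classifier. The shared encoder standing in for the paper's two Vision Transformer paths, the fusion by pooling, and all hyperparameters are assumptions for illustration, not the ViTL implementation.

```python
import torch
import torch.nn as nn

class ViTLSketch(nn.Module):
    """Illustrative stand-in for a dual-branch ViT + LSTM classifier.
    A single shared transformer encoder replaces the paper's two
    Vision-Transformer paths; all hyperparameters are assumptions."""

    def __init__(self, in_dim=64, embed_dim=128, num_classes=6):
        super().__init__()
        self.embed = nn.Linear(in_dim, embed_dim)
        # Simplified per-date encoder (spatial features per acquisition).
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Temporal classification over the multi-date sequence.
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (batch, timesteps, patches, in_dim) multi-temporal patch features
        b, t, p, d = x.shape
        tokens = self.embed(x.reshape(b * t, p, d))
        spatial = self.encoder(tokens).mean(dim=1)       # pool patches per date
        temporal, _ = self.lstm(spatial.reshape(b, t, -1))
        return self.head(temporal[:, -1])                # classify from last step


if __name__ == "__main__":
    model = ViTLSketch()
    x = torch.randn(2, 5, 16, 64)   # 2 samples, 5 dates, 16 patches, 64 features
    print(model(x).shape)           # torch.Size([2, 6])
```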
Abstract: With the rapid development of drones and autonomous vehicles, miniaturized and lightweight vision sensors that can track targets are of great interest. Limited by their flat structure, conventional image sensors rely on a large number of lenses to achieve the corresponding functions, which increases the overall volume and weight of the system.
Funding: Supported by the National Natural Science Foundation of China (No. 81870680).
Abstract: AIM: To establish an animal model of form deprivation amblyopia based on a simulated cataract intraocular lens (IOL). METHODS: Poly(dimethyl siloxane)-SiO_(2) thin films (PSF) with different degrees of opacity were prepared as IOL materials. The light transmission of the PSF-IOL was measured, and its in vitro biosafety was determined by cell counting kit (CCK)-8 assay using the HLEC-B3 and ARPE-19 cell lines. Subsequently, the in vivo safety was determined by implanting the PSF-IOL with 10 wt% SiO_(2) into the right eyes of New Zealand white rabbits (PSF-IOL group) and comparing with two control groups: a contralateral comparison group (the fellow eye) and a normal control (NC) group (binocular normal rabbits without intervention). Flash visual-evoked potentials (F-VEPs) were measured to verify amblyopia. RESULTS: PSFs containing 0, 2 wt%, and 10 wt% SiO_(2) were successfully constructed. The PSF without SiO_(2) was transparent, while the 10 wt% SiO_(2) PSF was completely opaque. The PSF did not induce unwanted cytotoxicity in HLEC-B3 and ARPE-19 cells in vitro. In vivo, the PSF-IOL with 10 wt% SiO_(2) was also non-toxic, and no significant inflammation or structural changes occurred after four weeks of PSF-IOL implantation. Finally, F-VEP measurements in the IOL-simulated congenital cataract rabbits suggested tentative amblyopia. CONCLUSION: A PSF-IOL that mimics a cataract was created, and the IOL-simulated congenital cataract rabbit provides a novel form deprivation model. The model can be established quickly and stably and holds great potential for future study.
Funding: Project supported by the National Science Fund for Distinguished Young Scholars (Grant No. T2125014), the Special Fund for Research on National Major Research Instruments of the National Natural Science Foundation of China (Grant No. 11927808), and the CAS Key Technology Research and Development Team Project (Grant No. GJJSTD20200005).
Abstract: Atom tracking technology enhanced with innovative algorithms has been implemented in this study, using a comprehensive suite of controllers and software independently developed domestically. Leveraging an on-board field-programmable gate array (FPGA) with a core frequency of 100 MHz, our system performs reading and writing operations across 16 channels and completes discrete incremental proportional-integral-derivative (PID) calculations within 3.4 microseconds. Building on this foundation, gradient and extremum algorithms are further integrated, incorporating circular and spiral scanning modes with a horizontal movement accuracy of 0.38 pm. This integration improves real-time performance and significantly increases the accuracy of atom tracking, which achieves an equivalent precision of at least 142 pm on a highly oriented pyrolytic graphite (HOPG) surface under room-temperature atmospheric conditions. By applying computer vision and image processing algorithms, atom tracking can also be used when scanning a large area. These techniques consist primarily of two algorithms: the region of interest (ROI)-based feature matching algorithm, which achieves 97.92% accuracy, and the feature-description-based matching algorithm, with an impressive 99.99% accuracy. Both implementations have been tested for scanner drift measurements, and these technologies are scalable and applicable in various domains of scanning probe microscopy, with broad application prospects in the field of nanoengineering.
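The discrete incremental PID mentioned above follows a standard recurrence, u_k = u_{k-1} + Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}); the sketch below shows that recurrence in plain Python. The gains, set-point, and toy plant are illustrative assumptions, and the actual controller in the paper runs on the FPGA rather than in software.

```python
class IncrementalPID:
    """Discrete incremental PID: the controller output is updated by an
    increment computed from the last three error samples. Gains are
    illustrative; the paper's controller runs on a 100 MHz FPGA."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_prev1 = 0.0   # e[k-1]
        self.e_prev2 = 0.0   # e[k-2]
        self.output = 0.0

    def update(self, error: float) -> float:
        # du[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])
        du = (self.kp * (error - self.e_prev1)
              + self.ki * error
              + self.kd * (error - 2.0 * self.e_prev1 + self.e_prev2))
        self.output += du
        self.e_prev2, self.e_prev1 = self.e_prev1, error
        return self.output


if __name__ == "__main__":
    pid = IncrementalPID(kp=0.8, ki=0.2, kd=0.05)
    setpoint, measurement = 1.0, 0.0
    for _ in range(20):
        u = pid.update(setpoint - measurement)
        measurement += 0.1 * u          # toy first-order plant response
    print(f"final measurement: {measurement:.3f}")
```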