Journal Literature
253 articles found
1. Information perception and feedback mechanism and key techniques of multi-modality human-robot interaction for service robots (cited by 1)
Authors: Zhao Qijie. Journal of Shanghai University (English Edition), CAS, 2006, No. 3, p. 281 (1 page).
With the growth of the elderly population and rising health care costs, service robots are playing an increasingly important role in aiding the disabled and the elderly. Many researchers worldwide have paid close attention to healthcare robots and rehabilitation robots. To achieve natural and harmonious communication between the user and a service robot, the information perception/feedback ability and the interaction ability of service robots become critical in many key issues.
Keywords: service robot; multi-modality human-robot interaction; user model; interaction protocol; information perception and feedback.
2. On Multi-Modality in English Listening Teaching
Author: Zhang Rui. International Journal of Technology Management, 2013, No. 12, pp. 115-117 (3 pages).
Listening is the breakthrough point in mastering English: it is not only a requirement of English tests but also the practical application of English knowledge and the embodiment of comprehensive English ability. Listening teaching plays a crucial role in foreign language teaching; however, its results are often unsatisfactory. In recent years, multi-modality theory has attracted many researchers. Given the particularity of listening teaching, it is urgent to apply multi-modality theory to English listening teaching, which can produce very good teaching results.
Keywords: listening; multi-modality; teaching.
3. Multi-modality liver image registration based on multilevel B-splines free-form deformation and L-BFGS optimal algorithm (cited by 1)
Authors: Song Hong, Li Jiajia, Wang Shuliang, Ma Jingting. Journal of Central South University, SCIE/EI/CAS, 2014, No. 1, pp. 287-292 (6 pages).
A new coarse-to-fine strategy was proposed for nonrigid registration of computed tomography (CT) and magnetic resonance (MR) images of the liver. The hierarchical framework consists of an affine transformation and a B-splines free-form deformation (FFD). The affine transformation performs a rough registration targeting the mismatch between the CT and MR images; the B-splines FFD transformation then performs a finer registration by correcting local motion deformation. In the registration algorithm, normalized mutual information (NMI) is used as the similarity measure, and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method is applied for optimization. The algorithm was applied to fully automated registration of liver CT and MR images in three subjects. The results demonstrate that the proposed method not only significantly improves registration accuracy but also reduces running time, making it effective and efficient for nonrigid registration.
Keywords: multi-modal image registration; affine transformation; B-splines free-form deformation (FFD); L-BFGS.
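As a rough illustration of the two-stage pipeline described above, the following Python sketch uses SimpleITK (an assumption; the paper does not name an implementation). Mattes mutual information stands in for NMI, which SimpleITK does not expose directly, and the L-BFGS-B optimizer stands in for plain L-BFGS; file names and parameter values are placeholders.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("liver_ct.nii.gz", sitk.sitkFloat32)   # placeholder paths
moving = sitk.ReadImage("liver_mr.nii.gz", sitk.sitkFloat32)

# Stage 1: affine transformation for rough global alignment.
affine_reg = sitk.ImageRegistrationMethod()
affine_reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
affine_reg.SetInterpolator(sitk.sitkLinear)
affine_reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
affine_reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)
affine = affine_reg.Execute(fixed, moving)

# Stage 2: multilevel B-spline FFD refined with a limited-memory BFGS
# optimizer; scaleFactors doubles the control-point grid at each level.
bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
ffd_reg = sitk.ImageRegistrationMethod()
ffd_reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
ffd_reg.SetInterpolator(sitk.sitkLinear)
ffd_reg.SetMovingInitialTransform(affine)  # start from the affine result
ffd_reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                             numberOfIterations=100)
ffd_reg.SetInitialTransformAsBSpline(bspline, inPlace=True,
                                     scaleFactors=[1, 2, 4])
ffd_reg.SetShrinkFactorsPerLevel([4, 2, 1])      # coarse-to-fine pyramid
ffd_reg.SetSmoothingSigmasPerLevel([2, 1, 0])
ffd = ffd_reg.Execute(fixed, moving)
```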
4. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images
Authors: Han Yan, Guangtao Zhang, Wei Cui, Zhuliang Yu. Control Theory and Technology, EI, 2024, No. 4, pp. 612-622 (11 pages).
For the analysis of spinal and disc diseases, automated tissue segmentation of the lumbar spine is vital. Due to the continuous and concentrated location of the target, the abundance of edge features, and individual differences, conventional automatic segmentation methods perform poorly. Following the success of deep learning in medical image segmentation in recent years, it has been applied to this task in a number of ways, yet the multi-scale and multi-modal features of lumbar tissues are rarely explored by deep learning methods. Because medical images are in limited supply, effectively fusing data from various acquisition modes for model training is crucial to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) that improves lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from the various modes and extract valuable cross-modality features. Furthermore, to combine cross-modality features from low to high levels, we design a hierarchical fusion structure based on the AGFM. Experimental results on multi-modality MR images of the lumbar spine show that the AGFM is more effective than other feature fusion methods. Compared to baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%), our network segments fractured vertebrae more accurately (85.05%).
Keywords: lumbar spine segmentation; deep learning; multi-modality fusion; feature fusion.
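The AGFM itself is not published here; the PyTorch sketch below shows one plausible reading of an adaptive group-fusion block, in which features from two MR modalities are grouped by concatenation and reweighted by a learned channel gate. All names and shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AdaptiveGroupFusion(nn.Module):
    """Hypothetical AGFM-style block: concatenate two modality feature
    maps, learn per-channel fusion weights, then reduce back to one map."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context
            nn.Conv2d(2 * channels, 2 * channels, 1),      # per-channel logits
            nn.Sigmoid(),
        )
        self.reduce = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_a, feat_b):
        x = torch.cat([feat_a, feat_b], dim=1)  # group the two modalities
        w = self.gate(x)                         # adaptive channel weights
        return self.reduce(x * w)                # weighted cross-modality fusion

fused = AdaptiveGroupFusion(64)(torch.randn(1, 64, 32, 32),
                                torch.randn(1, 64, 32, 32))
```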
5. Robust triboelectric information-mat enhanced by multi-modality deep learning for smart home (cited by 1)
Authors: Yanqin Yang, Qiongfeng Shi, Zixuan Zhang, Xuechuan Shan, Budiman Salam, Chengkuo Lee. InfoMat, SCIE/CAS/CSCD, 2023, No. 1, pp. 139-160 (22 pages).
In the metaverse, a digital-twin smart home is a vital platform for immersive communication between the physical and virtual worlds. Triboelectric nanogenerator (TENG) sensors contribute substantially to smart-home monitoring; however, TENG deployment is hindered by unstable output under environmental changes. Herein, we develop a digital-twin smart home using a robust all-TENG-based information mat (InfoMat), which consists of an in-home mat array and an entry mat. The interdigital electrode design allows environment-insensitive ratiometric readout from the mat array to cancel commonly experienced environmental variations. Arbitrary position sensing is also achieved thanks to the interval arrangement of the mat pixels. Concurrently, the two-channel entry mat generates multi-modality information that raises 10-user identification accuracy from 93% to 99% compared with the one-channel case. Furthermore, a digital-twin smart home is visualized by projecting smart-home information to virtual reality in real time, including access authorization, position, walking trajectory, dynamic activities/sports, and so on.
Keywords: digital twin; environment-insensitive; multi-modality deep learning; scalability; smart home; triboelectric information-mat.
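A minimal numeric sketch of the ratiometric idea, under the assumption that environmental drift acts as a common multiplicative factor on both interdigital-electrode outputs (the paper's device details are not reproduced here):

```python
import numpy as np

def ratiometric_readout(v_a, v_b, eps=1e-9):
    """A multiplicative environmental factor (humidity, temperature)
    that scales both electrode outputs equally cancels in this ratio."""
    v_a, v_b = np.asarray(v_a, float), np.asarray(v_b, float)
    return (v_a - v_b) / (v_a + v_b + eps)

# The same footstep gives the same ratio after a 0.6x environmental drop:
print(ratiometric_readout(2.0, 1.0))              # 0.333...
print(ratiometric_readout(2.0 * 0.6, 1.0 * 0.6))  # 0.333... (unchanged)
```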
6. Emma: An accurate, efficient, and multi-modality strategy for autonomous vehicle angle prediction
Authors: Keqi Song, Tao Ni, Linqi Song, Weitao Xu. Intelligent and Converged Networks, EI, 2023, No. 1, pp. 41-49 (9 pages).
Autonomous driving and self-driving vehicles have become popular with customers for their convenience, and real-time vehicle angle prediction is one of the most prevalent topics in the autonomous driving industry. However, existing methods of vehicle angle prediction utilize only single-modal data, such as images captured by the camera, which limits the performance and efficiency of the prediction system. In this paper, we present Emma, a novel, more efficient vehicle angle prediction strategy that achieves multi-modal prediction. Specifically, Emma exploits both images and inertial measurement unit (IMU) signals with a fusion network for multi-modal data fusion and vehicle angle prediction. Moreover, we design and implement a few-shot learning module in Emma for fast domain adaptation to varied scenarios (e.g., different vehicle models). Evaluation results demonstrate that Emma achieves 97.5% overall accuracy in predicting three vehicle angle parameters (yaw, pitch, and roll), outperforming traditional single modalities by approximately 16.7%-36.8%. Additionally, the few-shot learning module shows promising adaptive ability, with 79.8% and 88.3% overall accuracy in 5-shot and 10-shot settings, respectively. Finally, empirical results show that Emma reduces energy consumption by 39.7% when running on an Arduino UNO board.
Keywords: multi-modality; autonomous driving; vehicle angle prediction; few-shot learning.
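Emma's actual architecture is not reproduced here; the PyTorch sketch below only illustrates the general image-plus-IMU fusion pattern the abstract describes, with illustrative layer sizes.

```python
import torch
import torch.nn as nn

class AnglePredictor(nn.Module):
    """Minimal sketch (not the authors' Emma network): a CNN branch for
    camera frames and an MLP branch for IMU readings, fused by
    concatenation to regress yaw, pitch, and roll."""
    def __init__(self):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.imu_branch = nn.Sequential(nn.Linear(6, 32), nn.ReLU())  # 3-axis accel + gyro
        self.head = nn.Linear(32 + 32, 3)  # yaw, pitch, roll

    def forward(self, image, imu):
        fused = torch.cat([self.img_branch(image), self.imu_branch(imu)], dim=1)
        return self.head(fused)

angles = AnglePredictor()(torch.randn(1, 3, 64, 64), torch.randn(1, 6))
```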
7. Optimization Control of Multi-Mode Coupling All-Wheel Drive System for Hybrid Vehicle
Authors: Lipeng Zhang, Zijian Wang, Liandong Wang, Changan Ren. Chinese Journal of Mechanical Engineering, SCIE/EI/CAS/CSCD, 2024, No. 2, pp. 340-355 (16 pages).
The all-wheel drive (AWD) hybrid system is a research focus for high-performance new energy vehicles that must meet demands for dynamic performance and passing ability, and simultaneously optimizing the power and economy of hybrid vehicles has become an issue. A unique multi-mode coupling (MMC) AWD hybrid system is presented to realize distributed and centralized driving of the front and rear axles, achieving vectored distribution and full utilization of system power between the axles. Based on the parameters of a benchmark hybrid vehicle, a model-predictive-control-based energy management strategy is proposed. First, the drive system model was built after analyzing the MMC-AWD's drive modes. Next, three fundamental strategies were established to address power distribution adjustment and battery SOC maintenance as the SOC changed, followed by the design of a road driving force observer. Then, the energy consumption rate in the average time domain was processed before designing the minimum fuel consumption controller based on the equivalent fuel consumption coefficient. Finally, the advantage of the MMC-AWD was confirmed by comparing its dynamic performance and economy with those of the BYD Song PLUS DMI-AWD. The findings indicate that, compared with the reference hybrid system at road adhesion coefficients of 0.8 and 0.6, the MMC-AWD's acceleration capacity increases by 5.26% and 7.92%, respectively. At road adhesion coefficients of 0.8, 0.6, and 0.4, the maximum climbing ability increases by 14.22%, 12.88%, and 4.55%, respectively. Dynamic performance is thus greatly enhanced, and the fuel savings per 100 km of mileage reach 12.06%, which is also very economical. The proposed control strategies for the new hybrid AWD vehicle can optimize power and economy simultaneously.
Keywords: hybrid vehicle; all-wheel drive; multi-mode coupling; energy management; model predictive control.
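A hedged sketch of the equivalent-fuel-consumption idea behind such a controller: battery power is converted into an equivalent fuel rate through an equivalence coefficient and the fuel's lower heating value, and the power split with the lowest total equivalent rate is chosen. All numbers here are illustrative, not from the paper.

```python
def equivalent_fuel_rate(m_dot_fuel, p_batt, s=2.5, q_lhv=42.5e6):
    """Total equivalent fuel rate (kg/s): engine fuel flow plus battery
    power mapped to fuel via equivalence coefficient s and the fuel's
    lower heating value q_lhv (J/kg). Illustrative values only."""
    return m_dot_fuel + s * p_batt / q_lhv

# Candidate power splits for one demand point; pick the cheapest:
candidates = [
    (0.0020, 10e3),  # (engine fuel flow kg/s, battery power W)
    (0.0015, 25e3),
    (0.0025, 0.0),
]
best = min(candidates, key=lambda c: equivalent_fuel_rate(*c))
print(best, equivalent_fuel_rate(*best))
```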
8. A Comprehensive Survey on Deep Learning Multi-Modal Fusion: Methods, Technologies and Applications
Authors: Tianzhe Jiao, Chaopeng Guo, Xiaoyue Feng, Yuming Chen, Jie Song. Computers, Materials & Continua, SCIE/EI, 2024, No. 7, pp. 1-35 (35 pages).
Multi-modal fusion technology has gradually become fundamental in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction, and is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion exploits the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed, in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores applications of multi-modal fusion in various fields. Finally, it discusses the challenges and potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality: preserving complementary information while eliminating redundancy between modalities is critical, and an ill-chosen fusion method may introduce extra noise and worsen results. This paper provides a comprehensive and detailed summary in response to these challenges.
Keywords: multi-modal fusion; representation; translation; alignment; deep learning; comparative analysis.
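For concreteness, a schematic contrast of two of the four fusion stages the survey names, early and late fusion, in PyTorch with toy feature sizes:

```python
import torch
import torch.nn as nn

x_img, x_lidar = torch.randn(4, 128), torch.randn(4, 128)  # per-modality features

# Early fusion: concatenate features first, then run one joint model.
early = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))
y_early = early(torch.cat([x_img, x_lidar], dim=1))

# Late fusion: independent per-modality predictors, decisions averaged.
head_img, head_lidar = nn.Linear(128, 10), nn.Linear(128, 10)
y_late = 0.5 * (head_img(x_img) + head_lidar(x_lidar))
```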
9. A Hand Features Based Fusion Recognition Network with Enhancing Multi-Modal Correlation
Authors: Wei Wu, Yuan Zhang, Yunpeng Li, Chuanyang Li, Yan Hao. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, No. 7, pp. 537-555 (19 pages).
Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capability and leverages inter-modal correlation to improve recognition performance; judicious use of the correlation among multi-modal features likewise strengthens system robustness. Nevertheless, two issues persist in multi-modal feature fusion recognition. First, efforts to improve recognition performance have not comprehensively considered the correlations among distinct modalities. Second, improper weight selection during modal fusion diminishes the salience of crucial modal features and thereby the overall recognition performance. To address these issues, we introduce an enhanced DenseNet multi-modal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB channels, and the input network augments inter-modal correlation through channel correlation. Within the enhanced DenseNet, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature, while depthwise separable convolution markedly reduces the training parameters and further enhances feature correlation. Experimental evaluations were conducted on four multi-modal databases, comprising six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences; the Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. Compared with other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments used a modest sample database of 200 individuals; the next phase is extending the method to larger databases.
Keywords: biometrics; multi-modal; correlation; deep learning; feature-level fusion.
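The ECA mechanism cited in the abstract follows a well-known formulation (Wang et al., CVPR 2020): global average pooling followed by a 1-D convolution across channels yields per-channel weights without dimensionality reduction. A compact PyTorch version as a sketch:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: pool globally, run a small 1-D conv
    across channels, and rescale the input channel-wise."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                    # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # local cross-channel interaction
        return x * self.sigmoid(y)[:, :, None, None]

out = ECA()(torch.randn(2, 64, 16, 16))
```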
10. Towards trustworthy multi-modal motion prediction: Holistic evaluation and interpretability of outputs
Authors: Sandra Carrasco Limeros, Sylwia Majchrowska, Joakim Johnander, Christoffer Petersson, Miguel Ángel Sotelo, David Fernández Llorca. CAAI Transactions on Intelligence Technology, SCIE/EI, 2024, No. 3, pp. 557-572 (16 pages).
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches to multi-modal motion prediction are based on complex machine learning systems with limited interpretability, and the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance the design of trustworthy motion prediction systems, guided by requirements for Trustworthy Artificial Intelligence, with a focus on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps in current benchmarks are identified, and a new holistic evaluation framework is proposed. A method for assessing spatial and temporal robustness is then introduced by simulating noise in the perception system. To enhance output interpretability and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed; its effectiveness is assessed through a survey exploring different elements in the visualisation of multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
Keywords: autonomous vehicles; evaluation; interpretability; multi-modal motion prediction; robustness; trustworthy AI.
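For context, the standard multi-modal metrics the authors critique, minADE and minFDE, can be computed as follows; this is a sketch of the conventional metrics, not the authors' new evaluation framework.

```python
import numpy as np

def min_ade_fde(pred, gt):
    """pred: (K, T, 2) -- K candidate trajectories over T steps;
    gt: (T, 2) ground truth. minADE/minFDE score only the closest mode,
    which is why they say nothing about diversity or admissibility."""
    dists = np.linalg.norm(pred - gt[None], axis=-1)   # (K, T) per-step errors
    return dists.mean(axis=1).min(), dists[:, -1].min()

ade, fde = min_ade_fde(np.random.randn(6, 30, 2), np.random.randn(30, 2))
```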
11. Fake News Detection Based on Text-Modal Dominance and Fusing Multiple Multi-Model Clues
Authors: Lifang Fu, Huanxin Peng, Changjin Ma, Yuhan Liu. Computers, Materials & Continua, SCIE/EI, 2024, No. 3, pp. 4399-4416 (18 pages).
In recent years, efficiently and accurately identifying multi-modal fake news has become increasingly challenging. First, multi-modal data provides more evidence, but not all of it is equally important. Second, social structure information has proven effective in fake news detection, and combining it while reducing noise is critical. Existing approaches fail to handle these problems. This paper proposes a multi-modal fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-model Cues (TD-MMC), which utilizes three valuable multi-modal clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is dominated by textual content and assisted by image information, while social network information enhances the text representation. To reduce interference from irrelevant social-structure information, a unidirectional cross-modal attention mechanism selectively learns the social structure's features. A cross-modal attention mechanism obtains text-image cross-modal features while retaining textual features, reducing the loss of important information. In addition, TD-MMC employs a new multi-modal loss to improve the model's generalization ability. Extensive experiments on two public real-world English and Chinese datasets show that the proposed model outperforms state-of-the-art methods on classification evaluation metrics.
Keywords: fake news detection; cross-modal attention mechanism; multi-modal fusion; social network; transfer learning.
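A sketch of the unidirectional cross-modal attention pattern described above, using PyTorch's stock multi-head attention. The dimensions and the residual combination are assumptions; only the direction (text queries attend to the auxiliary modality, never the reverse) follows the abstract.

```python
import torch
import torch.nn as nn

d = 256
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

text = torch.randn(2, 32, d)    # token features (dominant modality)
social = torch.randn(2, 16, d)  # social-structure node features (auxiliary)

# Unidirectional: text is the query, so noisy social features can inform
# but never replace the textual representation.
enhanced_text, _ = attn(query=text, key=social, value=social)
fused = text + enhanced_text    # residual keeps the original text features
```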
12. Multi-modal knowledge graph inference via media convergence and logic rule
Authors: Feng Lin, Dongmei Li, Wenbin Zhang, Dongsheng Shi, Yuanzhou Jiao, Qianzhong Chen, Yiying Lin, Wentao Zhu. CAAI Transactions on Intelligence Technology, SCIE/EI, 2024, No. 1, pp. 211-221 (11 pages).
Media convergence works by processing information from different modalities and applying it across domains. It is difficult for a conventional knowledge graph to utilise multi-media features because introducing a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on a Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach obtains the attention of an entity's different media features during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating an excellent approach to knowledge graph inference with converged multi-media features.
Keywords: logic rule; media convergence; multi-modal knowledge graph inference; representation learning.
13. Generative Multi-Modal Mutual Enhancement Video Semantic Communications
Authors: Yuanle Chen, Haobo Wang, Chunyu Liu, Linyi Wang, Jiaxin Liu, Wei Wu. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, No. 6, pp. 2985-3009 (25 pages).
Recently, there have been significant advancements in the study of semantic communication in single-modal scenarios, but the ability to process information in multi-modal environments remains limited. Inspired by research and applications of natural language processing across different modalities, our goal is to accurately extract frame-level semantic information from videos and ultimately transmit high-quality video. Specifically, we propose a deep learning-based Multi-Modal Mutual Enhancement Video Semantic Communication system, called M3E-VSC. Built upon a Vector-Quantized Generative Adversarial Network (VQGAN), our system leverages mutual enhancement among different modalities, using text as the main carrier of transmission. Semantic information is extracted from key-frame images and audio of the video, and differential processing ensures that the extracted text conveys accurate semantic information with fewer bits, improving system capacity. Furthermore, a multi-frame semantic detection module is designed to facilitate semantic transitions during video generation. Simulation results demonstrate that the proposed model maintains high robustness in complex noise environments, particularly at low signal-to-noise ratios, improving the accuracy and speed of semantic transmission in video communication by approximately 50 percent.
Keywords: generative adversarial networks; multi-modal mutual enhancement; video semantic transmission; deep learning.
14. Research on Multi-modal In-Vehicle Intelligent Personal Assistant Design
Authors: WANG Jia-rou, TANG Cheng-xin, SHUAI Liang-ying. 《印刷与数字媒体技术研究》 (Printing and Digital Media Technology Research), CAS, PKU Core, 2024, No. 4, pp. 136-146 (11 pages).
Intelligent personal assistants play a pivotal role in in-vehicle systems, significantly enhancing life efficiency, driving safety, and decision-making support. This study discusses the multi-modal design elements of intelligent personal assistants in the context of visual, auditory, and somatosensory interactions with drivers, and explores their impact on the driver's psychological state through modes such as visual imagery, voice interaction, and gesture interaction. The study also introduces innovative designs for in-vehicle intelligent personal assistants, incorporating design principles such as driver-centricity, prioritizing passenger safety, and timely feedback. Design methods such as driver behavior research and driving situation analysis are employed to strengthen the emotional connection between drivers and their vehicles, ultimately improving driver satisfaction and trust.
Keywords: intelligent personal assistants; multi-modal design; user psychology; in-vehicle interaction; voice interaction; emotional design.
15. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
Authors: HU Zhentao, HU Chonghao, YANG Haoran, SHUAI Weiwei. High Technology Letters, EI/CAS, 2024, No. 1, pp. 23-30 (8 pages).
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, the advanced approaches available employ a multi-generator mechanism to model different domain mappings, which results in inefficient neural network training and mode collapse, limiting the diversity of generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Second, the paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets demonstrate the benefits of the proposed method over existing technologies; overall, the results show that the method is versatile and scalable.
Keywords: multi-modal image translation; generative adversarial network (GAN); squeeze-and-excitation (SE) mechanism; feature attention (FA) module.
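The squeeze-and-excitation mechanism named in the abstract follows Hu et al.'s standard formulation; a compact PyTorch version as a sketch (the paper's exact placement inside the generator is not reproduced):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: squeeze with global pooling,
    excite with a bottleneck MLP, then rescale channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze -> excitation weights (B, C)
        return x * w[:, :, None, None]     # channel-wise reweighting

out = SEBlock(64)(torch.randn(1, 64, 32, 32))
```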
16. Multifunctional microcapsules: A theranostic agent for US/MR/PAT multi-modality imaging and synergistic chemo-photothermal osteosarcoma therapy (cited by 3)
Authors: Hufei Wang, Sijia Xu, Daoyang Fan, Xiaowen Geng, Guang Zhi, Decheng Wu, Hong Shen, Fei Yang, Xiao Zhou, Xing Wang. Bioactive Materials, SCIE, 2022, No. 1, pp. 453-465 (13 pages).
Development of versatile theranostic agents that simultaneously integrate therapeutic and diagnostic features remains an urgent clinical need. Herein, we prepared uniform PEGylated poly(lactic-co-glycolic acid) (PLGA) microcapsules (PB@(Fe3O4@PEG-PLGA) MCs) with superparamagnetic Fe3O4 nanoparticles embedded in the shell and Prussian blue (PB) NPs built into the cavity via a premix membrane emulsification (PME) method. On account of their eligible geometry and multiple load capacity, these MCs can serve as efficient multi-modality contrast agents that simultaneously enhance the contrast of US, MR, and PAT imaging. The in-built PB NPs furnish the MCs with excellent photothermal conversion, and the embedded Fe3O4 NPs enable magnetic guidance for a targeted drug delivery system. Notably, after further in-situ encapsulation of the antitumor drug DOX, the (PB+DOX)@(Fe3O4@PEG-PLGA) MCs achieve near-infrared (NIR)-responsive drug delivery and magnetically guided chemo-photothermal synergistic osteosarcoma therapy. In vitro and in vivo studies revealed that these biocompatible MCs effectively target tumor tissue, with superior therapeutic effect against osteosarcoma invasion and alleviation of osteolytic lesions, and can be developed as a smart platform integrating multi-modality imaging capabilities with highly efficacious synergistic therapy.
Keywords: multi-modality imaging; microcapsule; photothermal therapy; drug delivery; osteosarcoma.
17. Blind identification of occurrence of multi-modality in laser-feedback-based self-mixing sensor (cited by 1)
Authors: Muhammad Usman, Usman Zabit, Olivier D. Bernal, Gulistan Raja. Chinese Optics Letters, SCIE/EI/CAS/CSCD, 2020, No. 1, pp. 29-33 (5 pages).
Self-mixing interferometry (SMI) is an attractive sensing scheme that typically relies on mono-modal operation of the employed laser diode. However, the laser's modality can change with operating conditions, so detecting the occurrence of multi-modality in SMI signals is necessary to avoid erroneous metric measurements. Processing multi-modal SMI signals is typically difficult due to their diverse and complex nature. The proposed techniques significantly ease this task by identifying the modal state of SMI signals with a 100% success rate, so that interferometric fringes can be correctly interpreted for metric sensing applications.
Keywords: self-mixing interferometry; laser diode; multi-modality; optical feedback.
18. M3SC: A Generic Dataset for Mixed Multi-Modal (MMM) Sensing and Communication Integration (cited by 3)
Authors: Xiang Cheng, Ziwei Huang, Lu Bai, Haotian Zhang, Mingran Sun, Boxun Liu, Sijiang Li, Jianan Zhang, Minson Lee. China Communications, SCIE/CSCD, 2023, No. 11, pp. 13-29 (17 pages).
The sixth generation (6G) of mobile communication systems is witnessing a new paradigm shift: the integrated sensing-communication system, and a comprehensive dataset is a prerequisite for 6G integrated sensing-communication research. This paper develops a novel simulation dataset, named M3SC, for mixed multi-modal (MMM) sensing-communication integration, and presents its generation framework. To obtain multi-modal sensory data in physical space and communication data in electromagnetic space, we utilize AirSim and WaveFarer to collect multi-modal sensory data and exploit Wireless InSite to collect communication data, achieving in-depth integration and precise alignment of AirSim, WaveFarer, and Wireless InSite. The M3SC dataset covers various weather conditions, multiple frequency bands, and different times of day. Currently, the dataset contains 1,500 snapshots, each including 80 RGB images, 160 depth maps, 80 LiDAR point clouds, 256 sets of mmWave waveforms with 8 radar point clouds, and 72 channel impulse response (CIR) matrices, thus totaling 120,000 RGB images, 240,000 depth maps, 120,000 LiDAR point clouds, 384,000 sets of mmWave waveforms with 12,000 radar point clouds, and 108,000 CIR matrices. The data processing results present the multi-modal sensory information and the statistical properties of the communication channel. Finally, the MMM sensing-communication applications that the M3SC dataset can support are discussed.
Keywords: multi-modal sensing; ray-tracing; sensing-communication integration; simulation dataset.
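The per-snapshot counts stated in the abstract fully determine the dataset totals; a short check in Python:

```python
# Per-snapshot contents as stated in the abstract.
per_snapshot = {"RGB images": 80, "depth maps": 160, "LiDAR point clouds": 80,
                "mmWave waveform sets": 256, "radar point clouds": 8,
                "CIR matrices": 72}
snapshots = 1500
for name, n in per_snapshot.items():
    print(f"{name}: {snapshots * n:,}")  # e.g. "RGB images: 120,000"
```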
19. Multi-task Learning of Semantic Segmentation and Height Estimation for Multi-modal Remote Sensing Images (cited by 2)
Authors: Mengyu WANG, Zhiyuan YAN, Yingchao FENG, Wenhui DIAO, Xian SUN. Journal of Geodesy and Geoinformation Science, CSCD, 2023, No. 4, pp. 27-39 (13 pages).
Deep learning methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more remote sensing data becomes available, comprehensively utilizing multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation is a new challenge. In addition, semantic segmentation and height estimation from remote sensing data are two strongly correlated tasks, yet existing methods usually study them separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method that aggregates complementary information from different modalities to improve the accuracy of both semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE): common features are extracted in a shared network to save time and resources, and task-specific features are then learned in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with 83.02% mIoU for semantic segmentation and 95.26% accuracy for height estimation.
Keywords: multi-modal; multi-task; semantic segmentation; height estimation; convolutional neural network.
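A schematic of the dual-stream JSSHE idea (one shared encoder feeding two task heads); this is not the published MM_MT code, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JSSHE(nn.Module):
    """Sketch of joint semantic segmentation and height estimation:
    shared features are computed once, then split into two branches."""
    def __init__(self, classes=6):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(64, classes, 1)  # per-pixel class logits
        self.height_head = nn.Conv2d(64, 1, 1)     # per-pixel height regression

    def forward(self, x):
        f = self.shared(x)                         # common features, computed once
        return self.seg_head(f), self.height_head(f)

seg, height = JSSHE()(torch.randn(1, 3, 128, 128))
```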
20. PowerDetector: Malicious PowerShell Script Family Classification Based on Multi-Modal Semantic Fusion and Deep Learning (cited by 1)
Authors: Xiuzhang Yang, Guojun Peng, Dongni Zhang, Yuhang Gao, Chenguang Li. China Communications, SCIE/CSCD, 2023, No. 11, pp. 202-224 (23 pages).
PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking malicious PowerShell family classification and behavior analysis. Moreover, state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods that extract key features from characters, tokens, the abstract syntax tree (AST), and a semantic knowledge graph. We then design four embeddings (Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm that concatenates the feature vectors from the different views. Finally, we propose a combined model based on a transformer and CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector accurately detects various obfuscated and stealthy PowerShell scripts, with 0.9402 precision, 0.9358 recall, and a 0.9374 F1-score. Furthermore, single-modal and multi-modal comparison experiments demonstrate that PowerDetector's multi-modal embedding and deep learning model achieve better accuracy and even identify more unknown attacks.
Keywords: deep learning; malicious family detection; multi-modal semantic fusion; PowerShell.
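A sketch of the fusion step the abstract describes, concatenating the four per-view embeddings before the downstream classifier; embedding dimensions are illustrative assumptions.

```python
import torch

# Four per-view embeddings for one script (character, token, AST, relation);
# in the paper these come from Char2Vec, Token2Vec, AST2Vec, and Rela2Vec.
char_vec, token_vec = torch.randn(1, 128), torch.randn(1, 128)
ast_vec, rela_vec = torch.randn(1, 128), torch.randn(1, 128)

# Multi-modal fusion by concatenation into one feature vector.
fused = torch.cat([char_vec, token_vec, ast_vec, rela_vec], dim=1)  # (1, 512)
```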