Journal Articles
431 articles found
1. A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation
Authors: Kai Jiang, Bin Cao, Jing Fan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2965-2984 (20 pages)
Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people's attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) will overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis to resist noisy labels and correlate distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
Keywords: distributed data collection; multimodal sentiment analysis; meta learning; learning with noisy labels
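The label-correction idea behind this entry can be illustrated with a minimal sketch. The function names and the fixed blending weight `alpha` are hypothetical: in meta-learning correctors like the one described, the correction itself is learned per sample rather than fixed.

```python
def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot probability vector."""
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

def correct_label(noisy_label, model_probs, alpha):
    """Blend a (possibly noisy) hard label with the model's prediction.

    alpha=1.0 trusts the annotator fully; alpha=0.0 trusts the model fully.
    In meta-learning correctors, the blend is learned, not fixed as here.
    """
    hard = one_hot(noisy_label, len(model_probs))
    return [alpha * h + (1.0 - alpha) * p for h, p in zip(hard, model_probs)]

# A confident model prediction softens a suspect annotation:
soft = correct_label(noisy_label=0, model_probs=[0.1, 0.9], alpha=0.5)
```

The corrected soft label remains a valid probability distribution, so it can be fed back into standard cross-entropy training.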
2. Effect of different anesthetic modalities with multimodal analgesia on postoperative pain level in colorectal tumor patients
Authors: Ji-Chun Tang, Jia-Wei Ma, Jin-Jin Jian, Jie Shen, Liang-Liang Cao. World Journal of Gastrointestinal Oncology (SCIE), 2024, No. 2, pp. 364-371 (8 pages)
BACKGROUND: According to clinical data, a significant percentage of patients experience pain after surgery, highlighting the importance of alleviating postoperative pain. The current approach involves intravenous self-controlled analgesia, often utilizing opioid analgesics such as morphine, sufentanil, and fentanyl. Surgery for colorectal cancer typically involves general anesthesia. Therefore, optimizing anesthetic management and postoperative analgesic programs can effectively reduce perioperative stress and enhance postoperative recovery. The study aims to analyze the impact of different anesthesia modalities with multimodal analgesia on patients' postoperative pain.
AIM: To explore the effects of different anesthesia methods coupled with multimodal analgesia on postoperative pain in patients with colorectal cancer.
METHODS: Following the inclusion and exclusion criteria, a total of 126 patients with colorectal cancer admitted to our hospital from January 2020 to December 2022 were included: 63 received general anesthesia with multimodal analgesia (control group) and 63 received general anesthesia combined with epidural anesthesia and multimodal analgesia (research group). After data collection, the effects of postoperative analgesia, sedation, and recovery were compared.
RESULTS: Compared to the control group, the research group had shorter recovery times for orientation, extubation, eye-opening, and spontaneous respiration (P<0.05). The research group also showed lower visual analog scale scores at 24 h and 48 h, higher Ramsay scores at 6 h and 12 h, and improved cognitive function at 24 h, 48 h, and 72 h (P<0.05). Additionally, interleukin-6 and interleukin-10 levels were significantly reduced at various time points in the research group compared to the control group (P<0.05). Levels of CD3+, CD4+, and CD4+/CD8+ were also lower in the research group at multiple time points (P<0.05).
CONCLUSION: For patients with colorectal cancer, general anesthesia combined with epidural anesthesia and multimodal analgesia can achieve better postoperative analgesia and sedation, promote postoperative rehabilitation, improve inflammatory stress and immune status, and offer higher safety.
Keywords: multimodal analgesia; anesthesia; colorectal cancer; postoperative pain
3. National Image Construction from the Perspective of Multimodal Metaphor: A Case Study of the Opening Ceremony of the Beijing Winter Olympics
Authors: LUO Yi, ZHOU Jing. Journal of Literature and Art Studies, 2024, No. 4, pp. 290-294 (5 pages)
The national image is a comprehensive concept with a distinct political feature, including the international image presented to the outside world and also encompassing the national identity of the people. With the development of globalization, international cultural communication has become a crucial part of shaping the national image, and the opening ceremony of the Beijing Winter Olympics became an important opportunity for China to showcase its national image to the world in the post-pandemic era. Based on Forceville's multimodal metaphor theory, this paper examines the metaphorical phenomena contained in the performance and their functions, effects, and purposes in the construction of the national image. It is found that many scenes, images, and narratives in the opening ceremony involve war metaphor, competition metaphor, personification metaphor, and other conceptual metaphors. The focus of this paper is on multimodal metaphor in a broad sense, mainly encompassing auditory and visual modes. Through the use of these multimodal metaphors, the opening ceremony of the Winter Olympics builds the image of a vibrant, peace-loving, and responsible country, which not only demonstrates China's cultural self-confidence but also expresses the Chinese people's vision for the early reunification of the motherland.
Keywords: multimodal metaphor; national image; Olympic Winter Games; opening ceremony
4. Implicit Modality Mining: An End-to-End Method for Multimodal Information Extraction
Authors: Jinle Lu, Qinglang Guo. Journal of Electronic Research and Application, 2024, No. 2, pp. 124-139 (16 pages)
Multimodal named entity recognition (MNER) and relation extraction (MRE) are key in social media analysis but face challenges like inefficient visual processing and non-optimal modality interaction. (1) Heavy visual embedding: the process of visual embedding is both time- and computationally expensive due to the prerequisite extraction of explicit visual cues from the original image before input into the multimodal model. Consequently, these approaches cannot achieve efficient online reasoning. (2) Suboptimal interaction handling: the prevalent method of managing interaction between different modalities typically relies on the alternation of self-attention and cross-attention mechanisms or excessive dependence on the gating mechanism. This explicit modeling method may fail to capture some nuanced relations between image and text, ultimately undermining the model's capability to extract optimal information. To address these challenges, we introduce Implicit Modality Mining (IMM), a novel end-to-end framework for fine-grained image-text correlation without heavy visual embedders. IMM uses an Implicit Semantic Alignment module with a Transformer for cross-modal clues and an Insert-Activation module to effectively utilize these clues. Our approach achieves state-of-the-art performance on three datasets.
Keywords: multimodal; named entity recognition; relation extraction; patch projection
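The "patch projection" keyword refers to ViT-style embedding, where an image is cut into flat patch vectors before a Transformer consumes them. A minimal sketch under assumed toy sizes (the learned linear projection that would map each vector to an embedding is omitted):

```python
def patchify(image, patch):
    """Split an H x W image (nested lists) into flattened, non-overlapping
    patch vectors in row-major order -- the input of ViT-style patch
    projection. A learned linear layer would then embed each vector."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    vectors = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            vectors.append([image[r + i][c + j]
                            for i in range(patch) for j in range(patch)])
    return vectors

# A 4x4 single-channel image split into four 2x2 patches:
img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]
patches = patchify(img, 2)
```

Each patch vector has length `patch * patch` (times the channel count for multichannel images), so the Transformer's sequence length is the number of tiles rather than the number of pixels.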
5. Solving Geometry Problems via Feature Learning and Contrastive Learning of Multimodal Data
Authors: Pengpeng Jian, Fucheng Guo, Yanli Wang, Yang Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 8, pp. 1707-1728 (22 pages)
This paper presents an end-to-end deep learning method to solve geometry problems via feature learning and contrastive learning of multimodal data. A key challenge in solving geometry problems using deep learning is to automatically adapt to the task of understanding single-modal and multimodal problems. Existing methods focus on either single-modal or multimodal problems, and they cannot fit each other. A general geometry problem solver should obviously be able to process various modal problems at the same time. In this paper, a shared feature-learning model of multimodal data is adopted to learn the unified feature representation of text and image, which can solve the heterogeneity issue between multimodal geometry problems. A contrastive learning model of multimodal data enhances the semantic relevance between multimodal features and maps them into a unified semantic space, which can effectively adapt to both single-modal and multimodal downstream tasks. Based on the feature extraction and fusion of multimodal data, the proposed geometry problem solver uses relation extraction, theorem reasoning, and problem solving to present solutions in a readable way. Experimental results show the effectiveness of the method.
Keywords: geometry problems; multimodal feature learning; multimodal contrastive learning; automatic solver
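Contrastive learning of paired text-image features is commonly implemented with an InfoNCE-style loss. The sketch below is a generic illustration with toy 2-D embeddings, not this paper's exact formulation: each text embedding should score highest against its own image embedding.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(text_embs, image_embs, temperature=0.1):
    """Average InfoNCE loss in the text-to-image direction: each text
    embedding is scored against every image embedding, and its paired
    image should win."""
    losses = []
    for i, t in enumerate(text_embs):
        logits = [cosine(t, v) / temperature for v in image_embs]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        losses.append(log_denom - logits[i])
    return sum(losses) / len(losses)

# Matched pairs give a near-zero loss; mismatched pairs are penalized:
aligned = info_nce([[1, 0], [0, 1]], [[1, 0], [0, 1]])
shuffled = info_nce([[1, 0], [0, 1]], [[0, 1], [1, 0]])
```

Minimizing this loss pulls paired text and image features together in the shared semantic space while pushing unpaired ones apart.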
6. Multimodal Identification by Transcriptomics and Multiscale Bioassays of Active Components in Xuanfeibaidu Formula to Suppress Macrophage-Mediated Immune Response (Cited by 5)
Authors: Lu Zhao, Hao Liu, Yingchao Wang, Shufang Wang, Dejin Xun, Yi Wang, Yiyu Cheng, Boli Zhang. Engineering (SCIE, EI, CAS, CSCD), 2023, No. 1, pp. 63-76 (14 pages)
Xuanfeibaidu Formula (XFBD) is a Chinese medicine used in the clinical treatment of coronavirus disease 2019 (COVID-19) patients. Although XFBD has exhibited significant therapeutic efficacy in clinical practice, its underlying pharmacological mechanism remains unclear. Here, we combine a comprehensive research approach that includes network pharmacology, transcriptomics, and bioassays in multiple model systems to investigate the pharmacological mechanism of XFBD and its bioactive substances. High-resolution mass spectrometry was combined with molecular networking to profile the major active substances in XFBD. A total of 104 compounds were identified or tentatively characterized, including flavonoids, terpenes, carboxylic acids, and other types of constituents. Based on the chemical composition of XFBD, a network pharmacology-based analysis identified inflammation-related pathways as primary targets. Thus, we examined the anti-inflammatory activity of XFBD in a lipopolysaccharide-induced acute inflammation mouse model. XFBD significantly alleviated pulmonary inflammation and decreased the level of serum proinflammatory cytokines. Transcriptomic profiling suggested that genes related to macrophage function were differentially expressed after XFBD treatment. Consequently, the effects of XFBD on macrophage activation and mobilization were investigated in a macrophage cell line and a zebrafish wounding model. XFBD exerts strong inhibitory effects on both macrophage activation and migration. Moreover, through multimodal screening, we further identified the major components and compounds from the different herbs of XFBD that mediate its anti-inflammatory function. Active components from XFBD, including Polygoni cuspidati Rhizoma, Phragmitis Rhizoma, and Citri grandis Exocarpium rubrum, were found to strongly downregulate macrophage activation, and polydatin, isoliquiritin, and acteoside were identified as active compounds. Components of Artemisiae annuae Herba and Ephedrae Herba were found to substantially inhibit endogenous macrophage migration, and these effects were attributed to the presence of ephedrine, atractylenolide I, and kaempferol. In summary, our study explores the pharmacological mechanism and effective components of XFBD in inflammation regulation via multimodal approaches, thereby providing a biological illustration of the clinical efficacy of XFBD.
Keywords: Xuanfeibaidu Formula; multimodal identification; inflammation; macrophage activation; macrophage migration
7. Construction of Human Digital Twin Model Based on Multimodal Data and Its Application in Locomotion Mode Identification
Authors: Ruirui Zhong, Bingtao Hu, Yixiong Feng, Hao Zheng, Zhaoxi Hong, Shanhe Lou, Jianrong Tan. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2023, No. 5, pp. 7-19 (13 pages)
With the increasing attention to the state and role of people in intelligent manufacturing, there is a strong demand for human-cyber-physical systems (HCPS) that focus on human-robot interaction. The existing intelligent manufacturing system cannot satisfy efficient human-robot collaborative work. However, unlike machines equipped with sensors, human characteristic information is difficult to perceive and digitize instantly. In view of the high complexity and uncertainty of the human body, this paper proposes a framework for building a human digital twin (HDT) model based on multimodal data and expounds on the key technologies. A data acquisition system is built to dynamically acquire and update the body state data and physiological data of the human body and realize the digital expression of multi-source heterogeneous human body information. A bidirectional long short-term memory and convolutional neural network (BiLSTM-CNN) based network is devised to fuse multimodal human data and extract the spatiotemporal features, and human locomotion mode identification is taken as an application case. A series of optimization experiments are carried out to improve the performance of the proposed BiLSTM-CNN-based network model. The proposed model is compared with traditional locomotion mode identification models. The experimental results prove the superiority of the HDT framework for human locomotion mode identification.
Keywords: human digital twin; human-cyber-physical system; bidirectional long short-term memory; convolutional neural network; multimodal data
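Before a BiLSTM-CNN can extract spatiotemporal features, sensor streams are typically segmented into fixed-width windows. A minimal sketch of that generic preprocessing step (not this paper's exact pipeline; window sizes are illustrative):

```python
def sliding_windows(series, width, stride):
    """Cut a multichannel time series (list of per-timestep feature lists)
    into fixed-width, possibly overlapping windows -- the usual way sensor
    streams are segmented before a recurrent/convolutional classifier."""
    return [series[s:s + width]
            for s in range(0, len(series) - width + 1, stride)]

# 6 timesteps with 2 channels each; windows of 4 timesteps, stride 2:
stream = [[t, t * 10] for t in range(6)]
windows = sliding_windows(stream, width=4, stride=2)
```

Each window then becomes one training sample whose label is the locomotion mode active during that interval.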
8. Fusion of color and hallucinated depth features for enhanced multimodal deep learning-based damage segmentation
Authors: Tarutal Ghosh Mondal, Mohammad Reza Jahanshahi. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2023, No. 1, pp. 55-68 (14 pages)
Recent advances in computer vision and deep learning have shown that the fusion of depth information can significantly enhance the performance of RGB-based damage detection and segmentation models. However, alongside the advantages, depth sensing also presents many practical challenges. For instance, depth sensors impose an additional payload burden on robotic inspection platforms, limiting the operation time and increasing the inspection cost. Additionally, some lidar-based depth sensors have poor outdoor performance due to sunlight contamination during the daytime. In this context, this study investigates the feasibility of abolishing depth sensing at test time without compromising segmentation performance. An autonomous damage segmentation framework is developed based on recent advancements in vision-based multimodal sensing, such as modality hallucination (MH) and monocular depth estimation (MDE), which require depth data only during model training. At the time of deployment, depth data becomes expendable, as it can be simulated from the corresponding RGB frames. This makes it possible to reap the benefits of depth fusion without any depth perception per se. This study explored two different depth encoding techniques and three different fusion strategies in addition to a baseline RGB-based model. The proposed approach is validated on computer-generated RGB-D data of reinforced concrete buildings subjected to seismic damage. It was observed that the surrogate techniques can increase the segmentation IoU by up to 20.1% with a negligible increase in computation cost. Overall, this study is believed to make a positive contribution to enhancing the resilience of critical civil infrastructure.
Keywords: multimodal data fusion; depth sensing; vision-based inspection; UAV-assisted inspection; damage segmentation; post-disaster reconnaissance; modality hallucination; monocular depth estimation
9. Coevolutionary Framework for Generalized Multimodal Multi-Objective Optimization
Authors: Wenhua Li, Xingyi Yao, Kaiwen Li, Rui Wang, Tao Zhang, Ling Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 7, pp. 1544-1556 (13 pages)
Most multimodal multi-objective evolutionary algorithms (MMEAs) aim to find all global Pareto optimal sets (PSs) for a multimodal multi-objective optimization problem (MMOP). However, in real-world problems, decision makers (DMs) may also be interested in local PSs. Also, searching for both global and local PSs is more general when dealing with MMOPs, which can be seen as generalized MMOPs. Moreover, most state-of-the-art MMEAs exhibit poor convergence on high-dimension MMOPs and are unable to deal with constrained MMOPs. To address the above issues, we present a novel multimodal multi-objective coevolutionary algorithm (CoMMEA) to better produce both global and local PSs and, simultaneously, to improve convergence performance on high-dimension MMOPs. Specifically, CoMMEA introduces two archives into the search process and coevolves them simultaneously through effective knowledge transfer. The convergence archive assists CoMMEA in quickly approaching the Pareto optimal front. The knowledge of the converged solutions is then transferred to the diversity archive, which utilizes the local convergence indicator and the ε-dominance-based method to obtain global and local PSs effectively. Experimental results show that CoMMEA is competitive compared to seven state-of-the-art MMEAs on fifty-four complex MMOPs.
Keywords: coevolution; ε-dominance; generalized multimodal multi-objective optimization (MMO); local convergence; two archives
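The ε-dominance relation used by diversity archives of this kind can be sketched as follows. This additive, minimization-oriented form is one common formulation; variants exist, so treat the exact comparison as an assumption rather than the paper's definition.

```python
def epsilon_dominates(a, b, eps):
    """Additive epsilon-dominance for minimization: a epsilon-dominates b
    if a is within eps of b (or better) in every objective and strictly
    better than b + eps in at least one. Coarsening the comparison by eps
    lets near-equivalent solutions coexist in the archive."""
    no_worse = all(ai <= bi + eps for ai, bi in zip(a, b))
    strictly = any(ai < bi + eps for ai, bi in zip(a, b))
    return no_worse and strictly

dominates = epsilon_dominates((1.0, 1.0), (2.0, 2.0), eps=0.5)
tolerated = epsilon_dominates((2.2, 2.2), (2.0, 2.0), eps=0.5)
```

Note that `tolerated` is true: a solution slightly worse than another, but within ε, still ε-dominates it, which is exactly how ε-dominance archives keep locally optimal solutions that plain Pareto dominance would discard.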
10. Nanomedicine-based multimodal therapies: Recent progress and perspectives in colon cancer
Authors: Yu-Chu He, Zi-Ning Hao, Zhuo Li, Da-Wei Gao. World Journal of Gastroenterology (SCIE, CAS), 2023, No. 4, pp. 670-681 (12 pages)
Colon cancer has attracted much attention due to its annually increasing incidence. Conventional chemotherapeutic drugs are unsatisfactory in clinical application because of their lack of targeting and severe toxic side effects. In the past decade, nanomedicines with multimodal therapeutic strategies have shown potential for colon cancer because of their enhanced permeability and retention, high accumulation at tumor sites, co-loading with different drugs, and combination of various therapies. This review summarizes the advances in research on various nanomedicine-based therapeutic strategies, including chemotherapy, radiotherapy, phototherapy (photothermal therapy and photodynamic therapy), chemodynamic therapy, gas therapy, and immunotherapy. Additionally, the therapeutic mechanisms, limitations, improvements, and future of the above therapies are discussed.
Keywords: colon cancer; nanomedicine; drug permeability; drug retention; multimodal therapies; therapeutic mechanism
11. 3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion
Authors: Peicheng Shi, Heng Qi, Zhiqiang Liu, Aixi Yang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 6, pp. 2007-2023 (17 pages)
3D vehicle detection based on LiDAR-camera fusion is becoming an emerging research topic in autonomous driving. The algorithm based on the Camera-LiDAR object candidate fusion method (CLOCs) is currently considered a more effective decision-level fusion algorithm, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, the anchor point of the 3D detection bounding box is projected into the 2D image, the distance between the 2D and 3D anchor points is calculated, and this distance is used as a new fusion feature to enhance the feature redundancy of the network. Subsequently, an attention module (squeeze-and-excitation networks) is added to weight each feature channel, enhancing important network features and suppressing useless ones. The experimental results show that the mean average precision of the algorithm on the KITTI dataset is 82.96%, which outperforms previous state-of-the-art multimodal fusion-based methods, and the average accuracy in the Easy, Moderate, and Hard evaluation indicators reaches 88.96%, 82.60%, and 77.31%, respectively, higher than the original CLOCs model by 1.02%, 2.29%, and 0.41%, respectively. Compared with the original CLOCs algorithm, our algorithm has higher accuracy and better performance in 3D vehicle detection.
Keywords: 3D vehicle detection; multimodal fusion; CLOCs; network structure optimization; attention module
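Projecting the 3D anchor into the image and measuring its pixel distance to the 2D anchor, as the abstract describes, can be sketched with a simplified pinhole model. The intrinsics below (`fx`, `fy`, `cx`, `cy`) are hypothetical stand-ins for a calibrated projection matrix such as KITTI's.

```python
import math

def project_to_image(point3d, fx, fy, cx, cy):
    """Project a 3D camera-frame point (x, y, z) onto the image plane with
    a simple pinhole model; real pipelines apply the full calibrated
    projection matrix, including rectification."""
    x, y, z = point3d
    return (fx * x / z + cx, fy * y / z + cy)

def anchor_distance(anchor3d, anchor2d, fx=700.0, fy=700.0, cx=600.0, cy=180.0):
    """Pixel distance between a projected 3D anchor and a 2D detection
    anchor, usable as an extra decision-level fusion feature."""
    u, v = project_to_image(anchor3d, fx, fy, cx, cy)
    return math.hypot(u - anchor2d[0], v - anchor2d[1])

# A point on the optical axis projects to the principal point:
center = project_to_image((0.0, 0.0, 10.0), 700.0, 700.0, 600.0, 180.0)
d = anchor_distance((0.0, 0.0, 10.0), (603.0, 176.0))
```

A small distance suggests the 2D and 3D candidates describe the same object, so the network can weight their agreement accordingly.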
12. Multimodal sentiment analysis for social media contents during public emergencies
Authors: Tao Fan, Hao Wang, Peng Wu, Chen Ling, Milad Taleby Ahvanooey. Journal of Data and Information Science (CSCD), 2023, No. 3, pp. 61-87 (27 pages)
Purpose: Nowadays, public opinions during public emergencies involve not only textual contents but also images. However, existing works mainly focus on textual contents and do not provide a satisfactory accuracy of sentiment analysis, lacking the combination of multimodal contents. In this paper, we propose to combine texts and images generated on social media to perform sentiment analysis.
Design/methodology/approach: We propose a Deep Multimodal Fusion Model (DMFM), which combines textual and visual sentiment analysis. We first train a word2vec model on a large-scale public emergency corpus to obtain semantic-rich word vectors as the input of textual sentiment analysis. BiLSTM is employed to generate encoded textual embeddings. To fully excavate visual information from images, a modified pretrained VGG16-based sentiment analysis network is used with the best-performing fine-tuning strategy. A multimodal fusion method is implemented to fuse textual and visual embeddings completely, producing predicted labels.
Findings: We performed extensive experiments on Weibo and Twitter public emergency datasets to evaluate the performance of our proposed model. Experimental results demonstrate that the DMFM provides higher accuracy compared with baseline models. The introduction of images can boost the performance of sentiment analysis during public emergencies.
Research limitations: In the future, we will test our model on a wider dataset. We will also consider a better way to learn the multimodal fusion information.
Practical implications: We build an efficient multimodal sentiment analysis model for social media contents during public emergencies.
Originality/value: We consider the images posted by online users during public emergencies on social platforms. The proposed method can present a novel scope for sentiment analysis during public emergencies and provide decision support for the government when formulating policies in public emergencies.
Keywords: public emergency; multimodal sentiment analysis; social platform; textual sentiment analysis; visual sentiment analysis
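Fusing textual and visual embeddings before classification can take many forms. The sketch below shows one generic scheme, concatenation plus an element-wise interaction term, offered as an illustration rather than the DMFM's actual fusion layer; vector sizes are toy values.

```python
def fuse_embeddings(text_vec, image_vec):
    """Fuse two equal-length modality embeddings by concatenating them with
    their element-wise product, so a downstream classifier sees each
    modality plus a cross-modal interaction term. Illustrative only, not
    the DMFM's exact fusion operator."""
    assert len(text_vec) == len(image_vec)
    interaction = [t * v for t, v in zip(text_vec, image_vec)]
    return list(text_vec) + list(image_vec) + interaction

fused = fuse_embeddings([0.5, -1.0], [2.0, 0.25])
```

The fused vector triples the input dimension, which is why practical models usually follow fusion with a projection layer.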
13. MFF-Net: Multimodal Feature Fusion Network for 3D Object Detection
Authors: Peicheng Shi, Zhiqiang Liu, Heng Qi, Aixi Yang. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5615-5637 (23 pages)
In complex traffic environment scenarios, it is very important for autonomous vehicles to accurately perceive the dynamic information of other surrounding vehicles in advance. The accuracy of 3D object detection is affected by problems such as illumination changes, object occlusion, and detection distance. To this purpose, we face these challenges by proposing a multimodal feature fusion network for 3D object detection (MFF-Net). In this research, this paper first uses a spatial transformation projection algorithm to map the image features into the feature space, so that the image features are in the same spatial dimension when fused with the point cloud features. Then, feature channel weighting is performed using an adaptive expression augmentation fusion network to enhance important network features, suppress useless features, and increase the directionality of the network toward features. Finally, this paper adjusts the one-dimensional threshold in the non-maximum suppression algorithm to address false and missed detections. So far, this paper has constructed a complete 3D target detection network based on multimodal feature fusion. The experimental results show that the proposed method achieves an average accuracy of 82.60% on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, outperforming previous state-of-the-art multimodal fusion networks. In the Easy, Moderate, and Hard evaluation indicators, the accuracy rate reaches 90.96%, 81.46%, and 75.39%, respectively. This shows that the MFF-Net network has good performance in 3D object detection.
Keywords: 3D object detection; multimodal fusion; neural network; autonomous driving; attention mechanism
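The non-maximum suppression step whose threshold the abstract tunes can be sketched in a few lines. This is standard greedy NMS with a plain IoU test, not the paper's modified variant; raising `iou_thresh` keeps more overlapping boxes.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, drop
    rivals overlapping it beyond iou_thresh, repeat. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping detections plus one distant one:
kept = nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)],
           [0.9, 0.8, 0.7])
```

With the default threshold, the second box (IoU ≈ 0.68 with the first) is suppressed while the distant box survives, which is the behavior the threshold choice trades off against missed detections.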
14. Clinical and multimodal imaging features of acute macular neuroretinopathy lesions following recent SARS-CoV-2 infection
Authors: Yang-Chen Liu, Bin Wu, Yan Wang, Song Chen. International Journal of Ophthalmology (English edition) (SCIE, CAS), 2023, No. 5, pp. 755-761 (7 pages)
AIM: To describe the clinical characteristics of eyes with acute macular neuroretinopathy (AMN) lesions following severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection using multimodal imaging features.
METHODS: Retrospective case series study. From December 18, 2022 to February 14, 2023, previously healthy cases within 1 week of infection with SARS-CoV-2, examined at Tianjin Eye Hospital to confirm the diagnosis of AMN, were included in the study. In total, 5 males and 9 females [mean age: 29.93±10.32 (16-49) years] presented with reduced vision, with or without blurred vision. All patients underwent best corrected visual acuity (BCVA), intraocular pressure, slit lamp microscopy, and indirect fundoscopy. Simultaneously, multimodal imaging was performed: fundus photography (45° or 200° field of view) in 7 cases (14 eyes), near infrared (NIR) fundus photography in 9 cases (18 eyes), optical coherence tomography (OCT) in 5 cases (10 eyes), optical coherence tomography angiography (OCTA) in 9 cases (18 eyes), and fundus fluorescence angiography (FFA) in 3 cases (6 eyes). Visual field testing was performed in 1 case (2 eyes).
RESULTS: Multimodal imaging findings from 14 patients with AMN were reviewed. All eyes demonstrated hyperreflective lesions of varying extent at the level of the inner nuclear layer and/or outer plexiform layer on OCT or OCTA. Fundus photography (45° or 200° field of view) showed irregular hyporeflective lesions around the fovea in 7 cases (14 eyes). OCTA demonstrated that the superficial retinal capillary plexus (SCP), deep capillary plexus (DCP), and choriocapillaris (CC) vascular densities were reduced in 9 cases (18 eyes). Among the follow-up cases (2 cases), vascular density increased in 1 case with elevated BCVA; the other case had decreased vascular density in one eye and essentially unchanged density in the other eye. En face images of ellipsoid zone and interdigitation zone injury showed a low wedge-shaped reflection contour appearance. NIR images mainly showed the absence of the outer retinal interdigitation zone in AMN. No abnormal fluorescence was observed on FFA. A corresponding partial visual field defect was visualized via perimetry in one case.
CONCLUSION: The morbidity of AMN following SARS-CoV-2 infection is increased. Ophthalmologists should be aware of possible, albeit rare, AMN after SARS-CoV-2 infection and focus on multimodal imaging features. OCT, OCTA, and NIR fundus imaging prove to be valuable tools for the detection of AMN in patients with SARS-CoV-2.
Keywords: SARS-CoV-2 infection; optical coherence tomography; acute macular neuroretinopathy; multimodal imaging features
15. Improving Targeted Multimodal Sentiment Classification with Semantic Description of Images
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Zhang Hao. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5801-5815 (15 pages)
Targeted multimodal sentiment classification (TMSC) aims to identify the sentiment polarity of a target mentioned in a multimodal post. The majority of current studies on this task focus on mapping the image and the text to a high-dimensional space in order to obtain and fuse implicit representations, ignoring the rich semantic information contained in the images and not taking into account the contribution of the visual modality to the multimodal fusion representation, which can potentially influence the results of TMSC tasks. This paper proposes a general model for Improving Targeted Multimodal Sentiment Classification with Semantic Description of Images (ITMSC) as a way to tackle these issues and improve the accuracy of multimodal sentiment analysis. Specifically, the ITMSC model can automatically adjust the contribution of images in the fusion representation by exploiting semantic descriptions of images and text similarity relations. Further, we propose a target-based attention module to capture target-text relevance, an image-based attention module to capture image-text relevance, and a target-image matching module based on the former two modules to properly align the target with the image so that fine-grained semantic information can be extracted. Our experimental results demonstrate that our model achieves comparable performance with several state-of-the-art approaches on two multimodal sentiment datasets. Our findings indicate that incorporating semantic descriptions of images can enhance our understanding of multimodal content and lead to improved sentiment analysis performance.
Keywords: targeted sentiment analysis, multimodal sentiment classification, visual sentiment, textual sentiment, social media
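The target-based attention module described in the ITMSC abstract lets a target embedding query the text tokens for relevance. The paper's exact formulation is not given in the abstract; the sketch below is a generic scaled dot-product attention over token embeddings, with all names and dimensions chosen for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def target_text_attention(target_vec, text_tokens):
    """Scaled dot-product attention: the target embedding queries the
    text token embeddings and returns a target-aware text summary
    plus the per-token relevance weights."""
    d = target_vec.shape[-1]
    scores = text_tokens @ target_vec / np.sqrt(d)   # (seq_len,)
    weights = softmax(scores)                        # sums to 1
    context = weights @ text_tokens                  # (d,)
    return context, weights
```

The same pattern, with the image region features in place of text tokens, would serve as the image-based attention module the abstract pairs with it.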
Multimodal Fuzzy Downstream Petroleum Supply Chain: A Novel Pentagonal Fuzzy Optimization
16
Authors: Gul Freen, Sajida Kousar, Nasreen Kausar, Dragan Pamucar, Georgia Irina Oros. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 4861-4879 (19 pages)
The petroleum industry has a complex, inflexible and challenging supply chain (SC) that impacts both the national economy and people's daily lives with a range of services, including transportation, heating, electricity, lubricants, as well as chemicals and petrochemicals. In the petroleum industry, supply chain management presents several challenges, especially in the logistics sector, that are not found in other industries. In addition, logistical challenges contribute significantly to the cost of oil. Uncertainty regarding customer demand and supply significantly affects SC networks, so SC flexibility can be maintained by addressing uncertainty. On the other hand, real-world decision-making challenges are often ambiguous or vague. In some cases, measurements are incorrect owing to measurement errors, instrument faults, etc., which leads to a pentagonal fuzzy number (PFN), an extension of a fuzzy number. Therefore, it is necessary to develop quantitative models to optimize logistics operations and supply chain networks. This study proposes a linear programming model under an uncertain environment. The model minimizes the cost across refineries, depots, multimode transport and demand nodes. Building on pentagonal fuzzy optimization, an alternative approach is developed to solve the downstream supply chain using a mixed-integer linear programming (MILP) model to obtain a feasible solution to the fuzzy transportation cost problem. In this model, the coefficients of the transportation costs and parameters are assumed to be pentagonal fuzzy numbers, and defuzzification is performed using an accuracy function. To validate the model, the technique and the feasibility of the solution, an illustrative example of an oil and gas SC is considered; the objective is to provide improved results compared with existing techniques and to demonstrate the approach's ability to benefit petroleum companies.
Keywords: downstream petroleum supply chain, fuzzy optimization, multimodal optimization, pentagonal fuzzy number
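The abstract above describes defuzzifying pentagonal fuzzy transport costs via an accuracy function so a standard MILP solver can consume crisp coefficients. The paper's specific accuracy function is not stated in the abstract; the sketch below uses a simple mean-of-vertices form as a stand-in, and the `PentagonalFuzzyNumber` class is an illustrative assumption, not the authors' formulation.

```python
from dataclasses import dataclass

@dataclass
class PentagonalFuzzyNumber:
    """A PFN represented by five ordered vertices a1 <= a2 <= a3 <= a4 <= a5."""
    a1: float
    a2: float
    a3: float
    a4: float
    a5: float

    def accuracy(self) -> float:
        # placeholder accuracy function: the mean of the five vertices
        # (the literature offers several weighted variants)
        return (self.a1 + self.a2 + self.a3 + self.a4 + self.a5) / 5.0

def crisp_costs(fuzzy_costs):
    """Defuzzify a list of pentagonal fuzzy transportation costs into
    crisp coefficients for the MILP objective."""
    return [c.accuracy() for c in fuzzy_costs]
```

With the costs made crisp this way, the remaining problem is an ordinary transportation MILP over refineries, depots, transport modes and demand nodes.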
Multimodal Spatiotemporal Feature Map for Dynamic Gesture Recognition
17
Authors: Xiaorui Zhang, Xianglong Zeng, Wei Sun, Yongjun Ren, Tong Xu. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 671-686 (16 pages)
Gesture recognition technology enables machines to read human gestures and has significant application prospects in the fields of human-computer interaction and sign language translation. Existing research usually uses convolutional neural networks to extract features directly from raw gesture data, but the networks are affected by substantial interference in the input data and thus fit unimportant features. In this paper, we propose a novel method for encoding spatio-temporal information that enhances the key features required for gesture recognition, such as the shape, structure, contour, position and hand motion of gestures, thereby improving recognition accuracy. This encoding method can encode arbitrarily many frames of gesture data into a single-frame spatio-temporal feature map and use that map as the input to the neural network. This guides the model to fit important features while avoiding complex recurrent network structures for extracting temporal features. In addition, we designed two sub-networks and trained the model with a sub-network pre-training strategy that trains the sub-networks first and then the entire network, so as to prevent the sub-networks from focusing too much on single-category features and from being overly influenced by each other's features. Experimental results on two public gesture datasets show that the proposed spatio-temporal information encoding method achieves high accuracy.
Keywords: dynamic gesture recognition, spatio-temporal information encoding, multimodal input, pre-training, score fusion
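The abstract above collapses many gesture frames into one spatio-temporal feature map. The paper's encoding is not detailed in the abstract; the sketch below illustrates only the general idea, recording for each pixel both how strongly and when it activated, so motion order survives in a single frame. The specific peak-times-timestamp scheme is an assumption for illustration.

```python
import numpy as np

def encode_spatiotemporal(frames):
    """Collapse T gesture frames of shape (T, H, W) into one (H, W)
    feature map. Each pixel keeps its peak activation scaled by the
    normalized time index at which the peak occurred, so a single
    frame preserves both spatial shape and temporal order."""
    frames = np.asarray(frames, dtype=float)
    t_idx = frames.argmax(axis=0)            # frame index of each pixel's peak
    peak = frames.max(axis=0)                # strength of that peak
    t_norm = (t_idx + 1) / frames.shape[0]   # normalized time in (0, 1]
    return peak * t_norm
```

The resulting map can be fed to an ordinary 2D convolutional network, sidestepping the recurrent temporal modeling the abstract says the method avoids.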
Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
18
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 8, pp. 1673-1689 (17 pages)
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modalities, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and it necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis, vision-language pre-trained model, contrastive learning, sentiment classification
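The multimodal contrastive learning the abstract above proposes pulls matching image-text pairs together and pushes mismatched pairs apart. The paper's exact loss is not given in the abstract; the sketch below is the standard CLIP-style symmetric InfoNCE objective over a batch of L2-normalized embeddings, offered only as a representative form.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric image-text contrastive loss over a batch (B, d).
    Matching pairs sit on the diagonal of the cosine-similarity
    matrix; the other entries in each row and column act as
    in-batch negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (B, B) similarities
    labels = np.arange(len(logits))

    def cross_entropy(l):
        # log-softmax per row, then pick the diagonal (matching) entry
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Minimizing this loss aligns the visual and textual representations before (or alongside) the downstream sentiment classification head.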
Multimodal MRI diagnosis and transvenous embolization of a basicranial emissary vein dural arteriovenous fistula: A case report
19
Authors: Xi Chen, Liang Ge, Hailin Wan, Lei Huang, Yeqing Jiang, Gang Lu, Jing Wang, Xiaolong Zhang. Journal of Interventional Medicine, 2023, Issue 1, pp. 41-45 (5 pages)
A dural arteriovenous fistula (DAVF) is an abnormal linkage connecting the arterial and venous systems within the intracranial dura mater. A basicranial emissary vein DAVF drains into the cavernous sinus and the ophthalmic vein, similar to a cavernous sinus DAVF. Precise preoperative identification of the DAVF location is a prerequisite for appropriate treatment. Treatment options include microsurgical disconnection, endovascular transarterial embolization (TAE), transvenous embolization (TVE), or a combination thereof. TVE is an increasingly popular approach for the treatment of DAVFs and the preferred approach for skull-base locations, due to the risk of cranial neuropathy caused by dangerous anastomoses from arterial approaches. Multimodal magnetic resonance imaging (MRI) can provide anatomical and hemodynamic information for TVE. The therapeutic target must be precisely embolized in the emissary vein, which requires guidance via multimodal MRI. Here, we report a rare case of successful TVE for a basicranial emissary vein DAVF performed with multimodal MRI assistance. The fistula had vanished, pterygoid plexus drainage had improved, and the inferior petrosal sinus had recanalized, as observed on 8-month follow-up angiography. Symptoms and signs of double vision, caused by abduction deficiency, disappeared. Detailed anatomic and hemodynamic assessment by multimodal MRI is the key to guiding successful diagnosis and treatment.
Keywords: dural arteriovenous fistula, transvenous embolization, multimodal magnetic resonance imaging, cortical venous reflux, angiography
Multimodal integrated intervention for children with attention-deficit/hyperactivity disorder
20
Authors: Ying-Bo Lv, Wei Cheng, Meng-Hui Wang, Xiao-Min Wang, Yan-Li Hu, Lan-Qiu Lv. World Journal of Clinical Cases (SCIE), 2023, Issue 18, pp. 4267-4276 (10 pages)
BACKGROUND: Attention-deficit/hyperactivity disorder (ADHD) is one of the most common disorders in child and adolescent psychiatry, with a prevalence of more than 5%. Despite extensive research on ADHD in the last 10 to 20 years, effective treatments are still lacking. Instead, the concept of ADHD seems to have become broader and more heterogeneous. Therefore, the diagnosis and treatment of ADHD remain challenging for clinicians. AIM: To investigate the effects of a multimodal integrated intervention for children with ADHD. METHODS: Between March 2019 and September 2020, a total of 100 children with ADHD who were diagnosed and treated at our hospital were assessed for eligibility, two of whom revoked their consent. A case-control study was conducted in which the children were equally assigned, using a randomized number table, to either a medication group (methylphenidate hydrochloride extended-release tablets and atomoxetine hydrochloride tablets) or a multimodal integrated intervention group (medication + parent training + behavior modification + sensory integration therapy + sand tray therapy), with 49 patients in each group. The clinical endpoint was the efficacy of the different intervention modalities. RESULTS: The two groups of children with ADHD had comparable patient characteristics (P > 0.05). Multimodal integrated intervention resulted in a significantly higher treatment efficacy (91.84%) than medication alone (75.51%) (P < 0.05). Children who received the multimodal integrated intervention showed lower scores on the Conners Parent Symptom Questionnaire and the Weiss Functional Impairment Rating Scale than those treated with medication alone (P < 0.05). The Sensory Integration Scale scores of children in the multimodal integrated intervention group were higher than those of children in the medication group (P < 0.05). Children who received the multimodal integrated intervention had higher compliance and family satisfaction and a lower incidence of adverse events than those treated with medication alone (P < 0.05). CONCLUSION: Multimodal integrated intervention effectively alleviated symptoms associated with ADHD in children. It enhanced their memory and attention with high safety and parental satisfaction, demonstrating good potential for clinical promotion.
Keywords: attention-deficit/hyperactivity disorder, multimodal integrated intervention, medication, behavior modification, sensory integration therapy, sand tray therapy