Journal Literature
81,025 articles found
1. Deep Feature Fusion Model for Sentence Semantic Matching (Cited by: 1)
Authors: Xu Zhang, Wenpeng Lu, Fangfang Li, Xueping Peng, Ruoyu Zhang. Computers, Materials & Continua (SCIE, EI), 2019, Issue 8, pp. 601-616.
Sentence semantic matching (SSM) is fundamental research for solving natural language processing tasks such as question answering and machine translation. The latest SSM research benefits from deep learning techniques by incorporating attention mechanisms to semantically match given sentences. However, fully capturing the semantic context without losing significant features for sentence encoding remains a challenge. To address this challenge, we propose a deep feature fusion model and integrate it into the most popular deep learning architecture for the sentence matching task. The integrated architecture mainly consists of an embedding layer, a deep feature fusion layer, a matching layer, and a prediction layer. In addition, we compare the commonly used loss functions and propose a novel hybrid loss function integrating MSE and cross-entropy, with a confidence interval and threshold setting to preserve indistinguishable instances during training. To evaluate our model's performance, we experiment on two real-world public data sets: LCQMC and Quora. The experimental results demonstrate that our model outperforms most existing advanced deep learning models for sentence matching, benefiting from our enhanced loss function and deep feature fusion model for capturing semantic context.
Keywords: natural language processing, semantic matching, deep learning
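The hybrid loss idea in this abstract can be sketched as follows. The blending rule, the threshold semantics, and every name here are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cross_entropy(p, y, eps=1e-12):
    # Binary cross-entropy for one instance; p is the predicted match
    # probability, y is the gold label (0 or 1).
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def mse(p, y):
    # Squared error for one instance.
    return (p - y) ** 2

def hybrid_loss(p, y, threshold=0.2, alpha=0.5):
    # Assumed blending rule: inside the low-confidence band around 0.5
    # ("indistinguishable" instances), mix in an MSE term so those
    # instances are preserved rather than over-penalised; elsewhere,
    # fall back to plain cross-entropy.
    if abs(p - 0.5) < threshold:
        return alpha * mse(p, y) + (1 - alpha) * cross_entropy(p, y)
    return cross_entropy(p, y)
```

The softer MSE term shrinks the penalty on near-threshold predictions, e.g. `hybrid_loss(0.55, 1)` is smaller than the pure cross-entropy at the same point.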
2. An Improved Hybrid Semantic Matching Algorithm with Lexical Similarity
Authors: Peng Rongqun, Mi Zhengkun, Wang Lingjiao. Journal of Electronics (China), 2010, Issue 6, pp. 838-847.
In this paper, we propose an improved hybrid semantic matching algorithm that combines input/output (I/O) semantic matching with text lexical similarity, overcoming the inability of existing semantic web service discovery algorithms to distinguish services with identical I/O when matching only on I/O-based service signatures. The improved algorithm consists of two steps: first, logic-based I/O concept ontology matching, which yields the candidate service set; second, service name matching with lexical similarity against the candidate set, which produces the final precise matching result. Using the Ontology Web Language for Services (OWL-S) test collection, we tested our hybrid algorithm and compared it with OWL-S Matchmaker-X (OWLS-MX). The experimental results show that the proposed algorithm can pick out the most suitable advertised service for a user's request from among very similar ones, and provides better matching precision and efficiency than OWLS-MX.
Keywords: hybrid matching algorithm, semantic matching, lexical similarity, WordNet, Ontology Web Language for Services (OWL-S)
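The two-step scheme described here can be sketched with toy data. The service records, the exact-concept I/O test, and the use of `difflib` for lexical similarity are all assumptions; the paper's matcher reasons over an ontology's subsumption hierarchy and WordNet rather than exact sets:

```python
from difflib import SequenceMatcher

# Toy advertised services: a name plus input/output concept sets.
SERVICES = [
    {"name": "CityWeatherForecastService", "inputs": {"City"}, "outputs": {"Weather"}},
    {"name": "TownWeatherReportService",   "inputs": {"City"}, "outputs": {"Weather"}},
    {"name": "BookPriceService",           "inputs": {"Book"}, "outputs": {"Price"}},
]

def io_match(service, req_inputs, req_outputs):
    # Step 1: I/O signature matching (exact-concept toy version).
    return req_inputs <= service["inputs"] and req_outputs <= service["outputs"]

def discover(req_inputs, req_outputs, req_name):
    # Step 2: rank the I/O-identical candidates by lexical name similarity,
    # which is what disambiguates services that step 1 cannot tell apart.
    candidates = [s for s in SERVICES if io_match(s, req_inputs, req_outputs)]
    return sorted(candidates,
                  key=lambda s: SequenceMatcher(None, s["name"], req_name).ratio(),
                  reverse=True)
```

For a request with inputs {City}, outputs {Weather}, and name "WeatherForecastForCity", step 1 keeps both weather services and step 2 ranks the forecast service first.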
3. Wireless semantic communication based on semantic matching multiple access and intent bias multiplexing
Authors: Ren Chao, He Zongrui, Sun Chen, Li Haojin, Zhang Haijun. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2024, Issue 1, pp. 26-36.
This paper proposes a multi-access, multi-user semantic communication scheme based on semantic matching and intent deviation to address the growing demand from wireless users and data. The scheme enables flexible management of long frames, allowing each unit of bandwidth to support more users. By leveraging semantic classification, different users can independently access the network through the transmission of long concatenated sequences without modifying the existing wireless communication architecture. To overcome the potential disadvantage of incomplete semantic database matching leading to misunderstanding of semantic intent, the scheme turns intent deviation into an advantage: different receivers may interpret the same semantic information differently, enabling multiplexing in which one piece of information serves multiple users with distinct purposes. Simulation results show that at a bit error rate (BER) of 0.1, transmission can be reduced by approximately 20 semantic basic units.
Keywords: semantic communication, multiple access, multiplexing, multimodal communication
4. A semantic segmentation-based underwater acoustic image transmission framework for cooperative SLAM
Authors: Jiaxu Li, Guangyao Han, Shuai Chang, Xiaomei Fu. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 339-351.
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance over wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles, yet the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, making compression before transmission essential. Deep neural networks have recently shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information and neglect semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-based sonar image compression network whose quality is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the receptive field. The joint source-channel codec with unequal error protection adjusts the power level of the transmitted data to deal with transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information than existing methods at the same compression ratio, and also improves the error tolerance and packet-loss resistance of transmission.
Keywords: semantic segmentation, sonar image transmission, learning-based compression
5. A Video Captioning Method by Semantic Topic-Guided Generation
Authors: Ou Ye, Xinli Wei, Zhenhua Yu, Yan Fu, Ying Yang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1071-1093.
In encoder-decoder video captioning methods, an encoder extracts limited visual features and a decoder generates a natural-language sentence describing the video content. However, such methods depend on a single video input source and few visual labels, and suffer from poor semantic alignment between video contents and the generated sentences, which makes them unsuitable for accurately comprehending and describing video contents. To address this issue, this paper proposes a video captioning method with semantic topic-guided generation. First, a 3D convolutional neural network extracts the spatiotemporal features of videos during encoding. Then, the semantic topics of the video data are extracted using visual labels retrieved from similar video data. During decoding, a decoder is constructed by combining a novel Enhance-TopK sampling algorithm with a Generative Pre-trained Transformer-2 deep neural network, which decreases the influence of "deviation" in the semantic mapping between videos and texts by jointly decoding a baseline and the semantic topics of video contents. In this process, the Enhance-TopK sampling algorithm alleviates the long-tail problem by dynamically adjusting the probability distribution of the predicted words. Finally, experiments are conducted on the two public datasets Microsoft Research Video Description and Microsoft Research-Video to Text. The experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches. Specifically, the indicators Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Recall-Oriented Understudy for Gisting Evaluation-longest common subsequence (ROUGE-L), and Consensus-based Image Description Evaluation (CIDEr) improve by 1.2%, 0.1%, 0.3%, and 2.4% on the Microsoft Research Video Description dataset, and by 0.1%, 1.0%, 0.1%, and 2.8% on the Microsoft Research-Video to Text dataset, respectively, compared with existing video captioning methods. As a result, the proposed method generates captions that align more closely with human natural language expression habits.
Keywords: video captioning, encoder-decoder, semantic topic, joint decoding, Enhance-TopK sampling
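For orientation, the standard top-k sampling baseline that Enhance-TopK builds on can be sketched as below. The paper's variant additionally reshapes the distribution dynamically to counter the long-tail problem; that reshaping rule is not given here, so this shows only the conventional algorithm:

```python
import random

def top_k_sample(probs, k=3, temperature=1.0, rng=random):
    # Standard top-k sampling: keep the k most probable words, apply a
    # temperature, renormalise, then draw one index. Enhance-TopK would
    # further adjust this truncated distribution before drawing.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    weights = [probs[i] ** (1.0 / temperature) for i in top]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for idx, w in zip(top, weights):
        acc += w
        if r <= acc:
            return idx
    return top[-1]
```

With k=1 this degenerates to greedy decoding; raising the temperature flattens the truncated distribution and spreads probability toward rarer words.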
6. Feature Matching via Topology-Aware Graph Interaction Model
Authors: Yifan Lu, Jiayi Ma, Xiaoguang Mei, Jun Huang, Xiao-Ping Zhang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 113-130.
Feature matching plays a key role in computer vision. However, due to the limitations of descriptors, putative matches are inevitably contaminated by massive outliers. This paper tackles the outlier filtering problem from two aspects. First, a robust and efficient graph interaction model is proposed, under the assumption that matches are correlated with each other rather than independently distributed. To this end, we construct a graph based on the local relationships of matches and formulate the outlier filtering task as a binary labeling energy minimization problem, where the pairwise term encodes the interaction between matches. We further show that this formulation can be solved globally by a graph cut algorithm. The new formulation consistently improves the performance of the previous locality-based method without noticeable deterioration in processing time, adding only a few milliseconds. Second, to construct a better graph structure, a robust and geometrically meaningful topology-aware relationship is developed to capture the topological relationship between matches. Together, the two components lead to topology interaction matching (TIM), an effective and efficient method for outlier filtering. Extensive experiments on several large and diverse datasets for multiple vision tasks, including general feature matching, relative pose estimation, homography and fundamental matrix estimation, loop-closure detection, and multi-modal image matching, demonstrate that TIM is more competitive than current state-of-the-art methods in terms of generality, efficiency, and effectiveness. The source code is publicly available at http://github.com/YifanLu2000/TIM.
Keywords: feature matching, graph cut, outlier filtering, topology preserving
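The binary labeling energy described in this abstract has the familiar unary-plus-pairwise form. A minimal sketch follows, with brute-force search standing in for the graph-cut solver at toy sizes; the costs, the Potts pairwise term, and the tiny graph are invented for illustration:

```python
from itertools import product

def energy(labels, unary, edges, lam=1.0):
    # E(x) = sum_i unary[i][x_i] + lam * sum_{(i,j) in edges} [x_i != x_j].
    # x_i = 1 keeps match i as an inlier, 0 rejects it as an outlier; the
    # Potts pairwise term encourages neighbouring matches to agree.
    data = sum(unary[i][x] for i, x in enumerate(labels))
    smooth = sum(labels[i] != labels[j] for i, j in edges)
    return data + lam * smooth

def minimize(unary, edges, lam=1.0):
    # Exhaustive search over labelings; a graph cut finds the same global
    # minimum for this submodular energy in polynomial time.
    n = len(unary)
    return min(product((0, 1), repeat=n), key=lambda x: energy(x, unary, edges, lam))

# Three confident matches plus one that locally looks wrong.
unary = [(5, 0), (5, 0), (5, 0), (0, 5)]   # (cost of label 0, cost of label 1)
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # local-neighbourhood graph
```

With a small smoothness weight the suspect match is rejected; a large weight forces agreement with its neighbours, illustrating how the pairwise term trades data cost against interaction.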
7. Distributed Matching Theory-Based Task Re-Allocating for Heterogeneous Multi-UAV Edge Computing
Authors: Yangang Wang, Xianglin Wei, Hai Wang, Yongyang Hu, Kuang Zhao, Jianhua Fan. China Communications (SCIE, CSCD), 2024, Issue 1, pp. 260-278.
Many efforts have been devoted to efficient task scheduling in multi-unmanned aerial vehicle (UAV) edge computing. However, the heterogeneity of UAV computation resources and the re-allocation of tasks between UAVs have not been fully considered, and most existing works neglect the fact that a task can only be executed on a UAV equipped with its desired service function (SF). Against this backdrop, this paper formulates task scheduling as a multi-objective problem that aims to maximize the task execution success ratio while minimizing the average weighted sum of all tasks' completion time and energy consumption. Optimizing three coupled goals in real time under dynamically arriving tasks rules out existing approaches such as machine-learning-based solutions, which require long training times and extensive prior knowledge of the task arrival process, or heuristic-based ones, which usually incur long decision-making times. To tackle this problem in a distributed manner, we establish a matching theory framework in which the three conflicting goals are treated as the preferences of tasks, SFs, and UAVs. A Distributed Matching Theory-based Re-allocating (DiMaToRe) algorithm is then put forward, and we formally prove that it achieves a stable matching. Extensive simulation results show that DiMaToRe outperforms benchmark algorithms under diverse parameter settings and exhibits good robustness.
Keywords: edge computing, heterogeneity, matching theory, service function, unmanned aerial vehicle
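Stability proofs in matching-theory frameworks like this typically rest on deferred acceptance. As background, a sketch of the classical Gale-Shapley algorithm with tasks proposing to UAVs; the preference lists are toy data and this is not the DiMaToRe algorithm itself, which handles three-sided preferences and dynamic arrivals:

```python
def deferred_acceptance(task_pref, uav_pref):
    # Gale-Shapley deferred acceptance: tasks propose to UAVs in
    # preference order; each UAV keeps its best proposer so far.
    # Returns a stable one-to-one task -> UAV assignment.
    rank = {u: {t: i for i, t in enumerate(p)} for u, p in uav_pref.items()}
    free = list(task_pref)               # tasks still unassigned
    next_choice = {t: 0 for t in task_pref}
    engaged = {}                         # uav -> task
    while free:
        t = free.pop(0)
        u = task_pref[t][next_choice[t]]
        next_choice[t] += 1
        if u not in engaged:
            engaged[u] = t
        elif rank[u][t] < rank[u][engaged[u]]:
            free.append(engaged[u])      # displaced task proposes again
            engaged[u] = t
        else:
            free.append(t)
    return {t: u for u, t in engaged.items()}

task_pref = {"t1": ["u1", "u2"], "t2": ["u1", "u2"]}
uav_pref = {"u1": ["t2", "t1"], "u2": ["t1", "t2"]}
match = deferred_acceptance(task_pref, uav_pref)
```

Here both tasks prefer u1, but u1 prefers t2, so t1 is displaced to u2; no task-UAV pair would jointly deviate, which is the stability property the paper proves for its own mechanism.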
8. A Random Fusion of Mix3D and PolarMix to Improve Semantic Segmentation Performance in 3D Lidar Point Cloud
Authors: Bo Liu, Li Feng, Yufeng Chen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 845-862.
This paper focuses on the effective use of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through collections of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to improve model generalization. Much existing research is devoted to crafting novel augmentation methods specifically for 3D lidar point clouds, but little attention has been paid to making the most of the many existing techniques. Addressing this gap, this research investigates combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each sample, so each point cloud in the data set is augmented with either PolarMix or Mix3D. Experiments validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. RandomFusion offers a simple yet effective way to leverage the diversity of augmentation techniques and boost model robustness, and the insights gained can pave the way for more advanced and efficient augmentation strategies for 3D lidar point cloud analysis.
Keywords: 3D lidar point cloud, data augmentation, RandomFusion, semantic segmentation
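The RandomFusion selection rule itself is tiny and can be sketched directly. The two augmentations below are stand-ins that only tag points (the real PolarMix swaps azimuth sectors between scans and Mix3D overlays whole scenes, which is out of scope here):

```python
import random

def polar_mix(points):
    # Stand-in for PolarMix; we only tag each point for demonstration.
    return [(x, y, z, "polarmix") for x, y, z in points]

def mix3d(points):
    # Stand-in for Mix3D.
    return [(x, y, z, "mix3d") for x, y, z in points]

AUGMENTATIONS = [polar_mix, mix3d]

def random_fusion(points, rng=random):
    # Per sample, draw one augmentation uniformly from the pool rather
    # than applying a fixed or predetermined combination.
    return rng.choice(AUGMENTATIONS)(points)
```

Over a training epoch roughly half the samples pass through each augmentation, which is what lets the model see the diversity of both techniques at no extra cost per sample.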
9. A Time Series Short-Term Prediction Method Based on Multi-Granularity Event Matching and Alignment
Authors: Haibo Li, Yongbo Yu, Zhenbo Zhao, Xiaokang Tang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 653-676.
Accurate forecasting of time series is crucial across various domains, and many prediction tasks rely on effectively segmenting, matching, and aligning time series data. For instance, even for time series with the same granularity, segmenting them into events of different granularities can mitigate the impact of varying time scales on prediction accuracy. However, these events of varying granularity frequently intersect and may have unequal durations, and even minor differences can cause significant errors when matching time series against future trends. Moreover, directly using matched but unaligned events as state vectors in machine-learning-based prediction models can lead to insufficient prediction accuracy. This paper therefore proposes a multi-granularity event-based short-term prediction method for time series, MGE-SP. A methodological framework guides the implementation through three key steps: multi-granularity event matching based on the latest-time-first (LTF) strategy, multi-granularity event alignment using piecewise aggregate approximation driven by the compression ratio, and a short-term prediction model based on XGBoost. Data from a nationwide online car-hailing service in China ensures the method's reliability: the average RMSE (root mean square error) and MAE (mean absolute error) of the proposed method are 3.204 and 2.360, lower than the respective values of 4.056 and 3.101 obtained using the ARIMA (autoregressive integrated moving average) method, and 4.278 and 2.994 obtained using the k-means-SVR (support vector regression) method. A second experiment on stock data from a public data set achieved an average RMSE and MAE of 0.836 and 0.696, lower than the respective values of 1.019 and 0.844 for ARIMA and 1.350 and 1.172 for k-means-SVR.
Keywords: time series, short-term prediction, multi-granularity event, alignment, event matching
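The alignment step rests on piecewise aggregate approximation (PAA), which compresses events of unequal duration to a common length. A minimal sketch with equal-width segments (the paper derives the segment count from a compression ratio, a detail omitted here):

```python
def paa(series, n_segments):
    # Piecewise aggregate approximation: replace each of n_segments
    # roughly equal-width windows by its mean, so events of different
    # lengths map onto state vectors of one fixed size.
    n = len(series)
    out = []
    for k in range(n_segments):
        lo = k * n // n_segments
        hi = (k + 1) * n // n_segments
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out
```

Two matched events of lengths 40 and 100 both become, say, 8-dimensional vectors after PAA, which is what makes them usable side by side as model inputs.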
10. A semantic vector map-based approach for aircraft positioning in GNSS/GPS denied large-scale environments
Authors: Chenguang Ouyang, Suxing Hu, Fengqi Long, Shuai Shi, Zhichao Yu, Kaichun Zhao, Zheng You, Junyin Pi, Bowen Xing. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 1-10.
Accurate positioning is an essential requirement for numerous remote sensing applications, especially when the satellite signal is noisy or unreliable. To this end, we present a novel framework for wide-area aircraft geo-localization that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source vector map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle filter stage. An experimental evaluation based on flight data is provided: on a flight at a height of 0.2 km over a distance of 2 km, the aircraft is geo-localized within a reference map of 11,025 km² using 0.09 km² aerial images without any prior information, with an absolute localization error of less than 10 m.
Keywords: large-scale positioning, building vector matching, improved particle filter, GPS-denied, vector map
11. Generative Multi-Modal Mutual Enhancement Video Semantic Communications
Authors: Yuanle Chen, Haobo Wang, Chunyu Liu, Linyi Wang, Jiaxin Liu, Wei Wu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 2985-3009.
Recently, there have been significant advances in the study of semantic communication in single-modal scenarios, but the ability to process information in multi-modal environments remains limited. Inspired by research and applications of natural language processing across different modalities, our goal is to accurately extract frame-level semantic information from videos and ultimately transmit high-quality video. Specifically, we propose a deep learning-based Multi-Modal Mutual Enhancement Video Semantic Communication system, called M3E-VSC. Built upon a Vector-Quantized Generative Adversarial Network (VQGAN), our system leverages mutual enhancement among different modalities by using text as the main carrier of transmission. With it, semantic information can be extracted from the key-frame images and audio of the video, and differential values computed so that the extracted text conveys accurate semantic information with fewer bits, improving system capacity. Furthermore, a multi-frame semantic detection module is designed to facilitate semantic transitions during video generation. Simulation results demonstrate that the proposed model maintains high robustness in complex noise environments, particularly at low signal-to-noise ratios, improving the accuracy and speed of semantic transmission in video communication by approximately 50 percent.
Keywords: generative adversarial networks, multi-modal mutual enhancement, video semantic transmission, deep learning
12. A Joint Entity Relation Extraction Model Based on an Automatically Constructed Relation Semantic Template
Authors: Wei Liu, Meijuan Yin, Jialong Zhang, Lunchong Cui. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 975-997.
Joint entity relation extraction models that integrate the semantic information of relations are favored by researchers because of their effectiveness in handling entity overlap, and manually defined relation semantic templates are particularly effective because they capture deep semantic information about relations. However, manual definition relies on expert experience and ports poorly to new domains. Inspired by rule-based entity relation extraction, this paper proposes a joint entity relation extraction model based on automatically constructed relation semantic templates, abbreviated RSTAC. The model refines extraction rules for relation semantic templates from a relation corpus through dependency parsing, realizing automatic template construction. Based on the relation semantic templates, the processes of relation classification and triple extraction are constrained, and finally the entity-relation triples are obtained. Experimental results on the three major Chinese datasets DuIE, SanWen, and FinRE show that RSTAC successfully captures rich deep relation semantics and improves triple extraction, increasing F1 scores by an average of 0.96% compared with classical joint extraction models such as CasRel, TPLinker, and RFBFN.
Keywords: natural language processing, deep learning, information extraction, relation extraction, relation semantic template
13. Artificial Immune Detection for Network Intrusion Data Based on a Quantitative Matching Method
Authors: CaiMing Liu, Yan Zhang, Zhihui Hu, Chunming Xie. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2361-2389.
Artificial immune detection can detect network intrusions adaptively, and proper matching methods can improve its accuracy. This paper proposes an artificial immune detection model for network intrusion data based on a quantitative matching method. The model defines the detection process over network data, expresses features as decimal values, and simulates artificial immune mechanisms to define the immune elements. To improve the accuracy of similarity calculation, a quantitative matching method is proposed. The model uses mathematical methods to train and evolve immune elements, increasing the diversity of immune recognition and allowing unknown intrusions to be detected. The model's objective is to accurately identify known intrusions and extend identification to unknown intrusions through signature detection and immune detection, overcoming the disadvantages of traditional methods. Experimental results show that the proposed model detects intrusions effectively, with an average detection rate above 99.6% and a false alarm rate of 0.0264%, outperforming existing immune intrusion detection methods in comprehensive detection performance.
Keywords: immune detection, network intrusion, network data, signature detection, quantitative matching method
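A quantitative match between a detector and a decimal feature vector can be sketched as a distance-based affinity with an activation threshold. The inverse-distance formula and the threshold value below are illustrative choices, not the paper's exact metric:

```python
import math

def affinity(detector, antigen):
    # Quantitative match score between a detector and a network-traffic
    # feature vector (decimal features), mapped into (0, 1]; closer
    # vectors score higher.
    return 1.0 / (1.0 + math.dist(detector, antigen))

def detect(detectors, antigen, threshold=0.5):
    # Flag the antigen as an intrusion if any detector's quantitative
    # match score clears the activation threshold.
    return any(affinity(d, antigen) >= threshold for d in detectors)
```

Training in the paper's model then evolves the detector set so that its coverage of the feature space grows, extending detection to unknown intrusions.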
14. SHEL: a semantically enhanced hardware-friendly entity linking method
Authors: QI Donglin, CHEN Shudong, DU Rong, TONG Da, YU Yong. High Technology Letters (EI, CAS), 2024, Issue 1, pp. 13-22.
With the help of pre-trained language models, the accuracy of entity linking has made great strides in recent years. However, most high-performing models require fine-tuning large pre-trained language models on a large amount of training data, which is a hardware threshold for this task. Some researchers have achieved competitive results with less training data through ingenious methods, such as utilizing information provided by a named entity recognition model. This paper presents a novel semantic-enhancement-based entity linking approach, semantically enhanced hardware-friendly entity linking (SHEL), designed to be hardware friendly and efficient while maintaining good performance. SHEL's semantic enhancement consists of three aspects: (1) semantic compression of entity descriptions using a text summarization model; (2) maximizing the capture of mention contexts using asymmetric heuristics; (3) computing a fixed-size mention representation through pooling operations. These semantic enhancement methods effectively improve the model's ability to capture semantic information within hardware constraints, and improve convergence speed by more than 50% compared with the strong baseline proposed in this paper. In terms of performance, SHEL is comparable to previous methods, with superior results on six well-established datasets, even though it is trained using a smaller pre-trained language model as the encoder.
Keywords: entity linking (EL), pre-trained models, knowledge graph, text summarization, semantic enhancement
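Point (3) above, a fixed-size mention representation via pooling, is a standard operation and can be sketched directly; the toy vectors stand in for encoder token embeddings:

```python
def mean_pool(vectors):
    # Average the variable-length list of token vectors for a mention
    # span into one vector whose size is the embedding dimension.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def max_pool(vectors):
    # Max pooling keeps each dimension's strongest activation instead.
    return [max(v[i] for v in vectors) for i in range(len(vectors[0]))]
```

Either way, a 3-token mention and a 7-token mention both yield vectors of the same size, which is what makes the downstream linker hardware friendly (fixed shapes, no padding to a worst-case length).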
15. Can propensity score matching replace randomized controlled trials?
Authors: Matthias Yi Quan Liau, En Qi Toh, Shamir Muhamed, Surya Varma Selvakumar, Vishalkumar Girishchandra Shelat. World Journal of Methodology, 2024, Issue 1, pp. 58-70.
Randomized controlled trials(RCTs)have long been recognized as the gold standard for establishing causal relationships in clinical research.Despite that,various limitations of RCTs prevent its widespread implementatio... Randomized controlled trials(RCTs)have long been recognized as the gold standard for establishing causal relationships in clinical research.Despite that,various limitations of RCTs prevent its widespread implementation,ranging from the ethicality of withholding potentially-lifesaving treatment from a group to relatively poor external validity due to stringent inclusion criteria,amongst others.However,with the introduction of propensity score matching(PSM)as a retrospective statistical tool,new frontiers in establishing causation in clinical research were opened up.PSM predicts treatment effects using observational data from existing sources such as registries or electronic health records,to create a matched sample of participants who received or did not receive the intervention based on their propensity scores,which takes into account characteristics such as age,gender and comorbidities.Given its retrospective nature and its use of observational data from existing sources,PSM circumvents the aforementioned ethical issues faced by RCTs.Majority of RCTs exclude elderly,pregnant women and young children;thus,evidence of therapy efficacy is rarely proven by robust clinical research for this population.On the other hand,by matching study patient characteristics to that of the population of interest,including the elderly,pregnant women and young children,PSM allows for generalization of results to the wider population and hence greatly increases the external validity.Instead of replacing RCTs with PSM,the synergistic integration of PSM into RCTs stands to provide better research outcomes with both methods complementing each other.For example,in an RCT investigating the impact of mannitol on outcomes among participants of the Intensive Blood Pressure 
Reduction in Acute Cerebral Hemorrhage Trial,the baseline characteristics of comorbidities and current medications between treatment and control arms were significantly different despite the randomization protocol.Therefore,PSM was incorporated in its analysis to create samples from the treatment and control arms that were matched in terms of these baseline characteristics,thus providing a fairer comparison for the impact of mannitol.This literature review reports the applications,advantages,and considerations of using PSM with RCTs,illustrating its utility in refining randomization,improving external validity,and accounting for non-compliance to protocol.Future research should consider integrating the use of PSM in RCTs to better generalize outcomes to target populations for clinical practice and thereby benefit a wider range of patients,while maintaining the robustness of randomization offered by RCTs. 展开更多
Keywords: Propensity score matching; Randomized controlled trials; Randomization; Clinical practice; Validity; Ethics
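The matching step described in the abstract above can be sketched in a few lines: fit a propensity model on covariates such as age, gender and comorbidities, then greedily pair each treated participant with the control whose score is closest, within a caliper. The logistic coefficients, caliper, and participants below are purely illustrative, not taken from the paper.

```python
# Sketch of 1:1 nearest-neighbour propensity score matching (pure stdlib).
import math

def propensity(age, male, comorbidities, coef=(-4.0, 0.05, 0.3, 0.6)):
    """Logistic model of P(treated | covariates); coefficients are hypothetical."""
    b0, b_age, b_male, b_com = coef
    z = b0 + b_age * age + b_male * male + b_com * comorbidities
    return 1.0 / (1.0 + math.exp(-z))

def match(treated, controls, caliper=0.05):
    """Greedy 1:1 matching on propensity score within a caliper."""
    available = dict(controls)  # id -> score, shrinks as controls are used up
    pairs = []
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1], reverse=True):
        best = min(available.items(), key=lambda kv: abs(kv[1] - t_score), default=None)
        if best and abs(best[1] - t_score) <= caliper:
            pairs.append((t_id, best[0]))
            del available[best[0]]  # each control matched at most once
    return pairs

treated = {"T1": propensity(70, 1, 3), "T2": propensity(45, 0, 1)}
controls = {"C1": propensity(68, 1, 3), "C2": propensity(44, 0, 1), "C3": propensity(30, 0, 0)}
print(match(treated, controls))  # [('T1', 'C1'), ('T2', 'C2')]
```

Real analyses typically fit the propensity model by maximum likelihood on the observed cohort rather than using fixed coefficients, and follow matching with a balance check on the covariates.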
A Semantic-Sensitive Approach to Indoor and Outdoor 3D Data Organization
16
Author: Youchen Wei. Journal of World Architecture, 2024, No. 1, pp. 1-6 (6 pages)
Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the building information modeling standard Industry Foundation Classes (IFC), the City Geography Markup Language (CityGML), the Indoor Geography Markup Language (IndoorGML), and other models are compared and analyzed. CityGML and IndoorGML face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. The paper proposes combining the semantic information of model objects to effectively partition and organize indoor and outdoor spatial 3D model data, and constructs an indoor and outdoor data organization mechanism of "chunk - layer - sub-object - entrance - area - detail object." The method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
Keywords: Integrated data organization; Indoor and outdoor 3D data models; Semantic models; Spatial segmentation
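The "chunk - layer - sub-object - entrance - area - detail object" hierarchy described above is, in essence, a nested containment structure. A minimal sketch of such a structure follows; the class and field names are illustrative and not taken from the paper.

```python
# Toy containment hierarchy for integrated indoor/outdoor 3D data organization.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DetailObject:
    name: str
    semantic_class: str          # e.g. "door", "window", "furniture"

@dataclass
class Area:
    name: str
    entrances: List[str] = field(default_factory=list)
    details: List[DetailObject] = field(default_factory=list)

@dataclass
class Layer:                     # one storey of a building
    index: int
    areas: List[Area] = field(default_factory=list)

@dataclass
class Chunk:                     # an outdoor tile holding one or more layers
    tile_id: str
    layers: List[Layer] = field(default_factory=list)

lobby = Area("lobby", entrances=["main_door"],
             details=[DetailObject("desk", "furniture")])
building = Chunk("tile_42", layers=[Layer(0, areas=[lobby])])
print(building.layers[0].areas[0].entrances)  # ['main_door']
```

Partitioning by semantic class at each level (as the paper proposes) lets a viewer load only the chunks, layers, or detail objects relevant to the current scene.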
A Study of the TPS-Based Beam-Matching Concept for Medical Linear Accelerators at a Tertiary Hospital
17
Authors: Ntombela N. Lethukuthula, Rovetto J. Nicolas, Nethwadzi C. Lutendo, Mpumelelo Nyathi. International Journal of Medical Physics, Clinical Engineering and Radiation Oncology, 2024, No. 1, pp. 16-25 (10 pages)
Flexibility in radiotherapy can be improved if patients can be moved between any of the department's medical linear accelerators (LINACs) without the need to change anything in the patient's treatment plan. For this to be possible, the dosimetric characteristics of the various accelerators must be the same, or nearly the same. The purpose of this work is to describe and compare measurements and parameters after the initial vendor-recommended beam matching of five LINACs. Deviations related to dose calculations and to beam-matched accelerators may compromise treatment accuracy. The safest and most practical way to ensure that all accelerators are within clinically acceptable accuracy is to include treatment planning system (TPS) calculations in the LINAC matching evaluation. The TPS was used to create three photon plans with field sizes of 3 × 3 cm, 10 × 10 cm and 25 × 25 cm at a depth of 4.5 cm in Perspex. The calculated TPS plans were sent to Mosaiq for delivery by the five LINACs, and were compared with measured data from the five LINACs using gamma analysis with 2%/2 mm criteria. For four of the five LINACs there was generally good agreement, with less than 2% deviation between the planned and measured dose distributions; one LINAC, named "Asterix," exhibited a deviation of 2.121% from the planned dose. Overall, the results show that all of the LINACs performed within the acceptable deviation and delivered radiation dose consistently and accurately.
Keywords: Radiotherapy; Beam matching; Linear accelerator; Dosimetry
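The 2%/2 mm gamma analysis mentioned in the abstract combines a dose-difference tolerance with a distance-to-agreement tolerance: a measured point passes if some nearby reference point lies within the combined ellipse, i.e. gamma ≤ 1. A simplified 1-D sketch with globally normalised dose follows; the dose profiles are made up for illustration (the study's actual comparison would have been over 2-D planar measurements).

```python
# Simplified 1-D gamma analysis with 2%/2 mm criteria.
import math

def gamma_1d(ref, meas, spacing_mm=1.0, dose_tol=0.02, dist_tol_mm=2.0):
    """Return per-point gamma values; ref/meas are dose profiles on the same grid."""
    d_max = max(ref)  # global normalisation of the dose difference
    gammas = []
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing_mm
            dose_diff = (dm - dr) / d_max
            g = math.sqrt((dist / dist_tol_mm) ** 2 + (dose_diff / dose_tol) ** 2)
            best = min(best, g)  # gamma is the minimum over the reference profile
        gammas.append(best)
    return gammas

ref  = [1.00, 0.98, 0.95, 0.90, 0.80]
meas = [1.00, 0.97, 0.95, 0.91, 0.80]
g = gamma_1d(ref, meas)
pass_rate = sum(v <= 1.0 for v in g) / len(g)
print(f"gamma pass rate: {pass_rate:.0%}")
```

Clinical gamma tools also apply a low-dose threshold and interpolate the reference grid, both omitted here for brevity.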
Image Captioning with multi-level similarity-guided semantic matching
18
Authors: Jiesi Li, Ning Xu, Weizhi Nie, Shenyuan Zhang. Visual Informatics (EI), 2021, No. 4, pp. 41-48 (8 pages)
Image captioning is a cross-modal task that requires automatically generating coherent natural sentences to describe image contents. Due to the large gap between the vision and language modalities, most existing methods suffer from inaccurate semantic matching between images and the generated captions. To solve this problem, this paper proposes a novel multi-level similarity-guided semantic matching method for image captioning, which fuses local and global semantic similarities to learn the latent semantic correlation between images and generated captions. Specifically, semantic units containing fine-grained semantic information are extracted from the images and the generated captions, respectively. Based on a comparison of these semantic units, a local semantic similarity evaluation mechanism is designed, while the CIDEr score characterizes the global semantic similarity. The local and global similarities are finally fused using reinforcement learning to guide model optimization toward better semantic matching. Quantitative and qualitative experiments on the large-scale MSCOCO dataset illustrate the superiority of the proposed method, which achieves fine-grained semantic matching of images and generated captions.
Keywords: Image captioning; Cross-modal; Semantic matching; Reinforcement learning
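The fusion idea in the abstract above can be sketched as a two-level reward: a local score over extracted semantic units and a global, CIDEr-style sentence score, combined into one reinforcement-learning reward. The Jaccard stand-in for unit similarity and the fixed weighting below are illustrative only, not the paper's actual formulation.

```python
# Toy two-level reward: local semantic-unit overlap fused with a global score.
def local_similarity(caption_units, image_units):
    """Jaccard overlap of fine-grained semantic units (toy stand-in)."""
    c, i = set(caption_units), set(image_units)
    return len(c & i) / len(c | i) if c | i else 0.0

def fused_reward(local_sim, global_sim, alpha=0.5):
    """Convex combination used as the RL reward signal."""
    return alpha * local_sim + (1 - alpha) * global_sim

loc = local_similarity(["dog", "frisbee", "grass"], ["dog", "grass", "sky"])
print(fused_reward(loc, global_sim=0.8))  # 0.65
```

In a full system the fused reward would weight the policy-gradient update for each sampled caption, e.g. in a self-critical sequence training loop.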
Loop Closure Detection via Locality Preserving Matching With Global Consensus (cited: 1)
19
Authors: Jiayi Ma, Kaining Zhang, Junjun Jiang. IEEE/CAA Journal of Automatica Sinica, 2023, No. 2, pp. 411-426 (16 pages)
A critical component of visual simultaneous localization and mapping (SLAM) is loop closure detection (LCD), the operation of judging whether a robot has returned to a previously visited area. Concretely, given a query image (i.e., the latest view observed by the robot), the system first retrieves images with similar semantic information, then solves the relative relationship between candidate pairs in 3D space. In this work, a novel appearance-based LCD system is proposed. Specifically, candidate frame selection is conducted via a combination of Superfeatures and an aggregated selective match kernel (ASMK). An incremental strategy is incorporated into the vanilla ASMK to make it applicable to the LCD task; this setting is demonstrated to be memory-efficient and to achieve remarkable performance. To dig up consistent geometry between image pairs during loop closure verification, a simple yet surprisingly effective feature matching algorithm is proposed, termed locality preserving matching with global consensus (LPM-GC). The major objective of LPM-GC is to retain the local neighborhood information of true feature correspondences between candidate pairs, with a further global constraint designed to effectively remove false correspondences in challenging scenes, e.g., those containing numerous repetitive structures. Meanwhile, a closed-form solution is derived that enables the approach to provide reliable correspondences within only a few milliseconds. The performance of the proposed approach has been experimentally evaluated on ten publicly available and challenging datasets. Results show that the method achieves better performance than the state of the art on both feature matching and LCD tasks. The code for LPM-GC has been released at https://github.com/jiayi-ma/LPM-GC.
Keywords: Feature matching; Locality preserving matching; Loop closure detection; SLAM
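The locality-preserving idea behind the matching algorithm above can be illustrated with a toy filter: a putative correspondence is kept only if its displacement agrees with those of its nearest neighbours, so outliers that move inconsistently are rejected. The cost function, thresholds, and point sets below are illustrative, not LPM-GC's actual closed-form formulation.

```python
# Toy neighbourhood-consistency filter for putative feature correspondences.
def filter_matches(src, dst, k=2, tol=1.5):
    """src, dst: lists of (x, y); src[i] putatively matches dst[i].
    Returns indices of matches whose displacement agrees with their neighbours'."""
    disp = [(dx - sx, dy - sy) for (sx, sy), (dx, dy) in zip(src, dst)]
    keep = []
    for i, (sx, sy) in enumerate(src):
        # k nearest neighbours of src[i] (excluding itself)
        nbrs = sorted((j for j in range(len(src)) if j != i),
                      key=lambda j: (src[j][0] - sx) ** 2 + (src[j][1] - sy) ** 2)[:k]
        # average L1 disagreement between this displacement and its neighbours'
        cost = sum(abs(disp[i][0] - disp[j][0]) + abs(disp[i][1] - disp[j][1])
                   for j in nbrs) / len(nbrs)
        if cost <= tol:
            keep.append(i)
    return keep

src = [(0, 0), (1, 0), (0, 1), (5, 5)]
dst = [(2, 0), (3, 0), (2, 1), (0, 9)]   # last pair moves inconsistently
print(filter_matches(src, dst))  # [0, 1, 2] -- the outlier is rejected
```

LPM-GC additionally imposes a global consensus constraint and solves the resulting objective in closed form, which this brute-force sketch does not attempt.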