Journal Articles
48 articles found
1. A Rapid Adaptation Approach for Dynamic Air-Writing Recognition Using Wearable Wristbands with Self-Supervised Contrastive Learning
Authors: Yunjian Guo, Kunpeng Li, Wei Yue, Nam-Young Kim, Yang Li, Guozhen Shen, Jong-Chul Lee. Nano-Micro Letters (SCIE, EI, CAS), 2025, No. 2, pp. 417-431 (15 pages).
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
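The pretrain-then-fine-tune recipe summarized above can be illustrated with a generic NT-Xent-style contrastive objective over two augmented views of the same unlabeled window; the encoder, window shape, and noise augmentations below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent: the other view of each sample is its positive; all remaining samples are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                        # (2N, d)
    sim = z @ z.t() / temperature                         # cosine similarity matrix
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])            # index of each positive
    return F.cross_entropy(sim, targets)

# Stand-in encoder for 4-channel capacitance windows (shape and layers assumed).
encoder = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(4 * 128, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 64),
)
x = torch.randn(32, 4, 128)                 # unlabeled wrist-motion windows
v1 = x + 0.05 * torch.randn_like(x)         # simple noise "augmentations"
v2 = x + 0.05 * torch.randn_like(x)
loss = nt_xent(encoder(v1), encoder(v2))    # self-supervised pretraining step
loss.backward()                             # afterwards: fine-tune with a few labeled gestures
```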
Keywords: Wearable wristband; Self-supervised contrastive learning; Dynamic gesture; Air-writing; Human-machine interaction
2. Position-Aware and Subgraph Enhanced Dynamic Graph Contrastive Learning on Discrete-Time Dynamic Graph
Authors: Jian Feng, Tian Liu, Cailing Du. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 2895-2909 (15 pages).
Unsupervised learning methods such as graph contrastive learning have been used for dynamic graph representation learning to eliminate the dependence on labels. However, existing studies neglect positional information when learning discrete snapshots, resulting in insufficient network topology learning. At the same time, due to the lack of appropriate data augmentation methods, it is difficult to capture the evolving patterns of the network effectively. To address these problems, a position-aware and subgraph enhanced dynamic graph contrastive learning method is proposed for discrete-time dynamic graphs. Firstly, a global snapshot is built from the historical snapshots to express the stable pattern of the dynamic graph, and random walks are used to obtain position representations by learning the positional information of the nodes. Secondly, a new data augmentation method is designed from the perspectives of the short-term changes and long-term stable structures of dynamic graphs. Specifically, subgraph sampling based on the snapshots and the global snapshot is used to obtain two structural augmentation views, and node structures and evolving patterns are learned by combining a graph neural network, a gated recurrent unit, and an attention mechanism. Finally, the quality of node representations is improved by combining contrastive learning between the different structural augmentation views and between the two representations of structure and position. Experimental results on four real datasets show that the proposed method outperforms existing unsupervised methods and is more competitive than supervised learning methods under a semi-supervised setting.
Keywords: Dynamic graph representation learning; graph contrastive learning; structure representation; position representation; evolving pattern
3. Contrastive Learning for Blind Super-Resolution via a Distortion-Specific Network (Cited by 1)
Authors: Xinya Wang, Jiayi Ma, Junjun Jiang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 1, pp. 78-89 (12 pages).
Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling). Thus, their performance deteriorates when the real degradation is not consistent with this assumption. To deal with real-world scenarios, existing blind SR methods are committed to estimating both the degradation and the super-resolved image with an extra loss or iterative scheme. However, degradation estimation requires additional computation and limits SR performance due to accumulated estimation errors. In this paper, we propose a contrastive regularization built upon contrastive learning that exploits blurry images and clear images as negative and positive samples, respectively. Contrastive regularization ensures that the restored image is pulled closer to the clear image and pushed far away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information to capture the character of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed the global prior into the distortion-specific SR network to make our method adaptive to changes of distortion. We term our distortion-specific network with contrastive regularization CRDNet. Extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
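The contrastive regularization described above (restored image pulled toward the clear positive and pushed away from the blurry negative in a representation space) can be sketched as a simple distance ratio; the identity feature extractor and L1 distances are assumptions of this sketch, not CRDNet's actual layers.

```python
import torch
import torch.nn.functional as F

def contrastive_regularization(f_restored, f_clear, f_blurry, eps=1e-7):
    d_pos = F.l1_loss(f_restored, f_clear)    # distance to the positive (clear) sample
    d_neg = F.l1_loss(f_restored, f_blurry)   # distance to the negative (blurry) sample
    return d_pos / (d_neg + eps)              # minimizing pulls toward clear, pushes from blurry

feats = lambda img: img.flatten(1)            # placeholder for a fixed feature extractor
restored = torch.randn(2, 3, 64, 64, requires_grad=True)   # SR network output (stand-in)
clear, blurry = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
loss = contrastive_regularization(feats(restored), feats(clear), feats(blurry))
loss.backward()
```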
Keywords: Blind super-resolution; contrastive learning; deep learning; image super-resolution (SR)
4. Solving Geometry Problems via Feature Learning and Contrastive Learning of Multimodal Data
Authors: Pengpeng Jian, Fucheng Guo, Yanli Wang, Yang Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 8, pp. 1707-1728 (22 pages).
This paper presents an end-to-end deep learning method to solve geometry problems via feature learning and contrastive learning of multimodal data. A key challenge in solving geometry problems using deep learning is to adapt automatically to the tasks of understanding single-modal and multimodal problems. Existing methods focus on either single-modal or multimodal problems and cannot accommodate both, whereas a general geometry problem solver should clearly be able to process problems of various modalities at the same time. In this paper, a shared feature-learning model of multimodal data is adopted to learn a unified feature representation of text and image, which addresses the heterogeneity between multimodal geometry problems. A contrastive learning model of multimodal data enhances the semantic relevance between multimodal features and maps them into a unified semantic space, which can effectively adapt to both single-modal and multimodal downstream tasks. Based on the feature extraction and fusion of multimodal data, the proposed geometry problem solver uses relation extraction, theorem reasoning, and problem solving to present solutions in a readable way. Experimental results show the effectiveness of the method.
Keywords: Geometry problems; multimodal feature learning; multimodal contrastive learning; automatic solver
5. A Memory-Guided Anomaly Detection Model with Contrastive Learning for Multivariate Time Series
Authors: Wei Zhang, Ping He, Ting Li, Fan Yang, Ying Liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 11, pp. 1893-1910 (18 pages).
Some reconstruction-based anomaly detection models for multivariate time series have brought impressive performance advancements but suffer from weak generalization ability and a lack of anomaly identification. These limitations can result in model misjudgment, leading to a degradation in overall detection performance. This paper proposes a novel transformer-like anomaly detection model adopting a contrastive learning module and a memory block (CLME) to overcome these limitations. The contrastive learning module, tailored for time series data, learns contextual relationships to generate temporal fine-grained representations. The memory block records normal patterns of these representations through attention-based addressing and reintegration mechanisms. Together, the two modules effectively alleviate the generalization problem. Furthermore, this paper introduces a fusion anomaly detection strategy that comprehensively takes into account both the residual and feature spaces. Such a strategy can enlarge the discrepancies between normal and abnormal data, which is more conducive to anomaly identification. The proposed CLME model not only enhances generalization performance but also improves anomaly detection ability. To validate the efficacy of the proposed approach, extensive experiments are conducted on well-established benchmark datasets, including SWaT, PSM, WADI, and MSL. The results demonstrate outstanding performance, with F1 scores of 90.58%, 94.83%, 91.58%, and 91.75%, respectively. These findings affirm the superiority of the CLME model over existing state-of-the-art anomaly detection methodologies in accurately detecting anomalies within complex datasets.
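One way to read the fusion strategy described above is as a weighted combination of a residual-space reconstruction error and a feature-space discrepancy; the tensor shapes, stand-in reconstructions, and equal weighting below are assumptions for illustration, not the CLME definition.

```python
import torch

def fusion_anomaly_score(x, x_rec, z, z_mem, alpha=0.5):
    """Combine residual-space and feature-space discrepancies into one score per window."""
    residual = ((x - x_rec) ** 2).mean(dim=(1, 2))        # reconstruction error
    feature = ((z - z_mem) ** 2).mean(dim=1)              # distance to a retrieved normal pattern
    return alpha * residual + (1 - alpha) * feature       # larger score = more anomalous

x = torch.randn(8, 100, 25)               # batch of multivariate windows (assumed shape)
x_rec = x + 0.1 * torch.randn_like(x)     # model reconstructions (stand-in)
z = torch.randn(8, 64)                    # window embeddings (stand-in)
z_mem = torch.randn(8, 64)                # nearest memory-block entries (stand-in)
scores = fusion_anomaly_score(x, x_rec, z, z_mem)
anomalies = scores > scores.mean() + 3 * scores.std()     # simple thresholding example
```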
Keywords: Anomaly detection; multivariate time series; contrastive learning; memory network
6. Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 1673-1689 (17 pages).
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: Multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
7. Multi-View Hybrid Contrastive Learning for Bundle Recommendation
Authors: Maoyan Lin, Youxin Hu, Zhixin Wang, Jianqiu Luo, Jinyu Huang. Open Journal of Applied Sciences, 2023, No. 10, pp. 1742-1763 (22 pages).
Bundle recommendation aims to provide users with convenient one-stop solutions by recommending bundles of related items that cater to their diverse needs. However, previous research has neglected the interaction between bundle and item views and relied on simplistic methods for predicting user-bundle relationships. To address this limitation, we propose Hybrid Contrastive Learning for Bundle Recommendation (HCLBR). Our approach integrates unsupervised and supervised contrastive learning to enrich user and bundle representations, promoting diversity. By leveraging interconnected views of user-item and user-bundle nodes, HCLBR enhances representation learning for robust recommendations. Evaluation on four public datasets demonstrates the superior performance of HCLBR over state-of-the-art baselines. Our findings highlight the significance of leveraging contrastive learning and interconnected views in bundle recommendation, providing valuable insights for marketing strategies and recommendation system design.
Keywords: Recommender Systems; Bundle Recommendation; Package Recommendation; Contrastive Learning; Graph Neural Network
8. Recognition of Similar Weather Scenarios in Terminal Area Based on Contrastive Learning (Cited by 2)
Authors: CHEN Haiyan, LIU Zhenya, ZHOU Yi, YUAN Ligang. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2022, No. 4, pp. 425-433 (9 pages).
In order to improve the recognition accuracy of similar weather scenarios (SWSs) in the terminal area, a recognition model for SWSs based on contrastive learning (SWS-CL) is proposed. Firstly, a data augmentation method is designed to improve the number and quality of weather scenario samples according to the characteristics of convective weather images. Secondly, in the pre-trained recognition model of SWS-CL, a loss function is formulated to minimize the distance between the anchor and positive samples, and maximize the distance between the anchor and the negative samples in the latent space. Finally, the pre-trained SWS-CL model is fine-tuned with labeled samples to improve the recognition accuracy of SWSs. Comparative experiments on weather images of the Guangzhou terminal area show that the proposed data augmentation method can effectively improve the quality of the weather image dataset, and that the SWS-CL model achieves satisfactory recognition accuracy. It is also verified that the fine-tuned SWS-CL model has clear advantages on datasets with sparse labels.
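The pretraining objective described above (minimize anchor-positive distance, maximize anchor-negative distance in the latent space) can be sketched with an off-the-shelf margin-based triplet loss; the toy encoder, margin value, and noise-based augmentation are assumptions of this sketch, not the SWS-CL formulation.

```python
import torch

# Stand-in encoder for convective-weather images (architecture assumed).
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 32),
)
triplet = torch.nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(16, 3, 64, 64)                    # weather images
positive = anchor + 0.05 * torch.randn_like(anchor)    # augmented views of the same scenarios
negative = torch.randn(16, 3, 64, 64)                  # images of different scenarios

# Pulls anchor-positive distances down and pushes anchor-negative distances up to the margin.
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```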
Keywords: air traffic control; terminal area; similar weather scenarios (SWSs); image recognition; contrastive learning
9. Few-Shot Graph Classification with Structural-Enhanced Contrastive Learning for Graph Data Copyright Protection
Authors: Kainan Zhang, DongMyung Shin, Daehee Seo, Zhipeng Cai. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 605-616 (12 pages).
Open-source licenses can promote the development of machine learning by allowing others to access, modify, and redistribute the training dataset. However, not all open-source licenses may be appropriate for data sharing, as some may not provide adequate protections for sensitive or personal information such as social network data. Additionally, some data may be subject to legal or regulatory restrictions that limit its sharing, regardless of the licensing model used. Hence, obtaining large amounts of labeled data can be difficult, time-consuming, or expensive in many real-world scenarios. Few-shot graph classification, as one application of meta-learning to supervised graph learning, aims to classify unseen graph types using only a small amount of labeled data. However, current graph neural network methods do not make full use of graph structure on molecular graphs and social network datasets. Although structural features are known to correlate with molecular properties in chemistry, structure information tends to be ignored when sufficient property information is provided. Moreover, the common binary classification task for chemical compounds is unsuitable in the few-shot setting, which requires novel labels. Hence, this paper focuses on the graph classification task for social networks, whose complex topology has an uncertain relationship with node attributes. With two multi-class graph datasets with large node-attribute dimensions constructed to facilitate the research, we propose a novel learning framework that integrates both meta-learning and contrastive learning to enhance the utilization of graph topological information. Extensive experiments demonstrate the competitive performance of our framework relative to other state-of-the-art methods.
Keywords: few-shot learning; contrastive learning; data copyright protection
10. False Negative Sample Detection for Graph Contrastive Learning
Authors: Binbin Zhang, Li Wang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 529-542 (14 pages).
Recently, self-supervised learning has shown great potential in Graph Neural Networks (GNNs) through contrastive learning, which aims to learn discriminative features for each node without label information. The key to graph contrastive learning is data augmentation. The anchor node regards its augmented samples as positive samples, and the rest of the samples are regarded as negative samples, some of which may actually be positive. We call these mislabeled samples "false negative" samples, and they can seriously affect the final learning effect. Since such semantically similar samples are ubiquitous in graphs, the problem of false negative samples is significant. To address this issue, the paper proposes a novel model, False negative sample Detection for Graph Contrastive Learning (FD4GCL), which uses attribute and structure awareness to detect false negative samples. Experimental results on seven datasets show that FD4GCL outperforms state-of-the-art baselines and even exceeds several supervised methods.
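A minimal way to act on detected false negatives is to drop candidate negatives that are too similar to the anchor before computing an InfoNCE-style loss; the cosine-similarity threshold below is an illustrative stand-in for FD4GCL's attribute- and structure-aware detector.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_fn_filter(anchor, positive, negatives, tau=0.2, fn_threshold=0.9):
    """InfoNCE for one anchor node, masking out negatives suspected to be false negatives."""
    a = F.normalize(anchor, dim=0)
    p = F.normalize(positive, dim=0)
    n = F.normalize(negatives, dim=1)                  # (K, d) candidate negatives
    neg_sim = n @ a                                    # cosine similarity to the anchor
    keep = neg_sim < fn_threshold                      # overly similar negatives are discarded
    logits = torch.cat([(p @ a).unsqueeze(0), neg_sim[keep]]) / tau
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

anchor = torch.randn(64, requires_grad=True)           # anchor node embedding
positive = torch.randn(64)                             # augmented view of the anchor
negatives = torch.randn(128, 64)                       # embeddings of other nodes
loss = contrastive_loss_with_fn_filter(anchor, positive, negatives)
loss.backward()
```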
Keywords: graph representation learning; contrastive learning; false negative sample detection
11. SimCLIC: A Simple Framework for Contrastive Learning of Image Classification (Cited by 2)
Authors: Han YANG, Jun LI. Journal of Systems Science and Information (CSCD), 2023, No. 2, pp. 204-218 (15 pages).
Contrastive learning, a self-supervised learning method, is widely used in image representation learning. The core idea is to reduce the distance between positive sample pairs and increase the distance between negative sample pairs in the representation space. Siamese networks are the most common structure among current contrastive learning models. However, contrastive learning with positive and negative sample pairs on large datasets is computationally expensive. In addition, there are cases where positive samples are mislabeled as negative samples. Contrastive learning without negative sample pairs can still learn good representations. In this paper, we propose a simple framework for contrastive learning of image classification (SimCLIC). SimCLIC simplifies the Siamese network and learns image representations without negative sample pairs or momentum encoders, mainly by perturbing the image representation generated by the encoder to produce different contrastive views. We apply three representation perturbation methods: history representation, representation dropout, and representation noise. We conducted experiments on several benchmark datasets to compare with current popular models, using image classification accuracy as the measure, and the results show that SimCLIC is competitive. Finally, we performed ablation experiments to verify the effect of different hyperparameters and structures on model effectiveness.
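A minimal sketch of the negative-free, representation-perturbation idea: agreement is maximized between an image representation and a perturbed copy of it (noise or dropout). The stop-gradient used here to avoid collapse is an assumption of this sketch, not necessarily SimCLIC's mechanism, and the encoder is a placeholder.

```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))

def perturb(h, mode="noise", p=0.2):
    """Create a contrastive view directly from the representation (no negative pairs)."""
    if mode == "noise":
        return h + 0.1 * torch.randn_like(h)       # representation noise
    if mode == "dropout":
        return F.dropout(h, p=p, training=True)    # representation dropout
    raise ValueError(mode)

x = torch.randn(16, 3, 32, 32)                     # a batch of images
h = encoder(x)
view = perturb(h, mode="dropout")
# Maximize agreement between the representation and its perturbed view.
loss = 1 - F.cosine_similarity(h, view.detach(), dim=1).mean()
loss.backward()
```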
Keywords: contrastive learning; representation learning; image classification
12. Person Re-Identification with Model-Contrastive Federated Learning in Edge-Cloud Environment (Cited by 1)
Authors: Baixuan Tang, Xiaolong Xu, Fei Dai, Song Wang. Intelligent Automation & Soft Computing, 2023, No. 10, pp. 35-55 (21 pages).
Person re-identification (ReID) aims to recognize the same person in multiple images from different camera views. Training person ReID models is time-consuming and resource-intensive; thus, cloud computing is an appropriate model training solution. However, the massive personal data required for training contain private information with a significant risk of data leakage in cloud environments, and also incur significant communication overheads. This paper proposes a federated person ReID method with model-contrastive learning (MOON) in an edge-cloud environment, named FRM. Specifically, based on federated partial averaging, a MOON warmup is added to correct the local training of individual edge servers and improve the model's effectiveness by calculating and back-propagating a model-contrastive loss, which represents the similarity between the local and global models. In addition, we propose a lightweight person ReID network, named multi-branch combined depth space network (MB-CDNet), to reduce the computing resource usage of edge devices when training and testing the person ReID model. MB-CDNet is a multi-branch version of the combined depth space network (CDNet). We add a part branch and a global branch on the basis of CDNet and introduce an attention pyramid to improve model performance. Experimental results on open-access person ReID datasets demonstrate that FRM achieves better performance than existing baselines.
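The model-contrastive (MOON-style) correction mentioned above pulls the local model's features toward the global model's features and away from those of the previous-round local model. The sketch below shows a generic version of that loss with placeholder features and temperature; it is not FRM's actual code.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """MOON-style term: align local features with the global model, not the previous local model."""
    pos = F.cosine_similarity(z_local, z_global, dim=1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=1) / tau
    logits = torch.stack([pos, neg], dim=1)                 # class 0 = the global-model "positive"
    return F.cross_entropy(logits, torch.zeros(z_local.size(0), dtype=torch.long))

z_local = torch.randn(8, 256, requires_grad=True)   # features from the model being trained locally
z_global = torch.randn(8, 256)                      # features from the downloaded global model
z_prev = torch.randn(8, 256)                        # features from last round's local model
loss = model_contrastive_loss(z_local, z_global, z_prev)
loss.backward()                                     # added to the supervised ReID loss during local training
```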
Keywords: Person re-identification; federated learning; contrastive learning
13. Cross-modal Contrastive Learning for Generalizable and Efficient Image-text Retrieval
Authors: Haoyu Lu, Yuqi Huo, Mingyu Ding, Nanyi Fei, Zhiwu Lu. Machine Intelligence Research (EI, CSCD), 2023, No. 4, pp. 569-582 (14 pages).
Cross-modal image-text retrieval is a fundamental task in bridging vision and language. It faces two main challenges that are typically not well addressed in previous works. 1) Generalizability: existing methods often assume a strong semantic correlation between each text-image pair, and are thus difficult to generalize to real-world scenarios where weak correlation dominates. 2) Efficiency: many recent works adopt a single-tower architecture with heavy detectors, which is inefficient at inference because the costly computation must be repeated for each text-image pair. In this work, to overcome these two challenges, we propose a two-tower cross-modal contrastive learning (CMCL) framework. Specifically, we first devise a two-tower architecture, which enables a unified feature space in which the text and image modalities can be directly compared with each other, alleviating the heavy computation during inference. We further introduce a simple yet effective module named multi-grid split (MGS) to learn fine-grained image features without using detectors. Last but not least, we deploy a cross-modal contrastive loss on the global image/text features to learn their weak correlation and thus achieve high generalizability. To validate that CMCL can be readily generalized to real-world scenarios, we construct a large multi-source image-text dataset called the weak semantic correlation dataset (WSCD). Extensive experiments show that CMCL outperforms the state of the art while being much more efficient.
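The cross-modal contrastive loss on global image/text features in a two-tower setup can be sketched as a symmetric in-batch InfoNCE over the similarity matrix; the feature dimensions and temperature below are assumptions, and the MGS module is not shown.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_feats, txt_feats, tau=0.07):
    """Symmetric InfoNCE: matched image-text pairs are positives, the rest of the batch are negatives."""
    img = F.normalize(img_feats, dim=1)
    txt = F.normalize(txt_feats, dim=1)
    logits = img @ txt.t() / tau                       # (N, N) image-to-text similarity matrix
    targets = torch.arange(img.size(0))                # the diagonal holds the matched pairs
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

img_feats = torch.randn(32, 512, requires_grad=True)  # image-tower outputs (stand-in)
txt_feats = torch.randn(32, 512, requires_grad=True)  # text-tower outputs (stand-in)
loss = cross_modal_contrastive_loss(img_feats, txt_feats)
loss.backward()
```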
Keywords: Image-text retrieval; multimodal modeling; contrastive learning; weak correlation; computer vision
14. EFECL: Feature encoding enhancement with contrastive learning for indoor 3D object detection
Authors: Yao Duan, Renjiao Yi, Yuanming Gao, Kai Xu, Chenyang Zhu. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 4, pp. 875-892 (18 pages).
Good proposal initials are critical for 3D object detection applications. However, due to the significant geometric variation of indoor scenes, incomplete and noisy proposals are inevitable in most cases. Mining feature information from these "bad" proposals may mislead the detection. Contrastive learning provides a feasible way of representing proposals by aligning complete and incomplete/noisy proposals in feature space. The aligned feature space helps build robust 3D representations even when bad proposals are given. Therefore, we devise a new contrastive learning framework for indoor 3D object detection, called EFECL, that learns robust 3D representations by contrastive learning of proposals on two different levels. Specifically, we optimize both instance-level and category-level contrasts to align features by capturing instance-specific characteristics and semantic-aware common patterns. Furthermore, we propose an enhanced feature aggregation module to extract more general and informative features for contrastive learning. Evaluations on the ScanNet V2 and SUN RGB-D benchmarks demonstrate the generalizability and effectiveness of our method, which achieves improvements of 12.3% and 7.3% on the two datasets over the benchmark alternatives. The code and models are publicly available at https://github.com/YaraDuan/EFECL.
Keywords: indoor scene; object detection; contrastive learning; feature enhancement
15. Height estimation from single aerial imagery using contrastive learning based multi-scale refinement network
Authors: Wufan Zhao, Hu Ding, Jiaming Na, Mengmeng Li, Dirk Tiede. International Journal of Digital Earth (SCIE, EI), 2023, No. 1, pp. 2322-2340 (19 pages).
Height map estimation from a single aerial image plays a crucial role in localization, mapping, and 3D object detection. Deep convolutional neural networks have been used to predict height information from single-view remote sensing images, but these methods rely on large volumes of training data and often overlook geometric features present in orthographic images. To address these issues, this study proposes a gradient-based self-supervised learning network with a momentum contrastive loss to extract geometric information from non-labeled images in the pretraining stage. Additionally, novel local implicit constraint layers are used at multiple decoding stages in the proposed supervised network to refine high-resolution features in height estimation. A structural-aware loss is also applied to improve the robustness of the network to positional shifts and minor structural changes along boundary areas. Experimental evaluation on the ISPRS benchmark datasets shows that the proposed method outperforms other baseline networks, with minimum MAE and RMSE of 0.116 and 0.289 for the Vaihingen dataset and 0.077 and 0.481 for the Potsdam dataset, respectively. The proposed method also shows around threefold data-efficiency improvements on the Potsdam dataset and domain generalization on the Enschede datasets. These results demonstrate the effectiveness of the proposed method for height map estimation from single-view remote sensing images.
Keywords: Height estimation; aerial imagery; digital surface models; contrastive learning; local implicit constraint
16. Supervised Contrastive Learning with Term Weighting for Improving Chinese Text Classification
Authors: Jiabao Guo, Bo Zhao, Hui Liu, Yifan Liu, Qian Zhong. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2023, No. 1, pp. 59-68 (10 pages).
With the rapid growth of information retrieval technology, Chinese text classification, which is the basis of information content security, has become a widely discussed topic. Compared with English, Chinese text involves considerably more complex semantic information representations. However, most existing Chinese text classification approaches typically regard feature representation and feature selection as the key points, but fail to take into account a learning strategy that adapts to the task. Besides, these approaches compress each Chinese word into a representation vector without considering the distribution of the term among the categories of interest. In order to improve the effect of Chinese text classification, a unified method, called Supervised Contrastive Learning with Term Weighting (SCL-TW), is proposed in this paper. Supervised contrastive learning makes full use of a large amount of unlabeled data to improve model stability. In SCL-TW, we calculate term weighting scores to optimize the data augmentation process for Chinese text. Subsequently, the transformed features are fed into a temporal convolution network for feature representation. Experimental verifications are conducted on two Chinese benchmark datasets. The results demonstrate that SCL-TW outperforms other advanced Chinese text classification approaches by a substantial margin.
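The term-weighting-guided augmentation step can be illustrated with a TF-IDF-style score that drops the lowest-weight tokens to create a positive view for supervised contrastive training; the scoring function, drop ratio, and toy tokens below are assumptions, not SCL-TW's exact scheme.

```python
import math
from collections import Counter

def term_weights(doc_tokens, corpus_doc_freq, n_docs):
    """TF-IDF-style weights; low-weight terms are better candidates for dropping."""
    tf = Counter(doc_tokens)
    return {t: tf[t] * math.log((1 + n_docs) / (1 + corpus_doc_freq.get(t, 0)))
            for t in tf}

def augment_by_term_weight(doc_tokens, weights, drop_ratio=0.2):
    """Create an augmented view by removing the lowest-weight terms, keeping salient ones."""
    k = int(len(doc_tokens) * drop_ratio)
    droppable = sorted(set(doc_tokens), key=lambda t: weights[t])[:k]
    return [t for t in doc_tokens if t not in droppable]

# Toy corpus of pre-segmented Chinese text (tokens are placeholders).
docs = [["手机", "很", "好用"], ["电影", "很", "无聊"], ["手机", "电池", "耐用"]]
df = Counter(t for d in docs for t in set(d))
w = term_weights(docs[0], df, len(docs))
view = augment_by_term_weight(docs[0], w, drop_ratio=0.34)  # positive view for supervised contrastive learning
```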
Keywords: Chinese text classification; Supervised Contrastive Learning (SCL); Term Weighting (TW); Temporal Convolution Network (TCN)
17. Distilling base-and-meta network with contrastive learning for few-shot semantic segmentation
Authors: Xinyue Chen, Yueyi Wang, Yingyue Xu, Miaojing Shi. Autonomous Intelligent Systems (EI), 2023, No. 1, pp. 1-11 (11 pages).
Current studies in few-shot semantic segmentation mostly utilize meta-learning frameworks to obtain models that can be generalized to new categories. However, these models, trained on base classes with sufficient annotated samples, are biased towards those base classes, which results in semantic confusion and ambiguity between base classes and new classes. One strategy is to use an additional base learner to recognize objects of the base classes and then refine the prediction results output by the meta learner. In this setting, the interaction between the two learners and the way their results are combined are important. This paper proposes a new model, namely the Distilling Base and Meta (DBAM) network, which uses a self-attention mechanism and contrastive learning to enhance few-shot segmentation performance. First, a self-attention-based ensemble module (SEM) is proposed to produce a more accurate adjustment factor for improving the fusion of the two learners' predictions. Second, a prototype feature optimization module (PFOM) is proposed to provide interaction between the two learners, which enhances the ability to distinguish the base classes from the target class by introducing a contrastive learning loss. Extensive experiments have demonstrated that our method improves performance on PASCAL-5i under both 1-shot and 5-shot settings.
Keywords: Semantic segmentation; Few-shot learning; Meta learning; Contrastive learning; Self-attention
18. A multi-scale spatial-temporal convolutional neural network with contrastive learning for motor imagery EEG classification
Authors: Ruoqi Zhao, Yuwen Wang, Xiangxin Cheng, Wanlin Zhu, Xia Meng, Haijun Niu, Jian Cheng, Tao Liu. Medicine in Novel Technology and Devices, 2023, No. 1, pp. 123-131 (9 pages).
Motor imagery (MI) based brain-computer interfaces (BCIs) have a wide range of applications in the stroke rehabilitation field. However, due to the low signal-to-noise ratio and high cross-subject variation of the electroencephalogram (EEG) signals generated by motor imagery, the classification performance of existing methods still needs to be improved to meet the needs of real practice. To overcome this problem, we propose a multi-scale spatial-temporal convolutional neural network called MSCNet. We introduce contrastive learning into a multi-temporal-scale convolutional backbone to further improve the robustness and discrimination of the embedding vectors. Experimental results on binary classification show that MSCNet outperforms the state-of-the-art methods, achieving accuracy improvements of 6.04%, 3.98%, and 8.15% on the BCIC IV 2a, SMR-BCI, and OpenBMI datasets in a subject-dependent manner, respectively. The results show that the contrastive learning method can significantly improve the classification accuracy of motor imagery EEG signals, which provides an important reference for the design of motor imagery classification algorithms.
Keywords: Motor imagery; Electroencephalogram; Contrastive learning; Convolutional neural network
19. Knowledge-based recommendation with contrastive learning
Authors: Yang He, Xu Zheng, Rui Xu, Ling Tian. High-Confidence Computing (EI), 2023, No. 4, pp. 41-46 (6 pages).
Knowledge Graphs (KGs) have been incorporated as external information into recommendation systems to ensure high-confidence recommendations. Recently, the Contrastive Learning (CL) framework has been widely used in knowledge-based recommendation, owing to its ability to mitigate data sparsity and its support for scalable computation. However, existing CL-based methods still have the following shortcomings in dealing with the introduced knowledge: (1) for knowledge view generation, they only perform simple data augmentation operations on KGs, resulting in the introduction of noise and irrelevant information and the loss of essential information; (2) for the knowledge view encoder, they simply add edge information into some GNN models, without considering the relations between edges and entities. Therefore, this paper proposes a Knowledge-based Recommendation with Contrastive Learning (KRCL) framework, which generates dual views from the user-item interaction graph and the KG. Specifically, through data enhancement techniques, KRCL introduces historical interaction information, background knowledge, and item-item semantic information. Then, a novel relation-aware GNN model is proposed to encode the knowledge view. Finally, through the designed contrastive loss, the representations of the same item in different views are brought closer to each other. Compared with various recommendation methods on benchmark datasets, KRCL shows significant improvements in different scenarios.
Keywords: Knowledge graph; Recommendation systems; Contrastive learning; Graph neural network
20. GPDCCL: Cross-Domain Named Entity Recognition with Span-Based Domain Confusion Contrastive Learning
Authors: Ye Wang, Chenxiao Shi, Lijie Li, Manyuan Guo. Proceedings of the International Computer Frontiers Conference (EI), 2023, No. 2, pp. 202-212 (11 pages).
The goal of cross-domain named entity recognition is to transfer models learned from labelled source-domain data to unlabelled or lightly labelled target-domain datasets. This paper discusses how to adapt a cross-domain sentiment analysis model to the field of named entity recognition, as the sentiment analysis model is more relevant to the tasks and data characteristics of named entity recognition. Most previous classification methods were based on a token-wise approach, and this paper introduces entity boundary information to prevent the model from being affected by the large number of non-entity labels. Specifically, adversarial training is used to enable the model to learn domain-confusing knowledge, and contrastive learning is used to reduce domain shift. The entity boundary information is transformed into a global boundary matrix representing sentence-level target labels, enabling the model to learn explicit span boundary information. Experimental results demonstrate that this method achieves good performance compared to multiple cross-domain named entity recognition models on the SciTech dataset. Ablation experiments reveal that introducing entity boundary information significantly improves the KL-divergence and contrastive learning components.
Keywords: Transfer learning; Named Entity Recognition; Domain Adaptation; Contrastive Learning; Adversarial Training