Journal Articles
10 articles found.
1. Multi-Label Learning Based on Transfer Learning and Label Correlation (Cited by: 1)
Authors: Kehua Yang, Chaowei She, Wei Zhang, Jiqing Yao, Shaosong Long. Computers, Materials & Continua (SCIE, EI), 2019, Issue 7, pp. 155-169 (15 pages).
In recent years, multi-label learning has received a lot of attention. However, most existing methods consider only global label correlation or local label correlation. In fact, on the one hand, both global and local label correlations can appear in real-world situations at the same time; on the other hand, we should not be limited to pairwise labels while ignoring high-order label correlations. In this paper, we propose a novel and effective method called GLLCBN for multi-label learning. Firstly, we obtain the global label correlation by exploiting label semantic similarity. Then, we analyze the pairwise labels in the label space of the data set to acquire the local correlation. Next, we build the initial label dependency model from the global and local label correlations. After that, we use graph theory, probability theory, and Bayesian networks to eliminate redundant dependency structure in the initial model, so as to obtain the optimal label dependency model. Finally, we obtain the feature extraction model by adapting the Inception V3 convolutional neural network and combine it with the GLLCBN model to perform multi-label learning. The experimental results show that our proposed model performs better than other multi-label learning methods in the performance evaluation.
Keywords: Bayesian networks; multi-label learning; global and local label correlations; transfer learning
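To make the "local" pairwise label correlation concrete, below is a minimal sketch (not the authors' code) that estimates label-label similarity directly from a binary label matrix; how GLLCBN fuses this with semantic (global) similarity and prunes the Bayesian-network structure is specific to the paper.

```python
# Minimal sketch: pairwise ("local") label correlation from a binary label matrix.
# Y is assumed to be n_samples x n_labels with entries in {0, 1}.
import numpy as np

def pairwise_label_correlation(Y: np.ndarray) -> np.ndarray:
    """Cosine similarity between the label indicator columns of Y."""
    Y = Y.astype(float)
    norms = np.linalg.norm(Y, axis=0) + 1e-12   # avoid division by zero
    C = (Y.T @ Y) / np.outer(norms, norms)      # n_labels x n_labels
    np.fill_diagonal(C, 1.0)
    return C

# Toy usage: 5 samples, 3 labels
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 0],
              [0, 1, 1]])
print(pairwise_label_correlation(Y))
```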
2. Stable Label-Specific Features Generation for Multi-Label Learning via Mixture-Based Clustering Ensemble
Authors: Yi-Bo Wang, Jun-Yi Hang, Min-Ling Zhang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 7, pp. 1248-1261 (14 pages).
Multi-label learning deals with objects associated with multiple class labels, and aims to induce a predictive model which can assign a set of relevant class labels to an unseen instance. Since each class might possess its own characteristics, the strategy of extracting label-specific features has been widely employed to improve the discrimination process in multi-label learning, where the predictive model is induced based on tailored features specific to each class label instead of identical instance representations. As a representative approach, LIFT generates label-specific features by conducting clustering analysis. However, its performance may be degraded due to the inherent instability of a single clustering algorithm. To improve this, a novel multi-label learning approach named SENCE (stable label-Specific features gENeration for multi-label learning via mixture-based Clustering Ensemble) is proposed, which stabilizes the generation process of label-specific features via clustering ensemble techniques. Specifically, more stable clustering results are obtained by first augmenting the original instance representation with cluster assignments from base clusterings and then fitting a mixture model via the expectation-maximization (EM) algorithm. Extensive experiments on eighteen benchmark data sets show that SENCE performs better than LIFT and other well-established multi-label learning algorithms.
Keywords: clustering ensemble; expectation-maximization algorithm; label-specific features; multi-label learning
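A minimal sketch of the general recipe the abstract describes, assuming standard scikit-learn components (this is not the SENCE implementation): augment each instance with one-hot cluster assignments from several base clusterings, then fit a Gaussian mixture by EM on the augmented representation and use the component posteriors as features for one class label.

```python
# Sketch: stabilized features via base clusterings + EM-fitted mixture model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def label_specific_features(X, n_base=5, n_clusters=4, n_components=4, seed=0):
    rng = np.random.RandomState(seed)
    # Base clusterings with different random initialisations
    onehots = []
    for _ in range(n_base):
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=rng.randint(10**6)).fit_predict(X)
        onehots.append(np.eye(n_clusters)[labels])       # one-hot cluster assignments
    X_aug = np.hstack([X] + onehots)                      # augmented representation
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed).fit(X_aug)   # EM-based mixture model
    # Posterior membership of each mixture component serves as the mapped features
    return gmm.predict_proba(X_aug)

X = np.random.RandomState(0).randn(60, 8)
print(label_specific_features(X).shape)                   # (60, 4)
```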
3. Optimization Model and Algorithm for Multi-Label Learning
Authors: Zhengyang Li. Journal of Applied Mathematics and Physics, 2021, Issue 5, pp. 969-975 (7 pages).
This paper studies an urban security risk assessment model based on multi-label learning. The model is transformed into the solution of a system of linear equations through a series of transformations, and solving the linear system is in turn cast as an optimization problem. Finally, several classical optimization algorithms are applied to these problems, the convergence of the algorithms is proved, and the advantages and disadvantages of the optimization methods are compared.
Keywords: operations research; multi-label learning; linear equation solving; optimization algorithm
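The reformulation mentioned in the abstract, solving a linear system by minimizing a quadratic objective, can be illustrated generically (this toy example is not the paper's risk assessment model):

```python
# Generic illustration: solve A x = b by minimizing f(x) = 0.5 * ||A x - b||^2
# with plain gradient descent, then compare against a direct least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true

x = np.zeros(10)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe step size (1 / Lipschitz constant)
for _ in range(2000):
    x -= step * A.T @ (A @ x - b)            # gradient of 0.5 * ||A x - b||^2

x_direct = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_direct, atol=1e-4))   # True: both recover the same solution
```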
4. ML-ANet: A Transfer Learning Approach Using Adaptation Network for Multi-label Image Classification in Autonomous Driving
Authors: Guofa Li, Zefeng Ji, Yunlong Chang, Shen Li, Xingda Qu, Dongpu Cao. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 5, pp. 107-117 (11 pages).
To reduce the discrepancy between the source and target domains, a new multi-label adaptation network (ML-ANet) based on multiple kernel variants with maximum mean discrepancies is proposed in this paper. The hidden representations of the task-specific layers in ML-ANet are embedded in the reproducing kernel Hilbert space (RKHS) so that the mean embeddings of specific features in different domains can be precisely matched. Multiple kernel functions are used to improve feature distribution efficiency for explicit mean embedding matching, which can further reduce domain discrepancy. Adverse weather and cross-camera adaptation experiments are conducted to verify the effectiveness of the proposed ML-ANet. The results show that ML-ANet achieves higher accuracies than the compared state-of-the-art methods for multi-label image classification in both the adverse weather adaptation and cross-camera adaptation experiments. These results indicate that ML-ANet can alleviate the reliance on fully labeled training data and improve the accuracy of multi-label image classification in various domain shift scenarios.
Keywords: autonomous vehicles; deep learning; image classification; multi-label learning; transfer learning
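A minimal sketch of the core quantity the abstract relies on, the multi-kernel maximum mean discrepancy between source- and target-domain features (a generic estimator with assumed Gaussian bandwidths, not the ML-ANet code):

```python
# Sketch: biased MMD^2 estimate using a mixture of Gaussian (RBF) kernels.
import numpy as np
from scipy.spatial.distance import cdist

def mk_mmd(source, target, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Multi-kernel MMD^2 between two feature batches."""
    x = np.vstack([source, target])
    d2 = cdist(x, x, metric="sqeuclidean")                 # pairwise squared distances
    k = sum(np.exp(-d2 / (2.0 * bw ** 2)) for bw in bandwidths)
    n_s = len(source)
    return k[:n_s, :n_s].mean() + k[n_s:, n_s:].mean() - 2.0 * k[:n_s, n_s:].mean()

rng = np.random.default_rng(0)
src = rng.standard_normal((32, 128))         # e.g., features from a task-specific layer
tgt = rng.standard_normal((32, 128)) + 0.5   # shifted target-domain features
print(mk_mmd(src, tgt))                      # larger when the two distributions differ
```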
5. Non-negative matrix factorization based modeling and training algorithm for multi-label learning
Authors: Liang SUN, Hongwei GE, Wenjing KANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2019, Issue 6, pp. 1243-1254 (12 pages).
Multi-label learning is more complicated than single-label learning since the semantics of the instances usually overlap and are not identical. Many algorithms fail when the correlations in the feature and label spaces are not fully exploited. To this end, we propose a novel non-negative matrix factorization (NMF) based modeling and training algorithm that learns from both the adjacencies of the instances and the labels of the training set. In the modeling process, a set of generators is constructed, and the associations among generators, instances, and labels are set up, with which the label prediction is conducted. In the training process, the parameters involved in the modeling are determined. Specifically, an NMF-based algorithm is proposed to determine the associations between generators and instances, and a non-negative least-squares optimization algorithm is applied to determine the associations between generators and labels. The proposed algorithm takes full advantage of the smoothness assumption, so that the labels are properly propagated. The experiments were carried out on six sets of benchmarks. The results demonstrate the effectiveness of the proposed algorithms.
Keywords: multi-label learning; non-negative least squares optimization; non-negative matrix factorization; smoothness assumption
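A minimal sketch of the two generic building blocks named in the abstract, NMF for generator-instance associations and non-negative least squares for generator-label associations (the data, component count, and prediction rule here are illustrative assumptions, not the paper's algorithm):

```python
# Sketch: NMF + non-negative least squares as generic components.
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((100, 20)))       # non-negative instance features
Y = (rng.random((100, 5)) > 0.7).astype(float)   # binary label matrix

# Generator-instance associations: X ~ W H, with the rows of H acting as "generators"
nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                         # instance-to-generator weights

# Generator-label associations: min ||W B - Y||_F^2 with B >= 0, solved column-wise
B = np.column_stack([nnls(W, Y[:, j])[0] for j in range(Y.shape[1])])

scores = W @ B                                   # label scores for each instance
print(scores.shape)                              # (100, 5)
```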
6. Feature Selection for Multi-label Classification Using Neighborhood Preservation (Cited by: 7)
Authors: Zhiling Cai, William Zhu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, Issue 1, pp. 320-330 (11 pages).
Multi-label learning deals with data associated with a set of labels simultaneously. Dimensionality reduction is an important but challenging task in multi-label learning. Feature selection is an efficient dimensionality reduction technique that searches for an optimal feature subset preserving the most relevant information. In this paper, we propose an effective feature evaluation criterion for multi-label feature selection, called the neighborhood relationship preserving score. This criterion is inspired by similarity preservation, which is widely used in single-label feature selection. It evaluates each feature subset by measuring its capability in preserving the neighborhood relationship among samples. Unlike similarity preservation, we address the order of sample similarities, which can well express the neighborhood relationship among samples, rather than just the pairwise sample similarity. With this criterion, we also design one ranking algorithm and one greedy algorithm for the feature selection problem. The proposed algorithms are validated on six publicly available data sets from the machine learning repository. Experimental results demonstrate their superiority over the compared state-of-the-art methods.
Keywords: feature selection; multi-label learning; neighborhood relationship preserving; sample similarity
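An illustrative criterion in the same spirit as the abstract (not the paper's exact score): rate a candidate feature subset by how well it preserves, for each sample, the order of its similarities to the other samples.

```python
# Sketch: score a feature subset by preservation of per-sample similarity rankings.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def order_preservation_score(X, subset):
    """Mean Spearman correlation between each sample's distance ranking
    computed on all features and on the candidate feature subset."""
    d_full = cdist(X, X)
    d_sub = cdist(X[:, subset], X[:, subset])
    n = len(X)
    scores = []
    for i in range(n):
        mask = np.arange(n) != i                       # exclude the sample itself
        rho, _ = spearmanr(d_full[i, mask], d_sub[i, mask])
        scores.append(rho)
    return float(np.mean(scores))

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))
print(order_preservation_score(X, subset=[0, 1, 2]))   # higher is better
```

A greedy selector in this style would repeatedly add the feature that most increases the score, which mirrors the ranking/greedy pairing the abstract mentions.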
7. Compositional metric learning for multi-label classification
Authors: Yan-Ping SUN, Min-Ling ZHANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2021, Issue 5, pp. 1-12 (12 pages).
Multi-label classification aims to assign a set of proper labels to each instance, where distance metric learning can help improve the generalization ability of instance-based multi-label classification models. Existing multi-label metric learning techniques work by utilizing pairwise constraints to enforce that examples with similar label assignments should have close distances in the embedded feature space. In this paper, a novel distance metric learning approach for multi-label classification is proposed by modeling structural interactions between the instance space and the label space. On the one hand, a compositional distance metric is employed, represented as a weighted sum of rank-1 PSD matrices built on component bases. On the other hand, the compositional weights are optimized by exploiting triplet similarity constraints derived from both the instance and label spaces. Due to the compositional nature of the employed distance metric, the resulting problem admits a quadratic programming formulation with linear optimization complexity w.r.t. the number of training examples. We also derive a generalization bound for the proposed approach based on an algorithmic robustness analysis of the compositional metric. Extensive experiments on sixteen benchmark data sets clearly validate the usefulness of the compositional metric in yielding an effective distance metric for multi-label classification.
Keywords: machine learning; multi-label learning; metric learning; compositional metric; positive semidefinite matrix decomposition
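A minimal sketch of the representation the abstract describes, a metric built as a weighted sum of rank-1 PSD matrices, and the distance it induces (the bases and weights here are random placeholders; in the paper the weights are learned from triplet constraints drawn from both the instance and label spaces):

```python
# Sketch: compositional metric M = sum_k w_k * b_k b_k^T and its induced distance.
import numpy as np

rng = np.random.default_rng(0)
d, K = 6, 4
bases = rng.standard_normal((K, d))         # component bases b_k (random placeholders)
weights = np.abs(rng.standard_normal(K))    # non-negative compositional weights

# PSD by construction: each outer product b_k b_k^T is rank-1 PSD
M = sum(w * np.outer(b, b) for w, b in zip(weights, bases))

def metric_distance(x, y, M):
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))  # Mahalanobis-style distance under M

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(metric_distance(x, y, M))
```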
8. Multi-task multi-label multiple instance learning
Authors: Yi SHEN, Jian-ping FAN. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2010, Issue 11, pp. 860-871 (12 pages).
For automatic object detection tasks, large amounts of training images are usually labeled to achieve more reliable training of the object classifiers; this is costly since it requires hiring professionals to label large-scale training images. When a large number of object classes come into view, the issue of obtaining a large enough amount of labeled training images becomes more critical. There are three potential solutions to reduce the burden of image labeling: (1) allowing people to provide object labels loosely at the image level rather than at the object level (e.g., loosely tagged images without identifying the exact object locations); (2) harnessing large-scale collaboratively tagged images available on the Internet; and (3) developing new machine learning algorithms that can directly leverage large-scale collaboratively or loosely tagged images to achieve more effective training of a large number of object classifiers. Based on these observations, a multi-task multi-label multiple instance learning (MTML-MIL) algorithm is developed in this paper by leveraging both inter-object correlations and large-scale loosely labeled images for object classifier training. By seamlessly integrating multi-task learning, multi-label learning, and multiple instance learning, the MTML-MIL algorithm can achieve more accurate training of a large number of inter-related object classifiers, where an object network is constructed for determining the inter-related learning tasks directly in the feature space rather than in the label space. Our experimental results show that the MTML-MIL algorithm can achieve higher detection accuracy for automatic object detection.
Keywords: object network; loosely tagged images; multi-task learning; multi-label learning; multiple instance learning
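A minimal sketch of a generic multi-label multiple-instance prediction rule in the spirit of the abstract (not the MTML-MIL algorithm): an image is treated as a bag of region instances, and a label's bag-level score is the maximum of its instance-level scores.

```python
# Sketch: bag-of-instances scoring with per-label max pooling.
import numpy as np

rng = np.random.default_rng(0)
n_labels, dim = 4, 16
W = rng.standard_normal((dim, n_labels))   # per-label linear scorers (illustrative)

def predict_bag(bag: np.ndarray) -> np.ndarray:
    """bag: (n_instances, dim) -> per-label bag scores via max pooling."""
    instance_scores = bag @ W               # (n_instances, n_labels)
    return instance_scores.max(axis=0)      # a label is supported if ANY region supports it

bag = rng.standard_normal((7, dim))         # e.g., 7 candidate regions in one image
print(predict_bag(bag))
```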
9. Multi-task MIML learning for pre-course student performance prediction (Cited by: 1)
Authors: Yuling Ma, Chaoran Cui, Jun Yu, Jie Guo, Gongping Yang, Yilong Yin. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, Issue 5, pp. 113-121 (9 pages).
In higher education, the initial study period of each course plays a crucial role for students and strongly influences subsequent learning activities. However, given the large number of students in a university course, it has become impossible for teachers to keep track of the performance of individual students. In this circumstance, an academic early warning system is desirable, which automatically detects students with learning difficulties (i.e., at-risk students) before a course starts. However, previous studies are not well suited to this purpose for two reasons: 1) they have mainly concentrated on e-learning platforms, e.g., massive open online courses (MOOCs), and relied on data about students' online activities, which are hardly accessible in traditional teaching scenarios; and 2) they have only made performance predictions when a course is in progress or even close to its end. In this paper, for traditional classroom-teaching scenarios, we investigate the task of pre-course student performance prediction, which refers to detecting at-risk students for each course before its commencement. To better represent a student sample and utilize the correlations among courses, we cast the problem as a multi-instance multi-label (MIML) problem. Besides, given the problem of data scarcity, we propose a novel multi-task learning method, MIML-Circle, to predict the performance of students from different specialties in a unified framework. Extensive experiments are conducted on five real-world datasets, and the results demonstrate the superiority of our approach over the state-of-the-art methods.
Keywords: educational data mining; academic early warning system; student performance prediction; multi-instance multi-label learning; multi-task learning
10. NetGO 3.0: Protein Language Model Improves Large-scale Functional Annotations
Authors: Shaojun Wang, Ronghui You, Yunjia Liu, Yi Xiong, Shanfeng Zhu. Genomics, Proteomics & Bioinformatics (SCIE, CAS, CSCD), 2023, Issue 2, pp. 349-358 (10 pages).
As one of the state-of-the-art automated function prediction (AFP) methods, NetGO 2.0 integrates multi-source information to improve performance. However, it mainly utilizes proteins with experimentally supported functional annotations without leveraging valuable information from the vast number of unannotated proteins. Recently, protein language models have been proposed to learn informative representations [e.g., the Evolutionary Scale Modeling (ESM)-1b embedding] from protein sequences based on self-supervision. Here, we represented each protein by ESM-1b and used logistic regression (LR) to train a new model, LR-ESM, for AFP. The experimental results showed that LR-ESM achieved performance comparable to the best-performing component of NetGO 2.0. Therefore, by incorporating LR-ESM into NetGO 2.0, we developed NetGO 3.0 to extensively improve the performance of AFP.
Keywords: protein function prediction; web service; protein language model; learning to rank; large-scale multi-label learning
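A minimal sketch of the LR-ESM idea under the assumption that per-protein embeddings have already been computed (this is not the NetGO 3.0 pipeline): fit one logistic regression per GO term on top of fixed sequence embeddings and rank terms by the predicted probabilities.

```python
# Sketch: one-vs-rest logistic regression over precomputed protein embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
emb = rng.standard_normal((200, 1280))                   # stand-in for ESM-1b embeddings (dim 1280)
go_labels = (rng.random((200, 10)) > 0.8).astype(int)    # toy GO-term indicator matrix

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(emb, go_labels)
scores = clf.predict_proba(emb[:3])                      # per-term probabilities, used for ranking
print(scores.shape)                                      # (3, 10)
```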