Abstract: The Neighborhood Preserving Embedding (NPE) algorithm was recently proposed as a new dimensionality reduction method. However, it is confined to linear transforms in the data space. To address this, a new nonlinear dimensionality reduction method based on the NPE algorithm is proposed, which preserves the local structure of the data in the feature space. First, with the help of a Mercer kernel, the reconstruction weight matrix in the feature space is obtained, and the corresponding eigenvalue problem of the Kernel NPE (KNPE) method is derived. Finally, the KNPE algorithm is solved through a transformed optimization problem and QR decomposition. Experimental results on three real-world data sets show that the new method outperforms NPE, Kernel PCA (KPCA) and Kernel LDA (KLDA).
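The first step described above, solving for reconstruction weights among neighbors in the kernel-induced feature space, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the RBF kernel choice, neighbor count, and regularization term are assumptions, and the subsequent KNPE eigenvalue problem and QR step are omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def knpe_weights(K, n_neighbors=3, reg=1e-3):
    # Reconstruction weights in the feature space, computed from the kernel
    # alone: ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij.
    n = K.shape[0]
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    W = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d2[i])
        idx = order[order != i][:n_neighbors]  # k nearest neighbors, excluding self
        # Local Gram matrix centered at phi(x_i):
        # G[a, b] = <phi(x_a) - phi(x_i), phi(x_b) - phi(x_i)>
        G = K[np.ix_(idx, idx)] - K[i, idx][None, :] - K[i, idx][:, None] + K[i, i]
        G = G + reg * np.trace(G) * np.eye(len(idx))  # regularize for stability
        w = np.linalg.solve(G, np.ones(len(idx)))
        W[i, idx] = w / w.sum()  # normalize so each row sums to one
    return W
```

In the full method, these weights would feed the kernelized eigenvalue problem that yields the nonlinear embedding.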
Funding: Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 11KJB510020); National Natural Science Foundation of China (No. 61171077); College Industrialization Project of Jiangsu Province, China (No. JH09-24)
Abstract: To separate the pattern classes more strongly and handle nonlinear cases, a new nonlinear manifold learning algorithm named supervised kernel uncorrelated discriminant neighborhood preserving projections (SKUDNPP) is proposed. The algorithm employs supervised weights and the kernel technique, which allow it to cope competently with classification and nonlinear problems. The within-class geometric structure is preserved while the between-class distance is maximized, and the extracted features are made statistically uncorrelated by introducing an uncorrelatedness constraint. Experimental results on millimeter wave (MMW) radar target recognition show that the method gives competitive results in comparison with currently popular algorithms.
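The supervised-weight idea mentioned above, connecting each sample only to nearby neighbors of the same class, can be illustrated with a minimal sketch. The heat-kernel weighting, neighbor count, and function names here are assumptions; the kernel mapping and the uncorrelatedness constraint of the full SKUDNPP algorithm are omitted.

```python
import numpy as np

def supervised_weights(X, y, n_neighbors=3, t=1.0):
    # Supervised neighborhood weights: sample i is connected only to its
    # nearest neighbors that share its class label, with a heat-kernel
    # weight exp(-||x_i - x_j||^2 / t); cross-class weights stay zero.
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d2[i])
        order = order[order != i]
        same_class = [j for j in order if y[j] == y[i]][:n_neighbors]
        for j in same_class:
            W[i, j] = np.exp(-d2[i, j] / t)
    return W
```

Such a weight matrix encodes the within-class geometric structure that the projection then preserves, while between-class pairs remain disconnected.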
Funding: Supported in part by the National Natural Science Foundation of China (61379049, 61772120)
Abstract: Multi-label learning deals with data associated with a set of labels simultaneously. Dimensionality reduction is an important but challenging task in multi-label learning, and feature selection is an efficient technique for it, searching for an optimal feature subset that preserves the most relevant information. In this paper, we propose an effective feature evaluation criterion for multi-label feature selection, called the neighborhood relationship preserving score. This criterion is inspired by similarity preservation, which is widely used in single-label feature selection; it evaluates each feature subset by measuring its capability to preserve the neighborhood relationship among samples. Unlike similarity preservation, we address the order of sample similarities, which expresses the neighborhood relationship among samples well, rather than just the pairwise sample similarities. With this criterion, we also design one ranking algorithm and one greedy algorithm for the feature selection problem. The proposed algorithms are validated on six publicly available data sets from a machine learning repository. Experimental results demonstrate their superiority over the compared state-of-the-art methods.
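A minimal sketch of the order-based idea above: score a feature subset by how well the ranking of pairwise distances it induces agrees, per sample, with the ranking under the full feature set, using Spearman rank correlation. The exact scoring formula in the paper may differ; the distance measure, the per-sample averaging, and the function names are assumptions made for illustration.

```python
import numpy as np

def rank_vector(v):
    # Ranks of the entries of v (0 = smallest); ties are not averaged,
    # which is acceptable for this illustrative sketch.
    r = np.empty(len(v), dtype=float)
    r[np.argsort(v)] = np.arange(len(v))
    return r

def neighborhood_preserving_score(X, subset):
    # Higher score = the feature subset better preserves, for each sample,
    # the ORDER of its distances to all other samples.
    def dist(A):
        sq = np.sum(A**2, axis=1)
        return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * A @ A.T, 0.0))
    D_full, D_sub = dist(X), dist(X[:, subset])
    n = X.shape[0]
    scores = []
    for i in range(n):
        mask = np.arange(n) != i
        a = rank_vector(D_full[i, mask])
        b = rank_vector(D_sub[i, mask])
        # Spearman correlation between the two distance orderings
        scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores))
```

A greedy selector in this spirit would repeatedly add the feature whose inclusion most increases this score; the full feature set trivially achieves the maximum score of 1.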