Abstract
The support vector machine (SVM) is a popular machine learning method that has been widely applied in many fields. The traditional SVM seeks the hyperplane that maximizes the margin between classes, but it ignores an important piece of prior information: the within-class structure of the samples, namely the within-class scatter. This paper incorporates the minimum within-class scatter from Fisher discriminant analysis (FDA) into the least squares support vector machine (LSSVM) and proposes an LSSVM model based on within-class scatter (WCS-LSSVM) for binary classification. The samples are mapped by a kernel function into a high-dimensional feature space, where classification is performed. Experiments on four benchmark datasets from the UCI repository show that the proposed WCS-LSSVM improves classification accuracy.
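The abstract combines two standard ingredients: the within-class scatter matrix from FDA and the LSSVM dual, which is a linear system rather than a quadratic program. As a rough illustrative sketch only (not the paper's exact WCS-LSSVM formulation), the code below computes the FDA within-class scatter and trains a plain kernel LSSVM by solving its dual linear system; the function names and the hyperparameters `gamma` and `sigma` are illustrative assumptions.

```python
import numpy as np

def within_class_scatter(X, y):
    """FDA within-class scatter: S_w = sum_c sum_{x in c} (x - mu_c)(x - mu_c)^T."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        diff = X[y == c] - X[y == c].mean(axis=0)  # center each class
        Sw += diff.T @ diff
    return Sw

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Standard LSSVM dual: solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], np.ones(n)]))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(Xtr, ytr, b, alpha, Xte, sigma=1.0):
    """Decision rule: sign(sum_i alpha_i y_i K(x_i, x) + b)."""
    return np.sign(rbf_kernel(Xte, Xtr, sigma) @ (alpha * ytr) + b)
```

The paper's contribution, as described in the abstract, would modify the LSSVM objective so that `within_class_scatter` enters the regularization term; the sketch above keeps the two pieces separate to show the ingredients.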
Source
Computer Technology and Development (《计算机技术与发展》)
2015, Issue 4, pp. 71-74 (4 pages)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 61373137)
Keywords
within-class scatter
classification
least squares support vector machine
kernel methods