
Learning Rates of Empirical Risk Minimization Regression with Beta-Mixing Inputs

Cited by: 2
Abstract: The study of the empirical risk minimization (ERM) algorithm associated with the least squared loss is one of the most important issues in statistical learning theory. Almost all previous results on the learning rates of ERM regression are based on independent and identically distributed (i.i.d.) inputs. However, independence is a very restrictive assumption. In this paper we go beyond this classical framework and establish bounds on the learning rates of ERM regression with geometrically β-mixing inputs. We prove that ERM regression with geometrically β-mixing inputs is consistent, and the results established here also apply to a large class of Markov chain and hidden Markov model samples.
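Illustratively, the ERM principle the abstract discusses — picking the hypothesis that minimizes the empirical least squared loss — can be sketched as below. The finite linear hypothesis class, the grid of weights, and the AR(1) input chain (a standard example of a geometrically β-mixing, i.e. dependent rather than i.i.d., process) are all illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def empirical_risk(f, X, y):
    # Empirical risk under the least squared loss: (1/n) * sum_i (f(x_i) - y_i)^2
    return np.mean((f(X) - y) ** 2)

def erm(hypotheses, X, y):
    # Empirical risk minimization: return the hypothesis with the
    # smallest empirical risk on the sample.
    risks = [empirical_risk(f, X, y) for f in hypotheses]
    return hypotheses[int(np.argmin(risks))]

# Illustrative finite hypothesis class of linear functions f_w(x) = w * x.
hypotheses = [lambda X, w=w: w * X for w in np.linspace(-2.0, 2.0, 41)]

# Inputs drawn from an AR(1) chain: dependent samples, not i.i.d.
rng = np.random.default_rng(0)
n = 500
X = np.empty(n)
X[0] = rng.normal()
for i in range(1, n):
    X[i] = 0.5 * X[i - 1] + rng.normal()

# Regression target with true slope 1.0 plus small noise.
y = 1.0 * X + 0.1 * rng.normal(size=n)

f_hat = erm(hypotheses, X, y)
```

Even with dependent inputs, the minimizer recovered on the grid lands near the true slope; the paper's contribution is to quantify how fast such an ERM estimator converges when the dependence decays geometrically.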
Source: Chinese Journal of Applied Probability and Statistics (《应用概率统计》; CSCD; Peking University Core Journal), 2011, No. 6, pp. 597-613 (17 pages).
Funding: Supported by the National 973 Project (2007CB311002), NSFC Key Project (70501030), NSFC Project (61070225), and the China Postdoctoral Science Foundation (20080440190, 200902592).
Keywords: learning rate, empirical risk minimization, β-mixing, least squared loss.

