Journal of Chaohu University ›› 2021, Vol. 23 ›› Issue (3): 51-54+60. doi: 10.12152/j.issn.1672-2868.2021.03.007

• Mathematical Sciences •

Convergence of the Stochastic Classical Momentum Algorithm under Non-strongly Convex and Non-smooth Conditions

FEI Jing-tai, ZHA Xing-xing, WANG Dong-yin: School of Mathematics and Statistics, Chaohu University

  1. School of Mathematics and Statistics, Chaohu University, Chaohu, Anhui 238024
  • Received: 2021-04-14  Online: 2021-05-25  Published: 2021-08-11
  • About the author: FEI Jing-tai (1992—), male, from Lu'an, Anhui; teaching assistant at the School of Mathematics and Statistics, Chaohu University; main research interests: pattern recognition and machine learning.
  • Supported by:
    Outstanding Young Talents Support Program of Anhui Higher Education Institutions (No. gxyq2018076); Natural Science Research Project of Anhui Higher Education Institutions (No. KJ2018A0455); Young Talents Support Program of Anhui Higher Education Institutions (No. gxyq2019082)

Convergence of the Stochastic Classical Momentum Algorithm under Non-strongly Convex and Non-smooth Conditions

FEI Jing-tai, ZHA Xing-xing, WANG Dong-yin: School of Mathematics and Statistics, Chaohu University

  1. School of Mathematics and Statistics, Chaohu University, Chaohu, Anhui 238024
  • Received: 2021-04-14  Online: 2021-05-25  Published: 2021-08-11

Abstract: We study in depth the convergence rate of the stochastic classical momentum (CM) algorithm. By modifying the iterative formula of the traditional stochastic gradient descent algorithm with momentum, we obtain the convergence order of the algorithm under non-strongly convex and non-smooth conditions. When the momentum coefficient p_t is a constant, the convergence order is O; when p_t is allowed to vary, the corresponding convergence rates are obtained by setting different learning rates. Finally, numerical experiments demonstrate the validity of the results.

Keywords: machine learning, stochastic classical momentum algorithm, convergence order, momentum coefficient, learning rate

Abstract: In this paper, we conduct an in-depth study of the convergence rate of the Stochastic Classical Momentum (CM) algorithm. By modifying the iterative formula of the traditional stochastic gradient descent algorithm with momentum, we obtain the convergence rate of the algorithm under non-strongly convex and non-smooth conditions. When the momentum coefficient p_t is a constant, the algorithm achieves a convergence rate of O. When the momentum coefficient p_t is not a constant, we obtain the corresponding convergence rates by setting different learning rates. Finally, the validity of the results is demonstrated through numerical experiments.
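The classical momentum iteration discussed in the abstract can be sketched as follows. This is a minimal illustration, not the paper's modified iteration: it uses the standard heavy-ball update v_{t+1} = p_t v_t − η_t g_t, w_{t+1} = w_t + v_{t+1} on a simple non-smooth convex test function f(w) = |w|, with a constant momentum coefficient and the learning rate schedule η_t ∝ 1/√t commonly used for non-smooth problems; the oracle, step-size constant, and test function are assumptions for illustration only.

```python
import random

def stochastic_subgradient(w, noise=0.1):
    # Subgradient of the non-smooth convex function f(w) = |w|,
    # perturbed by zero-mean Gaussian noise to mimic a stochastic oracle.
    g = 1.0 if w > 0 else (-1.0 if w < 0 else 0.0)
    return g + random.gauss(0.0, noise)

def classical_momentum(w0, steps=5000, p=0.9, seed=0):
    # Classical (heavy-ball) momentum iteration:
    #   v_{t+1} = p_t * v_t - eta_t * g_t
    #   w_{t+1} = w_t + v_{t+1}
    # Here the momentum coefficient p_t is held constant and the
    # learning rate is eta_t = 0.1 / sqrt(t + 1).
    random.seed(seed)
    w, v = w0, 0.0
    for t in range(steps):
        eta = 0.1 / (t + 1) ** 0.5
        v = p * v - eta * stochastic_subgradient(w)
        w = w + v
    return w

w = classical_momentum(5.0)
print(abs(w))  # should be close to the minimizer 0
```

With a constant p, the diminishing learning rate does the work of damping the iterates near the non-smooth minimizer; the varying-coefficient case in the paper instead tunes p_t and the learning rate jointly.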

Key words: machine learning, Stochastic Classical Momentum Algorithm, convergence rate, momentum coefficient, learning rate

CLC Number: 

  • TP181