Abstract
With the development of artificial intelligence and the growing number of open-source frameworks that include activation function libraries, comparison and analysis of these libraries have become increasingly important. Experiments were carried out on the Intel x86 architecture, testing and analyzing the commonly used activation functions of two mainstream AI frameworks, PyTorch and TensorFlow, from three aspects: performance, stability, and accuracy. The results show that the overall stability of PyTorch is higher than that of TensorFlow, and that the Sigmoid, Hardsigmoid, SeLU, ReLU, ReLU6, and Tanh functions all perform better in PyTorch than in TensorFlow. In terms of accuracy, apart from the SeLU and LeakyReLU functions, which are slightly worse in TensorFlow, the remaining functions perform comparably in the two frameworks.
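The activation functions compared above have standard closed-form definitions. The paper's actual test harness is not reproduced here; as a hedged illustration only, the following NumPy sketch defines the named functions (using the usual SeLU constants and the PyTorch-style piecewise-linear Hardsigmoid, both assumptions on our part) together with a simple wall-clock timing loop of the kind such a performance comparison could use:

```python
import time
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hardsigmoid(x):
    # One common piecewise-linear variant: clip(x/6 + 1/2, 0, 1).
    return np.clip(x / 6.0 + 0.5, 0.0, 1.0)

def relu(x):
    return np.maximum(x, 0.0)

def relu6(x):
    return np.clip(x, 0.0, 6.0)

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # Standard SeLU constants (assumed; both frameworks use these values).
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def leaky_relu(x, negative_slope=0.01):
    return np.where(x > 0, x, negative_slope * x)

def time_fn(fn, x, repeats=100):
    """Average wall-clock time of fn over `repeats` calls, in seconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(x)
    return (time.perf_counter() - start) / repeats

# Time each function on a moderately large float32 input.
x = np.random.randn(1 << 16).astype(np.float32)
for f in (sigmoid, hardsigmoid, relu, relu6, selu, leaky_relu, np.tanh):
    print(f"{getattr(f, '__name__', 'fn'):12s} {time_fn(f, x):.2e} s")
```

A real cross-framework benchmark would instead call `torch.sigmoid`, `tf.math.sigmoid`, etc. on identical inputs and would also need warm-up runs and repeated trials to control for variance.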
Authors
王攀杰
郭绍忠
侯明
郝江伟
许瑾晨
WANG Panjie; GUO Shaozhong; HOU Ming; HAO Jiangwei; XU Jinchen (Information Engineering University, Zhengzhou 450001, China; State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, China)
Source
《信息工程大学学报》
2021, No. 5, pp. 551-557 (7 pages)
Journal of Information Engineering University
Funding
Supported by the National Natural Science Foundation of China (Grant No. 61802434).