
Research on K-means algorithm based on TensorFlow (Cited by: 4)
Abstract: In the context of large-scale data sets, the K-means algorithm becomes increasingly time-consuming as the amount of computation grows. To improve the algorithm's computation speed, the traditional K-means algorithm is parallelized. TensorFlow is an open-source machine learning library developed by Google that can be deployed on different computing devices and provides primitives with strong expressive power. TensorFlow can use CUDA (Compute Unified Device Architecture) and cuDNN (CUDA Deep Neural Network library) to perform GPU computation, making full use of the GPU's parallel computing architecture to improve the algorithm's running efficiency.
Authors: Li Yufeng; Li Jianhong; Wen Yongming (National Computer System Engineering Research Institute of China, Beijing 100083, China; Chinese People's Liberation Army 5718 Factory, Guilin 541003, China)
Source: Information Technology and Network Security, 2019, No. 5, pp. 37-41 (5 pages)
Keywords: K-means; parallel computation; TensorFlow
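The parallelization the abstract describes comes from expressing K-means as batched tensor operations instead of per-point loops: one broadcasted operation yields all point-to-centroid distances at once, which is exactly the kind of primitive TensorFlow can dispatch to the GPU via CUDA/cuDNN. The following is a minimal NumPy sketch of that vectorized pattern, not the paper's TensorFlow implementation; the evenly spaced centroid initialization is a simplification assumed here for determinism.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Lloyd's K-means with the distance step fully vectorized:
    all point-to-centroid distances come from one broadcasted
    operation, the same batched pattern the paper maps onto
    TensorFlow ops so the GPU can evaluate it in parallel."""
    # Simplified deterministic init: k evenly spaced input points.
    centroids = points[:: max(1, len(points) // k)][:k].astype(float)
    for _ in range(iters):
        # (n, k) squared distances via broadcasting:
        # (n, 1, d) - (k, d) -> (n, k, d), summed over d.
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)      # nearest centroid per point
        for j in range(k):              # recompute each centroid
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Two well-separated 2-D clusters around (0, 0) and (10, 10).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                 rng.normal(10.0, 0.1, (50, 2))])
centroids, labels = kmeans(pts, k=2)
```

In a TensorFlow version, the broadcasted distance computation and `argmin` would become graph ops executed on the GPU, which is where the speedup reported by the paper comes from.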
Related literature: References: 12; Secondary references: 70; Co-citations: 1,202; Co-cited documents: 23; Citing documents: 4; Secondary citing documents: 1
