Abstract
In the context of large-scale data sets, the K-means algorithm becomes increasingly time-consuming as the amount of computation grows. To improve the algorithm's speed, the traditional K-means algorithm is parallelized. TensorFlow is an open-source machine learning library developed by Google that can be deployed on different computing devices and provides primitives with strong expressive power. TensorFlow can use CUDA (Compute Unified Device Architecture) and cuDNN (CUDA Deep Neural Network library) to perform GPU computing, making full use of the GPU's parallel computing architecture to improve the algorithm's running efficiency.
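The abstract describes expressing the K-means assignment and update steps as TensorFlow tensor operations so they can run on the GPU. A minimal sketch of this idea is shown below; it is not the authors' exact implementation, and the function names, initialization scheme, and iteration count are illustrative assumptions. When TensorFlow is built with CUDA/cuDNN support and a GPU is available, these operations are placed on the GPU automatically.

```python
import tensorflow as tf

def kmeans_step(points, centroids):
    """One K-means iteration: assign points to nearest centroid, then recompute centroids."""
    # points: (N, D), centroids: (K, D)
    # Pairwise squared distances via broadcasting -> shape (N, K)
    diffs = tf.expand_dims(points, 1) - tf.expand_dims(centroids, 0)
    sq_dists = tf.reduce_sum(tf.square(diffs), axis=2)
    # Assign each point to its nearest centroid
    assignments = tf.argmin(sq_dists, axis=1)
    # Recompute each centroid as the mean of its assigned points
    new_centroids = tf.math.unsorted_segment_mean(
        points, assignments, num_segments=tf.shape(centroids)[0])
    return new_centroids, assignments

def kmeans(points, k, num_iters=20):
    # Illustrative initialization: k randomly chosen points as initial centroids
    centroids = tf.random.shuffle(points)[:k]
    for _ in range(num_iters):
        centroids, assignments = kmeans_step(points, centroids)
    return centroids, assignments

if __name__ == "__main__":
    data = tf.random.normal((10000, 2))   # synthetic example data
    centers, labels = kmeans(data, k=5)
    print(centers)
```

The key point is that distance computation, assignment, and centroid updates are all expressed as batched tensor operations rather than per-point loops, which is what allows the GPU's parallel architecture to be exploited.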
Authors
Li Yufeng; Li Jianhong; Wen Yongming (National Computer System Engineering Research Institute of China, Beijing 100083, China; Chinese People's Liberation Army 5718 Factory, Guilin 541003, China)
Source
Information Technology and Network Security, 2019, No. 5, pp. 37-41 (5 pages)