Abstract
The neural network (NN) trained by combining the particle swarm optimization (PSO) algorithm with the error back-propagation (BP) algorithm, referred to as PSO-BP-NN, can effectively improve generalization ability, but its main drawback is the long computation time. To address this problem, this paper proposes a parallel acceleration scheme based on the graphics processing unit (GPU) and applies it to model the direction-of-arrival (DOA) estimation problem. During execution, the scheme exploits the particle-level parallelism of the PSO-based NN (PSO-NN) and the sample-level parallelism of the BP-based NN (BP-NN) simultaneously to reduce NN training time. The DOA estimation is modeled with the parallel PSO-BP-NN under the compute unified device architecture (CUDA). Numerical results show that, compared with the sequential PSO-BP-NN on the CPU, the parallel PSO-BP-NN on the GPU achieves a 65x speedup while maintaining the same convergence stability.
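The key idea behind the particle-level parallelism described above is that each PSO particle (a full candidate set of NN weights) can be updated independently of the others. The following CUDA sketch is only an illustration of that idea, not the authors' code: it assigns one thread to each (particle, dimension) entry and applies the standard PSO velocity/position update. All names (updateSwarm, the pre-generated random arrays r1/r2, and the coefficients w, c1, c2) are illustrative assumptions.

```cuda
// Hypothetical sketch of particle-level PSO parallelism on the GPU.
// One thread updates one (particle, dimension) entry, so the whole swarm
// advances in a single kernel launch. Not the paper's actual implementation.
#include <cuda_runtime.h>

__global__ void updateSwarm(float *pos, float *vel,
                            const float *pbest, const float *gbest,
                            const float *r1, const float *r2,   // pre-generated uniform randoms
                            int numParticles, int dim,
                            float w, float c1, float c2)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // global (particle, dimension) index
    if (idx >= numParticles * dim) return;

    int d = idx % dim;  // weight index within this particle (gbest is shared per dimension)

    // Standard PSO velocity and position update; each particle encodes one NN weight vector.
    float v = w * vel[idx]
            + c1 * r1[idx] * (pbest[idx] - pos[idx])
            + c2 * r2[idx] * (gbest[d]   - pos[idx]);
    vel[idx] = v;
    pos[idx] += v;
}

// Possible launch from the host, assuming N particles of D weights each:
//   updateSwarm<<<(N * D + 255) / 256, 256>>>(pos, vel, pbest, gbest, r1, r2, N, D, w, c1, c2);
```

Fitness evaluation, i.e., the forward pass of each candidate network over the training set, can likewise be distributed across samples; this corresponds to the BP-NN sample-level parallelism mentioned in the abstract.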
Source
《计算机应用研究》 (Application Research of Computers), indexed in CSCD and the Peking University Core Journals list, 2015, No. 10, pp. 2963-2966 (4 pages)
Funding
Supported by the Pre-research Fund of National Defense Science and Technology for the Shipbuilding Industry (船舶工业国防科技预研基金)
Keywords
direction of arrival (DOA) estimation
particle swarm optimization (PSO)
neural network (NN)
graphics processing unit (GPU)
compute unified device architecture (CUDA)