
基于FPGA的人体行为识别系统的设计

Design of human activity recognition system based on FPGA
Abstract: To meet the low-power, low-latency requirements of edge-side human activity recognition, this paper designs a fast recognition system based on wearable sensors and a convolutional neural network (CNN). First, sensor data are collected to build a human activity recognition dataset, and a CNN-based recognition model is pre-trained on a PC, reaching 93.61% accuracy on the test set. Then, hardware acceleration is achieved through fixed-point quantization of the data, convolution kernel reuse, parallel data processing, and pipelining. Finally, the recognition model is deployed on an FPGA, and the collected sensor data are fed into the system to realize human activity recognition at the edge. The whole system is developed through hardware-software co-design on the Ultra96-V2 platform. Experimental results show that with a 200 MHz input clock, the system achieves 91.80% accuracy on the FPGA while recognizing faster than a CPU; its power consumption is only one tenth of the CPU's, and its energy-efficiency ratio is 91% higher than that of a GPU, meeting the design requirements of low power consumption and low latency.
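The abstract lists fixed-point quantization of the data as one of the hardware-acceleration techniques. The paper does not specify the bit widths used; as a minimal illustration of the general idea, the sketch below converts floating-point CNN weights to a signed 16-bit fixed-point format with an assumed 8 fractional bits (Q8.8), the kind of representation an FPGA multiplier can process cheaply. The function names and bit widths are hypothetical, not the authors' implementation.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=16, frac_bits=8):
    """Map a float array to signed fixed-point integers (Q-format).

    Values are scaled by 2**frac_bits, rounded, and saturated to the
    representable range of a signed total_bits-wide integer.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def dequantize(q, frac_bits=8):
    """Recover approximate float values from the fixed-point integers."""
    return q / float(1 << frac_bits)

# Example: quantize a few weights and inspect the round-trip error.
weights = np.array([0.75, -1.5, 0.1234])
q = quantize_fixed_point(weights)          # integers, e.g. 0.75 -> 192
restored = dequantize(q)
max_err = np.max(np.abs(restored - weights))
```

With 8 fractional bits the worst-case round-trip error is half an LSB, i.e. 1/512; the accuracy drop the paper reports (93.61% float vs. 91.80% on FPGA) is the cumulative effect of this kind of precision loss across the network.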
Authors: WU Yuhang; HE Jun (School of Electronics & Information Engineering, Nanjing University of Information Science & Technology, Nanjing 210044; School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing 210044)
Source: Journal of Nanjing University of Information Science & Technology (Natural Science Edition), 2022, No. 3, pp. 331-340 (10 pages). Indexed in CAS and the Peking University Core Journals list.
Funding: National Natural Science Foundation of China (61601230).
Keywords: human activity recognition (HAR); edge-end; wearable sensor; convolutional neural networks (CNNs); field programmable gate array (FPGA); hardware acceleration
