Artificial intelligence (AI) technology has become integral to medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for model training. Additionally, the MHEALTH database is employed to replicate the modeling process for comparison, while inclusion of the WISDM database, known for its challenging characteristics, tests the framework's resilience and adaptability. The ML-based models employ the adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) for data training. In contrast, the DL-based model utilizes a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to discard low-contribution features, helps identify the optimal feature subset for enhancing model performance. With careful feature engineering, the best accuracies of the ANFIS, SVM, RF, and 1dCNN models reach approximately 90%, 96%, 91%, and 93%, respectively. Comparative analysis on the MHEALTH dataset shows the 1dCNN model achieving perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, in line with prior findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, demonstrating its suitability for integration into healthcare services through AI-driven applications.
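The RFE step described above pairs an ML estimator with a loop that repeatedly drops the weakest feature. A minimal stdlib-only sketch of that loop, using absolute Pearson correlation with the label as a toy stand-in for the estimator's importance scores (the study itself drives RFE with ML models such as RF):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rfe(X, y, n_keep):
    """Recursively eliminate the least important feature until n_keep remain.
    X is a list of rows; importance here is |corr(feature, label)|."""
    keep = list(range(len(X[0])))
    while len(keep) > n_keep:
        scores = {j: abs(pearson([row[j] for row in X], y)) for j in keep}
        keep.remove(min(keep, key=lambda j: scores[j]))  # drop the weakest
    return keep

# toy data: feature 0 tracks the label exactly, feature 1 is noisy
X = [[1, 5], [2, 1], [3, 4], [4, 2], [5, 9]]
y = [1, 2, 3, 4, 5]
print(rfe(X, y, 1))  # → [0]
```

In practice the elimination order comes from refitting the estimator at each round, which is what makes RFE more informative than a one-shot filter ranking.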
Human Activity Recognition (HAR) has become a considerable aid in health monitoring and recovery. Exploiting machine learning with an intelligent agent on health informatics data gathered via HAR improves the quality and significance of decision-making. Although much research has been conducted on smart healthcare monitoring, pitfalls such as analysis time, overhead, and falsification remain. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for smart healthcare monitoring. First, the Statistical Partial Regression feature extraction model is used for data preprocessing together with dimensionality-reduced feature extraction. The input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate monitoring in less time, Partial Least Squares is used to extract the dimensionality-reduced features. With these features, SVIAL is then applied for smart healthcare monitoring with the help of machine learning and intelligent agents, minimizing both analysis falsification and overhead. Experimental evaluation covers time, overhead, and false-positive-rate accuracy across several instance counts. The quantitative results indicate better performance of the proposed SPR-SVIAL method compared with two state-of-the-art methods.
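Partial Least Squares extracts components that maximize covariance with the response, which is how it produces the dimensionality-reduced features mentioned above. A minimal single-component sketch (the first PLS1 weight vector is the normalized product of the centered data matrix transpose and the centered response); this is illustrative only, not the paper's full SPR-SVIAL pipeline:

```python
def pls1_first_component(X, y):
    """First PLS weight vector w and scores t for centered data.
    w is proportional to X^T y, which maximizes cov(Xw, y)."""
    n, p = len(X), len(X[0])
    mx = [sum(row[j] for row in X) / n for j in range(p)]  # column means
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(p)] for row in X]
    yc = [v - my for v in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]  # scores
    return w, t

# feature 0 carries the signal, feature 1 is uncorrelated with y
w, t = pls1_first_component([[1, 0], [2, 1], [3, 0], [4, 1], [5, 0]],
                            [1, 2, 3, 4, 5])
print([round(v, 6) for v in w])  # weight concentrates on feature 0
```

Full PLS then deflates X by the extracted component and repeats, yielding as many orthogonal components as desired.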
The rapidly advancing Convolutional Neural Networks (CNNs) have brought about a paradigm shift in various computer vision tasks, while also garnering increasing interest and application in sensor-based Human Activity Recognition (HAR). However, significant computational demands and memory requirements hinder the practical deployment of deep networks in resource-constrained systems. This paper introduces a novel network pruning method based on the energy spectral density of data in the frequency domain, which reduces the model's depth and accelerates activity inference. Unlike traditional pruning methods that focus on the spatial domain and the importance of filters, this method converts sensor data, such as HAR data, to the frequency domain for analysis. It emphasizes the low-frequency components by calculating their energy spectral density values. Filters that meet predefined thresholds are retained and redundant filters are removed, leading to a significant reduction in model size without compromising performance or incurring additional computational costs. The proposed algorithm's effectiveness is empirically validated on a standard five-layer CNN backbone architecture, and its computational feasibility and data sensitivity are thoroughly examined. The classification accuracy on three benchmark HAR datasets, UCI-HAR, WISDM, and PAMAP2, reaches 96.20%, 98.40%, and 92.38%, respectively. Concurrently, the strategy reduces Floating Point Operations (FLOPs) by 90.73%, 93.70%, and 90.74%, respectively, along with a corresponding decrease in memory consumption of 90.53%, 93.43%, and 90.05%.
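The pruning criterion can be illustrated with a plain DFT: compute each filter's (or channel's) energy spectral density, sum the low-frequency bins, and retain only those above a threshold. A stdlib sketch with made-up signals and threshold, not the paper's exact scoring:

```python
import cmath

def esd(signal):
    """Energy spectral density |X_k|^2 via a naive DFT (fine for short signals)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal))) ** 2 for k in range(n)]

def low_freq_energy(signal, n_bins=2):
    """Energy in the DC bin plus the lowest frequency bins."""
    return sum(esd(signal)[:n_bins])

def prune(filters, threshold):
    """Indices of filters whose low-frequency energy clears the threshold."""
    return [i for i, f in enumerate(filters) if low_freq_energy(f) >= threshold]

smooth = [1.0] * 8            # all energy at DC: kept
alternating = [1.0, -1.0] * 4  # all energy at the highest bin: pruned
print(prune([smooth, alternating], threshold=1.0))  # → [0]
```

In a real network this score would be computed over the feature maps each filter produces, and pruning would be followed by a short fine-tuning pass.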
RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNNs, or LSTMs to extract features effectively, but they have shortcomings: 1) they require complex hand-crafted data cleaning, and 2) they only address single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer, called TransTM. The model leverages the Transformer's powerful data-fitting capability to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features that recognizes both single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method offers greater data-fitting power, generalization, and scalability. Using RF signals, the method achieves an excellent classification effect on human-behavior classification tasks. Experimental results on real RFID datasets show that the model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
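The "multiscale convolutional" front-end amounts to convolving the raw RSSI stream with kernels of several widths and concatenating the results before the Transformer layers. A stdlib sketch using simple averaging kernels as placeholders (the model itself learns its kernels):

```python
def conv1d_valid(signal, kernel):
    """1-D valid convolution of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multiscale_features(signal, widths=(3, 5, 7)):
    """Concatenate valid convolutions at several kernel widths."""
    feats = []
    for w in widths:
        kernel = [1.0 / w] * w  # averaging kernel as a stand-in for learned weights
        feats.extend(conv1d_valid(signal, kernel))
    return feats

rssi = [-60.0, -58.0, -61.0, -59.0, -57.0, -60.0, -62.0, -58.0]
f = multiscale_features(rssi)
print(len(f))  # → 12 (6 + 4 + 2 windows across the three scales)
```

Different kernel widths respond to behaviors at different time scales, which is what lets a single front-end serve both quick gestures and longer interactions.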
Human Activity Recognition (HAR) is an important way for lower limb exoskeleton robots to implement human-computer collaboration with users. Most existing methods in this field focus on a simple scenario, recognizing activities for specific users, which neither considers the individual differences among users nor adapts to new users. To improve the generalization ability of HAR models, this paper proposes a novel method that combines transfer learning and active learning to mitigate the cross-subject issue, so that lower limb exoskeleton robots can be used in more complex scenarios. First, a neural network based on convolutional neural networks (CNN) is designed, which can extract temporal and spatial features from sensor signals collected from different parts of the human body; it recognizes human activities with high accuracy after being trained on labeled data. Second, to improve the cross-subject adaptation ability of the pre-trained model, we design a cross-subject HAR algorithm based on sparse interrogation and label propagation. In leave-one-subject-out validation on two widely used public datasets, our method achieves average accuracies of 91.77% on DSAD and 80.97% on PAMAP2. The experimental results demonstrate the potential of implementing cross-subject HAR for lower limb exoskeleton robots.
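Leave-one-subject-out validation simply cycles each subject into the test fold while all other subjects train the model. A stdlib sketch of the split logic (the model training itself is omitted):

```python
def loso_splits(subjects):
    """Yield (held_out, train_idx, test_idx) with one held-out subject per fold."""
    for held_out in sorted(set(subjects)):
        train = [i for i, s in enumerate(subjects) if s != held_out]
        test = [i for i, s in enumerate(subjects) if s == held_out]
        yield held_out, train, test

# per-sample subject labels; five samples from three subjects
subjects = ["s1", "s1", "s2", "s2", "s3"]
for held_out, train, test in loso_splits(subjects):
    assert not set(train) & set(test)  # folds never overlap
    print(held_out, len(train), len(test))
```

Because the held-out subject contributes no training samples, the per-fold accuracy directly measures cross-subject generalization, which is the quantity the paper reports.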
Human Activity Recognition (HAR) has been made simple in recent years, thanks to advancements in Artificial Intelligence (AI) techniques. These techniques are applied in several areas such as security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. It can also be employed in other applications such as gait diagnosis, observation of children's and adults' cognitive behavior, directing stroke patients in hospitals, and examination of epilepsy and Parkinson's disease. Recently developed AI techniques, especially Deep Learning (DL) models, can be deployed to achieve effective outcomes in the HAR process. With this motivation, the current paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for healthcare environments. The proposed IHPTDL-HAR technique aims to recognize human actions in a healthcare environment and help patients manage their healthcare services. In addition, the presented model uses a Hierarchical Clustering (HC)-based outlier detection technique to remove outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize user activities, and the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted on a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique outperforms other recent techniques under different measures.
With the rapid advancement of wearable devices, Human Activity Recognition (HAR) based on these devices has emerged as a prominent research field. The objective of this study is to enhance recognition performance by proposing an LSTM-1DCNN algorithm that utilizes a single triaxial accelerometer. The algorithm comprises two parallel branches: a Long Short-Term Memory network (LSTM) and a one-dimensional Convolutional Neural Network (1DCNN). The parallel architecture first extracts spatial and temporal features from the accelerometer data separately, then concatenates them and feeds them into a fully connected neural network for information fusion. In this architecture, the 1DCNN branch primarily extracts spatial features during convolution operations, whereas the LSTM branch mainly captures temporal features. Nine sets of accelerometer data from five publicly available HAR datasets are employed for training and evaluation. The performance of the proposed LSTM-1DCNN model is compared with five other HAR algorithms, Decision Tree, Random Forest, Support Vector Machine, 1DCNN, and LSTM, on these five public datasets. Experimental results demonstrate that the F1-score achieved by the proposed LSTM-1DCNN ranges from 90.36% to 99.68%, with a mean of 96.22% and a standard deviation of 0.03, significantly outperforming the other HAR algorithms on the evaluation metrics used in this study. Finally, the proposed LSTM-1DCNN is validated in real-world applications by collecting acceleration data of seven human activities for training and testing; the trained HAR algorithm is then deployed on Android phones to evaluate its performance. Experimental results demonstrate that the proposed LSTM-1DCNN algorithm achieves an impressive F1-score of 97.67% on our self-built dataset. In conclusion, the fusion of temporal and spatial information in the measured data contributes to the excellent HAR performance and robustness exhibited by the proposed LSTM-1DCNN architecture.
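The fusion step described above is just concatenation of the two branch outputs before the dense layers. A shape-level stdlib sketch in which a moving average stands in for the 1DCNN branch and an exponentially smoothed final state for the LSTM branch (both are placeholders, not the trained networks):

```python
def cnn_branch(window):
    """Placeholder 'spatial' features: width-3 moving averages."""
    return [sum(window[i:i + 3]) / 3 for i in range(len(window) - 2)]

def lstm_branch(window):
    """Placeholder 'temporal' feature: exponentially smoothed final state."""
    state = 0.0
    for x in window:
        state = 0.5 * state + 0.5 * x
    return [state]

def fused_features(window):
    """Concatenate both branch outputs, as before the fully connected layers."""
    return cnn_branch(window) + lstm_branch(window)

accel_z = [0.1, 0.3, 0.2, 0.4, 0.6]   # one axis of a short accelerometer window
feats = fused_features(accel_z)
print(len(feats))  # → 4 (3 moving averages + 1 smoothed state)
```

Keeping the branches parallel rather than stacked means neither representation is forced through the other's bottleneck; the fully connected layers learn how to weight the concatenated vector.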
Human Activity Recognition (HAR) is an active research area due to its applications in pervasive computing, human-computer interaction, artificial intelligence, health care, and social sciences. Moreover, dynamic environments and anthropometric differences between individuals make it harder to recognize actions. This study focuses on human activity in video sequences acquired with an RGB camera because of its vast range of real-world applications. It uses a two-stream ConvNet to extract spatial and temporal information and proposes a fine-tuned deep neural network. The transfer learning paradigm is adopted to extract varied and fixed frames while reusing object-identification information, and six state-of-the-art pre-trained models are evaluated to find the best model for spatial feature extraction. For the temporal sequence, this study uses dense optical flow following the two-stream ConvNet, and Bidirectional Long Short-Term Memory (BiLSTM) to capture long-term dependencies. Two state-of-the-art datasets, UCF101 and HMDB51, are used for evaluation, and seven state-of-the-art optimizers are used to fine-tune the proposed network parameters. Furthermore, this study utilizes an ensemble mechanism to aggregate spatial-temporal features using a four-stream Convolutional Neural Network (CNN), where two streams use RGB data and the other two use optical flow images. Finally, the proposed ensemble approach using max hard voting outperforms state-of-the-art methods with 96.30% and 90.07% accuracy on the UCF101 and HMDB51 datasets.
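Hard max voting across the four streams reduces to a per-sample majority count over the streams' predicted labels. A stdlib sketch with hypothetical per-stream predictions:

```python
from collections import Counter

def hard_vote(stream_preds):
    """Majority vote per sample across streams.
    Ties resolve to the label seen first among the tied counts."""
    fused = []
    for sample in zip(*stream_preds):
        fused.append(Counter(sample).most_common(1)[0][0])
    return fused

# four streams, three samples (labels are illustrative)
streams = [
    ["walk", "run", "sit"],
    ["walk", "run", "run"],
    ["run",  "run", "sit"],
    ["walk", "sit", "sit"],
]
print(hard_vote(streams))  # → ['walk', 'run', 'sit']
```

Soft voting (averaging class probabilities) is the usual alternative; hard voting needs only labels, which keeps the streams fully decoupled.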
Human Action Recognition (HAR) and pose estimation from videos have gained significant attention among research communities due to their application in several areas, namely intelligent surveillance, human-robot interaction, and robot vision. Though considerable improvements have been made in recent years, designing an effective and accurate action recognition model remains difficult owing to obstacles such as variations in camera angle, occlusion, background, movement speed, and so on. From the literature, it is observed that the temporal dimension of the action recognition process is hard to deal with; convolutional neural network (CNN) models are widely used to address this. With this motivation, this study designs a novel key point extraction with deep convolutional neural network-based pose estimation (KPE-DCNN) model for activity recognition. The KPE-DCNN technique initially converts the input video into a sequence of frames, followed by a three-stage process of key point extraction, hyperparameter tuning, and pose estimation. In the key point extraction stage, an OpenPose model is designed to compute accurate key points in the human pose. Then, an optimal DCNN model is developed to classify the human activity label based on the extracted key points. To improve the training process of the DCNN technique, the RMSProp optimizer is used to optimally adjust hyperparameters such as learning rate, batch size, and epoch count. Experimental results on the benchmark UCF Sports dataset show that the KPE-DCNN technique achieves good results compared with benchmark algorithms such as CNN, DBN, SVM, STAL, and T-CNN.
Smoking is a major cause of cancer, heart disease, and other afflictions that lead to early mortality. An effective smoking classification mechanism that provides insights into individual smoking habits would assist in implementing addiction-treatment initiatives. Smoking activities often accompany other activities such as drinking or eating, so smoking activity recognition can be a challenging topic in human activity recognition (HAR). A deep learning framework for smoking activity recognition (SAR) employing smartwatch sensors was proposed, together with a deep residual network combined with squeeze-and-excitation modules (ResNetSE), to increase the effectiveness of the SAR framework. The proposed model was tested against basic convolutional neural networks (CNNs) and recurrent neural networks (LSTM, BiLSTM, GRU, and BiGRU) in recognizing smoking and other similar activities such as drinking, eating, and walking using the UT-Smoke dataset. Three different scenarios were investigated for their recognition performance using standard HAR metrics (accuracy, F1-score, and the area under the ROC curve). The proposed ResNetSE outperformed the other basic deep learning networks, with a maximum accuracy of 98.63%.
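A squeeze-and-excitation module squeezes each channel to its global average, passes that vector through a small bottleneck, and rescales the channels with sigmoid gates. A stdlib sketch with tiny fixed weights (real modules learn W1 and W2 end to end):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(channels, w1, w2):
    """channels: list of per-channel feature vectors.
    squeeze (global avg pool) -> bottleneck (ReLU) -> expand (sigmoid) -> rescale."""
    squeezed = [sum(c) / len(c) for c in channels]
    hidden = [max(0.0, sum(s * w for s, w in zip(squeezed, row))) for row in w1]
    gates = [sigmoid(sum(h * w for h, w in zip(hidden, row))) for row in w2]
    return [[x * g for x in c] for c, g in zip(channels, gates)]

channels = [[1.0, 3.0], [0.0, 0.0]]  # channel 1 carries no signal
w1 = [[1.0, 1.0]]                    # 2 -> 1 bottleneck weights (toy values)
w2 = [[1.0], [-1.0]]                 # 1 -> 2 expansion weights (toy values)
out = se_block(channels, w1, w2)
print([round(v, 3) for v in out[0]])  # channel 0 scaled by its learned gate
```

The gates act as learned, input-dependent channel attention, which is why bolting SE modules onto a residual backbone tends to improve accuracy at negligible cost.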
Traditional indoor human activity recognition (HAR) is a time-series data classification problem that needs feature extraction. Considerable attention has been given to the domain of HAR due to its enormous number of real-time applications, namely surveillance by authorities, biometric user identification, and health monitoring of older people. The extensive usage of the Internet of Things (IoT) and wearable sensor devices has made HAR a vital subject in ubiquitous and mobile computing. The more commonly utilized inference and problem-solving technique in HAR systems has recently been deep learning (DL). This study develops a Modified Wild Horse Optimization with DL-Aided Symmetric Human Activity Recognition (MWHODL-SHAR) model. The major intention of the MWHODL-SHAR model lies in recognizing symmetric activities, namely jogging, walking, standing, sitting, etc. In the presented technique, the human activity data is pre-processed in various stages to make it compatible for further processing. A convolutional neural network with attention-based long short-term memory (CNN-ALSTM) model is applied for activity recognition, and the MWHO algorithm is utilized as a hyperparameter tuning strategy to improve the detection rate of the CNN-ALSTM algorithm. The experimental validation of the MWHODL-SHAR technique is simulated using a benchmark dataset, and an extensive comparison study reveals its improvement over other recent approaches.
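The attention in an attention-based LSTM weights the per-timestep hidden states before pooling: score each step, softmax the scores, take the weighted sum. A stdlib sketch with a hypothetical dot-product scorer and a fixed query vector (a trained model learns the scoring parameters):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, query):
    """Weighted sum of timestep vectors; weights = softmax of dot(h_t, query)."""
    scores = [sum(h * q for h, q in zip(state, query)) for state in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * state[j] for w, state in zip(weights, hidden_states))
               for j in range(dim)]
    return context, weights

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy per-timestep LSTM outputs
context, weights = attention_pool(states, query=[1.0, 1.0])
print([round(w, 3) for w in weights])  # the last (highest-scoring) step dominates
```

Compared with taking only the final LSTM state, attention pooling lets the classifier emphasize whichever timesteps are most discriminative for the activity.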
Elderly or disabled people can be supported by a human activity recognition (HAR) system that monitors their activity patterns and intervenes in case of behavioral changes or critical events. An automated HAR could assist such persons in having a more independent life. Providing appropriate and accurate activity data is the most crucial computational task in an activity recognition system. With the fast development of neural networks, computing, and machine learning algorithms, HAR systems based on wearable sensors have gained popularity in several areas, such as medical services, smart homes, improving human communication with computers, security systems, healthcare for the elderly, industrial mechanization, robot monitoring, athlete-training monitoring, and rehabilitation systems. In this view, this study develops an improved pelican optimization with deep transfer learning enabled HAR (IPODTL-HAR) system for disabled persons. The major goal of the IPODTL-HAR method is to recognize the activities of disabled persons and improve their quality of living. The presented IPODTL-HAR model applies data pre-processing to improve data quality. Besides, the EfficientNet model is applied to derive a useful set of feature vectors, with hyperparameters adjusted by the Nadam optimizer. Finally, the IPO with deep belief network (DBN) model is utilized for the recognition and classification of human activities. The Nadam optimizer and IPO algorithm effectively tune the hyperparameters of the EfficientNet and DBN models, respectively. The experimental validation of the IPODTL-HAR method is tested using a benchmark dataset, and an extensive comparison study highlights its improvement over recent state-of-the-art HAR approaches in terms of different measures.
Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers and on the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of human factors and software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustaining for the user. Meanwhile, human activity recognition (HAR) aims to identify actions from a sequence of observations of subjects' activities and environmental conditions. Vision-based HAR is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) for HCI-driven usability. In the presented FHODL-AR technique, input images are investigated to identify different human activities. For feature extraction, a modified SqueezeNet model is introduced by including a few bypass connections among the Fire modules of SqueezeNet. Besides, the FHO algorithm is utilized for hyperparameter optimization, which in turn boosts classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique on benchmark datasets reports improvements over other recent approaches.
Recognition of human activity is one of the most exciting aspects of time-series classification, with substantial practical and theoretical implications. Recent evidence indicates that activity recognition from wearable sensors is an effective technique for tracking elderly adults and children in indoor and outdoor environments. Consequently, researchers have shown considerable interest in developing cutting-edge deep learning systems capable of exploiting unprocessed sensor data from wearable devices and generating practical decision assistance in many contexts. This study provides a deep learning-based approach for recognizing indoor and outdoor movement, utilizing an enhanced deep pyramidal residual model called SenPyramidNet and motion information from wearable sensors (accelerometer and gyroscope). The suggested technique develops a residual unit based on a deep pyramidal residual network and introduces the concept of a pyramidal residual unit to increase detection capability. The model was assessed using the publicly available 19Nonsens dataset, which gathered motion signals from various indoor and outdoor activities, including exercises involving various body parts. The experimental findings demonstrate that the proposed approach can efficiently reuse features and achieves an identification accuracy of 96.37% for indoor and 97.25% for outdoor activity. Moreover, comparison experiments demonstrate that SenPyramidNet surpasses other cutting-edge deep learning models in accuracy and F1-score. Furthermore, this study explores the influence of several wearable sensors on indoor and outdoor action recognition ability.
Activity recognition of indoor occupants using indirect sensing with less privacy violation is a hot research topic. This paper proposes a CO₂ sensor-based indoor occupant activity monitoring system. Using an IoT sensor node containing CO₂ sensors, the measured CO₂ concentrations in three locations (laboratory, office, and bedroom) were stored in a cloud server for up to 35 days starting July 1, 2023. The CO₂ measurements, stored at 30-second intervals, were statistically processed to produce a heat-mapped display of the hourly average or maximum CO₂ concentration. From the heatmap visualizations of CO₂ concentration, the proposed system recognized occupant activities such as meetings, heating water with a portable stove, and sleep.
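With samples every 30 seconds, one hourly cell of the heatmap is just the mean (or max) of 120 consecutive readings. A stdlib sketch of that aggregation over synthetic concentrations:

```python
def hourly_stats(samples, per_hour=120, stat=lambda xs: sum(xs) / len(xs)):
    """Collapse 30-second CO2 readings (ppm) into one heatmap value per hour."""
    return [stat(samples[i:i + per_hour])
            for i in range(0, len(samples) - per_hour + 1, per_hour)]

# two hours of synthetic readings: an empty room, then a meeting
readings = [420.0] * 120 + [900.0] * 120
print(hourly_stats(readings))            # → [420.0, 900.0]
print(hourly_stats(readings, stat=max))  # → [420.0, 900.0]
```

Averaging smooths sensor noise for the heatmap, while the max variant preserves short occupancy spikes that an hourly mean would dilute.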
The purpose of Human Activity Recognition (HAR) is to recognize human activities with sensors such as accelerometers and gyroscopes. The usual research strategy is to obtain better HAR results by finding more efficient features and classification algorithms. In this paper, we experimentally validate the HAR process and its various algorithms independently. On this basis, we further propose that, in addition to the necessary features and intelligent algorithms, correct prior knowledge is even more critical. The prior knowledge mentioned here mainly refers to a physical understanding of the analyzed object, the sampling process, the sampled data, the HAR algorithm, etc. Thus, a solution is presented under the guidance of correct prior knowledge, using back-propagation neural networks (BP networks) and simple convolutional neural networks (CNNs). The results show that HAR can be achieved with 90%-100% accuracy. Further analysis shows that intelligent algorithms for pattern recognition and classification problems, typically represented by HAR, require correct prior knowledge to work effectively.
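A BP network of the kind used here is a small feed-forward net trained by gradient descent on backpropagated errors. A minimal stdlib sketch fitting XOR with one hidden layer (the 2-2-1 architecture, learning rate, and epoch count are illustrative, not the paper's configuration):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=2000, lr=0.5, seed=1):
    """Tiny 2-2-1 BP network; biases are folded in via a constant 1.0 input."""
    random.seed(seed)
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [random.uniform(-1, 1) for _ in range(3)]
    data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

    def forward(x):
        xb = x + [1.0]
        h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w_h]
        hb = h + [1.0]
        o = sigmoid(sum(w * v for w, v in zip(w_o, hb)))
        return xb, hb, o

    def loss():
        return sum((forward(x)[2] - t) ** 2 for x, t in data)

    start = loss()
    for _ in range(epochs):
        for x, t in data:
            xb, hb, o = forward(x)
            d_o = (o - t) * o * (1 - o)                       # output-layer delta
            d_h = [d_o * w_o[j] * hb[j] * (1 - hb[j]) for j in range(2)]
            for j in range(3):                                # update output weights
                w_o[j] -= lr * d_o * hb[j]
            for j in range(2):                                # backpropagate to hidden
                for i in range(3):
                    w_h[j][i] -= lr * d_h[j] * xb[i]
    return start, loss()

start_loss, end_loss = train_xor()
print(end_loss < start_loss)  # → True
```

The deltas are the chain-rule gradients of the squared error through the sigmoid, which is the entire content of "back-propagation" at this scale.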
Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones, or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. This paper proposes an alternative by constructing a deep learning-based human activity recognition framework for edge computing, which we call DL-HAR. The goal of this framework is to exploit the capabilities of cloud computing to train a deep learning model and deploy it on less-powerful edge devices for recognition: training is conducted in the cloud and the model is distributed to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. To evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on the benchmark dataset show a significant increase in performance compared with state-of-the-art models.
This paper proposes a hybrid approach for recognizing human activities from trajectories. First, an improved hidden Markov model (HMM) parameter learning algorithm, HMM-PSO, is proposed, which achieves a better balance between global exploration and local exploitation through a nonlinear update strategy and a repulsion operation. Then, the event probability sequence (EPS), which consists of a series of events, is computed to describe the unique characteristics of human activities. Analysis of the EPS indicates that it is robust to changes in viewing direction and contributes to improving the recognition rate. Finally, the effectiveness of the proposed approach is evaluated in experiments on current popular datasets.
A new method for complex activity recognition in videos by key frames is presented. The progressive bisection strategy (PBS) is employed to divide a complex activity into a series of simple activities, and the key frames representing the simple activities are extracted by the self-splitting competitive learning (SSCL) algorithm. A new similarity criterion for complex activities is defined: besides the regular visual factor, it considers an order factor measuring the timing-matching relationship of the simple activities and an interference factor measuring their discontinuous-matching relationship. On this basis, complex human activities can be recognized by calculating their similarities. The recognition error is reduced compared with other methods that ignore the recognition of simple activities. The proposed method was tested and evaluated on a self-built broadcast gymnastics database and a dancing database; the experimental results demonstrate its superior efficiency.
We study the problem of humanactivity recognition from RGB-Depth(RGBD)sensors when the skeletons are not available.The skeleton tracking in Kinect SDK workswell when the human subject is facing thecamera and there are...We study the problem of humanactivity recognition from RGB-Depth(RGBD)sensors when the skeletons are not available.The skeleton tracking in Kinect SDK workswell when the human subject is facing thecamera and there are no occlusions.In surveillance or nursing home monitoring scenarios,however,the camera is usually mounted higher than human subjects,and there may beocclusions.The interest-point based approachis widely used in RGB based activity recognition,it can be used in both RGB and depthchannels.Whether we should extract interestpoints independently of each channel or extract interest points from only one of thechannels is discussed in this paper.The goal ofthis paper is to compare the performances ofdifferent methods of extracting interest points.In addition,we have developed a depth mapbased descriptor and built an RGBD dataset,called RGBD-SAR,for senior activity recognition.We show that the best performance isachieved when we extract interest points solely from RGB channels,and combine the RGBbased descriptors with the depth map-baseddescriptors.We also present a baseline performance of the RGBD-SAR dataset.展开更多
Funding: Funded by the National Science and Technology Council, Taiwan (Grant No. NSTC 112-2121-M-039-001) and by China Medical University (Grant No. CMU112-MF-79).
Abstract: Artificial intelligence (AI) technology has become integral to medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining both machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparative purposes, while inclusion of the WISDM database, known for its challenging features, supports the framework's resilience and adaptability. The ML-based models employ adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) methodologies for data training. In contrast, a DL-based model uses a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance. With a meticulous feature-selection process, the best accuracies of the ANFIS, SVM, RF, and 1dCNN models reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis using the MHEALTH dataset shows the 1dCNN model achieving perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, aligning with prior research findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, demonstrating its suitability for integration into healthcare services through AI-driven applications.
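The recursive feature elimination step described above can be sketched as follows. This is an illustrative example using scikit-learn's generic `RFE` wrapper around a linear SVM on synthetic data, not the authors' exact UCI-HAR pipeline; the sample counts and feature sizes are assumptions.

```python
# Hypothetical sketch of RFE driven by a linear SVM estimator.
# The dataset shape and feature count are illustrative, not the UCI-HAR data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

# Rank features by repeatedly fitting the estimator and dropping the
# lowest-weight ("low-participation") features until only the requested
# number remains.
selector = RFE(estimator=LinearSVC(dual=False), n_features_to_select=8, step=2)
selector.fit(X, y)

X_reduced = selector.transform(X)
print(X_reduced.shape)  # (200, 8)
```

The reduced matrix would then feed any of the ML models (ANFIS, SVM, RF) in place of the full feature set.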
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R194), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: At present, Human Activity Recognition (HAR) is of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent in the area of health informatics, gathered using HAR, augments decision-making quality and significance. Although many research works have been conducted on Smart Healthcare Monitoring, a certain number of pitfalls remain, such as time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for Smart Healthcare Monitoring. At first, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the extraction of dimensionality-reduced features. Here, the input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate Smart Healthcare Monitoring with less time, Partial Least Squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for Smart Healthcare Monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, and false-positive-rate accuracy across several instances. The quantitatively analyzed results indicate the better performance of the proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61902158 and 62202210).
Abstract: Rapidly advancing Convolutional Neural Networks (CNNs) have brought about a paradigm shift in various computer vision tasks, while also garnering increasing interest and application in sensor-based Human Activity Recognition (HAR) efforts. However, significant computational demands and memory requirements hinder the practical deployment of deep networks in resource-constrained systems. This paper introduces a novel network pruning method based on the energy spectral density of data in the frequency domain, which reduces the model's depth and accelerates activity inference. Unlike traditional pruning methods that focus on the spatial domain and the importance of filters, this method converts sensor data, such as HAR data, to the frequency domain for analysis. It emphasizes the low-frequency components by calculating their energy spectral density values. Subsequently, filters that meet the predefined thresholds are retained, and redundant filters are removed, leading to a significant reduction in model size without compromising performance or incurring additional computational costs. Notably, the proposed algorithm's effectiveness is empirically validated on a standard five-layer CNN backbone architecture. The computational feasibility and data sensitivity of the proposed scheme are thoroughly examined. Impressively, the classification accuracy on three benchmark HAR datasets (UCI-HAR, WISDM, and PAMAP2) reaches 96.20%, 98.40%, and 92.38%, respectively. Concurrently, the strategy achieves a reduction in Floating Point Operations (FLOPs) of 90.73%, 93.70%, and 90.74%, respectively, along with a corresponding decrease in memory consumption of 90.53%, 93.43%, and 90.05%.
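The frequency-domain criterion can be illustrated with a small sketch: compute the energy spectral density of the low-frequency FFT bins of each filter's response and keep only responses above a threshold. The signals, bin count, and threshold below are hypothetical toy values, not the paper's settings.

```python
import numpy as np

def lowfreq_esd(signal, keep_bins=5):
    """Energy spectral density summed over the leading (low-frequency) FFT bins."""
    spectrum = np.fft.rfft(signal)
    esd = np.abs(spectrum) ** 2
    return esd[:keep_bins].sum()

# Toy "filter responses": a slowly varying one and a high-frequency one.
t = np.linspace(0, 1, 128, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)           # energy at a low frequency
jitter = 0.1 * np.sin(2 * np.pi * 50 * t)    # energy at a high frequency

scores = [lowfreq_esd(smooth), lowfreq_esd(jitter)]
threshold = 1.0
kept = [i for i, s in enumerate(scores) if s >= threshold]
print(kept)  # only the low-frequency response survives pruning
```

In the paper's setting this score would be computed per convolutional filter, and filters scoring below the threshold would be removed from the network.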
Funding: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02040300).
Abstract: RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNNs, or LSTMs to extract features effectively. Still, they have shortcomings: 1) they require complex hand-crafted data cleaning processes, and 2) they only address single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer called TransTM. This model leverages the Transformer's powerful data-fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features, recognizing single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has more data-fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human-behavior-based classification tasks. Experimental results on actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
Abstract: Human Activity Recognition (HAR) is an important way for lower-limb exoskeleton robots to implement human-computer collaboration with users. Most existing methods in this field focus on a simple scenario of recognizing activities for specific users, which does not consider individual differences among users and cannot adapt to new users. To improve the generalization ability of the HAR model, this paper proposes a novel method that combines theories in transfer learning and active learning to mitigate the cross-subject issue, so that lower-limb exoskeleton robots can be used in more complex scenarios. First, a neural network based on convolutional neural networks (CNNs) is designed, which can extract temporal and spatial features from sensor signals collected from different parts of the human body. It can recognize human activities with high accuracy after being trained on labeled data. Second, to improve the cross-subject adaptation ability of the pre-trained model, we design a cross-subject HAR algorithm based on sparse interrogation and label propagation. Through leave-one-subject-out validation on two widely used public datasets against existing methods, our method achieves average accuracies of 91.77% on DSAD and 80.97% on PAMAP2, respectively. The experimental results demonstrate the potential of implementing cross-subject HAR for lower-limb exoskeleton robots.
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: Human Activity Recognition (HAR) has been made simple in recent years, thanks to recent advancements in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's and adults' cognitive nature, stroke-patient hospital direction, and Epilepsy and Parkinson's disease examination. Recently developed AI techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for the healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing human actions in a healthcare environment and helps patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove the outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted on a benchmark dataset, and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.
Funding: Supported by the Guangxi University of Science and Technology, Liuzhou, China, sponsored by the Researchers Supporting Project (No. XiaoKeBo21Z27, The Construction of Electronic Information Team Supported by Artificial Intelligence Theory and Three-Dimensional Visual Technology, Yuesheng Zhao); by the 2022 Laboratory Fund Project of the Key Laboratory of Space-Based Integrated Information System (No. SpaceInfoNet20221120, Research on the Key Technologies of Intelligent Spatiotemporal Data Engine Based on Space-Based Information Network, Yuesheng Zhao); and by the 2023 Guangxi University Young and Middle-Aged Teachers' Basic Scientific Research Ability Improvement Project (No. 2023KY0352, Research on the Recognition of Psychological Abnormalities in College Students Based on the Fusion of Pulse and EEG Techniques, Yutong Luo).
Abstract: With the rapid advancement of wearable devices, Human Activity Recognition (HAR) based on these devices has emerged as a prominent research field. The objective of this study is to enhance the recognition performance of HAR by proposing an LSTM-1DCNN recognition algorithm that utilizes a single triaxial accelerometer. This algorithm comprises two parallel branches: one consists of a Long Short-Term Memory network (LSTM), while the other incorporates a one-dimensional Convolutional Neural Network (1DCNN). The parallel architecture of LSTM-1DCNN initially extracts spatial and temporal features from the accelerometer data separately, which are then concatenated and fed into a fully connected neural network for information fusion. In the LSTM-1DCNN architecture, the 1DCNN branch primarily focuses on extracting spatial features during convolution operations, whereas the LSTM branch mainly captures temporal features. Nine sets of accelerometer data from five publicly available HAR datasets are employed for training and evaluation purposes. The performance of the proposed LSTM-1DCNN model is compared with five other HAR algorithms, including Decision Tree, Random Forest, Support Vector Machine, 1DCNN, and LSTM, on these five public datasets. Experimental results demonstrate that the F1-score achieved by the proposed LSTM-1DCNN ranges from 90.36% to 99.68%, with a mean value of 96.22% and a standard deviation of 0.03 across these five public datasets, significantly outperforming the other HAR algorithms on the evaluation metrics used in this study. Finally, the proposed LSTM-1DCNN is validated in real-world applications by collecting acceleration data of seven human activities for training and testing purposes. The trained HAR algorithm is then deployed on Android phones to evaluate its performance. Experimental results demonstrate that the proposed LSTM-1DCNN algorithm achieves an impressive F1-score of 97.67% on our self-built dataset. In conclusion, the fusion of temporal and spatial information in the measured data contributes to the excellent HAR performance and robustness exhibited by the proposed LSTM-1DCNN architecture.
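A common preprocessing step behind accelerometer-based models like this is segmenting the triaxial stream into fixed-length, overlapping windows before feeding the branches. The window width and stride below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sliding_windows(samples, width, stride):
    """Segment a (time, channels) stream into fixed-length windows."""
    starts = range(0, len(samples) - width + 1, stride)
    return np.stack([samples[s:s + width] for s in starts])

# 500 triaxial accelerometer samples -> overlapping 128-sample windows
# with 50% overlap; each window becomes one training example.
stream = np.zeros((500, 3))
windows = sliding_windows(stream, width=128, stride=64)
print(windows.shape)  # (6, 128, 3)
```

Each `(128, 3)` window would then pass through both the LSTM and 1DCNN branches before feature concatenation.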
Funding: This work was supported by Universiti Sains Malaysia (USM) under FRGS grant number FRGS/1/2020/TK03/USM/02/1 and by the School of Computer Sciences, USM.
Abstract: Human Activity Recognition (HAR) is an active research area due to its applications in pervasive computing, human-computer interaction, artificial intelligence, health care, and social sciences. Moreover, dynamic environments and anthropometric differences between individuals make it harder to recognize actions. This study focused on human activity in video sequences acquired with an RGB camera because of its vast range of real-world applications. It uses a two-stream ConvNet to extract spatial and temporal information and proposes a fine-tuned deep neural network. Moreover, the transfer learning paradigm is adopted to extract varied and fixed frames while reusing object identification information. Six state-of-the-art pre-trained models are exploited to find the best model for spatial feature extraction. For the temporal sequence, this study uses dense optical flow following the two-stream ConvNet and Bidirectional Long Short-Term Memory (BiLSTM) to capture long-term dependencies. Two state-of-the-art datasets, UCF101 and HMDB51, are used for evaluation purposes. In addition, seven state-of-the-art optimizers are used to fine-tune the proposed network parameters. Furthermore, this study utilizes an ensemble mechanism to aggregate spatial-temporal features using a four-stream Convolutional Neural Network (CNN), where two streams use RGB data while the others use optical-flow images. Finally, the proposed ensemble approach using max hard voting outperforms state-of-the-art methods with 96.30% and 90.07% accuracies on the UCF101 and HMDB51 datasets.
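Hard voting of the kind used in the four-stream ensemble can be sketched as a majority vote over per-stream class predictions; the stream outputs below are toy values, not the model's predictions.

```python
import numpy as np

def hard_vote(predictions):
    """Majority vote across stream-level class predictions.

    predictions: list of per-stream prediction lists, shape (n_streams, n_samples).
    Returns the most-voted class per sample.
    """
    votes = np.asarray(predictions)
    n_classes = votes.max() + 1
    # Count, for each class, how many streams voted for it per sample.
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

# Three streams, three samples: the majority class wins per sample.
streams = [[0, 1, 2], [0, 1, 1], [2, 1, 2]]
print(hard_vote(streams))  # [0 1 2]
```

In the paper's setting, each of the four streams (two RGB, two optical flow) would contribute one row of votes.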
Abstract: Human Action Recognition (HAR) and pose estimation from videos have gained significant attention among research communities due to their applications in several areas, namely intelligent surveillance, human-robot interaction, robot vision, etc. Though considerable improvements have been made in recent years, designing an effective and accurate action recognition model is still a difficult process owing to obstacles such as variations in camera angle, occlusion, background, movement speed, and so on. From the literature, it is observed that the temporal dimension is hard to handle in the action recognition process. Convolutional neural network (CNN) models are widely used to solve this. With this motivation, this study designs a novel key point extraction with deep convolutional neural networks based pose estimation (KPE-DCNN) model for activity recognition. The KPE-DCNN technique initially converts the input video into a sequence of frames, followed by a three-stage process, namely key point extraction, hyperparameter tuning, and pose estimation. In the key point extraction process, an OpenPose model is designed to compute the accurate key points in the human pose. Then, an optimal DCNN model is developed to classify the human activity label based on the extracted key points. To improve the training process of the DCNN technique, the RMSProp optimizer is used to optimally adjust hyperparameters such as learning rate, batch size, and epoch count. The experimental results, obtained on a benchmark dataset like the UCF Sports dataset, showed that the KPE-DCNN technique achieves good results compared with benchmark algorithms like CNN, DBN, SVM, STAL, T-CNN, and so on.
基金support provided by Thammasat University Research fund under the TSRI,Contract No.TUFF19/2564 and TUFF24/2565,for the project of“AI Ready City Networking in RUN”,based on the RUN Digital Cluster collaboration schemeThis research project was also supported by the Thailand Science Research and Innonation fund,the University of Phayao(Grant No.FF65-RIM041)supported by King Mongkut’s University of Technology North Bangkok,Contract No.KMUTNB-65-KNOW-02.
Abstract: Smoking is a major cause of cancer, heart disease, and other afflictions that lead to early mortality. An effective smoking classification mechanism that provides insights into individual smoking habits would assist in implementing addiction treatment initiatives. Smoking activities often accompany other activities such as drinking or eating. Consequently, smoking activity recognition can be a challenging topic in human activity recognition (HAR). A deep learning framework for smoking activity recognition (SAR) employing smartwatch sensors was proposed, together with a deep residual network combined with squeeze-and-excitation modules (ResNetSE), to increase the effectiveness of the SAR framework. The proposed model was tested against basic convolutional neural networks (CNNs) and recurrent neural networks (LSTM, BiLSTM, GRU, and BiGRU) in recognizing smoking and other similar activities such as drinking, eating, and walking using the UT-Smoke dataset. Three different scenarios were investigated for their recognition performances using standard HAR metrics (accuracy, F1-score, and the area under the ROC curve). The proposed ResNetSE outperformed the other basic deep learning networks, with a maximum accuracy of 98.63%.
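A squeeze-and-excitation block of the kind ResNetSE adds to residual features can be sketched in NumPy: globally pool each channel, pass the summary through a small bottleneck, and rescale the channels with sigmoid gates. The shapes and random weights below are illustrative, not the trained model.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Recalibrate channels: global pool -> ReLU bottleneck -> sigmoid gate."""
    squeezed = feature_map.mean(axis=0)              # (C,) global average pool
    hidden = np.maximum(w1 @ squeezed, 0.0)          # bottleneck with ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # per-channel weights in (0, 1)
    return feature_map * gate                        # rescale each channel

rng = np.random.default_rng(0)
fmap = rng.normal(size=(64, 8))    # (time steps, channels) from a 1-D conv layer
w1 = rng.normal(size=(2, 8))       # reduction ratio 4: 8 channels -> 2
w2 = rng.normal(size=(8, 2))       # expand back to 8 channel gates
out = squeeze_excite(fmap, w1, w2)
print(out.shape)  # (64, 8)
```

The gating lets the network emphasize sensor channels that are informative for the current window, which is the mechanism the SE modules contribute on top of the residual backbone.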
Abstract: Traditional indoor human activity recognition (HAR) is a time-series data classification problem and needs feature extraction. Presently, considerable attention has been given to the domain of HAR due to the enormous number of its uses in real-time applications, namely surveillance by authorities, biometric user identification, and health monitoring of older people. The extensive usage of the Internet of Things (IoT) and wearable sensor devices has made HAR a vital subject in ubiquitous and mobile computing. The more commonly utilized inference and problem-solving technique in HAR systems has recently been deep learning (DL). This study develops a Modified Wild Horse Optimization with DL-Aided Symmetric Human Activity Recognition (MWHODL-SHAR) model. The major intention of the MWHODL-SHAR model lies in the recognition of symmetric activities, namely jogging, walking, standing, sitting, etc. In the presented MWHODL-SHAR technique, the human activity data is pre-processed in various stages to make it compatible for further processing. A convolutional neural network with attention-based long short-term memory (CNN-ALSTM) model is applied for activity recognition. The MWHO algorithm is utilized as a hyperparameter tuning strategy to improve the detection rate of the CNN-ALSTM algorithm. The experimental validation of the MWHODL-SHAR technique is simulated using a benchmark dataset. An extensive comparison study revealed the betterment of the MWHODL-SHAR technique over other recent approaches.
Abstract: Elderly or disabled people can be supported by a human activity recognition (HAR) system that monitors their activity patterns and intervenes in case changes in their behaviors or critical events occur. An automated HAR could assist these persons to have a more independent life. Providing appropriate and accurate data regarding the activity is the most crucial computation task in the activity recognition system. With the fast development of neural networks, computing, and machine learning algorithms, HAR systems based on wearable sensors have gained popularity in several areas, such as medical services, smart homes, improving human communication with computers, security systems, healthcare for the elderly, mechanization in industry, robot monitoring systems, monitoring athlete training, and rehabilitation systems. In this view, this study develops an improved pelican optimization with deep transfer learning enabled HAR (IPODTL-HAR) system for disabled persons. The major goal of the IPODTL-HAR method is recognizing the human activities of disabled persons and improving their quality of living. The presented IPODTL-HAR model follows data pre-processing to improve the quality of the data. Besides, the EfficientNet model is applied to derive a useful set of feature vectors, and the hyperparameters are adjusted by the use of the Nadam optimizer. Finally, the IPO with deep belief network (DBN) model is utilized for the recognition and classification of human activities. The utilization of the Nadam optimizer and the IPO algorithm helps in effectually tuning the hyperparameters related to the EfficientNet and DBN models, respectively. The experimental validation of the IPODTL-HAR method is tested using a benchmark dataset. An extensive comparison study highlighted the betterment of the IPODTL-HAR model over recent state-of-the-art HAR approaches in terms of different measures.
Abstract: Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of the communication between people (users) and computers and the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of the human factors and software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustaining for the user. Meanwhile, Human Activity Recognition (HAR) aims to identify actions from a sequence of observations of the activities of subjects and the environmental conditions. Vision-based HAR study is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced by the inclusion of a few bypass connections to the SqueezeNet among Fire modules. Besides, the FHO algorithm is utilized as a hyperparameter optimization algorithm, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique is tested using benchmark datasets, and the outcomes reported improvements of the FHODL-AR technique over other recent approaches.
Funding: Supported by the Thailand Science Research and Innovation Fund, the University of Phayao (Grant No. FF66-UoE001), and King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-66-KNOW-05.
Abstract: Recognition of human activity is one of the most exciting aspects of time-series classification, with substantial practical and theoretical implications. Recent evidence indicates that activity recognition from wearable sensors is an effective technique for tracking elderly adults and children in indoor and outdoor environments. Consequently, researchers have demonstrated considerable passion for developing cutting-edge deep learning systems capable of exploiting unprocessed sensor data from wearable devices and generating practical decision assistance in many contexts. This study provides a deep learning-based approach for recognizing indoor and outdoor movement utilizing an enhanced deep pyramidal residual model called SenPyramidNet and motion information from wearable sensors (accelerometer and gyroscope). The suggested technique develops a residual unit based on a deep pyramidal residual network and introduces the concept of a pyramidal residual unit to increase detection capability. The proposed deep learning-based model was assessed using the publicly available 19Nonsens dataset, which gathered motion signals from various indoor and outdoor activities, including exercising various body parts. The experimental findings demonstrate that the proposed approach can efficiently reuse characteristics and has achieved an identification accuracy of 96.37% for indoor and 97.25% for outdoor activity. Moreover, comparison experiments demonstrate that SenPyramidNet surpasses other cutting-edge deep learning models in terms of accuracy and F1-score. Furthermore, this study explores the influence of several wearable sensors on indoor and outdoor action recognition ability.
Abstract: Activity recognition of indoor occupants using indirect sensing with less privacy violation is one of the hot research topics. This paper proposes a CO₂ sensor-based indoor occupant activity monitoring system. Using an IoT sensor node that contains CO₂ sensors, the measured CO₂ concentrations in three locations (laboratory, office, and bedroom) were stored in a cloud server for up to 35 days starting July 1, 2023. The CO₂ measurements, stored at 30-second intervals, were statistically processed to produce a heat-mapped display of the hourly average or maximum CO₂ concentration. From the heatmap visualizations of CO₂ concentration, the proposed system estimated meetings, heating water on a portable stove, and sleep for the occupants' activity recognition.
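The hourly aggregation behind the heatmap display can be sketched directly: with samples every 30 seconds, each hour holds 120 readings that are folded into a mean and a maximum. The readings below are simulated, not the paper's data.

```python
import numpy as np

def hourly_stats(co2_ppm, samples_per_hour=120):
    """Fold 30-second CO2 readings into per-hour mean and max values."""
    hours = len(co2_ppm) // samples_per_hour
    grid = np.asarray(co2_ppm[:hours * samples_per_hour]).reshape(hours, -1)
    return grid.mean(axis=1), grid.max(axis=1)

# One simulated hour of an empty room followed by one occupied hour.
readings = [420.0] * 120 + [900.0] * 120
mean_ppm, max_ppm = hourly_stats(readings)
print(mean_ppm)  # [420. 900.]
```

Plotting these per-hour values over days and locations yields the heatmap from which activities such as meetings or sleep are inferred.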
Funding: Supported by the Guangxi University of Science and Technology, Liuzhou, China, sponsored by the Researchers Supporting Project (No. XiaoKeBo21Z27, The Construction of Electronic Information Team Supported by Artificial Intelligence Theory and Three-Dimensional Visual Technology, Yuesheng Zhao); by the Key Laboratory for Space-Based Integrated Information Systems 2022 Laboratory Funding Program (No. SpaceInfoNet20221120, Research on the Key Technologies of Intelligent Spatio-Temporal Data Engine Based on Space-Based Information Network, Yuesheng Zhao); and by the 2023 Guangxi University Young and Middle-Aged Teachers' Basic Scientific Research Ability Improvement Project (No. 2023KY0352, Research on the Recognition of Psychological Abnormalities in College Students Based on the Fusion of Pulse and EEG Techniques, Yutong Lu).
Abstract: The purpose of Human Activities Recognition (HAR) is to recognize human activities with sensors like accelerometers and gyroscopes. The usual research strategy is to obtain better HAR results by finding more efficient eigenvalues and classification algorithms. In this paper, we experimentally validate the HAR process and its various algorithms independently. On that basis, it is further proposed that, in addition to the necessary eigenvalues and intelligent algorithms, correct prior knowledge is even more critical. The prior knowledge mentioned here mainly refers to the physical understanding of the analyzed object, the sampling process, the sampling data, the HAR algorithm, etc. Thus, a solution is presented under the guidance of correct prior knowledge, using Back-Propagation neural networks (BP networks) and simple Convolutional Neural Networks (CNNs). The results show that HAR can be achieved with 90%-100% accuracy. Further analysis shows that intelligent algorithms for pattern recognition and classification problems, typically represented by HAR, require correct prior knowledge to work effectively.
Abstract: Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones, or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. The focus of this paper is to propose an alternative model by constructing a Deep Learning-based Human Activity Recognition framework for edge computing, which we call DL-HAR. The goal of this framework is to exploit the capabilities of cloud computing to train a deep learning model and deploy it on less powerful edge devices for recognition. The idea is to conduct the training of the model in the cloud and distribute it to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. To evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on the benchmark dataset show a significant increase in performance compared with state-of-the-art models.
Funding: Supported by the National Natural Science Foundation of China (60573159) and the Guangdong High Technique Project (201100000514).
Abstract: This paper proposes a hybrid approach for recognizing human activities from trajectories. First, an improved hidden Markov model (HMM) parameter learning algorithm, HMM-PSO, is proposed, which achieves a better balance between global and local exploitation through a nonlinear update strategy and a repulsion operation. Then, the event probability sequence (EPS), which consists of a series of events, is computed to describe the unique characteristics of human activities. The analysis of EPS indicates that it is robust to changes in viewing direction and contributes to improving the recognition rate. Finally, the effectiveness of the proposed approach is evaluated by data experiments on currently popular datasets.
Funding: Project (50808025) supported by the National Natural Science Foundation of China; Project (20090162110057) supported by the Doctoral Fund of the Ministry of Education, China.
Abstract: A new method for complex activity recognition in videos by key frames is presented. The progressive bisection strategy (PBS) is employed to divide the complex activity into a series of simple activities, and the key frames representing the simple activities are extracted by the self-splitting competitive learning (SSCL) algorithm. A new similarity criterion for complex activities is defined. Besides the regular visual factor, the order factor and the interference factor, which respectively measure the timing matching relationship and the discontinuous matching relationship of the simple activities, are considered. On this basis, complex human activity recognition can be achieved by calculating their similarities. The recognition error is reduced compared with other methods that ignore the recognition of simple activities. The proposed method was tested and evaluated on the self-built broadcast gymnastics database and the dancing database. The experimental results prove its superior efficiency.
Funding: Supported by the National Natural Science Foundation of China under Grants No. 61075045 and No. 61273256, the Program for New Century Excellent Talents in University under Grant No. NECT-10-0292, the National Key Basic Research Program of China (973 Program) under Grant No. 2011-CB707000, and the Fundamental Research Funds for the Central Universities.
Abstract: We study the problem of human activity recognition from RGB-Depth (RGBD) sensors when skeletons are not available. The skeleton tracking in the Kinect SDK works well when the human subject is facing the camera and there are no occlusions. In surveillance or nursing home monitoring scenarios, however, the camera is usually mounted higher than the human subjects, and there may be occlusions. The interest-point based approach is widely used in RGB-based activity recognition, and it can be applied in both the RGB and depth channels. Whether we should extract interest points independently from each channel or from only one of the channels is discussed in this paper. The goal of this paper is to compare the performances of different methods of extracting interest points. In addition, we have developed a depth-map-based descriptor and built an RGBD dataset, called RGBD-SAR, for senior activity recognition. We show that the best performance is achieved when we extract interest points solely from the RGB channels and combine the RGB-based descriptors with the depth-map-based descriptors. We also present a baseline performance on the RGBD-SAR dataset.