Structural Health Monitoring (SHM) systems have become a crucial tool for the operational management of long tunnels. For immersed tunnels exposed to both traffic loads and the effects of the marine environment, efficiently identifying abnormal conditions from the extensive unannotated SHM data presents a significant challenge. This study proposed a model-based approach for anomaly detection and conducted validation and comparative analysis of two distinct temporal predictive models using SHM data from a real immersed tunnel. Firstly, a dynamic predictive model-based anomaly detection method is proposed, which utilizes a rolling time window for modeling to achieve dynamic prediction. Leveraging the assumption of temporal data similarity, an interval prediction value deviation was employed to determine the abnormality of the data. Subsequently, dynamic predictive models were constructed based on the Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) models. The hyperparameters of these models were optimized and selected using monitoring data from the immersed tunnel, yielding viable static and dynamic predictive models. Finally, the models were applied within the same segment of SHM data to validate the effectiveness of the anomaly detection approach based on dynamic predictive modeling. A detailed comparative analysis discusses the discrepancies in temporal anomaly detection between the ARIMA- and LSTM-based models. The results demonstrated that the dynamic predictive model-based anomaly detection approach was effective for dealing with unannotated SHM data. In a comparison between ARIMA and LSTM, it was found that ARIMA demonstrated higher modeling efficiency, rendering it suitable for short-term predictions. In contrast, the LSTM model exhibited greater capacity to capture long-term performance trends and enhanced early warning capabilities, thereby resulting in superior overall performance.
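The rolling-window interval idea can be sketched without any particular forecasting library. In this toy stand-in, a rolling mean replaces the ARIMA/LSTM predictor, and the prediction interval is mean ± k standard deviations; the window size, k, and the synthetic signal are illustrative choices, not the paper's settings.

```python
import numpy as np

def rolling_interval_anomalies(series, window=50, k=4.0):
    """Flag points falling outside a rolling prediction interval.

    Stand-in for a dynamic predictive model: the one-step "prediction"
    is the mean of the preceding window, and the interval is
    mean +/- k standard deviations of that window.
    """
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        past = series[t - window:t]
        mu, sigma = past.mean(), past.std()
        if abs(series[t] - mu) > k * sigma:
            flags[t] = True
    return flags

# Synthetic strain-like signal with one injected spike at index 300
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
x[300] += 2.0
flags = rolling_interval_anomalies(x, window=50, k=4.0)
```

Because the window rolls forward with time, the interval adapts to slow trends in the monitoring data, which is the core of the dynamic-prediction idea described above.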
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as “AI2AI” is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier with 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which will impact the normal operation of the IoT. To mitigate the impact caused by the high false positive rate of ADSs, this paper proposes an ADS management scheme for clustered IoT. First, we model the data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. In the presence of a high false positive rate in ADSs, to deal with the trade-off between the security and availability of data, we develop a linear programming model referred to as a security trade-off (ST) model. Next, we develop an analysis framework for the ST model and solve the ST model on an IoT simulation platform. Last, we reveal the effect of some factors on the maximum combined detection rate through theoretical analysis. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by the high false positive rates in ADSs.
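A security/availability trade-off expressed as a linear program can be illustrated with a generic formulation. Everything below is invented for illustration and is not the paper's ST model: we maximize a combined detection rate over ADS running probabilities, subject to a budget on the expected false-positive load.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative numbers only: per-ADS detection rates, false-positive
# rates, and a tolerated total false-positive load.
detect = np.array([0.90, 0.75, 0.60, 0.85])     # detection rate if ADS i runs
false_pos = np.array([0.20, 0.08, 0.03, 0.15])  # false-positive rate of ADS i
fp_budget = 0.25                                # availability constraint

# Decision variables: running probability p_i of each ADS.
# linprog minimizes, so the detection objective is negated.
res = linprog(
    c=-detect,
    A_ub=[false_pos],
    b_ub=[fp_budget],
    bounds=[(0.0, 1.0)] * len(detect),
    method="highs",
)
probs = res.x
max_combined_detection = -res.fun
```

With one budget constraint and box bounds, the optimum is the fractional-knapsack solution: ADSs are switched on in order of detection-per-false-positive ratio until the budget is exhausted, with at most one fractional running probability.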
Time series anomaly detection is crucial in various industrial applications for identifying unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
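The 1D-to-2D conversion via the fast Fourier transform can be sketched as folding the series along its dominant period; this is a minimal reading of the idea, not CAFFN's actual architecture.

```python
import numpy as np

def series_to_2d(x):
    """Fold a 1D series into 2D along its dominant FFT period.

    The strongest non-DC frequency sets the period, and the series is
    reshaped so each row spans one period (trailing remainder dropped).
    """
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x))
    spectrum[0] = 0.0                 # ignore the DC component
    dominant = np.argmax(spectrum)    # index of the strongest frequency
    period = max(1, len(x) // dominant)
    rows = len(x) // period
    return x[: rows * period].reshape(rows, period)

t = np.arange(400)
x = np.sin(2 * np.pi * t / 40)        # period of exactly 40 samples
img = series_to_2d(x)                 # shape (10, 40): 10 stacked periods
```

In the resulting 2D array, within-row structure captures intra-period variation and down-column structure captures inter-period variation, which is what makes 2D feature extractors applicable to a 1D signal.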
Log anomaly detection is an important paradigm for system troubleshooting. Existing log anomaly detection methods based on Long Short-Term Memory (LSTM) networks are time-consuming when handling long sequences. The Transformer model has been introduced to improve efficiency. However, most existing Transformer-based log anomaly detection methods convert unstructured log messages into structured templates by log parsing, which introduces parsing errors. They extract only simple semantic features, ignoring other features, and are generally supervised, relying on large amounts of labeled data. To overcome the limitations of existing methods, this paper proposes a novel unsupervised log anomaly detection method based on multiple features (UMFLog). UMFLog includes two sub-models that consider two kinds of features: semantic features and statistical features, respectively. UMFLog uses the original log content with detailed parameters instead of templates or template IDs to avoid log parsing errors. In the first sub-model, UMFLog uses Bidirectional Encoder Representations from Transformers (BERT) instead of random initialization to extract effective semantic features, and an unsupervised hypersphere-based Transformer model to learn compact log sequence representations and obtain anomaly candidates. In the second sub-model, UMFLog exploits a statistical-feature-based Variational Autoencoder (VAE) over word occurrence counts to identify the final anomalies from the anomaly candidates. Extensive experiments and evaluations were conducted on three real public log datasets. The results show that UMFLog significantly improves F1-scores compared with state-of-the-art (SOTA) methods because of its use of multiple features.
Solar arrays are important and indispensable parts of spacecraft and provide energy support for spacecraft to operate in orbit and complete on-orbit missions. When a spacecraft is in orbit, because the solar array is exposed to the harsh space environment, the performance of its internal electronic components gradually degrades with increasing working time until abnormal damage occurs. This damage leaves solar array power generation unable to fully meet the energy demand of the spacecraft. Therefore, timely and accurate detection of solar array anomalies is of great significance for the on-orbit operation and maintenance management of spacecraft. In this paper, we propose an anomaly detection method for spacecraft solar arrays based on an integrated least squares support vector machine (ILS-SVM) model. The method selects correlated telemetry data from spacecraft solar arrays to form a training set and extracts n training subsets from this set; it then obtains n corresponding least squares support vector machine (LS-SVM) submodels by training on these subsets, respectively. The ILS-SVM model is obtained by integrating these submodels through a weighting operation to increase prediction accuracy. Finally, based on the obtained ILS-SVM model, a parameter-free and unsupervised anomaly determination method is proposed to detect the health status of solar arrays. We use a telemetry data set from a satellite in orbit to carry out experimental verification and find that the proposed method can diagnose solar array anomalies in time and can capture the signs before a solar array anomaly occurs, which reflects the applicability of the method.
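A weighted ensemble of submodels trained on subsets can be sketched as follows. Kernel ridge regression stands in for the LS-SVM submodels (the two are closely related), and the inverse-error weighting and 3-sigma anomaly rule are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def fit_krr(X, y, gamma=1.0, lam=0.1):
    """Kernel ridge submodel (a close relative of LS-SVM regression)."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return X, alpha, gamma

def predict_krr(model, Xq):
    Xtr, alpha, gamma = model
    K = np.exp(-gamma * (Xq[:, None] - Xtr[None, :]) ** 2)
    return K @ alpha

rng = np.random.default_rng(1)
X = np.linspace(0, 6, 120)                 # stand-in telemetry channel
y = np.sin(X) + 0.05 * rng.standard_normal(len(X))

# Train n submodels on random subsets, weight by in-sample accuracy.
models, weights = [], []
for _ in range(5):
    idx = rng.choice(len(X), size=80, replace=False)
    m = fit_krr(X[idx], y[idx])
    err = np.mean((predict_krr(m, X) - y) ** 2)
    models.append(m)
    weights.append(1.0 / (err + 1e-9))
weights = np.array(weights) / np.sum(weights)

# Integrated prediction, then a residual-based anomaly rule
ensemble_pred = sum(w * predict_krr(m, X) for w, m in zip(weights, models))
residual = np.abs(ensemble_pred - y)
anomaly = residual > residual.mean() + 3 * residual.std()
```

The weighting step is where the "integration" happens: submodels that fit the telemetry better contribute more to the final prediction, which stabilizes the residuals used for anomaly determination.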
Despite the big success of transfer learning techniques in anomaly detection, it is still challenging to achieve a good transition of detection rules merely based on the preferred data in one-class anomaly detection, especially for data with a large distribution difference. To address this challenge, a novel deep one-class transfer learning algorithm with domain-adversarial training is proposed in this paper. First, by integrating a hypersphere adaptation constraint into a domain-adversarial neural network, a new hypersphere adversarial training mechanism is designed. Second, an alternative optimization method is derived to seek the optimal network parameters while pushing the hyperspheres built in the source domain and target domain to be as identical as possible. By transferring the one-class detection rule in the adaptive extraction of domain-invariant feature representations, end-to-end anomaly detection with one-class classification is then enhanced. Furthermore, a theoretical analysis of the model reliability, as well as a strategy for avoiding invalid and negative transfer, is provided. Experiments are conducted on two typical anomaly detection problems, i.e., image recognition detection and online early fault detection of rolling bearings. The results demonstrate that the proposed algorithm outperforms state-of-the-art methods in terms of detection accuracy and robustness.
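The hypersphere view of one-class detection (score a sample by its distance to a center learned from normal data) can be sketched in a few lines; this shows only the scoring step on synthetic features, not the domain-adversarial training that the paper contributes.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in "feature representations": normal points cluster near the
# origin; the last test point is shifted to play the anomaly.
normal_feats = rng.standard_normal((200, 8)) * 0.5
test_feats = np.vstack([rng.standard_normal((5, 8)) * 0.5,
                        np.full((1, 8), 3.0)])

center = normal_feats.mean(axis=0)                 # hypersphere centre
dists = np.linalg.norm(normal_feats - center, axis=1)
radius = np.quantile(dists, 0.95)                  # soft boundary radius

test_dists = np.linalg.norm(test_feats - center, axis=1)
is_anomaly = test_dists > radius
```

In the transfer setting described above, the point of the adversarial training is to make this center-and-radius rule, learned in the source domain, remain valid for target-domain features.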
Explainable AI extracts a variety of data patterns during the learning process and draws out hidden information through the discovery of semantic relationships, making it possible to offer an explainable basis for the decision-making behind inference results. Through the causality of risk factors that have an ambiguous association in big medical data, it is possible to increase the transparency and reliability of explainable decision-making that helps to diagnose disease status. In addition, the technique makes it possible to accurately predict disease risk for anomaly detection. A vision transformer for anomaly detection from image data performs classification through an MLP. Unfortunately, in an MLP the vector values depend on patch sequence information, and the weights therefore change; the resulting variation in result values caused by weight changes needs to be resolved. In addition, since a deep learning model is a black box, it is difficult to interpret the results the model produces. Therefore, an explainable method is needed for the parts where disease exists. To solve these problems, this study proposes explainable anomaly detection using vision transformer-based Deep Support Vector Data Description (SVDD). The proposed method applies SVDD to solve the problem of the MLP, in which the result value differs depending on weight changes influenced by the patch sequence information used in the vision transformer. To provide explainability for the model's results, it visualizes normal parts through Grad-CAM. With health data, both medical staff and patients are able to identify abnormal parts easily, and the reliability of models and medical staff can be improved. For performance evaluation, normal/abnormal classification accuracy and F-measure are evaluated with and without SVDD applied. The classification results obtained by applying the proposed SVDD are excellent. Therefore, the proposed method makes it possible to improve the reliability of decision-making by identifying the location of the disease and deriving consistent results.
As energy-related problems continue to emerge, the need for stable energy supplies, together with environmental and safety issues, requires urgent consideration. Renewable energy is becoming increasingly important, with solar power accounting for the most significant proportion of renewables. As the scale and importance of solar energy have increased, cyber threats against solar power plants have also increased, so an anomaly detection system that effectively detects cyber threats to solar power plants is needed. However, existing solar power plant anomaly detection systems monitor only operating information such as power generation, making it difficult to detect cyberattacks. To address this issue, in this paper we propose a network packet-based anomaly detection system for the Programmable Logic Controller (PLC) of the inverter, an essential system of photovoltaic plants, to detect cyber threats. Cyberattacks and vulnerabilities in solar power plants were analyzed to identify the relevant cyber threats. The analysis shows that Denial of Service (DoS) and Man-in-the-Middle (MitM) attacks are primarily carried out on inverters, aiming to disrupt solar plant operations. To develop the anomaly detection system, we performed preprocessing, such as correlation analysis and normalization, on PLC network packet data and trained various machine learning-based classification models on the data. The Random Forest model showed the best performance, with an accuracy of 97.36%. The proposed system can detect anomalies based on network packets, identify potential cyber threats that cannot be identified by the anomaly detection systems currently in use in solar power plants, and enhance the security of solar plants.
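A preprocessing-plus-classifier pipeline of this kind can be sketched with scikit-learn on synthetic stand-in features. The real system was trained on preprocessed PLC packet captures; nothing below reproduces the paper's data or its 97.36% result.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for packet-level features; the 90/10 class weights
# mimic the rarity of attack traffic (all values illustrative).
X, y = make_classification(n_samples=1000, n_features=12,
                           n_informative=6, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Normalization step followed by the Random Forest classifier
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=100,
                                             random_state=0))
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

Bundling the scaler and classifier in one pipeline ensures the normalization statistics are fitted only on training data, which matters when the same pipeline later scores live packet streams.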
Modern large-scale enterprise systems produce large volumes of logs that record detailed system runtime status and key events at key points. These logs are valuable for analyzing performance issues and understanding the status of the system. Anomaly detection plays an important role in service management and system maintenance and guarantees the reliability and security of online systems. Logs are universal semi-structured data, which causes difficulties for traditional manual detection and pattern-matching algorithms. While some deep learning algorithms utilize neural networks to detect anomalies, these approaches over-rely on manually designed features, so the effectiveness of anomaly detection depends on the quality of the features. At the same time, these methods ignore the underlying contextual information present in adjacent log entries. We propose a novel model called Logformer with two cascaded transformer-based heads to capture latent contextual information from adjacent log entries, and leverage pre-trained embeddings based on logs to improve the representation of the embedding space. The proposed model achieves comparable results on the HDFS and BGL datasets in terms of the metrics accuracy, recall, and F1-score. Moreover, the consistent rise in F1-score proves that the representation of the embedding space with pre-trained embeddings is closer to the semantic information of the log.
The widespread usage of Cyber-Physical Systems (CPSs) generates a vast volume of time series data, and precisely determining anomalies in the data is critical for practical production. The autoencoder is the mainstream method for time series anomaly detection, in which an anomaly is judged by reconstruction error. However, due to the strong generalization ability of neural networks, some abnormal samples close to normal samples may be judged as normal, so the abnormality goes undetected. In addition, datasets rarely provide sufficient anomaly labels. To solve these problems, this research proposes an unsupervised anomaly detection approach based on adversarial memory autoencoders for multivariate time series. First, an encoder encodes the input data into a low-dimensional space to acquire a feature vector. Then, a memory module is used to learn the feature vector's prototype patterns and update the feature vectors. The updating process allows partial forgetting of information to prevent model over-generalization. After that, two decoders reconstruct the input data. Finally, this research uses the Peak Over Threshold (POT) method to calculate the threshold that separates anomalous samples from normal samples. A two-stage adversarial training strategy is used during model training to enlarge the gap between the reconstruction errors of normal and abnormal samples. The proposed method achieves significant anomaly detection results on synthetic and real datasets from power systems, water treatment plants, and computer clusters. The F1 score reached an average of 0.9196 on the five datasets, which is 0.0769 higher than the best baseline method.
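The POT thresholding step can be sketched by fitting a generalized Pareto distribution (GPD) to the tail of the reconstruction errors; the quantile levels below are illustrative defaults, not the paper's settings.

```python
import numpy as np
from scipy.stats import genpareto

def pot_threshold(errors, init_q=0.95, target_risk=1e-3):
    """Peaks-Over-Threshold: fit a GPD to the error tail and return
    a final anomaly threshold (parameter values are illustrative)."""
    u = np.quantile(errors, init_q)            # initial high threshold
    excess = errors[errors > u] - u            # exceedances over u
    shape, loc, scale = genpareto.fit(excess, floc=0.0)
    # P(X > u + t) ~= (1 - init_q) * P_GPD(excess > t), so solve for
    # the quantile where the overall tail probability hits target_risk.
    tail_q = 1.0 - target_risk / (1.0 - init_q)
    return u + genpareto.ppf(tail_q, shape, loc=loc, scale=scale)

rng = np.random.default_rng(3)
errors = rng.exponential(scale=0.1, size=5000)  # typical error-tail shape
thr = pot_threshold(errors)
flagged = (errors > thr).mean()                  # fraction flagged anomalous
```

The appeal of POT over a fixed quantile is that the threshold extrapolates beyond the observed errors from the fitted tail, so the target risk can be set far below 1/n.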
The process control-oriented threat, which can exploit OT (Operational Technology) vulnerabilities to forcibly insert abnormal control commands or status information, has become one of the most devastating cyber attacks in industrial automation control. To effectively detect this threat, this paper proposes a functional pattern-related anomaly detection approach, which skillfully combines the BinSeg (Binary Segmentation) algorithm with an FSM (Finite State Machine) to identify anomalies between measuring data and control data. By detecting the change points of measuring data, the BinSeg algorithm is introduced to generate initial sequence segments, which can be further classified and merged into different functional patterns based on their backward difference means and lengths. After analyzing the pattern associations according to a Bayesian network, a functional state transition model based on the FSM, which accurately describes the whole control and monitoring process, is constructed as a feasible detection engine. Finally, we use the typical SWaT (Secure Water Treatment) dataset to evaluate the proposed approach, and the experimental results show that: for one thing, compared with other change-point detection approaches, the BinSeg algorithm is more suitable for the optimal sequence segmentation of measuring data due to its highest detection accuracy and least consumed time; for another, the proposed approach exhibits relatively excellent detection ability, as the average detection precision, recall rate, and F1-score for identifying 10 different attacks reach 0.872, 0.982 and 0.896, respectively.
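Binary segmentation itself is a simple greedy procedure. A minimal numpy version (squared-error cost, fixed number of change points) is sketched below to show the mechanism, without the paper's pattern classification, Bayesian-network, or FSM stages.

```python
import numpy as np

def best_split(x):
    """Return the split index minimising within-segment squared error."""
    n = len(x)
    best_i, best_cost = None, np.inf
    for i in range(2, n - 2):
        cost = x[:i].var() * i + x[i:].var() * (n - i)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

def binseg(x, n_changepoints):
    """Greedy binary segmentation: repeatedly split the segment whose
    best split yields the largest drop in total cost."""
    segments = [(0, len(x))]
    cps = []
    for _ in range(n_changepoints):
        gains = []
        for s, e in segments:
            i = best_split(x[s:e])
            old = x[s:e].var() * (e - s)
            new = x[s:s + i].var() * i + x[s + i:e].var() * (e - s - i)
            gains.append((old - new, s + i, (s, e)))
        gain, cp, seg = max(gains)
        cps.append(cp)
        segments.remove(seg)
        s, e = seg
        segments += [(s, cp), (cp, e)]
    return sorted(cps)

# Piecewise-constant "measuring data" with level jumps at 100 and 200
x = np.concatenate([np.zeros(100), np.ones(100) * 5, np.ones(100) * 2])
x += 0.1 * np.random.default_rng(4).standard_normal(300)
changepoints = binseg(x, n_changepoints=2)
```

In the approach described above, the segments produced by this step are then summarized (e.g., by backward difference means and lengths) before being grouped into functional patterns.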
System logs are essential for detecting anomalies, querying faults, and tracing attacks. Because manual system troubleshooting and anomaly detection are time-consuming and labor-intensive, they cannot meet actual needs, so the implementation of automated log anomaly detection is a topic that demands urgent research. However, prior work on processing log data is mainly one-dimensional and cannot deeply learn the complex associations in log data. Meanwhile, little attention has been paid to the utilization of log labels, and detection usually relies on a large number of labels. This paper proposes a novel and practical detection model named LCC-HGLog, the core of which is the conversion of log anomaly detection into a graph classification problem. Semantic temporal graphs (STGs) are constructed by extracting the raw logs' execution sequences and template semantics. Then a unique graph classifier is used to better comprehend each STG's semantic, sequential, and structural features. The classification model is trained jointly with a graph classification loss and a label contrastive loss. While achieving discriminability at the class level, it increases fine-grained identification at the instance level, thus achieving good detection performance even with a small amount of labeled data. We have conducted numerous experiments on real log datasets, showing that the proposed model outperforms the baseline methods and obtains the best all-around performance. Moreover, the detection performance degrades by less than 1% when only 10% of the labeled data is used. With 200 labeled samples, we can achieve the same or better detection results than the baseline methods.
Some reconstruction-based anomaly detection models for multivariate time series have brought impressive performance advancements but suffer from weak generalization ability and a lack of anomaly identification. These limitations can result in misjudgments by the models, leading to a degradation in overall detection performance. This paper proposes a novel transformer-like anomaly detection model adopting a contrastive learning module and a memory block (CLME) to overcome the above limitations. The contrastive learning module, tailored for time series data, can learn contextual relationships to generate temporally fine-grained representations. The memory block can record normal patterns of these representations through attention-based addressing and reintegration mechanisms. Together, these two modules effectively alleviate the generalization problem. Furthermore, this paper introduces a fusion anomaly detection strategy that comprehensively takes into account both the residual and feature spaces. Such a strategy can enlarge the discrepancies between normal and abnormal data, which is more conducive to anomaly identification. The proposed CLME model not only efficiently enhances generalization performance but also improves the ability to detect anomalies. To validate the efficacy of the proposed approach, extensive experiments are conducted on the well-established benchmark datasets SWaT, PSM, WADI, and MSL. The results demonstrate outstanding performance, with F1 scores of 90.58%, 94.83%, 91.58%, and 91.75%, respectively. These findings affirm the superiority of the CLME model over existing state-of-the-art anomaly detection methodologies in terms of its ability to accurately detect anomalies within complex datasets.
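The idea of fusing residual-space and feature-space evidence can be sketched as a single combined score; the equal weighting and nearest-memory-item distance below are illustrative simplifications, not CLME's actual scoring function.

```python
import numpy as np

def fusion_score(x, x_hat, z, memory):
    """Combine residual-space and feature-space evidence.

    x, x_hat : input window and its reconstruction (residual space)
    z        : latent feature of the window (feature space)
    memory   : matrix of recorded normal latent patterns
    """
    residual = np.mean((x - x_hat) ** 2)                    # residual term
    feat_dist = np.min(np.linalg.norm(memory - z, axis=1))  # feature term
    return residual + feat_dist                              # equal weights

rng = np.random.default_rng(5)
memory = rng.standard_normal((10, 4))   # recorded normal patterns
x = rng.standard_normal(16)

# A near-perfect reconstruction with an in-memory latent scores low;
# a poor reconstruction with an off-memory latent scores high.
normal_score = fusion_score(x, x + 0.01, memory[0], memory)
abnormal_score = fusion_score(x, x + 1.0, memory[0] + 5.0, memory)
```

Because an anomaly can betray itself in either space, summing both terms widens the score gap between normal and abnormal windows compared with using reconstruction error alone.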
Nowadays, industrial control systems (ICSs) have begun to integrate with the Internet. While the Internet has brought convenience to ICSs, it has also brought severe security concerns. Traditional ICS network traffic anomaly detection methods rely on statistical features manually extracted using the experience of network security experts. They are not aimed at the original network data, nor can they capture the potential characteristics of network packets. Therefore, the following improvements were made in this study: (1) A dataset that can be used to evaluate anomaly detection algorithms is produced, which provides raw network data. (2) A request-response-based convolutional neural network named RRCNN is proposed, which can be used for anomaly detection of ICS network traffic. Instead of using statistical features manually extracted by security experts, this method directly uses the byte sequences of the original network packets, which allows it to extract potential features of the network packets in greater depth. It regards the request packet and response packet in a session as a Request-Response Pair (RRP). The features of an RRP are extracted using a one-dimensional convolutional neural network, and the RRP is then judged to be normal or abnormal based on the extracted features. Experimental results demonstrate that this model is better than several other machine learning and neural network models, with F1, accuracy, precision, and recall all above 99%.
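Extracting features directly from raw packet bytes with a 1D convolution can be sketched as follows. The kernels here are random rather than learned, and the byte strings are made-up stand-ins for a request/response pair; a trained network would learn both the kernels and the downstream classifier.

```python
import numpy as np

def conv1d_features(byte_seq, kernels):
    """One 1-D convolution + global max-pool over a raw byte sequence.

    A toy version of learning features from packet bytes: each kernel
    slides over the normalized byte values and yields one feature.
    """
    x = np.frombuffer(byte_seq, dtype=np.uint8).astype(float) / 255.0
    feats = []
    for k in kernels:                              # kernel -> feature map
        fmap = np.convolve(x, k, mode="valid")
        feats.append(fmap.max())                   # global max-pooling
    return np.array(feats)

rng = np.random.default_rng(8)
kernels = [rng.standard_normal(5) for _ in range(8)]

# Made-up byte strings standing in for a request and its response
request = b"\x01\x03\x00\x00\x00\x0a\x01\x02"
response = b"\x01\x03\x14" + bytes(20)

# Concatenating both packets' features gives one RRP feature vector
rrp_feature = np.concatenate([conv1d_features(request, kernels),
                              conv1d_features(response, kernels)])
```

Operating on bytes rather than hand-picked statistics is what lets this style of model pick up protocol-level regularities that expert-designed features can miss.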
In the present technological world, surveillance cameras generate an immense amount of video data from various sources, making its scrutiny tough for computer vision specialists. It is difficult to search for anomalous events manually in these massive video records, since they happen infrequently and with low probability in real-world monitoring systems. Therefore, intelligent surveillance is a requirement of the modern day, as it enables the automatic identification of normal and aberrant behavior using artificial intelligence and computer vision technologies. In this article, we introduce an efficient attention-based deep-learning approach for anomaly detection in surveillance video (ADSV). At the input of the ADSV, a shot boundary detection technique is used to segment prominent frames. Next, the Lightweight Convolution Neural Network (LWCNN) model receives the segmented frames to extract spatial and temporal information from the intermediate layer. Following that, spatial and temporal features are learned using Long Short-Term Memory (LSTM) cells and an attention network from a series of frames for each anomalous activity in a sample. To detect motion and action, the LWCNN receives chronologically sorted frames. Finally, the anomalous activity in the video is identified using the proposed trained ADSV model. Extensive experiments are conducted on complex and challenging benchmark datasets. In addition, the experimental results have been compared with state-of-the-art methodologies, and a significant improvement is attained, demonstrating the efficiency of our ADSV method.
As cloud system architectures evolve continuously, the interactions among distributed components in various roles become increasingly complex. This complexity makes it difficult to detect anomalies in cloud systems. The system status can no longer be determined through individual key performance indicators (KPIs) but instead requires joint judgments based on synergistic relationships among distributed components. Furthermore, anomalies in modern cloud systems are usually not sudden crashes but rather gradual, chronic, localized failures or quality degradations in a weakly available state. Therefore, accurately modeling cloud systems and mining the hidden system state is crucial. To address this challenge, we propose an anomaly detection method with dynamic spatiotemporal learning (AD-DSTL). AD-DSTL leverages the spatiotemporal dynamics of the system to train an end-to-end deep learning model, driven by data from system monitoring, to detect underlying anomalous states in complex cloud systems. Unlike previous work that focuses on the KPIs of separate components, AD-DSTL builds a model for the entire system and characterizes its spatiotemporal dynamics based on graph convolutional networks (GCN) and long short-term memory (LSTM). We validated AD-DSTL using four datasets from different backgrounds, and it demonstrated superior robustness compared with other baseline algorithms. Moreover, when the target exception level was raised, both the recall and precision of AD-DSTL reached approximately 0.9. Our experimental results demonstrate that AD-DSTL can meet the requirements of anomaly detection for complex cloud systems.
Online banking fraud occurs whenever a criminal can seize accounts and transfer funds from an individual's online bank account. Successfully preventing this requires the detection of as many fraudsters as possible without producing too many false alarms. This is a challenge for machine learning owing to the extremely imbalanced data and the complexity of fraud. In addition, classical machine learning methods must be extended to minimize expected financial losses. Finally, fraud can only be combated systematically and economically if the risks and costs in payment channels are known. We define three models that overcome these challenges: machine learning-based fraud detection, economic optimization of machine learning results, and a risk model to predict the risk of fraud while considering countermeasures. The models were tested utilizing real data. Our machine learning model alone reduces the expected and unexpected losses in the three aggregated payment channels by 15% compared with a benchmark consisting of static if-then rules. Optimizing the machine learning model further reduces the expected losses by 52%. These results hold with a low false positive rate of 0.4%. Thus, the risk framework of the three models is viable from a business and risk perspective.
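Economic optimization of a classifier's output can be illustrated by choosing the decision threshold that minimizes expected loss under assumed false-positive and false-negative costs. All numbers below are invented, and the single average fraud cost is a simplification of the richer loss model described above.

```python
import numpy as np

def optimal_threshold(scores, labels, fp_cost, fn_cost):
    """Pick the score threshold minimising expected financial loss.

    Each false positive costs review effort (fp_cost); each missed
    fraud costs the average fraud amount (fn_cost). Illustrative only.
    """
    thresholds = np.unique(scores)
    losses = []
    for t in thresholds:
        pred = scores >= t
        fp = np.sum(pred & (labels == 0))      # false alarms
        fn = np.sum(~pred & (labels == 1))     # missed fraud cases
        losses.append(fp * fp_cost + fn * fn_cost)
    return thresholds[int(np.argmin(losses))]

rng = np.random.default_rng(6)
scores = np.concatenate([rng.uniform(0.0, 0.6, 990),   # legitimate
                         rng.uniform(0.4, 1.0, 10)])   # fraud
labels = np.array([0] * 990 + [1] * 10)
t_star = optimal_threshold(scores, labels, fp_cost=10.0, fn_cost=1000.0)
```

Because missing a fraud is far more expensive than a review, the cost-optimal threshold sits well below the one that would maximize plain accuracy on such imbalanced data.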
Recently, the autoencoder (AE) based method plays a critical role in the hyperspectral anomaly detection domain. However, due to the strong generalised capacity of AE, the abnormal samples are usually reconstructed well along with the normal background samples. Thus, in order to separate anomalies from the background by calculating reconstruction errors, it can be greatly beneficial to reduce the AE capability for abnormal sample reconstruction while maintaining the background reconstruction performance. A memory-augmented autoencoder for hyperspectral anomaly detection (MAENet) is proposed to address this challenging problem. Specifically, the proposed MAENet mainly consists of an encoder, a memory module, and a decoder. First, the encoder transforms the original hyperspectral data into the low-dimensional latent representation. Then, the latent representation is utilised to retrieve the most relevant matrix items in the memory matrix, and the retrieved matrix items will be used to replace the latent representation from the encoder. Finally, the decoder is used to reconstruct the input hyperspectral data using the retrieved memory items. With this strategy, the background can still be reconstructed well while the abnormal samples cannot. Experiments conducted on five real hyperspectral anomaly data sets demonstrate the superiority of the proposed method.
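The retrieval step of such a memory module can be sketched as follows: the latent vector is replaced by a softmax-weighted blend of stored memory items. The similarity measure, temperature, and function names are illustrative assumptions, not values from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-12
    nv = math.sqrt(sum(b * b for b in v)) or 1e-12
    return dot / (nu * nv)

def retrieve_from_memory(z, memory, temperature=0.1):
    """Replace latent z with a softmax-weighted blend of memory items,
    so the decoder only sees combinations of learned normal prototypes."""
    sims = [cosine(z, m) / temperature for m in memory]
    mx = max(sims)
    w = [math.exp(s - mx) for s in sims]       # numerically stable softmax
    total = sum(w)
    w = [x / total for x in w]
    dim = len(z)
    return [sum(w[i] * memory[i][d] for i in range(len(memory)))
            for d in range(dim)]
```

An anomalous latent vector far from every memory item is thus forced toward the nearest normal prototypes, inflating its reconstruction error.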
Automated live video stream analytics has been extensively researched in recent times. Most of the traditional methods for video anomaly detection are supervised and use a single classifier to identify an anomaly in a frame. We propose a 3-stage ensemble-based unsupervised deep reinforcement algorithm with an underlying Long Short Term Memory (LSTM) based Recurrent Neural Network (RNN). In the first stage, an ensemble of LSTM-RNNs is deployed to generate the anomaly score. The second stage uses the least square method for optimal anomaly score generation. The third stage adopts reward-based reinforcement learning to update the model. The proposed Hybrid Ensemble RR Model was tested on the standard pedestrian datasets UCSD Ped1 and UCSD Ped2, which contain 70 and 28 videos, respectively, with a total of 18560 frames. Since a real-time stream has strict memory constraints and storage issues, a simple computing machine does not suffice for performing analytics on stream data. Hence the proposed research is designed to work on a GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit) supported framework. As shown in the experimental results section, recorded observations on frame-level EER (Equal Error Rate) and AUC (Area Under Curve) showed a 9% reduction in EER on UCSD Ped1, a 13% reduction in EER on UCSD Ped2, and a 4% improvement in accuracy on both datasets.
Funding: Supported by the Research and Development Center of Transport Industry of New Generation of Artificial Intelligence Technology (Grant No. 202202H), the National Key R&D Program of China (Grant No. 2019YFB1600702), and the National Natural Science Foundation of China (Grant Nos. 51978600 & 51808336).
Abstract: Structural Health Monitoring (SHM) systems have become a crucial tool for the operational management of long tunnels. For immersed tunnels exposed to both traffic loads and the effects of the marine environment, efficiently identifying abnormal conditions from the extensive unannotated SHM data presents a significant challenge. This study proposed a model-based approach for anomaly detection and conducted validation and comparative analysis of two distinct temporal predictive models using SHM data from a real immersed tunnel. Firstly, a dynamic predictive model-based anomaly detection method is proposed, which utilizes a rolling time window for modeling to achieve dynamic prediction. Leveraging the assumption of temporal data similarity, an interval prediction value deviation was employed to determine the abnormality of the data. Subsequently, dynamic predictive models were constructed based on the Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) models. The hyperparameters of these models were optimized and selected using monitoring data from the immersed tunnel, yielding viable static and dynamic predictive models. Finally, the models were applied to the same segment of SHM data to validate the effectiveness of the anomaly detection approach based on dynamic predictive modeling. A detailed comparative analysis discusses the discrepancies in temporal anomaly detection between the ARIMA- and LSTM-based models. The results demonstrated that the dynamic predictive model-based anomaly detection approach was effective for dealing with unannotated SHM data. In a comparison between ARIMA and LSTM, it was found that ARIMA demonstrated higher modeling efficiency, rendering it suitable for short-term predictions. In contrast, the LSTM model exhibited greater capacity to capture long-term performance trends and enhanced early warning capabilities, thereby resulting in superior overall performance.
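The rolling-window, interval-deviation idea described above can be sketched minimally as follows, with a plain rolling mean-and-deviation interval standing in for the fitted ARIMA/LSTM predictive models; the window size and width factor `k` are illustrative assumptions.

```python
from statistics import mean, stdev

def rolling_interval_anomalies(series, window=20, k=3.0):
    """Flag points falling outside a prediction interval built from the
    preceding rolling window (a stand-in for the ARIMA/LSTM interval)."""
    flags = []
    for t in range(window, len(series)):
        hist = series[t - window:t]          # rolling modeling window
        mu, sigma = mean(hist), stdev(hist)
        lo, hi = mu - k * sigma, mu + k * sigma
        flags.append((t, not (lo <= series[t] <= hi)))
    return flags
```

In the paper's scheme, the interval would instead come from the dynamic predictive model refit on each window, but the deviation test on unannotated data is the same.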
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier with 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
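The genetic-algorithm-driven adaptive feature selection mentioned above can be illustrated with a tiny GA over binary feature masks. The operators, rates, and fitness function below are hypothetical stand-ins; the abstract does not specify GAADPSDNN's internals.

```python
import random

def ga_select_features(fitness, n_features, pop_size=20, generations=30, seed=0):
    """Toy genetic algorithm over binary feature masks (illustrative only)."""
    rng = random.Random(seed)
    # Seed the population with the empty mask plus random masks.
    pop = [[0] * n_features]
    pop += [[rng.randint(0, 1) for _ in range(n_features)]
            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitism: best half survives
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # pick two elite parents
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.1:            # occasional bit-flip mutation
                i = rng.randrange(n_features)
                child[i] = 1 - child[i]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In practice the fitness would score a classifier trained on the masked features minus a cost term for the number of active features, which is how feature selection reduces the load on IoT devices.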
Funding: This study was funded by the Chongqing Normal University Startup Foundation for PhD (22XLB021) and was also supported by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2023B40).
Abstract: The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which will impact the normal operation of IoT. To mitigate the impact caused by the high false positive rate of ADS, this paper proposes an ADS management scheme for clustered IoT. First, we model the data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. In the presence of a high false positive rate in ADSs, to deal with the trade-off between the security and availability of data, we develop a linear programming model referred to as a security trade-off (ST) model. Next, we develop an analysis framework for the ST model, and solve the ST model on an IoT simulation platform. Last, we reveal the effect of some factors on the maximum combined detection rate through theoretical analysis. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by the high false positive rates in ADS.
Funding: Supported in part by the National Natural Science Foundation of China (Grants 62376172, 62006163, 62376043), in part by the National Postdoctoral Program for Innovative Talents (Grant BX20200226), and in part by the Sichuan Science and Technology Planning Project (Grants 2022YFSY0047, 2022YFQ0014, 2023ZYD0143, 2022YFH0021, 2023YFQ0020, 24QYCX0354, 24NSFTD0025).
Abstract: Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within the time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62272062, the Scientific Research Fund of Hunan Provincial Transportation Department (No. 202143), and the Open Fund of the Key Laboratory of Safety Control of Bridge Engineering, Ministry of Education (Changsha University of Science & Technology) under Grant 21KB07.
Abstract: Log anomaly detection is an important paradigm for system troubleshooting. Existing log anomaly detection based on Long Short-Term Memory (LSTM) networks is time-consuming when handling long sequences. The Transformer model has been introduced to improve efficiency. However, most existing Transformer-based log anomaly detection methods convert unstructured log messages into structured templates by log parsing, which introduces parsing errors. They extract only simple semantic features, ignoring other features, and are generally supervised, relying on large amounts of labeled data. To overcome the limitations of existing methods, this paper proposes a novel unsupervised log anomaly detection method based on multiple features (UMFLog). UMFLog includes two sub-models to consider two kinds of features: semantic features and statistical features, respectively. UMFLog applies the original log content with detailed parameters instead of templates or template IDs to avoid log parsing errors. In the first sub-model, UMFLog uses Bidirectional Encoder Representations from Transformers (BERT) instead of random initialization to extract effective semantic features, and an unsupervised hypersphere-based Transformer model to learn compact log sequence representations and obtain anomaly candidates. In the second sub-model, UMFLog exploits a statistical-feature-based Variational Autoencoder (VAE) over word occurrence counts to identify the final anomalies from the anomaly candidates. Extensive experiments and evaluations are conducted on three real public log datasets. The results show that UMFLog significantly improves F1-scores compared to the state-of-the-art (SOTA) methods because of the multiple features.
Funding: Supported by the National Natural Science Foundation of China (71901210, 61973310).
Abstract: Solar arrays are important and indispensable parts of spacecraft and provide energy support for spacecraft to operate in orbit and complete on-orbit missions. When a spacecraft is in orbit, because the solar array is exposed to the harsh space environment, with increasing working time, the performance of its internal electronic components gradually degrades until abnormal damage occurs. This damage leaves solar array power generation unable to fully meet the energy demand of a spacecraft. Therefore, timely and accurate detection of solar array anomalies is of great significance for the on-orbit operation and maintenance management of spacecraft. In this paper, we propose an anomaly detection method for spacecraft solar arrays based on the integrated least squares support vector machine (ILS-SVM) model: it selects correlated telemetry data from spacecraft solar arrays to form a training set and extracts n groups of training subsets from this set; n corresponding least squares support vector machine (LS-SVM) submodels are then obtained by training on these subsets, respectively; after that, the ILS-SVM model is obtained by integrating these submodels through a weighting operation to increase the prediction accuracy; finally, based on the obtained ILS-SVM model, a parameter-free and unsupervised anomaly determination method is proposed to detect the health status of solar arrays. We use a telemetry data set from a satellite in orbit to carry out experimental verification and find that the proposed method can diagnose solar array anomalies in time and can capture the signs before a solar array anomaly occurs, which reflects the applicability of the method.
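The weighting-based integration of submodels might look like the following sketch, which averages submodel predictions with weights inverse to each submodel's validation error; the abstract does not specify the actual ILS-SVM weighting rule, so this is one plausible reading.

```python
def integrate_submodels(predictions, val_errors):
    """Average submodel outputs, weighting each inversely to its
    validation error (one plausible reading of the integration step)."""
    inv = [1.0 / (e + 1e-12) for e in val_errors]   # small eps avoids /0
    total = sum(inv)
    weights = [x / total for x in inv]               # weights sum to 1
    horizon = len(predictions[0])
    return [sum(w * p[t] for w, p in zip(weights, predictions))
            for t in range(horizon)]
```

Submodels that fit their training subset poorly are thus down-weighted, which is the usual motivation for weighted rather than plain averaging in such ensembles.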
Funding: Supported by the National Natural Science Foundation of China (NSFC) (U1704158), the Henan Province Technologies Research and Development Project of China (212102210103), the NSFC Development Funding of Henan Normal University (2020PL09), and the University of Manitoba Research Grants Program (URGP).
Abstract: Despite the big success of transfer learning techniques in anomaly detection, it is still challenging to achieve a good transition of detection rules merely based on the preferred data in anomaly detection with one-class classification, especially for data with a large distribution difference. To address this challenge, a novel deep one-class transfer learning algorithm with domain-adversarial training is proposed in this paper. First, by integrating a hypersphere adaptation constraint into a domain-adversarial neural network, a new hypersphere adversarial training mechanism is designed. Second, an alternative optimization method is derived to seek the optimal network parameters while pushing the hyperspheres built in the source domain and target domain to be as identical as possible. Through transferring the one-class detection rule in the adaptive extraction of domain-invariant feature representation, the end-to-end anomaly detection with one-class classification is then enhanced. Furthermore, a theoretical analysis of the model reliability, as well as the strategy for avoiding invalid and negative transfer, is provided. Experiments are conducted on two typical anomaly detection problems, i.e., image recognition detection and online early fault detection of rolling bearings. The results demonstrate that the proposed algorithm outperforms the state-of-the-art methods in terms of detection accuracy and robustness.
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03040583).
Abstract: Explainable AI extracts a variety of patterns from data in the learning process and draws out hidden information through the discovery of semantic relationships. It is possible to offer an explainable basis of decision-making for inference results. Through the causality of risk factors that have an ambiguous association in big medical data, it is possible to increase the transparency and reliability of explainable decision-making that helps to diagnose disease status. In addition, the technique makes it possible to accurately predict disease risk for anomaly detection. A vision transformer for anomaly detection from image data performs classification through an MLP. Unfortunately, in an MLP, a vector value depends on patch sequence information, and thus a weight changes. This raises the problem that the result value differs according to the change in the weight. In addition, since the deep learning model is a black-box model, it is difficult to interpret the results determined by the model. Therefore, there is a need for an explainable method for the part where the disease exists. To solve these problems, this study proposes explainable anomaly detection using vision transformer-based Deep Support Vector Data Description (SVDD). The proposed method applies SVDD to solve the problem of the MLP, in which a result value differs depending on a weight change that is influenced by the patch sequence information used in the vision transformer. To provide explainability for model results, it visualizes normal parts through Grad-CAM. In health data, both medical staff and patients are able to identify abnormal parts easily. In addition, it is possible to improve the reliability of models for medical staff. For performance evaluation, normal/abnormal classification accuracy and F-measure are evaluated according to whether SVDD is applied. The classification results obtained by applying the proposed SVDD are excellent. Therefore, through the proposed method, it is possible to improve the reliability of decision-making by identifying the location of the disease and deriving consistent results.
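The SVDD decision rule underlying this approach can be sketched as a distance-to-centre score: embeddings of normal data define a hypersphere centre, and samples far from it are anomalous. The helper names are hypothetical, and taking the centre as a plain mean stands in for the learned embedding network.

```python
from statistics import mean

def svdd_fit(embeddings):
    """Hypersphere centre = mean of normal embeddings (Deep SVDD style)."""
    dim = len(embeddings[0])
    return [mean(e[d] for e in embeddings) for d in range(dim)]

def svdd_score(center, x):
    """Anomaly score: squared distance from the centre; compare it to a
    radius chosen from scores of held-out normal samples."""
    return sum((a - b) ** 2 for a, b in zip(x, center))
```

Because the score depends only on the embedding's distance to a fixed centre, it avoids the MLP's sensitivity to patch-sequence-dependent weights described above.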
Funding: Supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (MOTIE) (20224B10100140, 50%), the Nuclear Safety Research Program through the Korea Foundation of Nuclear Safety (KoFONS) using the financial resources granted by the Nuclear Safety and Security Commission (NSSC) of the Republic of Korea (No. 2106058, 40%), and the Gachon University Research Fund of 2023 (GCU-202110280001, 10%).
Abstract: As energy-related problems continue to emerge, the need for stable energy supplies and issues regarding both the environment and safety require urgent consideration. Renewable energy is becoming increasingly important, with solar power accounting for the most significant proportion of renewables. As the scale and importance of solar energy have increased, cyber threats against solar power plants have also increased. So, we need an anomaly detection system that effectively detects cyber threats to solar power plants. However, the existing solar power plant anomaly detection systems monitor only operating information such as power generation, making it difficult to detect cyberattacks. To address this issue, in this paper, we propose a network packet-based anomaly detection system for the Programmable Logic Controller (PLC) of the inverter, an essential system of photovoltaic plants, to detect cyber threats. Cyberattacks and vulnerabilities in solar power plants were analyzed to identify cyber threats in solar power plants. The analysis shows that Denial of Service (DoS) and Man-in-the-Middle (MitM) attacks are primarily carried out on inverters, aiming to disrupt solar plant operations. To develop the anomaly detection system, we performed preprocessing, such as correlation analysis and normalization, on PLC network packet data and trained various machine learning-based classification models on such data. The Random Forest model showed the best performance, with an accuracy of 97.36%. The proposed system can detect anomalies based on network packets, identify potential cyber threats that cannot be identified by the anomaly detection systems currently in use in solar power plants, and enhance the security of solar plants.
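The preprocessing steps named above (correlation analysis and normalization) can be sketched as follows; the function names, the [0, 1] scaling range, and using Pearson correlation to spot redundant features are assumptions, not details from the paper.

```python
from statistics import mean

def min_max_normalize(columns):
    """Per-feature min-max scaling to [0, 1]; constant columns map to 0.0."""
    scaled = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = hi - lo
        scaled.append([(v - lo) / span if span else 0.0 for v in col])
    return scaled

def pearson(x, y):
    """Pearson correlation, e.g. for dropping near-duplicate packet features."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

Highly correlated feature pairs would typically be pruned before training the Random Forest and the other classifiers.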
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072074, 62076054, 62027827, 61902054, 62002047), the Frontier Science and Technology Innovation Projects of the National Key R&D Program (No. 2019QY1405), the Sichuan Science and Technology Innovation Platform and Talent Plan (No. 2020TDT00020), and the Sichuan Science and Technology Support Plan (No. 2020YFSY0010).
Abstract: Modern large-scale enterprise systems produce large volumes of logs that record detailed system runtime status and key events at key points. These logs are valuable for analyzing performance issues and understanding the status of the system. Anomaly detection plays an important role in service management and system maintenance, and guarantees the reliability and security of online systems. Logs are universal semi-structured data, which causes difficulties for traditional manual detection and pattern-matching algorithms. While some deep learning algorithms utilize neural networks to detect anomalies, these approaches over-rely on manually designed features, so the effectiveness of anomaly detection depends on the quality of the features. At the same time, the aforementioned methods ignore the underlying contextual information present in adjacent log entries. We propose a novel model called Logformer with two cascaded transformer-based heads to capture latent contextual information from adjacent log entries, and leverage pre-trained embeddings based on logs to improve the representation of the embedding space. The proposed model achieves comparable results on the HDFS and BGL datasets in terms of the metrics accuracy, recall, and F1-score. Moreover, the consistent rise in F1-score proves that the representation of the embedding space with pre-trained embeddings is closer to the semantic information of the log.
Funding: Supported by the National Natural Science Foundation of China (62203431).
Abstract: The widespread usage of Cyber Physical Systems (CPSs) generates a vast volume of time series data, and precisely determining anomalies in the data is critical for practical production. The autoencoder is the mainstream method for time series anomaly detection, and the anomaly is judged by reconstruction error. However, due to the strong generalization ability of neural networks, some abnormal samples close to normal samples may be judged as normal, which fails to detect the abnormality. In addition, datasets rarely provide sufficient anomaly labels. This research proposes an unsupervised anomaly detection approach based on adversarial memory autoencoders for multivariate time series to solve the above problems. Firstly, an encoder encodes the input data into a low-dimensional space to acquire a feature vector. Then, a memory module is used to learn the feature vector's prototype patterns and update the feature vectors. The updating process allows partial forgetting of information to prevent model over-generalization. After that, two decoders reconstruct the input data. Finally, this research uses the Peak Over Threshold (POT) method to calculate the threshold that separates anomalous samples from normal samples. This research uses a two-stage adversarial training strategy during model training to enlarge the gap between the reconstruction errors of normal and abnormal samples. The proposed method achieves significant anomaly detection results on synthetic and real datasets from power systems, water treatment plants, and computer clusters. The F1 score reached an average of 0.9196 on the five datasets, which is 0.0769 higher than the best baseline method.
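The thresholding step can be approximated with a plain empirical quantile over reconstruction errors; note the real POT method additionally fits a generalised Pareto distribution to the tail excesses, which is omitted here, and `q` is an illustrative value.

```python
def pot_threshold(errors, q=0.98):
    """Empirical-quantile stand-in for the POT threshold (the actual POT
    method fits a generalised Pareto tail, omitted in this sketch)."""
    s = sorted(errors)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def detect(errors, threshold):
    """Flag every reconstruction error strictly above the threshold."""
    return [e > threshold for e in errors]
```

The advantage of POT over a fixed quantile is that the fitted tail extrapolates a threshold for rarer events than those seen during calibration.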
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (Grant No. 620RC562), the Liaoning Provincial Natural Science Foundation: Industrial Internet Identification Data Association Analysis Based on Machine Online Learning (Grant No. 2022-KF-12-11), and the Scientific Research Project of the Educational Department of Liaoning Province (Grant No. LJKZ0082).
Abstract: The process-control-oriented threat, which can exploit OT (Operational Technology) vulnerabilities to forcibly insert abnormal control commands or status information, has become one of the most devastating cyber attacks in industrial automation control. To effectively detect this threat, this paper proposes a functional-pattern-related anomaly detection approach, which skillfully combines the BinSeg (Binary Segmentation) algorithm with an FSM (Finite State Machine) to identify anomalies between measuring data and control data. By detecting the change points of measuring data, the BinSeg algorithm is introduced to generate initial sequence segments, which can be further classified and merged into different functional patterns according to their backward difference means and lengths. After analyzing the pattern association according to a Bayesian network, a functional state transition model based on the FSM, which accurately describes the whole control and monitoring process, is constructed as a feasible detection engine. Finally, we use the typical SWaT (Secure Water Treatment) dataset to evaluate the proposed approach, and the experimental results show that: for one thing, compared with other change-point detection approaches, the BinSeg algorithm is more suitable for the optimal sequence segmentation of measuring data due to its highest detection accuracy and least consumed time; for another, the proposed approach exhibits relatively excellent detection ability, because the average detection precision, recall rate, and F1-score for identifying 10 different attacks reach 0.872, 0.982, and 0.896, respectively.
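A toy version of the FSM detection engine: a table of permitted functional-state transitions and a check that flags violations. The state names and transition table are invented for illustration; in the paper they are learned from segmented SWaT measuring data.

```python
# Hypothetical permitted transitions between functional patterns; the paper
# derives these from segmented measuring data via a Bayesian network.
ALLOWED = {
    "filling": {"filling", "steady"},
    "steady": {"steady", "draining"},
    "draining": {"draining", "filling"},
}

def fsm_anomalies(states, allowed=ALLOWED):
    """Return indices where the observed state transition is not permitted,
    i.e. where measuring data contradicts the learned control process."""
    return [i for i in range(1, len(states))
            if states[i] not in allowed.get(states[i - 1], set())]
```

An injected control command that forces, say, a drain-to-steady jump would surface as a disallowed transition even if each individual reading looks plausible.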
Funding: Supported by the National Natural Science Foundation of China (U20B2045).
Abstract: System logs are essential for detecting anomalies, querying faults, and tracing attacks. Because manual system troubleshooting and anomaly detection are time-consuming and labor-intensive, they cannot meet actual needs. The implementation of automated log anomaly detection is a topic that demands urgent research. However, prior work on processing log data is mainly one-dimensional and cannot deeply learn the complex associations in log data. Meanwhile, there is a lack of attention to the utilization of log labels, and detection usually relies on a large number of labels. This paper proposes a novel and practical detection model named LCC-HGLog, the core of which is the conversion of log anomaly detection into a graph classification problem. Semantic temporal graphs (STG) are constructed by extracting the raw logs' execution sequences and template semantics. Then a unique graph classifier is used to better comprehend each STG's semantic, sequential, and structural features. The classification model is trained jointly with a graph classification loss and a label contrastive loss. While achieving discriminability at the class level, it increases fine-grained identification at the instance level, thus achieving detection performance even with a small amount of labeled data. We have conducted numerous experiments on real log datasets, showing that the proposed model outperforms the baseline methods and obtains the best all-around performance. Moreover, the detection performance degrades by less than 1% when only 10% of the labeled data is used. With 200 labeled samples, we can achieve the same or better detection results than the baseline methods.
Funding: Supported by the Major National Science and Technology Special Projects (2016ZX02301003-004-007) and the Natural Science Foundation of Hebei Province (F2020202067).
Abstract: Some reconstruction-based anomaly detection models for multivariate time series have brought impressive performance advancements but suffer from weak generalization ability and a lack of anomaly identification. These limitations can result in the misjudgment of models, leading to a degradation in overall detection performance. This paper proposes a novel transformer-like anomaly detection model adopting a contrastive learning module and a memory block (CLME) to overcome the above limitations. The contrastive learning module, tailored for time series data, can learn the contextual relationships to generate temporal fine-grained representations. The memory block can record normal patterns of these representations through attention-based addressing and reintegration mechanisms. These two modules together effectively alleviate the problem of generalization. Furthermore, this paper introduces a fusion anomaly detection strategy that comprehensively takes into account both the residual and feature spaces. Such a strategy can enlarge the discrepancies between normal and abnormal data, which is more conducive to anomaly identification. The proposed CLME model not only efficiently enhances the generalization performance but also improves the ability of anomaly detection. To validate the efficacy of the proposed approach, extensive experiments are conducted on well-established benchmark datasets, including SWaT, PSM, WADI, and MSL. The results demonstrate outstanding performance, with F1 scores of 90.58%, 94.83%, 91.58%, and 91.75%, respectively. These findings affirm the superiority of the CLME model over existing state-of-the-art anomaly detection methodologies in terms of its ability to accurately detect anomalies within complex datasets.
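The residual/feature fusion strategy can be sketched as a convex combination of min-max-normalised scores from the two spaces; `alpha` and the normalisation scheme are assumptions rather than the paper's exact rule.

```python
def fuse_scores(residual, feature, alpha=0.5):
    """Convex combination of min-max-normalised residual-space and
    feature-space anomaly scores (alpha is an illustrative weight)."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    r, f = norm(residual), norm(feature)
    return [alpha * a + (1 - alpha) * b for a, b in zip(r, f)]
```

Normalising first keeps one space's larger numeric range from drowning out the other before the scores are combined.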
Funding: Supported by the National Natural Science Foundation of China (No. 62076042, No. 62102049), the Key Research and Development Project of Sichuan Province (No. 2021YFSY0012, No. 2020YFG0307, No. 2021YFG0332), the Science and Technology Innovation Project of Sichuan (No. 2020017), the Key Research and Development Project of Chengdu (No. 2019-YF05-02028-GX), the Innovation Team of Quantum Security Communication of Sichuan Province (No. 17TD0009), and the Academic and Technical Leaders Training Funding Support Projects of Sichuan Province (No. 2016120080102643).
Abstract: Nowadays, industrial control systems (ICS) have begun to integrate with the Internet. While the Internet has brought convenience to ICS, it has also brought severe security concerns. Traditional ICS network traffic anomaly detection methods rely on statistical features manually extracted using the experience of network security experts. They are not aimed at the original network data, nor can they capture the potential characteristics of network packets. Therefore, the following improvements were made in this study: (1) A dataset that can be used to evaluate anomaly detection algorithms is produced, which provides raw network data. (2) A request-response-based convolutional neural network named RRCNN is proposed, which can be used for anomaly detection of ICS network traffic. Instead of using statistical features manually extracted by security experts, this method directly uses the byte sequences of the original network packets, which can extract potential features of the network packets in greater depth. It regards the request packet and response packet in a session as a Request-Response Pair (RRP). The features of an RRP are extracted using a one-dimensional convolutional neural network, and the RRP is then judged to be normal or abnormal based on the extracted features. Experimental results demonstrate that this model is better than several other machine learning and neural network models, with F1, accuracy, precision, and recall above 99%.
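Turning an RRP into a fixed-width input for a 1-D CNN could look like the sketch below; the 64-byte width, the even request/response split, zero padding, and the [0, 1] scaling are illustrative choices, not details from the paper.

```python
def rrp_to_vector(request: bytes, response: bytes, width=64):
    """Concatenate raw request/response byte values, pad or truncate to a
    fixed width, and scale to [0, 1] as input for a 1-D CNN."""
    raw = list(request[: width // 2]) + list(response[: width // 2])
    raw += [0] * (width - len(raw))   # zero-pad short pairs
    return [b / 255.0 for b in raw]
```

Feeding raw byte values rather than hand-crafted statistics is exactly what lets the convolution layers learn packet structure directly.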
Funding: This research was supported by the Chung-Ang University Research Scholarship Grants in 2021 and the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports, and Tourism in 2022 (Project Name: Development of Digital Quarantine and Operation Technologies for Creation of Safe Viewing Environment in Cultural Facilities, Project Number: R2021040028, Contribution Rate: 100%).
Abstract: In the present technological world, surveillance cameras generate an immense amount of video data from various sources, making its scrutiny tough for computer vision specialists. It is difficult to search for anomalous events manually in these massive video records, since they happen infrequently and with a low probability in real-world monitoring systems. Therefore, intelligent surveillance is a requirement of the modern day, as it enables the automatic identification of normal and aberrant behavior using artificial intelligence and computer vision technologies. In this article, we introduce an efficient attention-based deep-learning approach for anomaly detection in surveillance video (ADSV). At the input of the ADSV, a shot boundary detection technique is used to segment prominent frames. Next, the Lightweight Convolution Neural Network (LWCNN) model receives the segmented frames to extract spatial and temporal information from the intermediate layer. Following that, spatial and temporal features are learned using Long Short-Term Memory (LSTM) cells and an Attention Network from a series of frames for each anomalous activity in a sample. To detect motion and action, the LWCNN receives chronologically sorted frames. Finally, the anomalous activity in the video is identified using the proposed trained ADSV model. Extensive experiments are conducted on complex and challenging benchmark datasets. In addition, the experimental results have been compared to state-of-the-art methodologies, and a significant improvement is attained, demonstrating the efficiency of our ADSV method.
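The shot boundary detection step at the ADSV input can be sketched with a mean-absolute-difference test between consecutive frames; the threshold value and the flat-float-list frame representation are assumptions for illustration.

```python
def shot_boundaries(frames, threshold=0.3):
    """Mark a boundary where the mean absolute pixel difference between
    consecutive frames exceeds the threshold (frames as flat float lists)."""
    cuts = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff / len(frames[i]) > threshold:
            cuts.append(i)
    return cuts
```

Frames at detected cuts are the "prominent frames" handed to the downstream LWCNN, so only a fraction of the raw stream needs deep feature extraction.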
Funding: Supported by the National Key Research and Development Program of China (2022YFB4500800).
Abstract: As cloud system architectures evolve continuously, the interactions among distributed components in various roles become increasingly complex. This complexity makes it difficult to detect anomalies in cloud systems. The system status can no longer be determined through individual key performance indicators (KPIs) but through joint judgments based on synergistic relationships among distributed components. Furthermore, anomalies in modern cloud systems are usually not sudden crashes but rather gradual, chronic, localized failures or quality degradations in a weakly available state. Therefore, accurately modeling cloud systems and mining the hidden system state is crucial. To address this challenge, we propose an anomaly detection method with dynamic spatiotemporal learning (AD-DSTL). AD-DSTL leverages the spatiotemporal dynamics of the system to train an end-to-end deep learning model, driven by data from system monitoring, to detect underlying anomalous states in complex cloud systems. Unlike previous work that focuses on the KPIs of separate components, AD-DSTL builds a model for the entire system and characterizes its spatiotemporal dynamics based on graph convolutional networks (GCN) and long short-term memory (LSTM). We validated AD-DSTL using four datasets from different backgrounds, and it demonstrated superior robustness compared to other baseline algorithms. Moreover, when raising the target exception level, both the recall and precision of AD-DSTL reached approximately 0.9. Our experimental results demonstrate that AD-DSTL can meet the requirements of anomaly detection for complex cloud systems.
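The spatial half of the GCN+LSTM pairing mentioned here can be sketched as a single graph-convolution step over a component-interaction graph. This is a generic sketch of the standard symmetric-normalization GCN layer, not AD-DSTL itself; the adjacency matrix, feature matrix, and weight matrix shapes are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, H, W):
    """One graph-convolution step: ReLU(A_norm @ H @ W).

    A_norm: (N, N) normalized adjacency over N cloud components;
    H:      (N, D) per-component KPI features at one time step;
    W:      (D, D_out) hypothetical learned weight matrix.
    """
    return np.maximum(A_norm @ H @ W, 0.0)
```

In a GCN+LSTM design such as the one described, the per-timestep outputs of layers like this would be fed as a sequence into an LSTM to capture the temporal dynamics.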
Funding: None received from any funding agency in the public, commercial, or not-for-profit sectors.
Abstract: Online banking fraud occurs whenever a criminal can seize accounts and transfer funds from an individual's online bank account. Successfully preventing this requires the detection of as many fraudsters as possible without producing too many false alarms. This is a challenge for machine learning owing to the extremely imbalanced data and the complexity of fraud. In addition, classical machine learning methods must be extended to minimize expected financial losses. Finally, fraud can only be combated systematically and economically if the risks and costs in payment channels are known. We define three models that overcome these challenges: machine learning-based fraud detection, economic optimization of machine learning results, and a risk model to predict the risk of fraud while considering countermeasures. The models were tested utilizing real data. Our machine learning model alone reduces the expected and unexpected losses in the three aggregated payment channels by 15% compared to a benchmark consisting of static if-then rules. Optimizing the machine learning model further reduces the expected losses by 52%. These results hold with a low false positive rate of 0.4%. Thus, the risk framework of the three models is viable from a business and risk perspective.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62076199, in part by the Open Research Fund of Beijing Key Laboratory of Big Data Technology for Food Safety under Grant BTBD-2020KF08, Beijing Technology and Business University, and in part by the Key R&D Project of Shaanxi Province under Grants 2021GY-027 and 2022ZDLGY01-03.
Abstract: Recently, autoencoder (AE)-based methods have played a critical role in the hyperspectral anomaly detection domain. However, due to the strong generalisation capacity of AEs, abnormal samples are usually reconstructed well along with the normal background samples. Thus, in order to separate anomalies from the background by calculating reconstruction errors, it can be greatly beneficial to reduce the AE's capability for abnormal sample reconstruction while maintaining the background reconstruction performance. A memory-augmented autoencoder for hyperspectral anomaly detection (MAENet) is proposed to address this challenging problem. Specifically, the proposed MAENet mainly consists of an encoder, a memory module, and a decoder. First, the encoder transforms the original hyperspectral data into a low-dimensional latent representation. Then, the latent representation is utilised to retrieve the most relevant items in the memory matrix, and the retrieved items replace the latent representation from the encoder. Finally, the decoder reconstructs the input hyperspectral data using the retrieved memory items. With this strategy, the background can still be reconstructed well while the abnormal samples cannot. Experiments conducted on five real hyperspectral anomaly data sets demonstrate the superiority of the proposed method.
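The memory-module step this abstract describes, retrieving memory items to replace the encoder's latent code, can be sketched as soft addressing over a memory matrix. This is a simplified illustration, not MAENet's actual module: the dot-product similarity and plain softmax addressing are assumptions (memory-augmented AEs often add sparsity constraints on the weights).

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def memory_retrieve(z, M):
    """Replace a latent code with a combination of memory items.

    z: (D,) latent code from the encoder;
    M: (N, D) memory matrix of N prototypical background patterns.
    """
    w = softmax(M @ z)   # (N,) addressing weights over memory items
    return w @ M         # (D,) memory-based latent fed to the decoder
```

Because the decoder only ever sees combinations of memorized background patterns, an anomalous input cannot be reconstructed faithfully, which is exactly what makes its reconstruction error large.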
Abstract: Automated live video stream analytics has been extensively researched in recent times. Most traditional methods for video anomaly detection are supervised and use a single classifier to identify an anomaly in a frame. We propose a 3-stage ensemble-based unsupervised deep reinforcement algorithm with an underlying Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN). In the first stage, an ensemble of LSTM-RNNs is deployed to generate the anomaly score. The second stage uses the least-squares method for optimal anomaly score generation. The third stage adopts award-based reinforcement learning to update the model. The proposed Hybrid Ensemble RR Model was tested on the standard pedestrian datasets UCSD Ped1 and UCSD Ped2. The data set has 70 videos in UCSD Ped1 and 28 videos in UCSD Ped2, with a total of 18,560 frames. Since a real-time stream has strict memory constraints and storage issues, a simple computing machine does not suffice for performing analytics on stream data. Hence the proposed research is designed to work on a GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit) supported framework. As shown in the experimental results section, recorded observations on frame-level EER (Equal Error Rate) and AUC (Area Under Curve) showed a 9% reduction in EER on UCSD Ped1, a 13% reduction in EER on UCSD Ped2, and a 4% improvement in accuracy on both datasets.
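One plausible reading of the second stage, least-squares combination of the ensemble's anomaly scores, is fitting weights over the ensemble members against a reference score. This sketch is an assumption about that stage, not the paper's formulation: the reference vector and the unconstrained least-squares fit are both illustrative choices.

```python
import numpy as np

def lstsq_combine(S, reference):
    """Combine ensemble anomaly scores with least-squares weights.

    S:         (T, k) per-frame anomaly scores from k LSTM-RNN members;
    reference: (T,) target score the combination is fitted to
               (hypothetical; e.g. a held-out or consensus score).
    """
    w, *_ = np.linalg.lstsq(S, reference, rcond=None)
    return S @ w   # (T,) combined anomaly score
```

In the described pipeline the combined score would then drive the reward-based reinforcement update in the third stage.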