Deterministic inversion based on deep learning has been widely utilized in model parameter estimation. Constrained by logging data, seismic data, the wavelet and the modeling operator, deterministic inversion based on deep learning can establish nonlinear relationships between seismic data and model parameters. However, seismic data lack low-frequency content and contain noise, which increases the non-uniqueness of the solutions. Conventional inversion methods based on deep learning can only establish a deterministic relationship between seismic data and parameters, and cannot quantify the uncertainty of the inversion. To quantify this uncertainty quickly, a physics-guided deep mixture density network (PG-DMDN) is established by combining a mixture density network (MDN) with a deep neural network (DNN). Compared with a Bayesian neural network (BNN) and network dropout, PG-DMDN has lower computing cost and shorter training time. A low-frequency model is introduced during network training to help the network learn the nonlinear relationship between narrowband seismic data and low-frequency impedance. In addition, block constraints are added to the PG-DMDN framework to improve the horizontal continuity of the inversion results. To illustrate the benefits of the proposed method, PG-DMDN is compared with an existing semi-supervised inversion method. Four synthetic data examples based on the Marmousi II model are used to quantify the influence of the forward modeling part, the low-frequency model, noise, and the number of pseudo-wells on the inversion results, and to demonstrate the feasibility and stability of the proposed method. The robustness and generality of the proposed method are further verified on field seismic data.
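The uncertainty quantification described above comes from the mixture-density head: rather than a single impedance value, the network outputs mixture weights, means and standard deviations, and is trained with the mixture negative log-likelihood; the predictive standard deviation then serves as a per-sample uncertainty. A minimal, library-free sketch of that loss and of the predictive statistics for a 1-D Gaussian mixture (all values hypothetical, not taken from the paper):

```python
import math

def mdn_nll(pis, mus, sigmas, y):
    """Negative log-likelihood of y under a 1-D Gaussian mixture --
    the loss a mixture density network head is trained with."""
    density = sum(
        pi * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for pi, mu, s in zip(pis, mus, sigmas)
    )
    return -math.log(density)

def mixture_stats(pis, mus, sigmas):
    """Predictive mean and standard deviation of the mixture --
    the std is the pointwise uncertainty the network reports."""
    mean = sum(p * m for p, m in zip(pis, mus))
    var = sum(p * (s ** 2 + m ** 2) for p, m, s in zip(pis, mus, sigmas)) - mean ** 2
    return mean, math.sqrt(var)
```

For a single-component mixture, `mixture_stats` reduces to that component's mean and standard deviation, which is a quick sanity check on the variance formula.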
Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
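The evolutionary loop behind such design-and-compression searches can be sketched without any CNN training at all: encode an architecture as a genome, score it with a fitness that trades off an accuracy proxy against model size, and iterate selection plus mutation. The surrogate fitness below is purely hypothetical (a real system would train and evaluate each candidate network):

```python
import random

def fitness(genome):
    # Hypothetical surrogate: favour moderate total capacity (accuracy
    # proxy) while mildly penalising depth (compression objective).
    capacity = sum(genome)
    return -(capacity - 12) ** 2 - 0.1 * len(genome)

def evolve(pop_size=20, genome_len=6, generations=30, seed=0):
    """(mu + lambda)-style search over genomes of layer widths in 1..8."""
    rng = random.Random(seed)
    pop = [[rng.randint(1, 8) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for p in parents:
            child = p[:]
            i = rng.randrange(genome_len)
            child[i] = max(1, child[i] + rng.choice([-1, 1]))  # mutate one width
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping the surrogate for "fine-tune the decoded CNN and return validation accuracy minus a parameter-count penalty" turns this toy into the scheme the abstract describes.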
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With large amounts of previously collected external data, these methods are typically realised under full supervision of the ground-truth data. Thus, database construction in the research paradigm that merges a low-resolution hyperspectral (LR-HS) image with an HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting corresponding training triplets (HR-MS/RGB, LR-HS and HR-HS images) simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade HSI super-resolution performance on real images captured in diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by preparing the training triplet samples on-line from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which results in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations were then formulated for measuring the network modelling performance. By consolidating the DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and has great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
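The deep internal learning idea above hinges on a degradation module that manufactures training pairs from the observation itself: the observed image is down-sampled further, and the model learns to undo that degradation. One common stand-in for the spatial degradation is block averaging, sketched here on a plain 2-D list (the paper's actual degradation operators may differ):

```python
def spatial_downsample(img, factor):
    """Average each factor x factor block of a 2-D image (list of rows)
    -- a stand-in for the spatial degradation module used to generate
    lower-resolution training samples from the observation itself."""
    h, w = len(img) // factor, len(img[0]) // factor
    return [
        [
            sum(img[i * factor + di][j * factor + dj]
                for di in range(factor) for dj in range(factor)) / factor ** 2
            for j in range(w)
        ]
        for i in range(h)
    ]
```

Applying the same operator per spectral band, plus a spectral response matrix across bands, would give the full spatial/spectral degradation pair the abstract refers to.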
Solving constrained multi-objective optimization problems with evolutionary algorithms has attracted considerable attention. Various constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been developed with the use of different algorithmic strategies, evolutionary operators, and constraint-handling techniques. The performance of CMOEAs may be heavily dependent on the operators used; however, it is usually difficult to select suitable operators for the problem at hand. Hence, improving operator selection is promising and necessary for CMOEAs. This work proposes an online operator selection framework assisted by Deep Reinforcement Learning. The dynamics of the population, including convergence, diversity, and feasibility, are regarded as the state; the candidate operators are considered as actions; and the improvement of the population state is treated as the reward. By using a Q-network to learn a policy that estimates the Q-values of all actions, the proposed approach can adaptively select the operator that maximizes the improvement of the population according to the current state and thereby improve algorithmic performance. The framework is embedded into four popular CMOEAs and assessed on 42 benchmark problems. The experimental results reveal that the proposed Deep Reinforcement Learning-assisted operator selection significantly improves the performance of these CMOEAs, and the resulting algorithms obtain better versatility compared to nine state-of-the-art CMOEAs.
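The state/action/reward mapping above is ordinary Q-learning with operators as actions. A tabular sketch makes the loop concrete; the paper uses a Q-network over continuous population statistics, so the discretised state tuple and the hyperparameters here are illustrative stand-ins:

```python
import random
from collections import defaultdict

def select_operator(q_row, n_ops, epsilon, rng):
    """Epsilon-greedy choice among candidate evolutionary operators."""
    if rng.random() < epsilon:
        return rng.randrange(n_ops)
    return max(range(n_ops), key=lambda a: q_row[a])

def q_update(q, state, action, reward, next_state, n_ops,
             alpha=0.1, gamma=0.9):
    """One-step Q-learning update; state is a discretised
    (convergence, diversity, feasibility) tuple, reward is the
    improvement of the population after applying the operator."""
    best_next = max(q[next_state][a] for a in range(n_ops))
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
```

In the full framework each generation would observe the population state, pick an operator with `select_operator`, apply it, measure the improvement, and call `q_update`.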
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive, preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign ones using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in its integration of techniques, fusing deep learning (DL), traditional machine learning (ML), and enhanced classification models on the curated dataset. The analysis shows that the proposed enhanced RF (ERF), enhanced DT (EDT) and enhanced LR (ELR) models for BC detection outperform most existing models with impressive results.
In traditional well log depth matching tasks, manual adjustments are required, which is significantly labor-intensive for multiple wells and leads to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. The method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences on well logs, and it establishes an interaction mechanism between agents and the environment to control the depth matching process. Specifically, each agent selects an action to translate or scale the feature sequence based on a double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs in multiple wells and reduce manual intervention. In an oil field application, a comparative analysis of dynamic time warping (DTW), deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well log feature sequences, thus achieving higher depth matching accuracy.
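The "dual-network evaluation mechanism" credited above is the standard double-DQN bootstrap: the online network selects the best next action, while the separate target network evaluates it, which curbs the value over-estimation of vanilla DQN. A minimal sketch of that target computation (Q-values here are placeholder lists, not outputs of a trained network):

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-DQN bootstrap target: argmax from the online network's
    Q-values, evaluated by the target network's Q-values."""
    if done:
        return reward
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_action]
```

Vanilla DQN would instead take `max(next_q_target)` directly; decoupling selection from evaluation is the entire difference.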
Human Interaction Recognition (HIR) is one of the challenging problems in computer vision research due to the involvement of multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR), since HAR focuses solely on individual activities like walking or running, while HIR involves the interactions between people. This research aims to develop a robust system for recognizing five common human interactions (hugging, kicking, pushing, pointing, and no interaction) from video sequences captured by multiple cameras. In this study, a hybrid Deep Learning (DL) and Machine Learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment with four-channel cameras capturing the five types of interactions among 13 participants. The data was processed by a DL model with a fine-tuned ResNet (Residual Networks) architecture based on 2D Convolutional Neural Network (CNN) layers for feature extraction. Subsequently, machine learning models were trained for interaction classification using six commonly used ML algorithms: SVM, KNN, RF, DT, NB, and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, resulting in highly accurate performance across different interaction types. Future work will explore more complex scenarios involving multiple individuals based on this architecture.
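In such a hybrid pipeline, the CNN backbone collapses each clip into a feature vector and a classical classifier assigns the interaction label. As one stand-in for the ML stage (the study evaluates six classifiers; KNN is just the simplest to show), a library-free k-nearest-neighbours vote over hypothetical 2-D feature vectors:

```python
def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training features (squared Euclidean distance)."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

In practice the feature vectors would be the ResNet embeddings and the labels the five interaction classes; the toy 2-D points below are invented purely to exercise the vote.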
The positional information of objects is crucial to enable robots to perform grasping and pushing manipulations in clutter. To perform these manipulations effectively, robots need to perceive the position information of objects, including the coordinates and spatial relationships between objects (e.g., proximity, adjacency). The authors propose an end-to-end position-aware deep Q-learning framework to achieve efficient collaborative pushing and grasping in clutter. Specifically, a pair of conjugate pushing and grasping attention modules are proposed to capture the position information of objects and generate high-quality affordance maps of operating positions from features of pushing and grasping operations. In addition, the authors propose an object isolation metric and a clutter metric based on instance segmentation to measure the spatial relationships between objects in cluttered environments. To further enhance the perception of object position information, the authors incorporate the change in the object isolation and clutter metrics before and after performing an action into the reward function. A series of experiments carried out in simulation and in the real world indicate that the method improves sample efficiency, task completion rate, grasping success rate and action efficiency compared to state-of-the-art end-to-end methods. Note that the authors' system can be robustly applied in the real world and extended to novel objects. Supplementary material is available at https://youtu.be/NhG_k5v3NnM.
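Tying the reward to the change in the isolation and clutter metrics can be written as simple reward shaping: the agent gets credit when isolation rises and clutter falls after an action. The weights and sign conventions below are illustrative assumptions, not the paper's values:

```python
def shaped_reward(base_reward, iso_before, iso_after,
                  clutter_before, clutter_after,
                  w_iso=0.5, w_clutter=0.5):
    """Augment the base manipulation reward with the change in the
    isolation metric (higher is better) and the clutter metric
    (lower is better), measured before and after the action."""
    return (base_reward
            + w_iso * (iso_after - iso_before)
            + w_clutter * (clutter_before - clutter_after))
```

A push that separates objects (isolation up, clutter down) is rewarded even when no grasp succeeds, which is what makes pushing and grasping collaborative rather than competing actions.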
In the era of internet proliferation, safeguarding digital media copyright and integrity, especially for images, is imperative. Digital watermarking stands out as a pivotal solution for image security. With the advent of deep learning, watermarking has seen significant advancements. Our review focuses on innovative deep watermarking approaches that employ neural networks to identify robust embedding spaces resilient to various attacks. These methods, characterized by a streamlined encoder-decoder architecture, have shown enhanced performance through the incorporation of novel training modules. This article offers an in-depth analysis of deep watermarking's core technologies, current status, and prospective trajectories, evaluating recent scholarly contributions across diverse frameworks. It concludes with an overview of the technical hurdles and prospects, providing essential insights for ongoing and future research endeavors in digital image watermarking.
This study describes improving network security by implementing and assessing an intrusion detection system (IDS) based on deep neural networks (DNNs). The paper investigates contemporary technical approaches for enhancing intrusion detection performance, given the vital relevance of safeguarding computer networks against harmful activity. The DNN-based IDS is trained and validated using the NSL-KDD dataset, a popular benchmark for IDS research. The model performs well in both the training and validation stages, with 91.30% training accuracy and 94.38% validation accuracy, and shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation. Furthermore, for both macro and micro averages across class 0 (normal) and class 1 (anomalous) data, the study evaluates the model using a variety of assessment measures, such as accuracy, precision, recall, and F1 scores. The macro-average recall is 0.9422, the macro-average precision is 0.9482, and the accuracy score is 0.942. Furthermore, F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model's ability to precisely identify anomalies. The research also highlights how DNN-based intrusion detection systems can achieve real-time threat monitoring and enhanced resistance against new online attacks, significantly improving network security. The study underscores the critical function of DNN-based IDS in contemporary cybersecurity practice and sets the foundation for further developments in this field. Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat intelligence.
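Macro-averaged figures like those quoted are computed per class from the confusion matrix and then averaged with equal class weight. A self-contained sketch for the binary normal/anomalous case (the counts in the test are invented, not the study's):

```python
def per_class_prf(conf):
    """Precision, recall and F1 per class from a confusion matrix,
    where conf[i][j] counts true class i predicted as class j."""
    n = len(conf)
    scores = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(n)) - tp   # predicted c, wrong
        fn = sum(conf[c]) - tp                        # true c, missed
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        scores.append((p, r, f1))
    return scores

def macro_average(scores):
    """Unweighted mean of per-class (precision, recall, F1)."""
    k = len(scores)
    return tuple(sum(s[i] for s in scores) / k for i in range(3))
```

Micro averaging would instead pool the TP/FP/FN counts across classes before forming the ratios, which for single-label classification collapses to overall accuracy.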
As deep learning evolves, neural network structures become increasingly sophisticated, bringing a series of new optimisation challenges. For example, deep neural networks (DNNs) are vulnerable to a variety of attacks. Training neural networks under privacy constraints is a method to alleviate privacy leakage, and one way to do this is to add noise to the gradient. However, existing optimisers suffer from weak convergence in the presence of increased noise during training, which leads to low robustness of the optimiser. To stabilise and improve the convergence of DNNs, the authors propose a neural dynamics (ND) optimiser, which is inspired by the zeroing neural dynamics originating from zeroing neural networks. The authors first analyse the relationship between DNNs and control systems. Then, the authors construct the ND optimiser to update network parameters. Moreover, the proposed ND optimiser alleviates the non-convergence problem that may arise when noise is added to the gradient in different scenarios. Furthermore, experiments are conducted on different neural network structures, including ResNet18, ResNet34, Inception-v3, MobileNet, and a long short-term memory network. Comparative results on the CIFAR, YouTube Faces, and R8 datasets demonstrate that the ND optimiser improves the accuracy and stability of DNNs under both noise-free and noise-polluted conditions. The source code is publicly available at https://github.com/LongJin-lab/ND.
Based on new drilling, seismic, logging, well test and experimental data, the key scientific problems in reservoir formation, hydrocarbon accumulation and efficient oil and gas development of deep and ultra-deep marine carbonate strata in the central and western superimposed basins of China have been continuously studied. (1) Fault-controlled carbonate reservoirs and ancient dolomite reservoirs are two important reservoir types in deep and ultra-deep marine carbonates. According to formation origin, large-scale fault-controlled reservoirs can be further divided into three types: fracture-cavity reservoirs formed by tectonic rupture, fault- and fluid-controlled reservoirs, and shoal and mound reservoirs modified by faults and fluids. The Sinian microbial dolomites developed in an aragonite-dolomite sea. The predominant mound-shoal facies, early dolomitization and dissolution, an acidic fluid environment, anhydrite capping and overpressure are the key factors for the formation and preservation of high-quality dolomite reservoirs. (2) The organic-rich shales of the marine carbonate strata in the superimposed basins of central and western China developed mainly in the sedimentary environments of the deep-water shelf of a passive continental margin and the carbonate ramp. The tectonic-thermal system is the important factor controlling the hydrocarbon phase in deep and ultra-deep reservoirs, and the reformed dynamic field controls oil and gas accumulation and distribution in deep and ultra-deep marine carbonates. (3) During the development of high-sulfur gas fields such as Puguang, sulfur precipitation blocks the wellbore. The application of sulfur solvent combined with coiled tubing has a significant effect on removing sulfur blockage. The integrated technology of dual-medium modeling and numerical simulation based on sedimentary simulation can accurately characterize the spatial distribution and changes of the water invasion front. Accordingly, water control strategies for the entire life cycle of gas wells are proposed, including flow rate management, water drainage and plugging. (4) In the development of ultra-deep fault-controlled fracture-cavity reservoirs, well production declines rapidly due to permeability reduction, which is a consequence of reservoir stress sensitivity. The rapid phase change in condensate gas reservoirs and the pressure decline significantly affect the recovery of condensate oil. Innovative development methods, such as gravity drive through water and natural gas injection, and natural gas drive through top injection and bottom production, are proposed for ultra-deep fault-controlled condensate gas reservoirs. By adopting hierarchical geological modeling and fluid-solid-thermal coupled numerical simulation, the accuracy of production performance prediction in oil and gas reservoirs has been effectively improved.
The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage, impacting the aerodynamic performance of the blades. To address the challenge of detecting and quantifying surface defects on wind turbine blades, a blade surface defect detection and quantification method based on an improved Deeplabv3+ deep learning model is proposed. Firstly, an improved method for wind turbine blade surface defect detection, utilizing Mobilenetv2 as the backbone feature extraction network, is proposed based on the original Deeplabv3+ model to address its limited robustness. Secondly, by integrating pre-trained weights from transfer learning and implementing a freeze training strategy, significant improvements have been made to both the training speed and the accuracy of this deep learning model. Finally, based on segmented blade surface defect images, a method for quantifying blade defects is proposed. This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade. Test results show that the improved Deeplabv3+ model reduces training time by approximately 43.03% compared to the original model, while achieving mAP and MIoU values of 96.87% and 96.93%, respectively. Moreover, it demonstrates robustness in detecting different surface defects on blades across different backgrounds. The blade surface defect quantification method enables the precise quantification of different defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade. This method enables non-contact, long-distance, high-precision detection and quantification of surface defects on the blades, providing a reference for assessing surface defects on wind turbine blades.
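The MIoU figure reported above is the mean, over classes, of intersection-over-union between predicted and ground-truth segmentation masks. A minimal sketch on flat binary masks (real evaluation would run over full-resolution per-class masks):

```python
def iou(pred, target):
    """Intersection over union of two flat binary masks (0/1 ints)."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def mean_iou(preds, targets):
    """Mean IoU over per-class mask pairs -- the MIoU metric used to
    score segmentation models such as Deeplabv3+."""
    scores = [iou(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)
```

The empty-union convention (returning 1.0 when a class is absent from both masks) is one common choice; some toolkits instead skip such classes when averaging.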
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms have been adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, false positives and false negatives, to improve the brain tumor prediction system symmetrically. Therefore, this work proposes an Extended Deep Learning Algorithm (EDLA) and measures performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with a Convolutional Neural Network (CNN) using the SPSS tool, and respective graphical illustrations were shown. Over ten iterations, the mean performance measures for the proposed EDLA algorithm were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%), whereas the CNN achieved a mean accuracy of 94.287%, mean sensitivity of 95.612%, mean false positive rate of 5.328%, and mean false negative rate of 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus, the EDLA algorithm introduces novelty concerning its performance and its particular activation function. The proposed method can be utilized effectively for precise and accurate brain tumor detection, and could be applied to other medical diagnoses after modification. If the quantity of dataset records is enormous, the method's computational resources have to be scaled up accordingly.
For many years, researchers have explored model-driven power allocation (PA) algorithms in wireless networks where multiple-user communications with interference are present. Nowadays, data-driven machine learning methods have become quite popular in analyzing wireless communication systems, among which deep reinforcement learning (DRL) plays a significant role in solving optimization problems under certain constraints. To this purpose, in this paper, we investigate the PA problem in a k-user multiple access channel (MAC), where k transmitters (e.g., mobile users) aim to send independent messages to a common receiver (e.g., a base station) over wireless channels. We first train a deep Q-network (DQN) with a deep Q-learning (DQL) algorithm in a simulation environment, utilizing offline learning. The DQN is then used with real data in online training for the PA problem, maximizing the sum rate subject to the source power constraint. Finally, the simulation results indicate that our proposed DQN method provides better sum-rate performance than conventional approaches such as fractional programming (FP) and the weighted minimum mean squared error (WMMSE) algorithm. Additionally, by considering different user densities, we show that our proposed DQN outperforms benchmark algorithms, thereby verifying its good generalization ability over wireless multi-user communication systems.
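The sum-rate objective being maximized can be sketched with the Shannon rate under the common assumption that each user's interference is treated as noise; the paper's exact rate expression and channel model may differ, so the gains and noise power below are placeholders:

```python
import math

def sum_rate(powers, gains, noise=1.0):
    """Sum of log2(1 + SINR_k) over k users sharing a channel to one
    receiver, with other users' signals treated as noise.
    powers[k] is transmit power, gains[k] the channel gain of user k."""
    total = 0.0
    for k in range(len(powers)):
        signal = powers[k] * gains[k]
        interference = sum(powers[j] * gains[j]
                           for j in range(len(powers)) if j != k)
        total += math.log2(1 + signal / (noise + interference))
    return total
```

A DQN agent for this problem would use `sum_rate` (or its per-step improvement) as the reward while choosing discrete power levels subject to the power budget.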
Funding: Shandong Province Foundation for Laoshan National Laboratory of Science and Technology Foundation (LSKJ202203400); National Natural Science Foundation of China (42174139, 42030103); Science Foundation from the Innovation and Technology Support Program for Young Scientists in Colleges of Shandong Province and the Ministry of Science and Technology of China (2019RA2136).
Funding: Prince Sattam bin Abdulaziz University, Project Number PSAU/2023/R/1444.
Funding: Ministry of Education, Culture, Sports, Science and Technology, Grant/Award Number 20K11867.
Abstract: By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With previously collected large amounts of external data, these methods are intuitively realised under the full supervision of ground-truth data. Thus, database construction in the research paradigm of merging a low-resolution HS (LR-HS) image and an HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting the corresponding training triplets: HR-MS (RGB), LR-HS and HR-HS images simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade super-resolution performance on real images captured in diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by preparing the training triplet samples online from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations were then formulated for measuring the network modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and has great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments have been conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
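The degradation-based self-supervision described above can be roughly illustrated: the spatial module becomes block averaging and the spectral module a response-matrix projection, and the loss compares the degraded estimate with both observations. The average-pooling kernel and the uniform spectral response here are illustrative assumptions, not the authors' learned modules.

```python
import numpy as np

def spatial_downsample(hr_hs, factor):
    """Average-pool each band by `factor` to mimic the spatial degradation."""
    b, h, w = hr_hs.shape
    return hr_hs.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def spectral_downsample(hr_hs, srf):
    """Project the spectral bands through a response matrix (bands -> RGB channels)."""
    b, h, w = hr_hs.shape
    return (srf @ hr_hs.reshape(b, -1)).reshape(srf.shape[0], h, w)

def self_supervised_loss(estimate, lr_hs_obs, rgb_obs, factor, srf):
    """Reconstruction errors of both observations, in the spirit of DSL."""
    err_lr = np.mean((spatial_downsample(estimate, factor) - lr_hs_obs) ** 2)
    err_rgb = np.mean((spectral_downsample(estimate, srf) - rgb_obs) ** 2)
    return err_lr + err_rgb
```

At the true HR-HS image the loss vanishes; any deviation from the observations is penalized, which is what lets the network train at test time without external labels.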
Funding: Supported by the National Natural Science Foundation of China (62076225, 62073300) and the Natural Science Foundation for Distinguished Young Scholars of Hubei (2019CFA081).
Abstract: Solving constrained multi-objective optimization problems with evolutionary algorithms has attracted considerable attention. Various constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been developed with different algorithmic strategies, evolutionary operators, and constraint-handling techniques. The performance of CMOEAs may be heavily dependent on the operators used; however, it is usually difficult to select suitable operators for the problem at hand. Hence, improving operator selection is promising and necessary for CMOEAs. This work proposes an online operator selection framework assisted by Deep Reinforcement Learning. The dynamics of the population, including convergence, diversity, and feasibility, are regarded as the state; the candidate operators are considered as actions; and the improvement of the population state is treated as the reward. By using a Q-network to learn a policy that estimates the Q-values of all actions, the proposed approach can adaptively select the operator that maximizes the improvement of the population according to the current state and thereby improve algorithmic performance. The framework is embedded into four popular CMOEAs and assessed on 42 benchmark problems. The experimental results reveal that the proposed Deep Reinforcement Learning-assisted operator selection significantly improves the performance of these CMOEAs, and the resulting algorithms obtain better versatility compared to nine state-of-the-art CMOEAs.
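The state-action-reward loop described above can be sketched with a tabular stand-in for the Q-network. The operator names, the discretized state, and the hyper-parameters are illustrative assumptions; the paper itself uses a deep Q-network over continuous population statistics.

```python
import random

class OperatorSelector:
    """Tabular stand-in for the Q-network: maps (state, operator) -> Q-value."""

    def __init__(self, operators, alpha=0.1, gamma=0.9, eps=0.1):
        self.ops, self.alpha, self.gamma, self.eps = operators, alpha, gamma, eps
        self.q = {}  # (state, op) -> estimated Q-value

    def select(self, state):
        # epsilon-greedy: explore occasionally, otherwise pick the best-valued operator
        if random.random() < self.eps:
            return random.choice(self.ops)
        return max(self.ops, key=lambda op: self.q.get((state, op), 0.0))

    def update(self, state, op, reward, next_state):
        # standard Q-learning backup on the observed population improvement
        best_next = max(self.q.get((next_state, o), 0.0) for o in self.ops)
        old = self.q.get((state, op), 0.0)
        self.q[(state, op)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

In the framework, `state` would encode convergence/diversity/feasibility indicators, each `op` a candidate evolutionary operator, and `reward` the measured improvement of the population after applying it.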
Abstract: Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset. The novelty of the proposed framework lies in the integration of various techniques: a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models deployed on the curated dataset. The analysis outcome shows that the proposed enhanced random forest (ERF), enhanced decision tree (EDT) and enhanced logistic regression (ELR) models for BC detection outperformed most existing models with impressive results.
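One common way to fuse DL and ML classifier outputs, of the kind this framework integrates, is probability-level soft voting. The paper does not specify its fusion rule, so the weighted averaging below is an assumption for illustration only.

```python
import numpy as np

def soft_vote(prob_sets, weights=None):
    """Fuse per-model class probabilities by (weighted) averaging, then argmax.

    prob_sets: array-like of shape (models, samples, classes).
    """
    probs = np.asarray(prob_sets, dtype=float)
    w = np.ones(len(probs)) if weights is None else np.asarray(weights, dtype=float)
    fused = np.tensordot(w / w.sum(), probs, axes=1)  # (samples, classes)
    return fused.argmax(axis=1)  # e.g. 0 = benign, 1 = malignant (illustrative labels)
```

Soft voting lets a confident model pull the decision even when a weaker model disagrees, which is one reason probability fusion often beats hard majority voting.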
Funding: Supported by the China National Petroleum Corporation Limited-China University of Petroleum (Beijing) Strategic Cooperation Science and Technology Project (ZLZX2020-03).
Abstract: Traditional well log depth matching tasks require manual adjustments, which are significantly labor-intensive for multiple wells, leading to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. The method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences in well logs, and it establishes an interaction mechanism between agents and the environment to control the depth matching process. Specifically, an agent selects an action to translate or scale a feature sequence based on the double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs across multiple wells and reduce manual intervention. In an oil field application, a comparative analysis of dynamic time warping (DTW), deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well log feature sequences, thus achieving higher depth matching accuracy.
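The dual-network evaluation mechanism that distinguishes DDQN from plain DQN sits entirely in the target computation: the online network selects the next action, while the target network evaluates it. The sketch below is a generic DDQN target, not the paper's exact agent.

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: online net picks the action, target net scores it.

    This decoupling reduces the overestimation bias of plain DQN, where a
    single network both selects and evaluates the maximizing action.
    """
    if done:
        return float(reward)
    a_star = int(np.argmax(q_online_next))        # action selection (online net)
    return float(reward + gamma * q_target_next[a_star])  # evaluation (target net)
```

In the depth matching setting, the actions would be the translate/scale moves applied to a feature sequence and the reward the resulting alignment improvement.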
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218176) and the Soonchunhyang University Research Fund.
Abstract: Human Interaction Recognition (HIR) is a challenging problem in computer vision research due to the involvement of multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR), since HAR focuses solely on individual activities like walking or running, while HIR involves the interactions between people. This research aims to develop a robust system for recognizing five common human interactions (hugging, kicking, pushing, pointing, and no interaction) from video sequences using multiple cameras. In this study, a hybrid deep learning (DL) and machine learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment with four-channel cameras capturing the five types of interactions among 13 participants. The data was processed using a DL model with a fine-tuned ResNet (Residual Network) architecture based on 2D convolutional neural network (CNN) layers for feature extraction. Subsequently, machine learning models were trained for interaction classification using six commonly used ML algorithms: SVM, KNN, RF, DT, NB, and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, resulting in highly accurate performance across different interaction types. Future work will explore more complex scenarios involving multiple individuals based on this architecture.
Funding: Beijing Municipal Natural Science Foundation, Grant/Award Number: 4212933; National Natural Science Foundation of China, Grant/Award Number: 61873008; National Key R&D Plan, Grant/Award Number: 2018YFB1307004.
Abstract: The positional information of objects is crucial to enable robots to perform grasping and pushing manipulations in clutter. To effectively perform these manipulations, robots need to perceive the position information of objects, including the coordinates and spatial relationships between objects (e.g., proximity, adjacency). The authors propose an end-to-end position-aware deep Q-learning framework to achieve efficient collaborative pushing and grasping in clutter. Specifically, a pair of conjugate pushing and grasping attention modules is proposed to capture the position information of objects and generate high-quality affordance maps of operating positions with features of pushing and grasping operations. In addition, the authors propose an object isolation metric and a clutter metric based on instance segmentation to measure the spatial relationships between objects in cluttered environments. To further enhance the perception of object position information, the authors associate the change in the object isolation and clutter metrics before and after performing an action with the reward function. A series of experiments carried out in simulation and the real world indicates that the method improves sample efficiency, task completion rate, grasping success rate and action efficiency compared to state-of-the-art end-to-end methods. Note that the authors' system can be robustly applied to real-world use and extended to novel objects. Supplementary material is available at https://youtu.be/NhG_k5v3NnM.
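The reward shaping described above, which ties the change in the isolation and clutter metrics to the reward, might look like the sketch below. The weights and the sign conventions (isolation higher-is-better, clutter lower-is-better) are assumptions for illustration, not the paper's exact reward.

```python
def shaped_reward(base_reward, iso_before, iso_after, clut_before, clut_after,
                  w_iso=0.5, w_clut=0.5):
    """Augment the task reward with metric deltas measured before/after an action.

    A push that separates objects raises the isolation metric and lowers the
    clutter metric, so it earns a positive shaping bonus even if no grasp
    succeeded yet.
    """
    return (base_reward
            + w_iso * (iso_after - iso_before)      # reward increased isolation
            + w_clut * (clut_before - clut_after))  # reward reduced clutter
```

Shaping of this kind gives the agent a dense learning signal from pushing actions whose benefit only pays off in a later grasp.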
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072465, 62102425) and the Science and Technology Innovation Program of Hunan Province (Nos. 2022RC3061, 2023RC3027).
Abstract: In the era of internet proliferation, safeguarding digital media copyright and integrity, especially for images, is imperative. Digital watermarking stands out as a pivotal solution for image security. With the advent of deep learning, watermarking has seen significant advancements. Our review focuses on the innovative deep watermarking approaches that employ neural networks to identify robust embedding spaces, resilient to various attacks. These methods, characterized by a streamlined encoder-decoder architecture, have shown enhanced performance through the incorporation of novel training modules. This article offers an in-depth analysis of deep watermarking's core technologies, current status, and prospective trajectories, evaluating recent scholarly contributions across diverse frameworks. It concludes with an overview of the technical hurdles and prospects, providing essential insights for ongoing and future research endeavors in digital image watermarking.
Funding: Thanks to Princess Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project (PNURSP2024R319), funded by Prince Sultan University, Riyadh, Saudi Arabia.
Abstract: This study describes improving network security by implementing and assessing an intrusion detection system (IDS) based on deep neural networks (DNNs). The paper investigates contemporary technical approaches for enhancing intrusion detection performance, given the vital importance of safeguarding computer networks against harmful activity. The DNN-based IDS is trained and validated using the NSL-KDD dataset, a popular benchmark for IDS research. The model performs well in both the training and validation stages, with 91.30% training accuracy and 94.38% validation accuracy, and shows good learning and generalization capabilities with low losses of 0.22 in training and 0.1553 in validation. Furthermore, for both macro and micro averages across class 0 (normal) and class 1 (anomalous) data, the study evaluates the model using a variety of assessment measures, such as accuracy scores, precision, recall, and F1 scores. The macro-average recall is 0.9422, the macro-average precision is 0.9482, and the accuracy score is 0.942. Furthermore, macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model's ability to identify anomalies precisely. The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved by DNN-based intrusion detection systems, which can significantly improve network security. The study underscores the critical function of DNN-based IDS in contemporary cybersecurity practice, setting the foundation for further developments in this field. Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat intelligence.
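The macro- versus micro-averaged metrics reported above differ only in where the averaging happens: macro averages the per-class scores, while micro pools the counts first. A short sketch makes the distinction concrete.

```python
def prf_per_class(tp, fp, fn):
    """Precision, recall, F1 from one class's confusion counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def macro_micro(counts):
    """counts: {class_label: (tp, fp, fn)} -> (macro (p, r, f1), micro (p, r, f1)).

    Macro treats every class equally; micro weights classes by support, so the
    two diverge on imbalanced data such as normal-vs-anomalous traffic.
    """
    per = [prf_per_class(*c) for c in counts.values()]
    macro = tuple(sum(col) / len(per) for col in zip(*per))
    tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
    micro = prf_per_class(tp, fp, fn)
    return macro, micro
```

On NSL-KDD-style binary labels (class 0 normal, class 1 anomalous), reporting both averages, as the study does, guards against a model that only performs well on the majority class.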
Funding: Sichuan Science and Technology Program, Grant/Award Number: 2022nsfsc0916; Fundamental Research Funds for the Central Universities, Grant/Award Number: lzujbky-2023-eyt04; Natural Science Foundation of Gansu Province, Grant/Award Numbers: 21JR7RA531, 23JRRA1116; Ministry of Education, Science and Technological Development, Republic of Serbia, Grant/Award Number: 451-03-68/2022-14/200124; National Natural Science Foundation of China, Grant/Award Numbers: 62176109, 62311530099; Joint Education Project of Universities in China-Central-and-Eastern-European Countries, Grant/Award Number: 2022226; Science Fund of the Republic of Serbia, Grant/Award Number: 7750185; Program for Scientific Research Start-up Funds of Guangdong Ocean University, Grant/Award Number: 060302112201; the Supercomputing Center of Lanzhou University.
Abstract: As deep learning evolves, neural network structures become increasingly sophisticated, bringing a series of new optimisation challenges. For example, deep neural networks (DNNs) are vulnerable to a variety of attacks. Training neural networks under privacy constraints is one way to alleviate privacy leakage, and one approach is to add noise to the gradient. However, existing optimisers suffer from weak convergence in the presence of increased noise during training, which leads to low robustness of the optimiser. To stabilise and improve the convergence of DNNs, the authors propose a neural dynamics (ND) optimiser, inspired by the zeroing neural dynamics that originated from zeroing neural networks. The authors first analyse the relationship between DNNs and control systems. Then, the authors construct the ND optimiser to update network parameters. Moreover, the proposed ND optimiser alleviates the non-convergence problem that may arise from adding noise to the gradient in different scenarios. Furthermore, experiments are conducted on different neural network structures, including ResNet18, ResNet34, Inception-v3, MobileNet, and long short-term memory (LSTM) networks. Comparative results using the CIFAR, YouTube Faces, and R8 datasets demonstrate that the ND optimiser improves the accuracy and stability of DNNs under both noise-free and noise-polluted conditions. The source code is publicly available at https://github.com/LongJin-lab/ND.
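The noisy-gradient training setting that motivates the ND optimiser can be reproduced in a few lines. Note this is plain SGD with Gaussian gradient noise (the privacy-style perturbation), not the authors' ND update rule, which is derived from zeroing neural dynamics.

```python
import numpy as np

def noisy_sgd_step(params, grad, lr=0.01, noise_std=0.1, rng=None):
    """One SGD step where Gaussian noise is added to the gradient.

    With noise_std > 0 this reproduces the perturbed-gradient regime in which
    ordinary optimisers converge poorly; noise_std = 0 recovers plain SGD.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noisy_grad = grad + rng.normal(0.0, noise_std, size=grad.shape)
    return params - lr * noisy_grad
```

An optimiser claiming noise robustness, as the ND optimiser does, should keep its trajectory close to the noise-free one as `noise_std` grows.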
Funding: Supported by the National Natural Science Foundation of China-Corporate Innovative Development Joint Fund (U19B6003).
Abstract: Based on new data from drilling, seismic, logging, tests and experiments, the key scientific problems in reservoir formation, hydrocarbon accumulation and efficient oil and gas development methods of deep and ultra-deep marine carbonate strata in the central and western superimposed basins of China have been continuously studied. (1) Fault-controlled carbonate reservoirs and ancient dolomite reservoirs are two important types of reservoirs in deep and ultra-deep marine carbonates. According to formation origin, large-scale fault-controlled reservoirs can be further divided into three types: fracture-cavity reservoirs formed by tectonic rupture, fault- and fluid-controlled reservoirs, and shoal and mound reservoirs modified by faults and fluids. The Sinian microbial dolomites developed in the aragonite-dolomite sea. Predominant mound-shoal facies, early dolomitization and dissolution, an acidic fluid environment, anhydrite capping and overpressure are the key factors for the formation and preservation of high-quality dolomite reservoirs. (2) The organic-rich shales of the marine carbonate strata in the superimposed basins of central and western China mainly developed in the sedimentary environments of deep-water shelves of passive continental margins and carbonate ramps. The tectonic-thermal system is the important factor controlling the hydrocarbon phase in deep and ultra-deep reservoirs, and the reformed dynamic field controls oil and gas accumulation and distribution in deep and ultra-deep marine carbonates. (3) During the development of high-sulfur gas fields such as Puguang, sulfur precipitation blocks the wellbore. The application of sulfur solvent combined with coiled tubing has a significant effect on removing sulfur blockage. The integrated technology of dual-medium modeling and numerical simulation based on sedimentary simulation can accurately characterize the spatial distribution and changes of the water invasion front. Water control strategies for the entire life cycle of gas wells are then proposed, including flow rate management, water drainage and plugging. (4) In the development of ultra-deep fault-controlled fractured-cavity reservoirs, well production declines rapidly due to permeability reduction, a consequence of reservoir stress sensitivity. The rapid phase change in condensate gas reservoirs and pressure decline significantly affect the recovery of condensate oil. Innovative development methods such as gravity drive through water and natural gas injection, and natural gas drive through top injection and bottom production, are proposed for ultra-deep fault-controlled condensate gas reservoirs. By adopting hierarchical geological modeling and fluid-solid-thermal coupled numerical simulation, the accuracy of production performance prediction in oil and gas reservoirs has been effectively improved.
基金supported by the National Science Foundation of China(Grant Nos.52068049 and 51908266)the Science Fund for Distinguished Young Scholars of Gansu Province(No.21JR7RA267)Hongliu Outstanding Young Talents Program of Lanzhou University of Technology.
Abstract: The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage, impacting the aerodynamic performance of the blades. To address the challenge of detecting and quantifying surface defects on wind turbine blades, a blade surface defect detection and quantification method based on an improved Deeplabv3+ deep learning model is proposed. Firstly, an improved method for wind turbine blade surface defect detection, utilizing MobileNetv2 as the backbone feature extraction network, is proposed based on the original Deeplabv3+ model to address the issue of limited robustness. Secondly, by integrating pre-trained weights from transfer learning and implementing a freeze-training strategy, significant improvements have been made to both the training speed and the training accuracy of this deep learning model. Finally, based on segmented blade surface defect images, a method for quantifying blade defects is proposed. This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade. Test results show that the improved Deeplabv3+ model reduces training time by approximately 43.03% compared to the original model, while achieving mAP and MIoU values of 96.87% and 96.93%, respectively. Moreover, it demonstrates robustness in detecting different surface defects on blades across different backgrounds. The blade surface defect quantification method enables the precise quantification of different defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade. This method enables non-contact, long-distance, high-precision detection and quantification of surface defects on the blades, providing a reference for assessing surface defects on wind turbine blades.
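The MIoU figure quoted above is the class-wise intersection-over-union averaged over classes, computed directly from the predicted and ground-truth segmentation masks:

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean intersection-over-union across classes (the MIoU metric).

    pred, target: integer class-label masks of the same shape.
    Classes absent from both masks are skipped rather than counted as 0.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```

For defect segmentation, per-class IoU is worth inspecting alongside the mean, since a large background class can mask poor performance on small defect classes.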
Funding: Supported by Project No. R-2023-23 of the Deanship of Scientific Research at Majmaah University.
Abstract: At present, brain tumor prediction is performed using machine learning (ML) and deep learning (DL) algorithms. Although various ML and DL algorithms have been adapted to predict brain tumors to some extent, some concerns still need improvement, particularly accuracy, sensitivity, and false positive and false negative rates, to improve brain tumor prediction systems symmetrically. Therefore, this work proposes an Extended Deep Learning Algorithm (EDLA) and measures performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with a convolutional neural network (CNN) using the SPSS tool, and the respective graphical illustrations are shown. Over ten iterations, the mean performance measures for the proposed EDLA algorithm were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%), whereas for the CNN the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method outperformed existing algorithms, including the CNN, and ensures symmetrically improved parameters. The EDLA algorithm thus introduces novelty concerning its performance and particular activation function. The proposed method can be utilized effectively for precise and accurate brain tumor detection, and could be applied to various medical diagnoses after modification. If the quantity of dataset records is enormous, the method's computational power has to be updated.
Abstract: For many years, researchers have explored model-driven power allocation (PA) algorithms in wireless networks where multiple-user communications with interference are present. Nowadays, data-driven machine learning methods have become quite popular for analyzing wireless communication systems, among which deep reinforcement learning (DRL) plays a significant role in solving optimization problems under certain constraints. To this end, in this paper, we investigate the PA problem in a k-user multiple access channel (MAC), where k transmitters (e.g., mobile users) aim to send independent messages to a common receiver (e.g., a base station) through wireless channels. We first train a deep Q-network (DQN) with a deep Q-learning (DQL) algorithm in a simulation environment using offline learning. The DQN is then used with real data in online training for the PA problem, maximizing the sum rate subject to the source power constraint. Finally, the simulation results indicate that our proposed DQN method provides better sum-rate performance than available approaches such as fractional programming (FP) and weighted minimum mean squared error (WMMSE). Additionally, by considering different user densities, we show that our proposed DQN outperforms the benchmark algorithms, verifying good generalization ability over wireless multi-user communication systems.
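The sum-rate objective being maximized can be written down directly for a k-user MAC: each user's rate is log2(1 + SINR), where the interference at the common receiver is every other user's received power. Unit noise power and scalar channel gains are simplifying assumptions in this sketch.

```python
import numpy as np

def sum_rate(powers, gains, noise=1.0):
    """Sum rate (bits/s/Hz) of k users transmitting to a common receiver.

    powers: transmit power of each user; gains: channel gain of each user
    to the receiver. Each user's signal is interfered by all the others.
    """
    p = np.asarray(powers, dtype=float)
    g = np.asarray(gains, dtype=float)
    signal = p * g                          # received power per user
    interference = signal.sum() - signal    # everyone else's received power
    return float(np.sum(np.log2(1.0 + signal / (interference + noise))))
```

A DQL agent for this problem would take the channel gains (and possibly past rates) as the state, a discretized power vector as the action, and the resulting sum rate as the reward.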