Funding: This study was supported by the National Natural Science Foundation of China under the project 'Research on the Dynamic Location of Receiver Points and Wave Field Separation Technology Based on Deep Learning in OBN Seismic Exploration' (No. 42074140).
Abstract: The acquisition of seismic data is developing toward high-precision and high-density methods. However, complex natural environments and cultural factors in many exploration areas make uniform, dense acquisition difficult to achieve, so complete seismic records cannot be collected. Data reconstruction is therefore required during processing to ensure imaging accuracy. Deep learning, a rapidly developing field, offers clear advantages in feature extraction and modeling. In this study, a convolutional neural network (CNN) deep learning algorithm is applied to seismic data reconstruction. Based on the CNN algorithm and the characteristics of seismic data acquisition, two training strategies, supervised and unsupervised, are designed to reconstruct sparsely acquired seismic records. First, a supervised learning strategy is proposed for labeled data: the complete seismic data are segmented into patches that serve as the training input and are randomly sampled before each training pass, increasing the number of samples and the richness of the features. Second, an unsupervised learning strategy based on large samples is proposed for unlabeled data, in which a rolling segmentation method updates the pseudo-labels and training parameters during training. Reconstruction tests on simulated and actual data show that the CNN-based deep learning algorithm achieves better reconstruction quality and higher accuracy than compressed sensing based on the Curvelet transform.
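A minimal sketch of the supervised strategy described above: complete seismic sections are cut into patches, traces are randomly dropped before every training step to simulate sparse acquisition, and a small CNN learns to fill in the missing traces. The patch size, decimation rate, and network depth are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class PatchReconstructionCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def random_trace_mask(batch, n_traces, keep_ratio=0.5):
    """Randomly keep a fraction of traces (columns) in each patch."""
    return (torch.rand(batch, 1, 1, n_traces) < keep_ratio).float()

model = PatchReconstructionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# complete_patches stands in for patches cut from fully sampled sections
# (time samples x traces); random data is used here for illustration only.
complete_patches = torch.randn(16, 1, 64, 64)
for step in range(10):
    mask = random_trace_mask(complete_patches.size(0), complete_patches.size(-1))
    sparse_input = complete_patches * mask          # simulate sparse acquisition
    loss = loss_fn(model(sparse_input), complete_patches)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```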
Abstract: Early diagnosis and detection are important tasks in controlling the spread of COVID-19. Researchers have established a number of deep learning techniques to detect the presence of COVID-19 from CT scan images and X-rays. However, these methods suffer from biased results and inaccurate detection of the disease. The current research article therefore develops an Oppositional-based Chimp Optimization Algorithm with a Deep Dense Convolutional Neural Network (OCOA-DDCNN) for COVID-19 prediction using CT images in an IoT environment. The proposed methodology works in two stages: pre-processing and prediction. Initially, CT scan images of prospective COVID-19 cases are collected from an open-source system using IoT devices. The collected images are then pre-processed with a Gaussian filter, which removes unwanted noise from the CT scans. The pre-processed images are passed to the prediction phase, in which the Deep Dense Convolutional Neural Network (DDCNN) is applied. The classifier is optimally designed using the Oppositional-based Chimp Optimization Algorithm (OCOA), which selects the optimal parameters for the proposed classifier. Finally, the proposed technique predicts COVID-19 and classifies each image as either COVID-19 or non-COVID-19. The method was implemented in MATLAB and its performance was evaluated with statistical measurements. The proposed method was compared with conventional techniques such as the Convolutional Neural Network with Firefly Algorithm (CNN-FA) and with Emperor Penguin Optimization (CNN-EPO). The results established the superiority of the proposed model.
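A minimal sketch of the pre-processing stage described above: denoising a CT slice with a Gaussian filter before it is passed to the classifier. The sigma value and the synthetic image are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

ct_slice = np.random.rand(512, 512).astype(np.float32)  # placeholder for a real CT slice
denoised = gaussian_filter(ct_slice, sigma=1.0)          # suppress high-frequency noise
```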
Funding: Researchers Supporting Project Number (RSP2024R206), King Saud University, Riyadh, Saudi Arabia.
Abstract: The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks, and IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and the diversity of intrusion methods, conventional machine-learning approaches lack efficiency, whereas deep learning techniques have demonstrated across numerous research domains that they can detect anomalies precisely. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. First, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality of the data and obtain a meaningful representation by computing the reconstruction error. Second, a Convolutional Neural Network (CNN) is employed as a binary classifier. The proposed SAE-CNN approach is validated on the Bot-IoT dataset. The proposed model exceeds the performance of existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 score of 99.9%, False Positive Rate (FPR) of 0.0003, and True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicate that SAE-CNN performs better.
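A minimal sketch of the SAE-CNN pipeline described above: a sparse autoencoder compresses each flow record, and a small 1-D CNN classifies the encoded representation as normal or attack. The feature sizes, sparsity weight, and layer widths are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

N_FEATURES, CODE_DIM = 40, 16   # assumed sizes for a Bot-IoT-style feature vector

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_FEATURES, CODE_DIM), nn.ReLU())
        self.decoder = nn.Linear(CODE_DIM, N_FEATURES)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

class BinaryCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten())
        self.head = nn.Linear(8 * CODE_DIM, 1)

    def forward(self, code):
        return self.head(self.conv(code.unsqueeze(1)))   # binary logits

sae, cnn = SparseAutoencoder(), BinaryCNN()
x = torch.randn(32, N_FEATURES)                          # placeholder flow records
recon, code = sae(x)
sae_loss = nn.functional.mse_loss(recon, x) + 1e-3 * code.abs().mean()  # reconstruction + sparsity
logits = cnn(code.detach())                              # attack / normal scores
```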
Abstract: To address the low accuracy of crowd counting caused by scale variation in congested scenes, a dense crowd counting model based on a multi-scale attention network (MANet) is proposed. A multi-column architecture captures multi-scale features and promotes the fusion of scale information; a dual-attention module captures contextual dependencies to enrich the multi-scale feature maps; and dense connections reuse the multi-scale feature maps to generate high-quality density maps, which are then integrated to obtain the crowd count. In addition, a new loss function is proposed that trains directly on the point-annotation maps, reducing the extra error introduced by generating density maps with Gaussian filtering. Experiments on the public crowd datasets ShanghaiTech Part A/B, UCF-CC-50, and UCF-QNRF all achieve state-of-the-art results, showing that the network effectively handles multi-scale targets in congested scenes and generates high-quality density maps.
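A minimal sketch of the counting step described above: the predicted density map is integrated (summed) to obtain the crowd count. The density map here is synthetic; in the paper it would be produced by the MANet model.

```python
import numpy as np

density_map = np.random.rand(96, 128).astype(np.float32) * 0.01  # placeholder prediction
crowd_count = float(density_map.sum())   # integrating the density map gives the count
print(f"estimated count: {crowd_count:.1f}")
```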
Funding: Supported by the National Natural Science Foundation of China (61471313) and the Natural Science Foundation of Hebei Province (F2019203318).
Abstract: Removing rain from a single image is a challenging task due to the absence of temporal information. Considering that a rainy image can be decomposed into low-frequency (LF) and high-frequency (HF) components, where the coarse-scale information is retained in the LF component while the rain streaks and texture correspond to the HF component, we propose a single-image rain removal algorithm using image decomposition and a dense network. We design two task-driven sub-networks to estimate the LF and non-rain HF components of a rainy image. The high-frequency estimation sub-network employs a densely connected network structure, while the low-frequency sub-network uses a simple convolutional neural network (CNN). We add total variation (TV) regularization and an LF-channel fidelity term to the loss function to optimize the two sub-networks jointly. The method then obtains the de-rained output by combining the estimated LF and non-rain HF components. Extensive experiments on synthetic and real-world rainy images demonstrate that our method removes rain streaks while preserving non-rain details, and achieves superior de-raining performance both perceptually and quantitatively.
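A minimal sketch of the total-variation (TV) regularization term mentioned in the loss function above, written for a batch of predicted low-frequency images. The weight on the term is an illustrative assumption.

```python
import torch

def tv_loss(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of an (N, C, H, W) tensor."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()  # vertical differences
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()  # horizontal differences
    return dh + dw

pred_lf = torch.rand(4, 3, 64, 64, requires_grad=True)  # placeholder LF estimate
loss = 0.1 * tv_loss(pred_lf)                            # added to the fidelity terms
loss.backward()
```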
Funding: Supported by the National Natural Science Foundation of China (No. 81830052), the Shanghai Natural Science Foundation of China (No. 20ZR1438300), and the Shanghai Science and Technology Support Project (No. 18441900500), China.
Abstract: To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). To combine low-level and high-level features, densely connected blocks were added to the network design so that low-level features are not lost as the network depth increases during learning. Further, to resolve the blurred boundary of the glioma edema area, the T2-weighted fluid-attenuated inversion recovery (FLAIR) modality and the T2-weighted (T2) modality were superimposed and fused to enhance the edema region. For the training loss, the cross-entropy loss function was improved to effectively avoid over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, the method achieves Dice similarity coefficients of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 testing set for whole tumors, tumor cores, and enhancing cores, respectively. Experimental results showed that the proposed method achieves promising accuracy with fast processing, demonstrating good potential for clinical medicine.
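A minimal sketch of the Dice similarity coefficient used above to evaluate the segmentation results, computed for binary masks of a single tumor sub-region; the masks here are random placeholders.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.random.rand(128, 128) > 0.5    # placeholder predicted mask
target = np.random.rand(128, 128) > 0.5  # placeholder ground-truth mask
print(f"Dice: {dice_coefficient(pred, target):.3f}")
```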
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61871380), the Shandong Provincial Natural Science Foundation (Grant No. ZR2020MF019), and the Beijing Natural Science Foundation (Grant No. 4172034).
Abstract: Biological slices are an effective tool for studying the physiological structure and evolution mechanism of biological systems. However, the complexity of the preparation technology and the many uncontrollable factors in the preparation process lead to problems such as difficulty in preparing slice images and breakage of the slices. We therefore propose an interpretable small-scale corruption inpainting algorithm for biological slice images based on multi-layer deep sparse representation, achieving high-fidelity reconstruction of slice images. We first discuss the relationship between deep convolutional neural networks and sparse representation to establish the high-fidelity character of the algorithm. A novel deep wavelet dictionary is proposed that better captures the image prior and possesses learnable features, and multi-layer deep sparse representation is used for dictionary learning, yielding a better signal representation. Compared with methods such as NLABH, Shearlet, Partial Differential Equation (PDE), K-Singular Value Decomposition (K-SVD), Convolutional Sparse Coding, and Deep Image Prior, the proposed algorithm achieves better subjective reconstruction and objective evaluation under small-scale image data, realizing high-fidelity inpainting, and its O(n²)-level time complexity makes it practical. The algorithm can be effectively extended to other cross-sectional image inpainting problems, such as magnetic resonance images and computed tomography images.
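A minimal sketch of the sparse-coding building block behind the multi-layer deep sparse representation described above: ISTA iterations that solve min_z 0.5·||x − Dz||² + λ·||z||₁ for a fixed dictionary D. The paper's learned deep wavelet dictionary and layer stacking are not reproduced here; the dictionary and patch below are random placeholders.

```python
import numpy as np

def ista(x, D, lam=0.1, n_iter=100):
    """Iterative shrinkage-thresholding for a single signal x and dictionary D."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))         # placeholder overcomplete dictionary
x = rng.standard_normal(64)                # placeholder vectorized image patch
z = ista(x, D)
print(f"nonzero coefficients: {(np.abs(z) > 1e-6).sum()}")
```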
Abstract: Purpose: To detect small diagnostic signals such as lung nodules in chest radiographs, radiologists magnify a region of interest using linear interpolation methods. However, such methods tend to generate over-smoothed images with artifacts that can make interpretation difficult. The purpose of this study was to investigate the effectiveness of super-resolution methods for improving the image quality of magnified chest radiographs. Materials and Methods: A total of 247 chest X-rays were sampled from the JSRT database and divided into 93 training cases without nodules and 154 test cases with lung nodules. We first trained two types of super-resolution methods, sparse-coding super-resolution (ScSR) and the super-resolution convolutional neural network (SRCNN). With the trained methods, a high-resolution image was reconstructed from a low-resolution image that had been down-sampled from the original test image. We compared the image quality of the super-resolution methods and the linear interpolations (nearest-neighbor and bilinear interpolation). For quantitative evaluation, we measured two image quality metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). For comparative evaluation of the super-resolution methods, we measured the computation time per image. Results: The PSNRs and SSIMs for the ScSR and SRCNN schemes were significantly higher than those of the linear interpolation methods. Conclusion: Super-resolution methods provide significantly better image quality than linear interpolation methods for magnified chest radiograph images. Of the two tested schemes, the SRCNN scheme processed the images fastest; thus, SRCNN could be clinically superior for processing radiographs in terms of both image quality and processing speed.
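A minimal sketch of the two image-quality metrics used above, computed with scikit-image on a synthetically degraded image standing in for a magnified radiograph.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256).astype(np.float64)              # placeholder original image
magnified = np.clip(reference + 0.05 * np.random.randn(256, 256), 0, 1)

psnr = peak_signal_noise_ratio(reference, magnified, data_range=1.0)
ssim = structural_similarity(reference, magnified, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```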