At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms have been adapted to predict brain tumors to some extent, several measures still need enhancement, particularly accuracy, sensitivity, and the false positive and false negative rates, to improve the brain tumor prediction system symmetrically. Therefore, this work proposes an Extended Deep Learning Algorithm (EDLA) and evaluates it on performance parameters such as accuracy, sensitivity, and false positive and false negative rates. These iterated measures were analyzed by comparing the EDLA method with a Convolutional Neural Network (CNN) using the SPSS tool, and the corresponding graphical illustrations are shown. Over ten iterations, the mean performance measures for the proposed EDLA algorithm were accuracy 97.665%, sensitivity 97.939%, false positive rate 3.012%, and false negative rate 3.182%. For the CNN, the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including the CNN, and ensures symmetrically improved parameters. The EDLA algorithm thus introduces novelty in its performance and its particular activation function. The proposed method can be used effectively for precise and accurate brain tumor detection; with modification, it could also be applied to other medical diagnoses. If the number of dataset records is very large, the method's computational resources have to be scaled accordingly. Funding: supported by Project No. R-2023-23 of the Deanship of Scientific Research at Majmaah University.
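The ten-iteration comparison described above (mean EDLA vs. CNN measures analyzed with SPSS) can be mirrored in code. The sketch below uses a paired t-test as a stand-in for the SPSS analysis; the per-iteration accuracies are hypothetical placeholders, since the abstract reports only the means.

```python
import numpy as np
from scipy import stats

# Hypothetical per-iteration accuracies for the ten runs; the abstract
# reports only the means (EDLA 97.665%, CNN 94.287%).
edla_acc = np.array([97.5, 97.8, 97.6, 97.7, 97.9, 97.4, 97.7, 97.6, 97.8, 97.65])
cnn_acc = np.array([94.1, 94.5, 94.2, 94.3, 94.4, 94.0, 94.3, 94.2, 94.5, 94.37])

print(f"EDLA mean accuracy: {edla_acc.mean():.3f}%")
print(f"CNN  mean accuracy: {cnn_acc.mean():.3f}%")

# Paired t-test on the per-iteration pairs, one common SPSS-style check
# of whether the two algorithms' mean accuracies differ significantly.
res = stats.ttest_rel(edla_acc, cnn_acc)
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```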
Background: Prenatal evaluation of fetal lung maturity (FLM) is a challenge, and an effective non-invasive method for prenatal assessment of FLM is needed. The study aimed to establish a normal fetal lung gestational age (GA) grading model based on deep learning (DL) algorithms, validate the effectiveness of the model, and explore the potential value of DL algorithms in assessing FLM. Methods: A total of 7013 ultrasound images obtained from 1023 normal pregnancies between 20 and 41+6 weeks were analyzed in this study. There were no pregnancy-related complications that affected fetal lung development, and all infants were born without neonatal respiratory diseases. The images were divided into three classes based on the gestational week: class I, 20 to 29+6 weeks; class II, 30 to 36+6 weeks; and class III, 37 to 41+6 weeks, with 3323, 2142, and 1548 images per class, respectively. First, we applied a pre-processing algorithm to remove irrelevant information from each image. Then, a convolutional neural network was designed to identify the different categories of fetal lung ultrasound images. Finally, we used ten-fold cross-validation to validate the performance of our model. This new machine learning algorithm automatically extracted and classified lung ultrasound image information related to GA, which was used to establish the grading model. The performance of the grading model was assessed using accuracy, sensitivity, specificity, and receiver operating characteristic curves. Results: A normal fetal lung GA grading model was established and validated. The sensitivity of each class in the independent test set was 91.7%, 69.8%, and 86.4%, respectively; the specificity of each class was 76.8%, 90.0%, and 83.1%, respectively. The total accuracy was 83.8%. The area under the curve (AUC) of each class was 0.982, 0.907, and 0.960, respectively; the micro-average AUC was 0.957, and the macro-average AUC was 0.949. Conclusions: The normal fetal lung GA grading model could accurately identify ultrasound images of the fetal lung at different GAs, and it can be used to identify cases of abnormal lung development due to gestational diseases and to evaluate lung maturity after antenatal corticosteroid therapy. The results indicate that DL algorithms can be used as a non-invasive method to predict FLM. Funding: supported by a grant from the National Key Research and Development Program of China (No. 2016YFC1000104).
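The ten-fold cross-validation step described above can be sketched with scikit-learn; X and y here are hypothetical placeholders for the pre-processed images and their three GA classes, and the CNN itself is left abstract since the abstract does not specify its architecture.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Placeholders: 7013 pre-processed images, labels 0/1/2 for classes I-III.
X = np.random.rand(7013, 128, 128)
y = np.random.randint(0, 3, size=7013)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # Training the CNN on (X_train, y_train) and scoring accuracy,
    # sensitivity, specificity, and ROC curves on (X_test, y_test)
    # would go here.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```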
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing human immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the spread of the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict the numbers of confirmed cases and death cases. The real-time data used were collected from the World Health Organization (WHO). Three countries were considered to test the proposed models: Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. Standard performance measures, namely Mean Squared Error (MSE), Root Mean Squared Error (RMSE), mean error, and correlation, were employed to evaluate the proposed models. The empirical results of the LSTM, using the correlation metric, are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. In predicting the number of Covid-19 deaths, the LSTM results are 99.86%, 98.876%, and 99.16% for Saudi Arabia, Italy, and Spain, respectively. Similarly, the Holt-trend model achieves correlations of 99.06%, 99.96%, and 99.94% in predicting the number of confirmed cases, and 99.80%, 99.96%, and 99.94% in predicting the number of death cases for Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the numbers of confirmed and death cases of Covid-19 in these countries. Such findings provide better insight into the future course of the Covid-19 pandemic, and the time-series models applied here deserve consideration as tools for saving many lives.
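Of the two models compared above, the Holt-trend side is easy to sketch with statsmodels; the series below is a hypothetical placeholder for the WHO cumulative-case data the study used.

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# Placeholder daily cumulative confirmed-case counts (the study used
# real WHO series for Saudi Arabia, Spain, and Italy).
confirmed = np.array([100, 140, 210, 320, 470, 690, 980, 1400, 1950, 2700],
                     dtype=float)

# Holt's linear trend method with optimized smoothing parameters.
fit = Holt(confirmed).fit(optimized=True)
print(fit.forecast(7))  # one-week-ahead forecast
```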
Background: Deep Learning Algorithms (DLA) have become prominent as an application of Artificial Intelligence (AI) techniques since 2010. This paper introduces DLA to predict the relationships between individual tree height (ITH) and diameter at breast height (DBH). Methods: A set of 2024 pairs of individual height and diameter at breast height measurements was used, originating from 150 sample plots located in even-aged, pure stands of Anatolian Crimean Pine (Pinus nigra J.F. Arnold ssp. pallasiana (Lamb.) Holmboe) in Konya Forest Enterprise. The study primarily investigated the capability and usability of DLA models for predicting the relationships between ITH and DBH sampled from stands with different growth structures. Eighty different DLA models, involving different alternatives for the numbers of hidden layers and neurons, were trained and compared to determine the optimal, most predictive DLA network structure. Results: The DLA model with 9 hidden layers and 100 neurons was the best predictive network model compared with the other DLA, Artificial Neural Network, Nonlinear Regression, and Nonlinear Mixed Effect models. The alternative with 100 neurons and 9 hidden layers yielded the best ITH predictions, with root mean squared error (RMSE, 0.5575), percent root mean squared error (RMSE%, 4.9504%), Akaike information criterion (AIC, -998.9540), Bayesian information criterion (BIC, 884.6591), fit index (FI, 0.9436), average absolute error (AAE, 0.4077), maximum absolute error (max. AE, 2.5106), Bias (0.0057), and percent Bias (Bias%, 0.0502%). In addition, these predictive results with DLAs were further validated by equivalence tests, which showed that the DLA models successfully predicted tree height in the independent dataset. Conclusion: This study has emphasized the capability of DLA models, a novel artificial intelligence technique, for predicting the relationships between individual tree height and diameter at breast height, information that can be required for the management of forests.
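For reference, the error statistics quoted above follow their standard definitions; this is a sketch in the usual notation, with $h_i$ the observed height, $\hat h_i$ the prediction, $\bar h$ the observed mean, and $n$ the number of observations (the paper may apply a degrees-of-freedom correction):

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(h_i-\hat h_i\bigr)^{2}},
\qquad
\mathrm{RMSE\%} = 100\cdot\frac{\mathrm{RMSE}}{\bar h},
\qquad
\mathrm{AAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl|h_i-\hat h_i\bigr|,

\mathrm{Bias} = \frac{1}{n}\sum_{i=1}^{n}\bigl(h_i-\hat h_i\bigr),
\qquad
\mathrm{Bias\%} = 100\cdot\frac{\mathrm{Bias}}{\bar h}.
```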
Due to the inconsistency of rice varieties, the agricultural industry faces an important challenge in rice grading and classification under the traditional grading system. The existing grading system is manual, and the required visual inspection introduces stress and strain for humans. The development of an automated rice grading system has therefore been proposed as a promising research area in computer vision. In this study, an accurate deep learning-based, non-contact, and cost-effective rice grading system was developed based on rice appearance and characteristics. The proposed system provides real-time processing by using an NI-myRIO with a high-resolution camera and a user interface. We first trained the network on a public rice dataset to extract discriminative rice features. Second, using transfer learning, the pre-trained network was used to locate the region of interest by extracting a feature map. The proposed deep learning model was tested using two public standard datasets and a prototype real-time scanning system. Using the AlexNet architecture, we obtained an average accuracy of 98.2% with 97.6% sensitivity and 96.4% specificity. To validate the real-time performance of the proposed rice grading classification system, various performance indices were calculated and compared with existing classifiers. Both simulation and real-time experimental evaluations confirmed the robustness and reliability of the proposed rice grading system. Funding: supported by the Indian National Academy of Science, New Delhi, which provided a research fellowship in the Department of Electrical Engineering, Indian Institute of Technology, New Delhi, and by the Department of Electrical and Electronics Engineering, Mepco Schlenk Engineering College, Sivakasi, India, which provided the necessary research facilities.
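The transfer-learning step described above can be sketched in PyTorch, assuming a recent torchvision; the class count is a placeholder, since the abstract does not state how many rice grades are distinguished.

```python
import torch.nn as nn
from torchvision import models

NUM_GRADES = 4  # placeholder; not stated in the abstract

# Start from ImageNet-pretrained AlexNet, the architecture named above.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)

# Freeze the convolutional feature extractor learned on the source data.
for param in model.features.parameters():
    param.requires_grad = False

# AlexNet's classifier ends in a 4096 -> 1000 linear layer; replace it
# with a head for the rice-grade classes and fine-tune only the classifier.
model.classifier[6] = nn.Linear(4096, NUM_GRADES)
```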
The power transfer capability of smart transmission grid-connected networks is reduced by inter-area oscillations, since inter-area modes of oscillation delay power transfer and destabilize power transmission networks. This effect is more noticeable in smart grid-connected systems, whose infrastructure has more renewable energy resources installed for its operation. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide-Area Controller (DWAC) uses a Deep Belief Network (DBN) whose weights are updated based on real-time data from phasor measurement units. Resilience assessment, based on failure probability, financial impact, and time-series data in grid failure management, determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations under a two-channel attack as well. The results also offer robust management of power system resilience and timely control of operating conditions.
With the rapid development of sports, the number of sports images has increased dramatically. Intelligent and automatic processing and analysis of moving images are significant: they not only enable users to quickly search and access moving images, but also help staff store and manage moving-image data, contributing to the intellectual development of the sports industry. In this paper, a method for table tennis ball identification and positioning based on a convolutional neural network is proposed, which addresses the poor adaptability, across varied environments, of identification and positioning methods based on color and contour features. At the same time, the learning methods and techniques for table tennis ball detection, positioning, and trajectory prediction are studied. A deep learning framework for the recognition of rotating, flying table tennis balls is put forward. The mechanisms and methods of positioning, trajectory prediction, and intelligent automatic processing of moving images are studied, and the models are trained and verified on self-built datasets.
With the advent of machine and deep learning algorithms, medical image diagnosis has gained a new perception of diagnosis and clinical treatment. Regrettably, medical images remain susceptible to noise during capture despite advances in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and the clinical treatment processes. Existing intelligent methods are deficient in handling the diverse range of noise found in versatile medical images. This paper proposes a novel deep learning network that learns from the substantial extent of noise in medical data samples to alleviate this challenge. The proposed deep learning architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Performance measures such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) are compared with those of existing deep learning architectures, and a comprehensive analysis of the individual algorithms is provided. The experimental results prove that the proposed model outperforms the other existing algorithms by a substantial margin, demonstrating its supremacy over the other learning models.
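For reference, the two image-quality measures named above have standard definitions, sketched here for a reference image $I$ and reconstruction $K$ of size $m\times n$ (with $\mathrm{MAX}_I$ the peak pixel value, e.g. 255) and for image windows $x$, $y$ with means $\mu$, variances $\sigma^2$, covariance $\sigma_{xy}$, and stabilizing constants $c_1$, $c_2$:

```latex
\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right),
\qquad
\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(I(i,j)-K(i,j)\bigr)^{2},

\mathrm{SSIM}(x,y) =
\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}
     {(\mu_x^{2}+\mu_y^{2}+c_1)(\sigma_x^{2}+\sigma_y^{2}+c_2)}.
```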
Cervical cancer is a severe threat to women's health, and the majority of cervical cancer cases occur in developing countries. The WHO has proposed screening 70% of women with high-performance tests between 35 and 45 years of age by 2030 to accelerate the elimination of cervical cancer. Owing to inadequate health infrastructure and a lack of organized screening strategies, most low- and middle-income countries are still far from achieving this goal. As part of the effort to increase the performance of cervical cancer screening, it is necessary to investigate the most accurate, efficient, and effective methods and strategies. Artificial intelligence (AI) is rapidly expanding its application in cancer screening and diagnosis, and deep learning algorithms have offered human-like interpretation capabilities on various medical images. AI will soon have a more significant role in improving the implementation of cervical cancer screening, management, and follow-up. This review aims to report the state of AI with respect to cervical cancer screening. We discuss the primary AI applications and the development of AI technology for image recognition applied to the detection of abnormal cytology and cervical neoplastic diseases, as well as the challenges we anticipate in the future. Funding: supported by grants from the CAMS Innovation Fund for Medical Sciences (Grant No. CAMS 2021-I2M-1-004) and from the Bill & Melinda Gates Foundation (Grant No. INV-031449).
The accurate prediction of bearing capacity is crucial in ensuring the structural integrity and safety of pile foundations. This research compares Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM) algorithms, utilizing a dataset of 257 dynamic pile load tests for the first time. This research also illustrates, for the first time, the effect of multicollinearity on the performance and accuracy of the DNN, CNN, RNN, LSTM, and BiLSTM models. A comprehensive comparative analysis is conducted, employing various statistical performance parameters, rank analysis, and an error matrix to evaluate the performance of these models. The performance is further validated using external validation, and visual interpretation is provided using the regression error characteristic (REC) curve and a Taylor diagram. Results from the comparative analysis reveal that the DNN (coefficient of determination R²(TR) = 0.97 and root mean squared error RMSE(TR) = 0.0413 on the training set; R²(TS) = 0.90 and RMSE(TS) = 0.08 on the testing set), followed by the BiLSTM (R²(TR) = 0.91, RMSE(TR) = 0.782; R²(TS) = 0.89, RMSE(TS) = 0.0862), demonstrates the highest performance accuracy. The BiLSTM model outperforms the LSTM because it is a sequence-processing model made up of two LSTMs, one taking the input in the forward direction and the other in the backward direction, which increases the amount of information available to the network. The prediction of pile-bearing capacity is strongly influenced by ram weight (which has a considerable multicollinearity level), and the effect of this considerable multicollinearity has been determined for the model based on the recurrent neural network approach. In this study, the recurrent neural network model has the lowest performance and accuracy in predicting pile-bearing capacity.
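The forward/backward structure credited above for the BiLSTM's edge over the LSTM can be sketched in Keras; the sequence length, feature count, and unit count are placeholders, since the paper does not report them.

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 10, 6  # placeholders; not reported in the abstract

# The Bidirectional wrapper runs two LSTMs, one over the input sequence
# forward and one backward, and concatenates their outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1),  # predicted pile-bearing capacity
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.summary()
```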
Reconfigurable Intelligent Surfaces (RIS) have emerged as a promising technology for improving the reliability of massive MIMO communication networks. However, conventional RIS suffer from poor Spectral Efficiency (SE) and high energy consumption, leading to complex Hybrid Precoding (HP) designs. To address these issues, we propose a new low-complexity HP model, named Dynamic Hybrid Relay Reflecting RIS-based Hybrid Precoding (DHRR-RIS-HP). Our approach combines active and passive elements to offset the downsides of both conventional designs. We first design a DHRR-RIS and optimize the pilot and Channel State Information (CSI) estimation using an adaptive threshold method and an Adaptive Back Propagation Neural Network (ABPNN) algorithm, respectively, to reduce the Bit Error Rate (BER) and energy consumption. To optimize the data streams, we cluster them into private and public streams using the Enhanced Fuzzy C-Means (EFCM) algorithm and schedule them based on priority and emergency level. To maximize the sum rate and SE, we perform digital precoder optimization at the Base Station (BS) side using the Deep Deterministic Policy Gradient (DDPG) algorithm and analog precoder optimization at the DHRR-RIS using the Fire Hawk Optimization (FHO) algorithm. We implement our proposed work in MATLAB R2020a and compare it with existing works using several validation metrics. Our results show that the proposed work outperforms existing works in terms of SE, Weighted Sum Rate (WSR), and BER.
With the continuous development and utilization of marine resources, underwater target detection has gradually become a popular research topic in the field of underwater robot operations and target detection. However, owing to the complex underwater environment, it is difficult for detection algorithms to combine environmental semantic information with the semantic information of targets at different scales. In this paper, a cascade model with high detection accuracy, based on the UGC-YOLO network structure, is proposed. The YOLOv3 convolutional neural network is employed as the baseline structure. By fusing global semantic information between two residual stages in the parallel structure of the feature extraction network, the perception of underwater targets is improved and the detection rate of hard-to-detect underwater objects is raised. Furthermore, deformable convolution is applied to capture long-range semantic dependencies, and PPM pooling is introduced in the highest network layer to aggregate semantic information. Finally, a multi-scale weighted fusion approach is presented for learning semantic information at different scales. Experiments conducted on an underwater test dataset demonstrate that the proposed algorithm can detect aquatic targets in complex, degraded underwater images. Compared with the baseline network algorithm, the Common Objects in Context (COCO) evaluation metric is improved by 4.34%. Funding: supported by the National Natural Science Foundation of China (No. 62271199), the Natural Science Foundation of Hunan Province, China (No. 2020JJ5170), and the Scientific Research Fund of Hunan Provincial Education Department (No. 18C0299).
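The multi-scale weighted fusion step can be pictured with a small PyTorch sketch: feature maps from different scales are resampled to a common resolution and combined with learnable, normalized weights. This is one plausible reading of the abstract, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fuse same-channel feature maps from several scales with learnable
    softmax-normalized weights (an illustrative sketch only)."""
    def __init__(self, num_scales: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_scales))

    def forward(self, feats):  # feats: list of (N, C, H_i, W_i) tensors
        target = feats[0].shape[-2:]  # resample everything to the first scale
        resized = [F.interpolate(f, size=target, mode="nearest") for f in feats]
        w = torch.softmax(self.weights, dim=0)  # normalized fusion weights
        return sum(wi * fi for wi, fi in zip(w, resized))

fusion = WeightedFusion(num_scales=3)
feats = [torch.randn(1, 256, s, s) for s in (52, 26, 13)]  # YOLOv3-like scales
print(fusion(feats).shape)  # torch.Size([1, 256, 52, 52])
```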
With the recent increase in the use of logistics and courier services, it is time for research on logistics systems fused with the fourth industrial sector. Algorithm studies related to object recognition have been actively conducted in convergence with the emerging artificial intelligence field, but algorithms suitable for automatic unloading devices, which need to identify numerous unstructured cargoes, still require further development. In this study, the object recognition algorithm of an automatic loading device for cargo was selected as the subject, and a cargo object recognition algorithm applicable to the automatic loading device is proposed to improve amorphous cargo identification performance. The fuzzy convergence algorithm applies Fuzzy C-Means to existing algorithm forms that fuse YOLO (You Only Look Once) and Mask R-CNN (Regions with Convolutional Neural Networks). Experiments conducted using the fuzzy convergence algorithm showed an average of 33 FPS (frames per second) and a recognition rate of 95%. In addition, there were significant improvements in the range of actual box recognition. The results of this study can contribute to improving the performance of identifying amorphous cargoes in automatic loading devices. Funding: supported by a grant from the R&D program of the Korea Evaluation Institute of Industrial Technology (20015047).
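For context, the Fuzzy C-Means step named above alternates two standard updates, sketched here with $x_i$ a feature vector, $c_j$ a cluster center, $u_{ij}$ the membership of $x_i$ in cluster $j$, $C$ clusters, $N$ points, and fuzzifier $m>1$:

```latex
u_{ij} = \left[\sum_{k=1}^{C}
  \left(\frac{\lVert x_i-c_j\rVert}{\lVert x_i-c_k\rVert}\right)^{\!\frac{2}{m-1}}
\right]^{-1},
\qquad
c_j = \frac{\sum_{i=1}^{N} u_{ij}^{\,m}\,x_i}{\sum_{i=1}^{N} u_{ij}^{\,m}}.
```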
In this paper, we present a probabilistic numerical method for a class of forward utilities in a stochastic factor model. For this purpose, we use the representation of forward utilities via the ergodic Backward Stochastic Differential Equations (eBSDEs) introduced by Liang and Zariphopoulou in [27]. We establish a connection between the solution of the ergodic BSDE and the solution of an associated BSDE with random terminal time T, defined as the hitting time of the positive recurrent stochastic factor. The viewpoint based on BSDEs with random horizon yields a new characterization of the ergodic cost λ, which is part of the solution of the eBSDE. In particular, for a certain class of eBSDEs with quadratic generator, the Cole-Hopf transformation leads to a semi-explicit representation of the solution as well as a new expression of the ergodic cost λ, which can then be estimated with Monte Carlo methods. We also propose two new deep learning numerical schemes for eBSDEs. Finally, we present numerical results for different examples of eBSDEs and forward utilities, together with the associated investment strategies. Funding: the authors' research is part of the ANR project DREAMeS (ANR-21-CE46-0002) and benefited from the support of the "Chair Risques Emergents en Assurance" and the "Chair Impact de la Transition Climatique en Assurance" under the aegis of Fondation du Risque, a joint initiative of the Risk and Insurance Institute of Le Mans with MMA-Covea and Groupama, respectively.
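For orientation, an ergodic BSDE of the type referenced above is commonly posed for an unknown triple $(Y, Z, \lambda)$; this is a sketch of the standard form, whose notation may differ from [27], with $V$ the stochastic factor, $W$ a Brownian motion, $F$ the driver, and the constant $\lambda$ the ergodic cost:

```latex
Y_t = Y_T + \int_t^T \bigl(F(V_s, Z_s) - \lambda\bigr)\,ds
      - \int_t^T Z_s\,dW_s,
\qquad 0 \le t \le T < \infty.
```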
Background: The rate at which the anticancer drug paclitaxel is cleared from the body markedly impacts its dosage and chemotherapy effectiveness. Importantly, paclitaxel clearance varies among individuals, primarily because of genetic polymorphisms. This metabolic variability arises from a nonlinear process that is influenced by multiple single nucleotide polymorphisms (SNPs). Conventional bioinformatics methods struggle to accurately analyze this complex process, and there is currently no established efficient algorithm for investigating SNP interactions. Methods: We developed a novel machine learning approach called the GEP-CSI data mining algorithm. This algorithm, an advanced version of GEP, uses linear algebra computations to handle discrete variables. The GEP-CSI algorithm calculates a fitness function score based on paclitaxel clearance data and genetic polymorphisms in patients with non-small cell lung cancer. The data were divided into a primary set and a validation set for the analysis. Results: We identified and validated 1184 three-SNP combinations that had the highest fitness function values. Notably, SERPINA1, ATF3, and EGF were found to indirectly influence paclitaxel clearance by coordinating the activity of genes previously reported to be significant in paclitaxel clearance. Particularly intriguing was the discovery of a combination of three SNPs in the genes FLT1, EGF, and MUC16; the proteins related to these SNPs were confirmed to interact with each other in the protein–protein interaction network, which formed the basis for further exploration of their functional roles and mechanisms. Conclusion: We successfully developed an effective deep learning algorithm tailored for the nuanced mining of SNP interactions, leveraging data on paclitaxel clearance and individual genetic polymorphisms. Funding: supported by the Beijing Hope Run Special Fund of the Cancer Foundation of China (Grant No. LC2020L03) and the CAMS Innovation Fund for Medical Sciences (Grant No. 2021-I2M-1-014).