This letter deals with the frequency domain Blind Source Separation of Convolutive Mixtures (CMBSS). From the frequency representation of the "overlap and save" procedure, a Weighted General Discrete Fourier Transform (WGDFT) is derived to replace the traditional Discrete Fourier Transform (DFT). The mixing matrix on each frequency bin can be estimated more precisely from WGDFT coefficients than from DFT coefficients, which improves separation performance. Simulation results verify the validity of the WGDFT for frequency domain blind source separation of convolutive mixtures.
In this paper, a Maximum Likelihood (ML) approach, implemented by the Expectation-Maximization (EM) algorithm, is proposed for blind separation of convolutively mixed discrete sources. In order to carry out the expectation step of the EM algorithm with a lower computational load, an algorithm named the Iterative Maximum Likelihood algorithm (IML) is proposed to calculate the likelihood and recover the source signals. An important feature of the ML approach is its robust performance in noisy environments, obtained by treating the covariance matrix of the additive Gaussian noise as a parameter. Another striking feature of the ML approach is that it can separate more sources than sensors by exploiting the finite alphabet property of the sources. Simulation results show that the proposed ML approach works well for both determined and underdetermined mixtures. Furthermore, the performance of the proposed ML algorithm is close to that obtained with perfect knowledge of the channel filters.
Most existing algorithms for blind source separation have the limitation that the sources must be statistically independent. However, in many practical applications the source signals are nonnegative and mutually statistically dependent. When the observations are nonnegative linear combinations of nonnegative sources, the correlation coefficients of the observations are larger than those of the source signals. In this letter, a novel Nonnegative Matrix Factorization (NMF) algorithm with least-correlated component constraints for blind separation of convolutively mixed sources is proposed. The algorithm relaxes the source independence assumption and requires only low-complexity algebraic computations. Simulation results on blind source separation, including real face image data, indicate that the sources can be successfully recovered with the algorithm.
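As a point of reference for the factorization step, the sketch below shows plain NMF with the standard multiplicative updates for the Frobenius objective; it does not implement the paper's least-correlated component constraint, which would add a correlation penalty on the estimated components. The array shapes and the toy mixture are illustrative assumptions.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Plain NMF, V (m x n) ~ W (m x rank) @ H (rank x n), multiplicative updates."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # standard Lee-Seung updates for the Frobenius-norm objective
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy nonnegative mixture: 2 nonnegative sources observed through 4 sensors
rng = np.random.default_rng(1)
S = np.abs(rng.random((2, 500)))   # nonnegative sources
A = np.abs(rng.random((4, 2)))     # nonnegative mixing matrix
V = A @ S                          # nonnegative observations
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```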
Accurate prediction of formation pore pressure is essential for predicting fluid flow and managing hydrocarbon production in petroleum engineering. Deep learning techniques have recently been receiving more interest because of their great potential for pore pressure prediction. However, most traditional deep learning models are less effective at addressing generalization problems. To fill this technical gap, in this work we developed a new adaptive physics-informed deep learning model with high generalization capability that predicts pore pressure values directly from seismic data. Specifically, the new model, named CGP-NN, consists of a novel parametric feature extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through training, the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction. The CGP-NN model has the best generalization when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, addressing the problem of scarce labels. To validate the developed model and methodology, a case study on a complex reservoir in the Tarim Basin was performed, demonstrating high accuracy in pore pressure prediction for new wells along with strong generalization ability. The adaptive physics-informed deep learning approach presented here has potential application in predicting pore pressures coupled with multiple genesis mechanisms from seismic data.
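To make the loss design concrete, here is a minimal sketch of a physics-informed loss of the kind described above: a data-misfit term blended with a physics-residual term through the weight λ (the abstract reports the best generalization at λ = 0.5). The `physics_residual` callable is a placeholder for a constraint such as the Eaton or Bowers relation; the exact adaptive selection used by CGP-NN is not reproduced here.

```python
import torch

def physics_informed_loss(pred, target, physics_residual, lam=0.5):
    """Blend a data-misfit term with a physics-constraint term.

    pred, target     : tensors of predicted / labelled pore pressures
    physics_residual : callable returning a residual tensor that is zero
                       when `pred` satisfies the chosen physical model
    lam              : physics weight (0.5 gave the best generalization
                       according to the abstract)
    """
    data_term = torch.mean((pred - target) ** 2)
    physics_term = torch.mean(physics_residual(pred) ** 2)
    return (1.0 - lam) * data_term + lam * physics_term

# illustrative use with a dummy residual that penalizes negative pressures
pred = torch.tensor([30.0, 45.0, -1.0], requires_grad=True)
target = torch.tensor([32.0, 44.0, 10.0])
loss = physics_informed_loss(pred, target, lambda p: torch.clamp(-p, min=0.0))
loss.backward()
print(loss.item())
```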
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, the forecasting of thunderstorm gusts is mainly based on traditional subjective methods, which fail to provide high-resolution, high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM) with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables—radar reflectivity factor, lightning location, and 1-h maximum instantaneous wind speed from automatic weather stations (AWSs)—and obtain a reasonable ground truth of thunderstorm gusts. We then cast the forecasting problem as an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are used as the training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with that of other methods. The results show that TG-TransUnet yields the best prediction results at 1–6 h. The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
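A minimal sketch of how such a ground truth could be assembled from the three gridded variables is shown below; the threshold values and grid sizes are illustrative assumptions, not the ones used by the authors.

```python
import numpy as np

def thunderstorm_gust_mask(reflectivity_dbz, lightning_count, max_gust_ms,
                           dbz_thresh=35.0, gust_thresh=17.2):
    """Label a grid cell as a thunderstorm gust when convection indicators
    (radar echo or lightning) coincide with a strong AWS wind gust.
    Thresholds here are placeholders for illustration only."""
    convective = (reflectivity_dbz >= dbz_thresh) | (lightning_count > 0)
    gusty = max_gust_ms >= gust_thresh
    return (convective & gusty).astype(np.uint8)

# toy 3x3 grids
refl = np.array([[40.0, 10.0, 36.0], [20.0, 50.0, 5.0], [0.0, 0.0, 38.0]])
ltg  = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 0]])
gust = np.array([[20.0, 25.0, 10.0], [5.0, 18.0, 30.0], [3.0, 2.0, 19.0]])
print(thunderstorm_gust_mask(refl, ltg, gust))
```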
Watermarks can provide reliable and secure copyright protection for optical coherence tomography (OCT) fundus images. Effective image segmentation is helpful for promoting OCT image watermarking. However, OCT images contain a large amount of low-quality data, which seriously affects the performance of segmentation methods. Therefore, this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network (RCNN). First, a rough-set-based feature discretization module is designed to preprocess the input data. Second, a dual attention mechanism for feature channels and spatial regions is added to the CNN so that the model can adaptively select important information for fusion. Finally, a refinement module that enhances the extraction of multi-scale information is added to improve edge accuracy in segmentation. RCNN is compared with CE-Net and MultiResUNet on 83 gold-standard 3D retinal OCT data samples. The average Dice similarity coefficient (DSC) obtained by RCNN is 6% higher than that of CE-Net. The average 95 percent Hausdorff distance (95HD) and average symmetric surface distance (ASD) obtained by RCNN are 32.4% and 33.3% lower than those of MultiResUNet, respectively. We also evaluate the effect of feature discretization, analyze the initial learning rate of RCNN, and conduct ablation experiments with the four different models. The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images, providing strong support for its application in medical image watermarking.
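For reference, the Dice similarity coefficient reported above can be computed for a binary segmentation as follows (a generic sketch, not code from the paper).

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, gt))  # ~0.667
```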
Accurately predicting fluid forces acting on the surface of a structure is crucial in engineering design. However, this task becomes particularly challenging in turbulent flow, owing to the complex and irregular changes in the flow field. In this study, we propose a novel deep learning method, named mapping network-coordinated stacked gated recurrent units (MSU), for predicting pressure on a circular cylinder from velocity data. Specifically, our coordinated learning strategy is designed to extract the most critical velocity point for prediction, a process that has not been explored before. In our experiments, MSU extracts one point from a velocity field containing 121 points and uses this point to accurately predict 100 pressure points on the cylinder. This method significantly reduces the workload of data measurement in practical engineering applications. Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects. Furthermore, the comparison results show that MSU produces more precise predictions, even outperforming models that use all velocity field points. Compared with state-of-the-art methods, MSU shows an average improvement of more than 45% in various indicators such as root mean square error (RMSE). Through comprehensive physical verification, we established that MSU's prediction results closely align with pressure field data obtained in real turbulence fields. This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios. The code is available at https://github.com/zhangzm0128/MSU.
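As a rough illustration of the stacked-GRU part of such a model, the sketch below maps a time series from a single velocity probe to 100 pressure values per time step; the layer sizes are assumptions, and the mapping-network coordination described in the abstract is not reproduced.

```python
import torch
import torch.nn as nn

class StackedGRURegressor(nn.Module):
    """Stacked GRU that maps one velocity signal to 100 pressure points."""
    def __init__(self, hidden_size=64, num_layers=2, n_pressure_points=100):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, n_pressure_points)

    def forward(self, velocity):           # velocity: (batch, time, 1)
        features, _ = self.gru(velocity)   # (batch, time, hidden)
        return self.head(features)         # (batch, time, 100)

model = StackedGRURegressor()
dummy_velocity = torch.randn(8, 50, 1)     # batch of 8 sequences, 50 steps
print(model(dummy_velocity).shape)         # torch.Size([8, 50, 100])
```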
Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments. However, a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking, which limits the application of three-dimensional (3D) reservoir-scale geomechanical simulation that accounts for detailed geological heterogeneities. Here, we develop convolutional neural network (CNN) proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity, and compute upscaled geomechanical properties from the CNN proxies. The CNN proxies are trained on a large dataset of randomly generated, spatially correlated sand-shale realizations as inputs and the simulated macroscopic geomechanical responses as outputs. The trained CNN models provide upscaled shear strength (R^2 > 0.949), stress-strain behavior (R^2 > 0.925), and volumetric strain changes (R^2 > 0.958) that agree closely with the numerical simulation results while saving over two orders of magnitude of computational time. This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without performing local numerical simulations to obtain the geomechanical response. The proposed CNN proxy-based upscaling technique can (1) bridge the gap between fine-scale geocellular models that account for geological uncertainties and the computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development, and (2) improve the efficiency of numerical upscaling techniques that rely on local numerical simulations, which otherwise lead to significantly increased computational time for uncertainty quantification over numerous geological realizations.
How to use a few defect samples to perform defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Different from other few-shot models, an attention mechanism is applied to metric learning in our model to measure the distance between features, so as to attend to the correlation between features and suppress unwanted information. In addition, we combine dilated convolution and skip connections to extract more feature information for subsequent processing. We validate the attention-relation network on a mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. The network therefore classifies mobile phone screen defects effectively and outperforms competing approaches by a clear margin.
Accurately estimating the ocean subsurface salinity structure (OSSS) is crucial for understanding ocean dynamics and predicting climate variations. We present a convolutional neural network (CNN) model to estimate the OSSS in the Indian Ocean using satellite data and Argo observations. We evaluated the performance of the CNN model in terms of the vertical and spatial distribution, as well as the seasonal variation, of the OSSS estimates. Results demonstrate that the CNN model accurately estimates the most significant salinity features in the Indian Ocean using sea surface data, with no significant differences from Argo-derived OSSS. However, the estimation accuracy of the CNN model varies with depth, with the most challenging depth being approximately 70 m, corresponding to the halocline layer. The CNN model's accuracy in estimating OSSS in the Indian Ocean is further validated by comparing Argo observations and CNN estimates along two selected sections and in four selected boxes. The results show that the CNN model effectively captures the seasonal variability of salinity, demonstrating its high performance in salinity estimation using sea surface data. Our analysis reveals that sea surface salinity has the strongest correlation with OSSS in shallow layers, while the sea surface height anomaly plays a more significant role in deeper layers. These preliminary results provide valuable insights into the feasibility of estimating OSSS from satellite observations and have implications for studying upper ocean dynamics using machine learning techniques.
The motivation for this study is that the quality of deep fakes is constantly improving, which creates the need to develop new methods for their detection. The proposed customized Convolutional Neural Network (CNN) method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The customized CNN method is a data-augmentation-based CNN model that generates 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered for the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. In all, 318 videos were used, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deep fake face photos.
Data-driven approaches such as neural networks are increasingly used for deep excavations because of the growing amount of monitoring data available in practical projects. However, most neural network models use only the data from a single monitoring point and neglect the spatial relationships between multiple monitoring points. In addition, most models lack the flexibility to provide predictions for multiple days after the monitoring activity. This study proposes a sequence-to-sequence (seq2seq) two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D) for predicting the spatiotemporal wall deflections induced by deep excavations. The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from the data by combining 2D convolutional layers and long short-term memory (LSTM) layers. The S2SCL2D model achieves long-term prediction of wall deflections through a recursive seq2seq structure. The excavation depth, which has a significant impact on wall deflections, is also incorporated using a feature fusion method. An excavation project in Hangzhou, China, is used to illustrate the proposed model. The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness compared with the LSTM and S2SCL1D (one-dimensional) models. The prediction model also demonstrates strong generalizability when applied to an adjacent excavation. Based on the long-term prediction results, practitioners can plan and allocate resources in advance to address potential engineering issues.
Artificial intelligence (AI) technology has become integral to medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining both machine learning (ML) and deep learning (DL) approaches, to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparative purposes, while inclusion of the WISDM database, renowned for its challenging features, supports the framework's resilience and adaptability. The ML-based models employ the adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) methodologies for data training. In contrast, the DL-based model utilizes a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance (see the sketch below). With a meticulous feature-selection process, the best accuracies of the ANFIS, SVM, RF, and 1dCNN models reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis using the MHEALTH dataset showcases the 1dCNN model's remarkable perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, in line with prior research findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, demonstrating its suitability for integration into healthcare services through AI-driven applications.
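The recursive feature elimination step can be reproduced with scikit-learn's RFE driven by an ML estimator, as in the hedged sketch below; the choice of estimator, the number of retained features, and the synthetic stand-in data are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# synthetic stand-in for a 561-feature HAR matrix
X, y = make_classification(n_samples=300, n_features=561, n_informative=30,
                           random_state=0)

# RFE drops low-participation features in steps, keeping the most useful ones
selector = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=50, step=25)
selector.fit(X, y)

selected_idx = np.where(selector.support_)[0]
print(len(selected_idx), "features kept, e.g.", selected_idx[:10])
```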
Handwritten poems in scenic areas are difficult for visitors to recognize because of their complex background textures and varied character sizes and styles. To address this problem, we first analyze the recognition scenario for handwritten poems in scenic areas and design a detection of poetry in scenic areas network (DPSA-Net), which extracts features of the handwritten poems at different scales and exploits the linkage dependence between handwritten characters to detect them. Second, we design a convolution recurrent aggregation network (CRA-Net) to recognize the detected poems: it combines convolutional neural networks (CNN) with a bidirectional long short-term memory network to extract sequence features from handwritten poem images and converts the features to text through aggregation cross-entropy (ACE). Finally, a scenic-area knowledge graph is used to correct the output of CRA-Net, further improving the recognition accuracy of handwritten poems in scenic areas. Experimental results show that, after the CRA-Net recognition results are corrected with this technique, the recognition accuracy reaches 79.04%; the technique also shows good robustness to interference and promising application prospects.
Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF combined with a Decision Tree (DT) and K-Nearest Neighbor (KNN) classifier for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% on the DR detection and evaluation tests, respectively. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
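The majority-voting stage can be expressed with scikit-learn's VotingClassifier combining an RBF SVM, a decision tree, and a k-NN classifier, as sketched below on synthetic features; this is a generic illustration, not the paper's tuned pipeline (the CNN-SVD feature extractor is omitted).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for features extracted from fundus images
X, y = make_classification(n_samples=500, n_features=64, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# hard majority voting over the three base classifiers
ensemble = VotingClassifier(
    estimators=[("svm_rbf", SVC(kernel="rbf")),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard")
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```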
This study addresses the limitations of Transformer models in image feature extraction, particularly their lack of inductive bias for visual structures. Compared with Convolutional Neural Networks (CNNs), Transformers are more sensitive to the hyperparameters of optimizers, which leads to a lack of stability and slow convergence. To tackle these challenges, we propose the Convolution-based Efficient Transformer Image Feature Extraction Network (CEFormer) as an enhancement of the Transformer architecture. Our model incorporates E-Attention, depthwise separable convolution, and dilated convolution to introduce crucial inductive biases, such as translation invariance, locality, and scale invariance, into the Transformer framework. Additionally, we implement a lightweight convolution module to process the input images, resulting in faster convergence and improved stability. The outcome is an efficient convolution-combined Transformer image feature extraction network. Experimental results on the ImageNet-1k Top-1 benchmark demonstrate that the proposed network achieves better accuracy while maintaining high computational speed. It achieves up to 85.0% accuracy across various model sizes on image classification, outperforming various baseline models. When integrated into the Mask Region-based Convolutional Neural Network (Mask R-CNN) framework as a backbone network, CEFormer outperforms other models and achieves the highest mean Average Precision (mAP) scores. This research presents a significant advancement in Transformer-based image feature extraction, balancing performance and computational efficiency.
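The depthwise separable and dilated convolutions mentioned above can be written in a few lines of PyTorch; the sketch only illustrates these two building blocks, not the full CEFormer or its E-Attention module, and the channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2   # keep spatial size
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 56, 56)
block = DepthwiseSeparableConv(32, 64, kernel_size=3, dilation=2)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])
```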
The open-circuit fault is one of the most common faults of the automatic ramming drive system (ARDS), and it can be categorized into open-phase faults of the Permanent Magnet Synchronous Motor (PMSM) and open-circuit faults of the Voltage Source Inverter (VSI). The stator current serves as a common indicator for detecting open-circuit faults. Because the stator current changes identically for open-phase faults in the PMSM and for failures of both switches within the same leg of the VSI, this paper utilizes the zero-sequence voltage component as an additional diagnostic criterion to differentiate them. Considering the variable operating conditions and substantial noise of the ARDS, a novel Multi-resolution Network (MrNet) is proposed, which can extract multi-resolution perceptual information and enhance robustness to noise. Meanwhile, a feature weighting layer is introduced to allocate higher weights to characteristics situated near the feature frequency. Both simulation and experiment results validate that the proposed fault diagnosis method can diagnose 25 types of open-circuit faults and achieve more than 98.28% diagnostic accuracy. In addition, the experimental results demonstrate that MrNet can accurately diagnose the fault types under the interference of noise signals (Laplace noise and Gaussian noise).
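The zero-sequence voltage component used as the extra diagnostic criterion is simply the average of the three phase voltages; the sketch below computes it for sampled waveforms (the sampling setup and 50 Hz frequency are assumptions for illustration).

```python
import numpy as np

def zero_sequence_voltage(va, vb, vc):
    """Zero-sequence component v0 = (va + vb + vc) / 3 for each sample."""
    return (va + vb + vc) / 3.0

# for balanced three-phase voltages the zero-sequence component is ~0;
# an open-circuit fault unbalances the phases and makes v0 clearly non-zero
t = np.linspace(0.0, 0.02, 400)                         # one 50 Hz period
va = np.sin(2 * np.pi * 50 * t)
vb = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
vc = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
print(np.max(np.abs(zero_sequence_voltage(va, vb, vc))))       # close to 0
print(np.max(np.abs(zero_sequence_voltage(va, vb, 0 * vc))))   # clearly non-zero
```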
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that, in dilated causal convolution, the causal convolution concentrates the receptive fields of the outputs on the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND reduces the prediction mean squared error by 7.3% and saves runtime, compared with state-of-the-art models and the vanilla TCN.
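The difference-then-compensate idea can be illustrated in a few lines: forecast on the differenced series to reduce distribution shift, then add the removed level back; this is only the generic idea, not the paper's exact DCM.

```python
import numpy as np

def to_differences(x):
    """Split a series into its starting value and first-order differences."""
    return x[0], np.diff(x)

def compensate(start_value, predicted_diffs):
    """Undo the differencing: cumulative sum of predictions plus the start value."""
    return start_value + np.cumsum(predicted_diffs)

x = np.array([10.0, 12.0, 15.0, 14.0, 18.0])
start, diffs = to_differences(x)          # start=10, diffs=[2, 3, -1, 4]
# pretend a model predicted the differences perfectly:
reconstructed = compensate(start, diffs)  # [12, 15, 14, 18]
print(np.allclose(reconstructed, x[1:]))  # True
```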
The widespread availability of digital multimedia data has led to new challenges in digital forensics. Traditional source camera identification algorithms usually rely on various traces left by the capturing process. However, these traces have become increasingly difficult to extract owing to the wide availability of various image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capability for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps of different scales and then fuses them to obtain a comprehensive feature representation. This representation is then fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer Blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
The longitudinal dependence of the behavior of ionospheric parameters has been the subject of a number of studies in which significant variations were discovered. This also applies to the prediction of the ionospheric total electron content (TEC), for which neural network methods have recently been widely used; however, results have mainly been presented for a limited set of meridians. This paper examines the longitudinal dependence of TEC forecast accuracy in the equatorial zone, using the methods that provided the best accuracy on three meridians: European (30°E), Southeastern (110°E), and American (75°W). Results for the stations considered are analyzed as a function of longitude using the Jet Propulsion Laboratory Global Ionosphere Map (JPL GIM) for 2015, for forecasts 2 h ahead and 24 h ahead. It was found that, based on the metric values, three groups of architectures can be distinguished. The first group includes long short-term memory (LSTM), gated recurrent unit (GRU), and temporal convolutional network (TCN) models as unidirectional deep learning models; the second group is based on the recurrent models from the first group supplemented with a bidirectional algorithm, which increases the TEC forecasting accuracy by a factor of 2-3; and the third group, which includes the bidirectional TCN architecture (BiTCN), provides the highest accuracy. For this architecture, according to data obtained for 9 equatorial stations, the TEC prediction accuracy is practically independent of longitude under the following metrics (Mean Absolute Error, MAE; Root Mean Square Error, RMSE; Mean Absolute Percentage Error, MAPE): MAE (2 h) is approximately 0.2 TECU; MAE (24 h) is approximately 0.4 TECU; RMSE (2 h) is less than 0.5 TECU except at the Niue station (where RMSE (2 h) is approximately 1 TECU); RMSE (24 h) is in the range of 1.0-1.7 TECU; MAPE (2 h) < 1% except at the Darwin station; and MAPE (24 h) < 2%. This result was confirmed by data from 5 additional stations that form latitudinal chains in the equatorial part of the three meridians. The close correspondence between the observed and predicted TEC values is illustrated for several stations under the disturbed conditions of December 19-22, 2015, which included the strongest magnetic storm of the second half of the year (min Dst = -155 nT).
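For clarity, the three metrics quoted above are computed as follows (a generic sketch; the sample TEC values, in TECU, are illustrative).

```python
import numpy as np

def mae(obs, pred):
    return np.mean(np.abs(obs - pred))

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def mape(obs, pred):
    # expressed in percent; assumes the observed TEC is non-zero
    return 100.0 * np.mean(np.abs((obs - pred) / obs))

obs  = np.array([12.0, 15.5, 20.1, 18.4])   # observed TEC, TECU
pred = np.array([11.8, 15.9, 19.6, 18.9])   # forecast TEC, TECU
print(mae(obs, pred), rmse(obs, pred), mape(obs, pred))
```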
基金the grant from the Ph.D. Programs Foun-dation of Ministry of Education of China (No. 20060280003)the Shanghai Leading Academic Dis-cipline Project (Project No.T0102).
文摘This letter deals with the frequency domain Blind Source Separation of Convolutive Mixtures (CMBSS). From the frequency representation of the "overlap and save", a Weighted General Discrete Fourier Transform (WGDFT) is derived to replace the traditional Discrete Fourier Transform (DFT). The mixing matrix on each frequency bin could be estimated more precisely from WGDFT coefficients than from DFT coefficients, which improves separation performance. Simulation results verify the validity of WGDFT for frequency domain blind source separation of convolutive mixtures.
基金supportedin part by the National Natural Science Foundation of China under Grant No. 61001106the National Key Basic Research Program of China(973 Program) under Grant No. 2009CB320400
文摘In this paper,a Maximum Likelihood(ML) approach,implemented by Expectation-Maximization(EM) algorithm,is proposed to blind separation of convolutively mixed discrete sources.In order to carry out the expectation procedure of the EM algorithm with a less computational load,the algorithm named Iterative Maximum Likelihood algorithm(IML) is proposed to calculate the likelihood and recover the source signals.An important feature of the ML approach is that it has robust performance in noise environments by treating the covariance matrix of the additive Gaussian noise as a parameter.Another striking feature of the ML approach is that it is possible to separate more sources than sensors by exploiting the finite alphabet property of the sources.Simulation results show that the proposed ML approach works well either in determined mixtures or underdetermined mixtures.Furthermore,the performance of the proposed ML algorithm is close to the performance with perfect knowledge of the channel filters.
基金Supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China (No.20060280003)Shanghai Leading Academic Dis-cipline Project (T0102)
文摘Most of the existing algorithms for blind sources separation have a limitation that sources are statistically independent. However, in many practical applications, the source signals are non- negative and mutual statistically dependent signals. When the observations are nonnegative linear combinations of nonnegative sources, the correlation coefficients of the observations are larger than these of source signals. In this letter, a novel Nonnegative Matrix Factorization (NMF) algorithm with least correlated component constraints to blind separation of convolutive mixed sources is proposed. The algorithm relaxes the source independence assumption and has low-complexity algebraic com- putations. Simulation results on blind source separation including real face image data indicate that the sources can be successfully recovered with the algorithm.
基金funded by the National Natural Science Foundation of China(General Program:No.52074314,No.U19B6003-05)National Key Research and Development Program of China(2019YFA0708303-05)。
文摘Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering.Recent deep learning technique has been receiving more interest due to the great potential to deal with pore pressure prediction.However,most of the traditional deep learning models are less efficient to address generalization problems.To fill this technical gap,in this work,we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data.Specifically,the new model,named CGP-NN,consists of a novel parametric features extraction approach(1DCPP),a stacked multilayer gated recurrent model(multilayer GRU),and an adaptive physics-informed loss function.Through machine training,the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction.The CGP-NN model has the best generalization when the physicsrelated metricλ=0.5.A hybrid approach combining Eaton and Bowers methods is also proposed to build machine-learnable labels for solving the problem of few labels.To validate the developed model and methodology,a case study on a complex reservoir in Tarim Basin was further performed to demonstrate the high accuracy on the pore pressure prediction of new wells along with the strong generalization ability.The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures coupled with multiple genesis mechanisms using seismic data.
基金supported in part by the Beijing Natural Science Foundation(Grant No.8222051)the National Key R&D Program of China(Grant No.2022YFC3004103)+2 种基金the National Natural Foundation of China(Grant Nos.42275003 and 42275012)the China Meteorological Administration Key Innovation Team(Grant Nos.CMA2022ZD04 and CMA2022ZD07)the Beijing Science and Technology Program(Grant No.Z221100005222012).
文摘Thunderstorm gusts are a common form of severe convective weather in the warm season in North China,and it is of great importance to correctly forecast them.At present,the forecasting of thunderstorm gusts is mainly based on traditional subjective methods,which fails to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources.In this paper,we propose a deep learning method called Thunderstorm Gusts TransU-net(TGTransUnet)to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology(IUM)with a lead time of 1 to 6 h.To determine the specific range of thunderstorm gusts,we combine three meteorological variables:radar reflectivity factor,lightning location,and 1-h maximum instantaneous wind speed from automatic weather stations(AWSs),and obtain a reasonable ground truth of thunderstorm gusts.Then,we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture,which is based on convolutional neural networks and a transformer.The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are then used as training,validation,and testing datasets.Finally,the performance of TG-TransUnet is compared with other methods.The results show that TG-TransUnet has the best prediction results at 1–6 h.The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
基金the China Postdoctoral Science Foundation under Grant 2021M701838the Natural Science Foundation of Hainan Province of China under Grants 621MS042 and 622MS067the Hainan Medical University Teaching Achievement Award Cultivation under Grant HYjcpx202209.
文摘Watermarks can provide reliable and secure copyright protection for optical coherence tomography(OCT)fundus images.The effective image segmentation is helpful for promoting OCT image watermarking.However,OCT images have a large amount of low-quality data,which seriously affects the performance of segmentationmethods.Therefore,this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network(RCNN).First,the rough-set-based feature discretization module is designed to preprocess the input data.Second,a dual attention mechanism for feature channels and spatial regions in the CNN is added to enable the model to adaptively select important information for fusion.Finally,the refinement module for enhancing the extraction power of multi-scale information is added to improve the edge accuracy in segmentation.RCNN is compared with CE-Net and MultiResUNet on 83 gold standard 3D retinal OCT data samples.The average dice similarly coefficient(DSC)obtained by RCNN is 6%higher than that of CE-Net.The average 95 percent Hausdorff distance(95HD)and average symmetric surface distance(ASD)obtained by RCNN are 32.4%and 33.3%lower than those of MultiResUNet,respectively.We also evaluate the effect of feature discretization,as well as analyze the initial learning rate of RCNN and conduct ablation experiments with the four different models.The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images,providing strong support for its application in medical image watermarking.
基金supported by the Japan Society for the Promotion of Science(JSPS)KAKENHI(JP22H03643)Japan Science and Technology Agency(JST)Support for Pioneering Research Initiated by the Next Generation(SPRING)(JPMJSP2145)+2 种基金JST Through the Establishment of University Fellowships Towards the Creation of Science Technology Innovation(JPMJFS2115)the National Natural Science Foundation of China(52078382)the State Key Laboratory of Disaster Reduction in Civil Engineering(CE19-A-01)。
文摘Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes in the flow field.In this study,we propose a novel deep learning method,named mapping net-work-coordinated stacked gated recurrent units(MSU),for pre-dicting pressure on a circular cylinder from velocity data.Specifi-cally,our coordinated learning strategy is designed to extract the most critical velocity point for prediction,a process that has not been explored before.In our experiments,MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder.This method significantly reduces the workload of data measure-ment in practical engineering applications.Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects.Furthermore,the comparison results show that MSU predicts more precise results,even outperforming models that use all velocity field points.Compared with state-of-the-art methods,MSU has an average improvement of more than 45%in various indicators such as root mean square error(RMSE).Through comprehensive and authoritative physical verification,we estab-lished that MSU’s prediction results closely align with pressure field data obtained in real turbulence fields.This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios.The code is available at https://github.com/zhangzm0128/MSU.
基金financial support provided by the Future Energy System at University of Alberta and NSERC Discovery Grant RGPIN-2023-04084。
文摘Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the applications of three-dimensional(3D)reservoir-scale geomechanical simulation considering detailed geological heterogeneities.Here,we develop convolutional neural network(CNN)proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity,and compute upscaled geomechanical properties from CNN proxies.The CNN proxies are trained using a large dataset of randomly generated spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs.The trained CNN models can provide the upscaled shear strength(R^(2)>0.949),stress-strain behavior(R^(2)>0.925),and volumetric strain changes(R^(2)>0.958)that highly agree with the numerical simulation results while saving over two orders of magnitude of computational time.This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response.The proposed CNN proxybased upscaling technique has the ability to(1)bridge the gap between the fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development,and(2)improve the efficiency of numerical upscaling techniques that rely on local numerical simulations,leading to significantly increased computational time for uncertainty quantification using numerous geological realizations.
文摘How to use a few defect samples to complete the defect classification is a key challenge in the production of mobile phone screens.An attention-relation network for the mobile phone screen defect classification is proposed in this paper.The architecture of the attention-relation network contains two modules:a feature extract module and a feature metric module.Different from other few-shot models,an attention mechanism is applied to metric learning in our model to measure the distance between features,so as to pay attention to the correlation between features and suppress unwanted information.Besides,we combine dilated convolution and skip connection to extract more feature information for follow-up processing.We validate attention-relation network on the mobile phone screen defect dataset.The experimental results show that the classification accuracy of the attentionrelation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting.It achieves the excellent effect of classification for mobile phone screen defects and outperforms with dominant advantages.
基金Supported by the National Key Research and Development Program of China(No.2022YFF0801400)the National Natural Science Foundation of China(No.42176010)the Natural Science Foundation of Shandong Province,China(No.ZR2021MD022)。
文摘Accurately estimating the ocean subsurface salinity structure(OSSS)is crucial for understanding ocean dynamics and predicting climate variations.We present a convolutional neural network(CNN)model to estimate the OSSS in the Indian Ocean using satellite data and Argo observations.We evaluated the performance of the CNN model in terms of its vertical and spatial distribution,as well as seasonal variation of OSSS estimation.Results demonstrate that the CNN model accurately estimates the most significant salinity features in the Indian Ocean using sea surface data with no significant differences from Argo-derived OSSS.However,the estimation accuracy of the CNN model varies with depth,with the most challenging depth being approximately 70 m,corresponding to the halocline layer.Validations of the CNN model’s accuracy in estimating OSSS in the Indian Ocean are also conducted by comparing Argo observations and CNN model estimations along two selected sections and four selected boxes.The results show that the CNN model effectively captures the seasonal variability of salinity,demonstrating its high performance in salinity estimation using sea surface data.Our analysis reveals that sea surface salinity has the strongest correlation with OSSS in shallow layers,while sea surface height anomaly plays a more significant role in deeper layers.These preliminary results provide valuable insights into the feasibility of estimating OSSS using satellite observations and have implications for studying upper ocean dynamics using machine learning techniques.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
基金supported by the National Natural Science Foundation of China(Grant No.42307218)the Foundation of Key Laboratory of Soft Soils and Geoenvironmental Engineering(Zhejiang University),Ministry of Education(Grant No.2022P08)the Natural Science Foundation of Zhejiang Province(Grant No.LTZ21E080001).
文摘Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of available monitoring data in practical projects.However,most neural network models only use the data from a single monitoring point and neglect the spatial relationships between multiple monitoring points.Besides,most models lack flexibility in providing predictions for multiple days after monitoring activity.This study proposes a sequence-to-sequence(seq2seq)two-dimensional(2D)convolutional long short-term memory neural network(S2SCL2D)for predicting the spatiotemporal wall deflections induced by deep excavations.The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from data by combining the 2D convolutional layers and long short-term memory(LSTM)layers.The S2SCL2D model achieves a long-term prediction of wall deflections through a recursive seq2seq structure.The excavation depth,which has a significant impact on wall deflections,is also considered using a feature fusion method.An excavation project in Hangzhou,China,is used to illustrate the proposed model.The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness than that of the LSTM and S2SCL1D(one-dimensional)models.The prediction model demonstrates a strong generalizability when applied to an adjacent excavation.Based on the long-term prediction results,practitioners can plan and allocate resources in advance to address the potential engineering issues.
基金funded by the National Science and Technology Council,Taiwan(Grant No.NSTC 112-2121-M-039-001)by China Medical University(Grant No.CMU112-MF-79).
文摘Artificial intelligence(AI)technology has become integral in the realm of medicine and healthcare,particularly in human activity recognition(HAR)applications such as fitness and rehabilitation tracking.This study introduces a robust coupling analysis framework that integrates four AI-enabled models,combining both machine learning(ML)and deep learning(DL)approaches to evaluate their effectiveness in HAR.The analytical dataset comprises 561 features sourced from the UCI-HAR database,forming the foundation for training the models.Additionally,the MHEALTH database is employed to replicate the modeling process for comparative purposes,while inclusion of the WISDM database,renowned for its challenging features,supports the framework’s resilience and adaptability.The ML-based models employ the methodologies including adaptive neuro-fuzzy inference system(ANFIS),support vector machine(SVM),and random forest(RF),for data training.In contrast,a DL-based model utilizes one-dimensional convolution neural network(1dCNN)to automate feature extraction.Furthermore,the recursive feature elimination(RFE)algorithm,which drives an ML-based estimator to eliminate low-participation features,helps identify the optimal features for enhancing model performance.The best accuracies of the ANFIS,SVM,RF,and 1dCNN models with meticulous featuring process achieve around 90%,96%,91%,and 93%,respectively.Comparative analysis using the MHEALTH dataset showcases the 1dCNN model’s remarkable perfect accuracy(100%),while the RF,SVM,and ANFIS models equipped with selected features achieve accuracies of 99.8%,99.7%,and 96.5%,respectively.Finally,when applied to the WISDM dataset,the DL-based and ML-based models attain accuracies of 91.4%and 87.3%,respectively,aligning with prior research findings.In conclusion,the proposed framework yields HAR models with commendable performance metrics,exhibiting its suitability for integration into the healthcare services system through AI-driven applications.
基金This research was funded by the National Natural Science Foundation of China(Nos.71762010,62262019,62162025,61966013,12162012)the Hainan Provincial Natural Science Foundation of China(Nos.823RC488,623RC481,620RC603,621QN241,620RC602,121RC536)+1 种基金the Haikou Science and Technology Plan Project of China(No.2022-016)the Project supported by the Education Department of Hainan Province,No.Hnky2021-23.
文摘Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
基金Support by Sichuan Science and Technology Program(2021YFQ0003,2023YFSY 0026,2023YFH0004).
文摘This study addresses the limitations of Transformer models in image feature extraction,particularly their lack of inductive bias for visual structures.Compared to Convolutional Neural Networks(CNNs),the Transformers are more sensitive to different hyperparameters of optimizers,which leads to a lack of stability and slow convergence.To tackle these challenges,we propose the Convolution-based Efficient Transformer Image Feature Extraction Network(CEFormer)as an enhancement of the Transformer architecture.Our model incorporates E-Attention,depthwise separable convolution,and dilated convolution to introduce crucial inductive biases,such as translation invariance,locality,and scale invariance,into the Transformer framework.Additionally,we implement a lightweight convolution module to process the input images,resulting in faster convergence and improved stability.This results in an efficient convolution combined Transformer image feature extraction network.Experimental results on the ImageNet1k Top-1 dataset demonstrate that the proposed network achieves better accuracy while maintaining high computational speed.It achieves up to 85.0%accuracy across various model sizes on image classification,outperforming various baseline models.When integrated into the Mask Region-ConvolutionalNeuralNetwork(R-CNN)framework as a backbone network,CEFormer outperforms other models and achieves the highest mean Average Precision(mAP)scores.This research presents a significant advancement in Transformer-based image feature extraction,balancing performance and computational efficiency.
基金supported by the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20210347)。
文摘The open-circuit fault is one of the most common faults of the automatic ramming drive system(ARDS),and it can be categorized into the open-phase faults of Permanent Magnet Synchronous Motor(PMSM)and the open-circuit faults of Voltage Source Inverter(VSI). The stator current serves as a common indicator for detecting open-circuit faults. Due to the identical changes of the stator current between the open-phase faults in the PMSM and failures of double switches within the same leg of the VSI, this paper utilizes the zero-sequence voltage component as an additional diagnostic criterion to differentiate them.Considering the variable conditions and substantial noise of the ARDS, a novel Multi-resolution Network(Mr Net) is proposed, which can extract multi-resolution perceptual information and enhance robustness to the noise. Meanwhile, a feature weighted layer is introduced to allocate higher weights to characteristics situated near the feature frequency. Both simulation and experiment results validate that the proposed fault diagnosis method can diagnose 25 types of open-circuit faults and achieve more than98.28% diagnostic accuracy. In addition, the experiment results also demonstrate that Mr Net has the capability of diagnosing the fault types accurately under the interference of noise signals(Laplace noise and Gaussian noise).
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFB2101300), the National Natural Science Foundation of China (Grant No. 61871186), and the Dean's Fund of the Engineering Research Center of Software/Hardware Co-Design Technology and Application, Ministry of Education (East China Normal University).
Abstract: Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution concentrates the receptive fields of the outputs in the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND reduces the prediction mean squared error by 7.3% and saves runtime compared with state-of-the-art models and the vanilla TCN.
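A minimal sketch of the two building blocks mentioned above, under stated assumptions: a dilated *causal* convolution implemented with left-only padding, and a first-order difference paired with a simple cumulative-sum compensation that restores the level removed by differencing. This is illustrative, not the TSCND implementation:

```python
# Sketch only: causal dilated convolution + difference/compensation round trip.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad only on the left (causal)
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (B, C, T)
        return self.conv(F.pad(x, (self.pad, 0)))

def difference(x):
    """First-order difference along time; keep the first value for compensation."""
    return x[..., 1:] - x[..., :-1], x[..., :1]

def compensate(diff, first):
    """Undo the difference by cumulative summation, restoring the original level."""
    return torch.cat([first, first + diff.cumsum(dim=-1)], dim=-1)

x = torch.randn(2, 1, 32)
d, first = difference(x)
assert torch.allclose(compensate(d, first), x, atol=1e-6)  # lossless round trip
print(CausalDilatedConv1d(channels=1)(x).shape)            # torch.Size([2, 1, 32])
```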
Funding: This work was funded by the National Natural Science Foundation of China (Grant No. 62172132), the Public Welfare Technology Research Project of Zhejiang Province (Grant No. LGF21F020014), and the Opening Project of the Key Laboratory of Public Security Information Application Based on Big-Data Architecture, Ministry of Public Security, Zhejiang Police College (Grant No. 2021DSJSYS002).
Abstract: The widespread availability of digital multimedia data has led to a new challenge in digital forensics. Traditional source camera identification algorithms usually rely on various traces left by the capturing process. However, these traces have become increasingly difficult to extract due to the wide availability of image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capabilities for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps of different scales and then fuses them to obtain a comprehensive feature representation. This representation is then fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer Blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
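A minimal sketch of multi-scale feature fusion in general terms: feature maps from different backbone stages are projected to a common width, pooled to a common spatial size, and concatenated before classification. The Swin-T backbone, Transformer Blocks, and GCN fusion modules of the paper are not reproduced, and the stage shapes and device count below are assumptions:

```python
# Sketch only: fuse multi-stage feature maps into one multi-scale representation
# for a camera-fingerprint classifier.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, in_channels=(96, 192, 384), dim=128, n_devices=10):
        super().__init__()
        self.projections = nn.ModuleList(nn.Conv2d(c, dim, kernel_size=1) for c in in_channels)
        self.pool = nn.AdaptiveAvgPool2d(7)  # common spatial resolution
        self.classifier = nn.Linear(dim * len(in_channels) * 7 * 7, n_devices)

    def forward(self, stage_features):
        fused = [self.pool(p(f)) for p, f in zip(self.projections, stage_features)]
        fused = torch.cat(fused, dim=1).flatten(1)  # multi-scale feature vector
        return self.classifier(fused)

# Stand-ins for three backbone stages at decreasing spatial resolution.
stages = [torch.randn(2, 96, 56, 56), torch.randn(2, 192, 28, 28), torch.randn(2, 384, 14, 14)]
print(MultiScaleFusion()(stages).shape)  # torch.Size([2, 10])
```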
Funding: Financially supported by the Ministry of Science and Higher Education of the Russian Federation (State contract GZ0110/23-10-IF).
Abstract: The longitudinal dependence of the behavior of ionospheric parameters has been the subject of a number of works in which significant variations were discovered. This also applies to the prediction of the ionospheric total electron content (TEC), for which neural network methods have recently been widely used. However, results have mainly been presented for a limited set of meridians. This paper examines the longitudinal dependence of TEC forecast accuracy in the equatorial zone, using the methods that provided the best accuracy on three meridians: European (30°E), Southeastern (110°E), and American (75°W). Results for the stations considered are analyzed as a function of longitude using the Jet Propulsion Laboratory Global Ionosphere Map (JPL GIM) for 2015, for forecasts 2 h ahead and 24 h ahead. It was found that, based on the metric values, three groups of architectures can be distinguished. The first group includes long short-term memory (LSTM), gated recurrent unit (GRU), and temporal convolutional network (TCN) models as unidirectional deep learning models; the second group is based on the recurrent models of the first group supplemented with a bidirectional algorithm, which increases TEC forecasting accuracy by 2-3 times; the third group, which includes the bidirectional TCN architecture (BiTCN), provides the highest accuracy. For this architecture, according to data obtained for 9 equatorial stations, the TEC prediction accuracy is practically independent of longitude under the following metrics (Mean Absolute Error MAE, Root Mean Square Error RMSE, Mean Absolute Percentage Error MAPE): MAE (2 h) is approximately 0.2 TECU; MAE (24 h) is approximately 0.4 TECU; RMSE (2 h) is less than 0.5 TECU except at the Niue station (approximately 1 TECU); RMSE (24 h) is in the range of 1.0-1.7 TECU; MAPE (2 h) < 1% except at the Darwin station; MAPE (24 h) < 2%. This result was confirmed by data from an additional 5 stations that formed latitudinal chains in the equatorial parts of the three meridians. The full correspondence between the observed and predicted TEC values is illustrated using several stations for the disturbed conditions of December 19-22, 2015, which included the strongest magnetic storm of the second half of the year (minimum Dst = -155 nT).
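For reference, a minimal sketch of the three error metrics reported above (MAE, RMSE, MAPE), assuming observed and predicted TEC series in TECU; the synthetic series below is illustrative and the BiTCN model itself is not reproduced:

```python
# Sketch only: MAE, RMSE and MAPE for a TEC forecast.
import numpy as np

def forecast_metrics(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    error = predicted - observed
    mae = np.mean(np.abs(error))                               # TECU
    rmse = np.sqrt(np.mean(error ** 2))                        # TECU
    mape = 100.0 * np.mean(np.abs(error) / np.abs(observed))   # percent
    return mae, rmse, mape

rng = np.random.default_rng(1)
observed = 20.0 + 5.0 * np.sin(np.linspace(0, 4 * np.pi, 96))  # synthetic daily TEC curve
predicted = observed + rng.normal(scale=0.3, size=observed.size)
print("MAE=%.2f TECU, RMSE=%.2f TECU, MAPE=%.2f%%" % forecast_metrics(observed, predicted))
```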