In regression, despite being both aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, be it under random or non-random designs. Specializing these MSPE expressions for each of them, we are able to derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full column rank design matrix; Ordinary and Generalized Ridge regression, the latter embedding smoothing splines fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly get a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
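For OLS with p coefficients (so the hat matrix has trace p), the two criteria reduce to the classical formulas FPE = (RSS/n)·(n+p)/(n−p) and GCV = (RSS/n)/(1 − p/n)². A minimal sketch for simple regression, with illustrative data (not from the paper):

```python
def ols_fit(x, y):
    """Simple-regression OLS: returns intercept, slope, and residual sum of squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss

def fpe(rss, n, p):
    """Akaike's Final Prediction Error for an OLS fit with p coefficients."""
    return (rss / n) * (n + p) / (n - p)

def gcv(rss, n, p):
    """Generalized Cross Validation score; for OLS, tr(H) = p."""
    return (rss / n) / (1.0 - p / n) ** 2

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b, rss = ols_fit(x, y)
n, p = len(x), 2
fpe_val, gcv_val = fpe(rss, n, p), gcv(rss, n, p)
```

Since (n+p)(n−p) = n² − p² < n², the GCV estimate is always slightly larger than the FPE estimate for the same fit, which is consistent with the two criteria agreeing to first order.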
Industrial data often contain anomalies caused by technical failures and human factors. Existing constraint-based repair methods produce repair errors when constraint thresholds are set too loosely or too strictly, and statistics-based methods, owing to their smoothing repair mechanism, repair outliers at distant time steps with low accuracy. To address these problems, a time-series data repair method based on a reward-driven minimal-iteration repair scheme and an improved WGAN hybrid model is proposed. First, in the preprocessing stage, anomalous data are retained and annotated with information labels, so as to fully mine the feature constraints between outliers and true values. Second, a nearest-neighbor parameter clipping rule is proposed in the noise module to correct the noise vector generated by the minimal-iteration repair formula; the corrected vector is passed to the generator of the distribution-simulation module. At the same time, a dynamic temporal attention network layer is designed to extract time-series feature weights; it is combined in series with gated recurrent units to capture feature dependencies across different step lengths, and the principle of recursive multi-step prediction is introduced to jointly improve the model's expressive power. In the discriminator, an Abnormal-and-Truth reward mechanism and a Weighted Mean Square Error loss function are designed to jointly back-optimize the detail and quality of the data repaired by the generator. Finally, experimental results on public and real-world datasets show that the repair accuracy and model stability of the proposed method are significantly better than those of existing methods.
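The paper does not give its exact weighting scheme, but the idea of a weighted MSE loss that penalizes repair errors more heavily at annotated anomalous positions can be sketched as follows (the weights here are illustrative assumptions):

```python
def weighted_mse(y_true, y_pred, weights):
    """Weighted mean squared error: positions annotated as anomalous can be
    given larger weights so repair errors there are penalized more."""
    assert len(y_true) == len(y_pred) == len(weights)
    num = sum(w * (t - p) ** 2 for t, p, w in zip(y_true, y_pred, weights))
    return num / sum(weights)

# Hypothetical example: positions 1 and 3 were flagged as anomalous,
# so they receive weight 2.0 instead of 1.0.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.0, 2.5, 3.0, 5.0]
weights = [1.0, 2.0, 1.0, 2.0]
loss = weighted_mse(y_true, y_pred, weights)
```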
The purpose of this research work is to investigate the numerical solutions of the fractional dengue transmission model (FDTM) in the presence of Wolbachia using the stochastic-based Levenberg-Marquardt neural network (LM-NN) technique. The FDTM consists of 12 compartments. The human population is divided into four compartments: susceptible humans (S_(h)), exposed humans (E_(h)), infectious humans (I_(h)), and recovered humans (R_(h)). The Wolbachia-infected and Wolbachia-uninfected mosquito populations are also each divided into four compartments: aquatic (eggs, larvae, pupae), susceptible, exposed, and infectious. We investigated three different cases of vertical transmission probability (η), namely when Wolbachia-free mosquitoes persist only (η = 0.6), when both types of mosquitoes persist (η = 0.8), and when Wolbachia-carrying mosquitoes persist only (η = 1). The objective of this study is to investigate the effectiveness of Wolbachia in reducing dengue and to present the numerical results obtained with the stochastic LM-NN approach, using 10 hidden layers of neurons, for three different cases of the fractional order derivatives (α = 0.4, 0.6, 0.8). The LM-NN approach includes a training, validation, and testing procedure to minimize the mean square error (MSE) values against a reference dataset obtained by solving the model with the Adams-Bashforth-Moulton (ABM) method; the data are split 80% for training, 10% for validation, and 10% for testing. A comprehensive investigation is presented to observe the competence, precision, capacity, and efficiency of the suggested LM-NN approach through the MSE, state transition findings, and regression analysis. The effectiveness of the LM-NN approach for solving the FDTM is demonstrated by the overlap of its findings with trustworthy measures, which achieves a precision of up to 10^(-4).
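The reference dataset above is generated with the Adams-Bashforth-Moulton predictor-corrector method. A minimal two-step ABM integrator is sketched below on the toy equation y' = −y (a stand-in: integer-order and scalar, not the paper's 12-compartment fractional model):

```python
import math

def abm2(f, y0, t0, t1, h):
    """Two-step Adams-Bashforth predictor with Adams-Moulton (trapezoidal)
    corrector; the first step is bootstrapped with Heun's method."""
    t, y = t0, y0
    # Heun bootstrap for the first step
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    f_prev = k1
    y = y + h * (k1 + k2) / 2.0
    t += h
    while t < t1 - 1e-12:
        f_curr = f(t, y)
        y_pred = y + h * (3.0 * f_curr - f_prev) / 2.0      # AB2 predictor
        y = y + h * (f(t + h, y_pred) + f_curr) / 2.0       # AM corrector
        f_prev = f_curr
        t += h
    return y

# y' = -y, y(0) = 1; exact solution y(1) = e^{-1}
y_end = abm2(lambda t, y: -y, 1.0, 0.0, 1.0, 0.01)
```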
In estimation theory, researchers have put effort into developing estimators of the population mean which may give more precise results when adopting the ordinary least squares (OLS) method or robust regression techniques for estimating regression coefficients. But when the correlation is negative and outliers are present, the results can be distorted and the OLS-type estimators may give misleading or highly biased estimates. Hence, this paper mainly focuses on such issues through the use of non-conventional measures of dispersion and a robust estimation method. Precisely, we have proposed generalized estimators by using the ancillary information of non-conventional measures of dispersion (Gini's mean difference, Downton's method and the probability-weighted moment) under ordinary least squares, and then finally adopting the Huber M-estimation technique on the suggested estimators. The proposed estimators are investigated in the presence of outliers in both situations of negative and positive correlation between the study and auxiliary variables. Theoretical comparisons and a real data application are provided to show the strength of the proposed generalized estimators. It is found that the proposed generalized Huber-M-type estimators are more efficient than the suggested generalized estimators under the OLS estimation method considered in this study. The new proposed estimators will be useful in the future for data analysis and decision-making.
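The robustness of Huber M-estimation comes from down-weighting observations with large residuals instead of squaring them. A minimal sketch of a Huber location estimate via iteratively reweighted least squares (the paper applies the technique to regression-type estimators; this univariate version only illustrates the mechanism):

```python
def huber_location(data, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares.
    Observations with residuals beyond c*scale are down-weighted, so an
    outlier pulls the estimate far less than it pulls the mean."""
    mu = sorted(data)[len(data) // 2]                  # start at the median
    # fixed robust scale: median absolute deviation, scaled for normality
    mad = sorted(abs(x - mu) for x in data)[len(data) // 2]
    scale = 1.4826 * mad if mad > 0 else 1.0
    for _ in range(max_iter):
        w = [1.0 if abs(x - mu) <= c * scale else c * scale / abs(x - mu)
             for x in data]
        mu_new = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]              # one gross outlier
est = huber_location(data)
mean_val = sum(data) / len(data)
```

Here the sample mean is dragged to 17.5 by the outlier, while the Huber estimate stays near 10.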
In this paper, a regression method of estimation has been used to derive the mean estimate of the survey variable using simple random sampling without replacement in the presence of observational errors. Two covariates were used and a case where the observational errors were in both the survey variable and the covariates was considered. The inclusion of observational errors was due to the fact that data collected through surveys are often not free from errors that occur during observation. These errors can occur due to over-reporting, under-reporting, memory failure by the respondents or use of imprecise tools of data collection. The expression of mean squared error (MSE) based on the obtained estimator has been derived to the first degree of approximation. The results of a simulation study show that the derived modified regression mean estimator under observational errors is more efficient than the mean per unit estimator and some other existing estimators. The proposed estimator can therefore be used in estimating a finite population mean, while considering observational errors that may occur during a study.
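The efficiency claim can be illustrated by a small Monte Carlo comparison: add observational error to both the study variable and one covariate, then compare the empirical MSE of the mean-per-unit estimator against a regression estimator that uses the known covariate mean. This is a generic single-covariate sketch under assumed parameters, not the paper's two-covariate estimator:

```python
import random

random.seed(0)

def one_draw(n=30, rho=0.9, err_sd=0.3):
    """Draw a sample of correlated (y, x), contaminate both with
    observational error, and return the mean-per-unit and regression
    estimates of E[y] (both true means are 0)."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0)
        xs.append(x + random.gauss(0.0, err_sd))   # error in the covariate
        ys.append(y + random.gauss(0.0, err_sd))   # error in the study variable
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    # regression estimator uses the known population mean of x (here 0)
    return my, my + b * (0.0 - mx)

reps = 2000
mse_mean = mse_reg = 0.0
for _ in range(reps):
    m, r = one_draw()
    mse_mean += m * m / reps
    mse_reg += r * r / reps
```

Even with the correlation attenuated by measurement error, the regression estimator retains a clear efficiency advantage when the underlying correlation is strong.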
In this paper, we propose a class of estimators for estimating the finite population mean of the study variable under Ranked Set Sampling (RSS) when the population mean of the auxiliary variable is known. The bias and Mean Squared Error (MSE) of the proposed class of estimators are obtained to the first degree of approximation. It is identified that the proposed class of estimators is more efficient as compared to the [1] estimator and several other estimators. A simulation study is carried out to judge the performances of the estimators.
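Ranked set sampling itself (the design the estimators above are built on) is easy to simulate: draw m sets of m units, rank each set, and measure only the i-th order statistic of the i-th set. The sketch below shows the classical variance gain of the plain RSS mean over a simple random sample of the same measured size, assuming perfect ranking, and does not implement the paper's specific estimator class:

```python
import random

random.seed(1)

def draw():
    return random.gauss(0.0, 1.0)

def rss_cycle(m, draw):
    """One cycle of ranked set sampling with set size m: from set i,
    measure the i-th order statistic."""
    measured = []
    for i in range(m):
        s = sorted(draw() for _ in range(m))
        measured.append(s[i])
    return measured

m, reps = 3, 3000
srs_means, rss_means = [], []
for _ in range(reps):
    srs = [draw() for _ in range(m * m)]          # SRS of the same measured size
    srs_means.append(sum(srs) / len(srs))
    measured = []
    for _ in range(m):                            # m cycles -> m*m measured units
        measured += rss_cycle(m, draw)
    rss_means.append(sum(measured) / len(measured))

var_srs = sum(v * v for v in srs_means) / reps    # true mean is 0
var_rss = sum(v * v for v in rss_means) / reps
```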
It is quite often that the theoretical model used in Kalman filtering may not be sufficiently accurate for practical applications, due to the fact that the covariances of noises are not exactly known. Our previous work reveals that in such a scenario the filter-calculated mean square errors (FMSE) and the true mean square errors (TMSE) become inconsistent, while FMSE and TMSE are consistent in the Kalman filter with accurate models. This can lead to low credibility of state estimation regardless of using Kalman filters or adaptive Kalman filters. Obviously, it is important to study the inconsistency issue since it is vital to understand the quantitative influence induced by inaccurate models. Aiming at this, the concept of credibility is adopted to discuss the inconsistency problem in this paper. In order to formulate the degree of credibility, a trust factor is constructed based on the FMSE and the TMSE. However, the trust factor cannot be directly computed since the TMSE cannot be found in practical applications. Based on the definition of the trust factor, its estimation is reduced to online estimation of the TMSE. More importantly, a necessary and sufficient condition is found, which turns out to be the basis for better design of Kalman filters with high performance. Accordingly, beyond trust factor estimation with the Sage-Husa technique (TFE-SHT), three novel trust factor estimation methods are proposed: the direct numerical solving method (TFE-DNS), the particle swarm optimization (PSO) method and the expectation maximization-particle swarm optimization (EM-PSO) method. The analysis and simulation results both show that the proposed TFE-DNS is better than the TFE-SHT for the case of a single unknown noise covariance. Meanwhile, the proposed EM-PSO performs completely better than the EM and PSO on the estimation of the credibility degree and state when both noise covariances should be estimated online.
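The FMSE/TMSE inconsistency is easy to reproduce on a scalar example: run a Kalman filter whose assumed measurement-noise covariance is wrong, and compare the error covariance the filter reports (FMSE) with the empirical squared estimation error (TMSE). A minimal sketch with assumed illustrative noise levels, not the paper's setup:

```python
import random

random.seed(2)

def run_filter(r_true=1.0, r_assumed=0.1, q=0.01, steps=200):
    """Scalar random-walk Kalman filter. The filter assumes measurement
    noise r_assumed while the data are generated with r_true, so the
    reported covariance (FMSE) no longer matches the true squared error
    (TMSE): the overconfident filter trusts noisy measurements too much."""
    x = 0.0            # true state
    xh, p = 0.0, 1.0   # filter estimate and covariance
    se = 0.0
    for _ in range(steps):
        x += random.gauss(0.0, q ** 0.5)
        z = x + random.gauss(0.0, r_true ** 0.5)
        p += q                        # predict
        k = p / (p + r_assumed)       # gain computed with the assumed noise
        xh += k * (z - xh)            # update
        p *= (1.0 - k)
        se += (x - xh) ** 2
    return p, se / steps              # final FMSE, empirical TMSE

runs = 300
fmse = tmse = 0.0
for _ in range(runs):
    f, t = run_filter()
    fmse += f / runs
    tmse += t / runs
```

With an accurate model (r_assumed = r_true) the two quantities agree; here the trust factor built from FMSE and TMSE would flag the mismatch.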
Phasor Measurement Units (PMUs) provide Global Positioning System (GPS) time-stamped synchronized measurements of voltage and current, with the phase angle of the system, at certain points along the grid. Those synchronized data measurements are extracted in the form of amplitude and phase from various locations of the power grid to monitor and control the power system condition. A PMU device is a crucial part of the power equipment in terms of cost and from an operational point of view. However, ongoing development and improvement of the PMU's principal work are essential for network operators to enhance grid quality and reduce operating expenses. This paper introduces a proposed method that leads to a low-cost and less complex technique to optimize the performance of a PMU using a Second-Order Kalman Filter. It is based on the asynchrophasor technique, resulting in phase error minimization when receiving the signal from an access point or from the main access point. A MATLAB model has been created to implement the proposed method in the presence of Gaussian and non-Gaussian noise. The results show that the proposed Second-Order Kalman Filter method outperforms the existing model. The results were tested using the Mean Square Error (MSE). The proposed Second-Order Kalman Filter replaces the synchronization unit in the PMU structure, which clarifies the significance of the proposed new PMU.
Mosquitoes are of great concern for occasionally carrying noxious diseases (dengue, malaria, zika, and yellow fever). To control mosquitoes, it is very crucial to effectively monitor their behavioral trends and presence. A traditional mosquito repellent works by heating small pads soaked in repellent, which then diffuses through a protected area around you, a great alternative to spraying yourself with insecticide. But such repellents have limitations, including their range, turning them on manually, and then waiting for the protection to kick in while the mosquitoes may find you. This research aims to design a fuzzy-based controller to solve the above issues by automatically determining a mosquito repellent's speed and active time. The speed and active time depend on the repellent cartridge and the number of mosquitoes. The Mamdani model is used in the proposed fuzzy system (FS). The FS consists of identifying unambiguous inputs, a fuzzification process, rule evaluation, and a defuzzification process to produce unambiguous outputs. The input variables used are the repellent cartridge and the number of mosquitoes, and the speed of the mosquito repellent is used as the output variable. The whole FS is designed and simulated using MATLAB Simulink R2016b. The proposed FS is executed and verified on a microcontroller using its pulse width modulation capability. Different simulations of the proposed model are performed over many nonlinear processes. A comparative analysis of the outcomes under similar conditions then confirms the higher accuracy of the FS, yielding a maximum relative error of 10%. The experimental outcomes show that the root mean square error is reduced by 67.68% and the mean absolute percentage error by 52.46%. Using a fuzzy-based mosquito repellent can help maintain the speed of the mosquito repellent and control the energy it uses.
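The Mamdani pipeline described above (fuzzification, min-based rule firing, max aggregation, centroid defuzzification) can be sketched compactly. The membership ranges and the two rules below are illustrative assumptions, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def repellent_speed(cartridge, mosquitoes):
    """Tiny Mamdani controller: min for rule firing, max for aggregation,
    centroid defuzzification over a discretized 0-100 output universe."""
    # input memberships on illustrative 0-100 scales
    few = tri(mosquitoes, -1, 0, 60)
    many = tri(mosquitoes, 40, 100, 101)
    low = tri(cartridge, -1, 0, 60)
    high = tri(cartridge, 40, 100, 101)
    # rules: (many mosquitoes AND full cartridge) -> fast;
    #        (few mosquitoes OR low cartridge) -> slow
    fire_fast = min(many, high)
    fire_slow = max(few, low)
    num = den = 0.0
    for s in range(0, 101):
        slow_m = min(fire_slow, tri(s, -1, 0, 60))   # clip consequents
        fast_m = min(fire_fast, tri(s, 40, 100, 101))
        mu = max(slow_m, fast_m)                     # aggregate
        num += s * mu
        den += mu
    return num / den if den > 0 else 0.0

busy = repellent_speed(90, 90)    # full cartridge, many mosquitoes
quiet = repellent_speed(90, 10)   # full cartridge, few mosquitoes
```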
In order to research brain problems using MRI, PET, and CT neuroimaging, a correct understanding of brain function is required. This was considered in earlier times with the support of traditional algorithms. Deep learning has also been widely considered in these genomics data processing systems. In this research, brain disorder illnesses including Alzheimer's disease, schizophrenia and Parkinson's disease are analyzed, owing to the misdetection of disorders in neuroimaging data examined by means of traditional methods. Moreover, a deep learning approach is incorporated here for the classification of brain disorders with the aid of Deep Belief Networks (DBN). Images are stored in a secured manner by using a DNA-sequence-based JPEG zig-zag encryption algorithm (DBNJZZ). The suggested approach is executed and tested using performance metrics such as accuracy, root mean square error, mean absolute error and mean absolute percentage error. The proposed DBNJZZ gives better performance than previously available methods.
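Of the DBNJZZ pipeline, the JPEG zig-zag scan is a standard, reproducible component (the DNA-sequence encryption step itself is not sketched here). The scan walks the anti-diagonals of a block, alternating direction:

```python
def zigzag_order(n):
    """Index order of the JPEG zig-zag scan over an n x n block:
    walk anti-diagonals i + j = s, reversing direction on even diagonals."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()   # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order

zz = zigzag_order(4)
```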
This study assesses the predictive capabilities of the CMA-GD model for wind speed prediction in two wind farms located in Hubei Province, China. The observed wind speeds at the height of 70 m at wind turbines in two wind farms in Suizhou serve as the actual observation data for comparison and testing. At the same time, the wind speed predicted by the EC model is also included for comparative analysis. The results indicate that the CMA-GD model performs better than the EC model in Wind Farm A. The CMA-GD model exhibits a monthly average correlation coefficient of 0.56, root mean square error of 2.72 m s^(-1), and average absolute error of 2.11 m s^(-1). In contrast, the EC model shows a monthly average correlation coefficient of 0.51, root mean square error of 2.83 m s^(-1), and average absolute error of 2.21 m s^(-1). Conversely, in Wind Farm B, the EC model outperforms the CMA-GD model. The CMA-GD model achieves a monthly average correlation coefficient of 0.55, root mean square error of 2.61 m s^(-1), and average absolute error of 2.13 m s^(-1). By contrast, the EC model displays a monthly average correlation coefficient of 0.63, root mean square error of 2.04 m s^(-1), and average absolute error of 1.67 m s^(-1).
As the scale of software systems expands, maintaining their stable operation has become an extraordinary challenge. System logs are semi-structured text generated by recording functions in the source code and have important research significance in software service anomaly detection. Existing log anomaly detection methods mainly focus on the statistical characteristics of logs, making it difficult to distinguish the semantic differences between normal and abnormal logs, and performing poorly on real-world industrial log data. In this paper, we propose an unsupervised framework for log anomaly detection based on generative pre-training-2 (GPT-2). We apply our approach to two industrial systems. The experimental results on two datasets show that our approach outperforms state-of-the-art approaches for log anomaly detection.
Hybrid beamforming (HBF) has become an attractive and important technology in massive multiple-input multiple-output (MIMO) millimeter-wave (mmWave) systems. There are different hybrid architectures in HBF depending on the different connection strategies of the phase shifter network between the antennas and the radio frequency chains. This paper investigates HBF optimization with different hybrid architectures in broadband point-to-point mmWave MIMO systems. The joint hybrid architecture and beamforming optimization problem is divided into two sub-problems. First, we transform the spectral efficiency maximization problem into an equivalent weighted mean squared error minimization problem, and propose an algorithm based on the manifold optimization method for the hybrid beamformer with a fixed hybrid architecture. The overlapped subarray architecture, which balances well between hardware costs and system performance, is investigated. We further propose an algorithm to dynamically partition antenna subarrays and combine it with the HBF optimization algorithm. Simulation results are presented to demonstrate the performance improvement of our proposed algorithms.
Local markets in East Africa have been destroyed by raging fires, leading to the loss of life and property in the nearby communities. Electrical circuits, arson, and neglected charcoal stoves are the major causes of these fires. Previous methods, i.e., satellites, are expensive to maintain and cause unnecessary delays. Also, unit smoke detectors are highly prone to false alerts. In this paper, an Interval Type-2 TSK fuzzy model for an intelligent lightweight fire intensity detection algorithm with decision-making in low-power devices is proposed using a sparse inference rules approach. A free open-source MATLAB/Simulink fuzzy toolbox integrated into MATLAB 2018a is used to investigate the performance of the Interval Type-2 fuzzy model. Two crisp input parameters, namely FIT and FIG, are used. Results show that the Interval Type-2 model achieved an accuracy value of 98.2% for the output FIO, with MAE = 1.3010, MSE = 1.6938 and RMSE = 1.3015 using regression analysis. The study shall assist firefighting personnel in fully understanding and mitigating the current level of fire danger. As a result, the proposed solution can be fully implemented in low-cost, low-power fire detection systems to monitor the state of a fire with improved accuracy and reduced false alerts. Through informed decision-making in low-cost fire detection devices, early warning notifications can be provided to aid the rapid evacuation of people, thereby improving fire safety surveillance, management, and protection for the market community.
Network planning is essential for the construction and the development of wireless networks. Network planning is not possible without an appropriate propagation model, which is in fact its foundation. Initially used mainly for mobile radio networks, the optimization of propagation models is becoming essential for efficient deployment of networks in different types of environment, namely rural, suburban and urban, especially with the emergence of concepts such as digital terrestrial television, smart cities, and the Internet of Things (IoT), with wide deployment for different use cases such as smart grids and smart metering of electricity, gas and water. In this paper we use an optimization algorithm inspired by the principles of magnetic field theory, namely the Magnetic Optimization Algorithm (MOA), to tune the COST231-Hata propagation model. The dataset used is the result of drive tests carried out in the field in the town of Limbe in Cameroon. We take into account the standard K-factor model and then use the MOA algorithm to set up a propagation model adapted to the physical environment of the town. The town of Limbe is used as an implementation case, but the proposed method can be used anywhere. The calculation of the root mean square error (RMSE) between the real data from the radio measurements and the prediction data obtained after the implementation of MOA allows the validation of the results. A comparative study between the RMSE value obtained by the new model and those obtained by optimization using linear regression, by the standard COST231-Hata model, and by the free space model is also done; this allows us to conclude that the new model obtained using MOA for the town of Limbe is better and more representative of this local environment than the standard COST231-Hata model. The new model can be used for radio planning in the town of Limbe in Cameroon.
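The baseline being tuned is the standard COST231-Hata median path-loss formula, and the fitness measure is the RMSE against drive-test points. A minimal sketch, with hypothetical drive-test values (the frequencies, antenna heights, and measurements below are illustrative, not the Limbe data):

```python
import math

def cost231_hata(f_mhz, h_base, h_mobile, d_km, c=0.0):
    """COST231-Hata median path loss in dB (valid roughly 1500-2000 MHz),
    with the small/medium-city mobile-antenna correction a(hm);
    c = 0 dB for medium cities, 3 dB for metropolitan centres."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile \
         - (1.56 * math.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)

def rmse(pred, meas):
    return (sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(pred)) ** 0.5

# hypothetical drive-test points: (distance in km, measured loss in dB)
points = [(0.5, 118.0), (1.0, 128.5), (2.0, 139.0), (4.0, 150.0)]
pred = [cost231_hata(1800.0, 30.0, 1.5, d) for d, _ in points]
err = rmse(pred, [m for _, m in points])
```

An optimizer such as MOA would then adjust the model's coefficients (the K-factors) to drive this RMSE down on the measured dataset.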
Propagation models are the foundation for radio planning in mobile networks. They are widely used during feasibility studies and initial network deployment, or during network extensions, particularly in new cities. They can be used to calculate the power of the signal received by a mobile terminal, evaluate the coverage radius, and calculate the number of cells required to cover a given area. This paper takes into account the standard K-factor model and then uses the differential evolution algorithm to set up a propagation model adapted to the physical environment of the Cameroonian city of Bertoua. Drive tests were made on the LTE TDD network in the city of Bertoua. The differential evolution algorithm is used as the optimization algorithm to deduce a propagation model which fits the environment of the considered town. The calculation of the root mean square error between the actual data from the drive tests and the prediction data from the implemented model allows the validation of the obtained results. A comparative study made between the RMSE value obtained by the new model and those obtained by the Okumura-Hata and free space models allowed us to conclude that the new model is better and more representative of our local environment than the Okumura-Hata model currently used. The implementation shows that differential evolution can perform well and solve this kind of optimization problem; the newly obtained model can be used for radio planning in the city of Bertoua in Cameroon.
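A generic DE/rand/1/bin sketch of the approach: fit a two-parameter K-factor path-loss model K1 + K2·log10(d) to measurements by minimizing RMSE. The synthetic noiseless "drive test" and the bounds are assumptions for illustration, not the Bertoua data or the paper's full K-factor set:

```python
import math
import random

random.seed(3)

def rmse_cost(params, data):
    """RMSE of the model K1 + K2*log10(d) against (distance, loss) pairs."""
    k1, k2 = params
    return math.sqrt(sum((k1 + k2 * math.log10(d) - pl) ** 2
                         for d, pl in data) / len(data))

def de_fit(data, bounds, f=0.7, cr=0.9, pop_size=20, gens=200):
    """DE/rand/1/bin: mutate with a scaled difference vector, apply
    binomial crossover, keep the trial only if it is no worse."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    costs = [rmse_cost(p, data) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = (pop[j] for j in random.sample(others, 3))
            jrand = random.randrange(dim)   # force one mutated component
            trial = [a[j] + f * (b[j] - c[j])
                     if (random.random() < cr or j == jrand) else pop[i][j]
                     for j in range(dim)]
            tc = rmse_cost(trial, data)
            if tc <= costs[i]:
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=lambda j: costs[j])
    return pop[best], costs[best]

# synthetic noiseless "measurements" from a true model 130 + 35*log10(d)
data = [(d / 10.0, 130.0 + 35.0 * math.log10(d / 10.0))
        for d in range(5, 51, 5)]
(k1, k2), best_cost = de_fit(data, [(100.0, 160.0), (20.0, 50.0)])
```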
Most remote systems require user authentication to access resources. Text-based passwords are still widely used as a standard method of user authentication. Although conventional text-based passwords are rather hard to remember, users often write their passwords down, thereby compromising security. One of the most complex challenges users may face is posting sensitive data to external data centers that are accessible to others and not controlled directly by users. Graphical user authentication methods have recently been proposed to verify the user identity. However, the fundamental limitation of a graphical password is that it must have a colorful and rich image to provide an adequate password space to maintain security, and when the user clicks and inputs a password between two possible grids, the fault tolerance is adjusted to avoid this situation. This paper proposes an enhanced graphical authentication scheme, which combines the benefits of both recognition- and recall-based graphical techniques with image steganography. The combination of graphical authentication and steganography technologies reduces the amount of sensitive data shared between users and service providers and improves the security of user accounts. To evaluate the effectiveness of the proposed scheme, the peak signal-to-noise ratio and mean squared error parameters have been used.
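The two evaluation metrics named above have standard definitions: MSE is the pixel-wise mean squared difference between the cover and stego images, and PSNR = 10·log10(MAX²/MSE) in dB. A minimal sketch on a toy 2×2 "image" (illustrative values only):

```python
import math

def mse(a, b):
    """Pixel-wise mean squared error between two equal-size images."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the stego image is
    closer to the cover image (infinite for identical images)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak * peak / m)

cover = [[52, 55], [61, 59]]
stego = [[52, 54], [61, 60]]   # two pixels changed by one least-significant bit
quality = psnr(cover, stego)
```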
The study explores the asymptotic consistency of the James-Stein shrinkage estimator obtained by shrinking a maximum likelihood estimator. We use Hansen's approach to show that the James-Stein shrinkage estimator converges asymptotically to a multivariate normal distribution with shrinkage effect values. We establish that the rate of convergence is of order n^(-1/2), hence the James-Stein shrinkage estimator is √n-consistent. We then visualise its consistency by studying the asymptotic behaviour using simulation plots in R for the mean squared error of the maximum likelihood estimator and the shrinkage estimator. The latter graphically shows a lower mean squared error as compared to that of the maximum likelihood estimator.
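The lower-MSE behaviour described above is the classical James-Stein domination result: for X ~ N(θ, σ²I) with dimension p ≥ 3, shrinking the MLE toward the origin reduces total MSE. A minimal Monte Carlo sketch (positive-part variant, illustrative θ, not the study's R simulation):

```python
import random

random.seed(4)

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein shrinkage of X ~ N(theta, sigma2*I), p >= 3:
    shrink the MLE (x itself) toward the origin."""
    p = len(x)
    s = sum(v * v for v in x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / s)
    return [shrink * v for v in x]

p, reps = 8, 2000
theta = [0.5] * p
mse_mle = mse_js = 0.0
for _ in range(reps):
    x = [t + random.gauss(0.0, 1.0) for t in theta]
    js = james_stein(x)
    mse_mle += sum((xi - t) ** 2 for xi, t in zip(x, theta)) / reps
    mse_js += sum((ji - t) ** 2 for ji, t in zip(js, theta)) / reps
```

Here mse_mle is close to p (= 8), while the shrinkage estimator's MSE is substantially smaller because θ is near the shrinkage target.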
In this paper, the estimators of the scale parameter of the exponential distribution obtained by applying four methods, using complete data, are critically examined and compared. These methods are the Maximum Likelihood Estimator (MLE) and the Bayes estimators under the Squared-Error Loss Function (BSE), the Entropy Loss Function (BEN) and the Composite LINEX Loss Function (BCL). The performance of these four methods was compared based on three criteria: the Mean Square Error (MSE), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC). Using Monte Carlo simulation based on relevant samples, the comparisons in this study suggest that the Bayesian methods are better than the maximum likelihood estimator, offering the smallest values of MSE, AIC, and BIC. Confidence intervals were then assessed to test the performance of the methods by comparing the 95% CIs and average lengths (AL) for all estimation methods, showing that the Bayesian methods still offer the best performance in terms of generating the smallest ALs.
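For the exponential rate with a conjugate Gamma(a, b) prior, the posterior is Gamma(a + n, b + Σx), and the Bayes estimate under squared-error loss is its mean. A Monte Carlo sketch comparing it with the MLE; the prior hyperparameters (centred at the true rate) and sample sizes are illustrative assumptions, not the paper's design:

```python
import random

random.seed(5)

def bayes_rate(data, a=2.0, b=1.0):
    """Posterior-mean (squared-error loss) estimate of the exponential
    rate under a conjugate Gamma(a, b) prior: Gamma(a + n, b + sum x)."""
    return (a + len(data)) / (b + sum(data))

def mle_rate(data):
    """Maximum likelihood estimate of the exponential rate."""
    return len(data) / sum(data)

true_rate, n, reps = 2.0, 10, 2000
mse_mle = mse_bayes = 0.0
for _ in range(reps):
    sample = [random.expovariate(true_rate) for _ in range(n)]
    mse_mle += (mle_rate(sample) - true_rate) ** 2 / reps
    mse_bayes += (bayes_rate(sample) - true_rate) ** 2 / reps
```

With a prior concentrated near the truth and a small sample, the Bayes estimate typically beats the MLE in MSE, mirroring the paper's finding; with a badly misspecified prior the ordering can reverse.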
Salinization is a gradual process that should be monitored. Modelling is a suitable alternative technique that saves the time and cost of field monitoring. But the performance of models should be evaluated using measured data. Therefore, the aim of this study was to evaluate and compare the SALTMED and HYDRUS-1D models using measured soil water content, soil salinity and wheat yield data under different levels of saline irrigation water and groundwater depth. The field experiment was conducted in 2013, and in this research three controlled groundwater depths, i.e., 60 (CD60), 80 (CD80) and 100 (CD100) cm, and two salinity levels of irrigation water, i.e., 4 (EC4) and 8 (EC8) dS/m, were used in a completely randomized design with three replications. Soil water content and soil salinity were measured in the soil profile and compared with the values predicted by the SALTMED and HYDRUS-1D models. Calibration of the SALTMED and HYDRUS-1D models was carried out using the measured data under the EC4-CD100 treatment, and the data of the other treatments were used for validation. The statistical parameters, including the normalized root mean square error (NRMSE) and degree of agreement (d), showed that the predictions of soil water content and soil salinity were more accurate in the HYDRUS-1D model than in the SALTMED model. The NRMSE and d values of the HYDRUS-1D model were 9.6% and 0.64 for the predicted soil water content and 6.2% and 0.98 for the predicted soil salinity, respectively. These indices for the SALTMED model were 10.6% and 0.81 for the predicted soil water content and 11.0% and 0.97 for the predicted soil salinity, respectively. According to the NRMSE and d values for the predicted wheat yield (9.8% and 0.91, respectively) and dry matter (2.9% and 0.99, respectively), we concluded that the SALTMED model predicted the wheat yield and dry matter accurately.
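The two evaluation statistics used above are commonly computed as NRMSE = 100·RMSE/mean(observed) and Willmott's index of agreement d = 1 − Σ(Pi − Oi)² / Σ(|Pi − Ō| + |Oi − Ō|)². A minimal sketch with illustrative values (not the study's data):

```python
def nrmse(obs, pred):
    """Normalized RMSE, expressed as a percentage of the observed mean."""
    n = len(obs)
    rmse = (sum((p - o) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5
    return 100.0 * rmse / (sum(obs) / n)

def agreement_d(obs, pred):
    """Willmott's index of agreement: 0 = no agreement, 1 = perfect."""
    ob = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - ob) + abs(o - ob)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den if den > 0 else 1.0

# hypothetical soil-water-content observations vs. model predictions
obs = [0.21, 0.24, 0.19, 0.26, 0.23]
pred = [0.22, 0.23, 0.20, 0.27, 0.21]
score_nrmse = nrmse(obs, pred)
score_d = agreement_d(obs, pred)
```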
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Research Groups Program under grant number R.G.P.2/82/42. I.M.A. received the grant. www.kku.edu.sa
Abstract: In estimation theory, researchers have worked to develop estimators of the population mean that give more precise results when adopting the ordinary least squares (OLS) method or robust regression techniques for estimating regression coefficients. But when the correlation is negative and outliers are present, the results can be distorted and OLS-type estimators may give misleading or highly biased estimates. Hence, this paper focuses on such issues through the use of non-conventional measures of dispersion and a robust estimation method. Precisely, we propose generalized estimators that use the ancillary information of non-conventional measures of dispersion (Gini's mean difference, Downton's method, and the probability-weighted moment) under ordinary least squares, and then apply the Huber M-estimation technique to the suggested estimators. The proposed estimators are investigated in the presence of outliers under both negative and positive correlation between the study and auxiliary variables. Theoretical comparisons and a real data application are provided to show the strength of the proposed generalized estimators. It is found that the proposed generalized Huber-M-type estimators are more efficient than the suggested generalized estimators under the OLS estimation method considered in this study. The new estimators will be useful for future data analysis and decision making.
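The dispersion measures and the robust location step mentioned above can be sketched in a few lines. This is an illustration from standard textbook formulas, not the paper's generalized estimators: Gini's mean difference in its order-statistic form, Downton's estimator of the standard deviation (which, under normality, equals √π/2 times Gini's mean difference), and a simple iteratively reweighted Huber M-estimate of location.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference: the average |x_i - x_j| over all pairs,
    computed via the order-statistic identity sum (2i - n - 1) x_(i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return float(2.0 * np.sum((2 * i - n - 1) * x) / (n * (n - 1)))

def downton(x):
    """Downton's estimator of the standard deviation (normality assumed);
    equals sqrt(pi)/2 times Gini's mean difference."""
    return float(np.sqrt(np.pi) / 2.0 * gini_mean_difference(x))

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means,
    scaled by the normalized median absolute deviation."""
    x = np.asarray(x, dtype=float)
    mu = float(np.median(x))
    s = float(np.median(np.abs(x - mu))) / 0.6745 or 1.0  # MAD scale; guard s=0
    for _ in range(max_iter):
        r = np.abs(x - mu) / s
        w = np.minimum(1.0, c / np.maximum(r, 1e-12))  # Huber weights
        mu_new = float(np.sum(w * x) / np.sum(w))
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```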
Abstract: In this paper, a regression method of estimation has been used to derive the mean estimate of the survey variable using simple random sampling without replacement in the presence of observational errors. Two covariates were used, and the case where observational errors occur in both the survey variable and the covariates was considered. Observational errors were included because data collected through surveys are often not free from errors that occur during observation; such errors can arise from over-reporting, under-reporting, memory failure by the respondents, or the use of imprecise data-collection tools. The expression of the mean squared error (MSE) of the obtained estimator has been derived to the first degree of approximation. The results of a simulation study show that the derived modified regression mean estimator under observational errors is more efficient than the mean per unit estimator and some other existing estimators. The proposed estimator can therefore be used to estimate a finite population mean while accounting for observational errors that may occur during a study.
Abstract: In this paper, we propose a class of estimators for estimating the finite population mean of the study variable under Ranked Set Sampling (RSS) when the population mean of the auxiliary variable is known. The bias and Mean Squared Error (MSE) of the proposed class of estimators are obtained to the first degree of approximation. The proposed class of estimators is shown to be more efficient than the estimator of [1] and several other estimators. A simulation study is carried out to judge the performance of the estimators.
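The RSS design underlying the abstract above can be sketched directly. This is the standard one-cycle ranked set sampling procedure (my own minimal implementation, not the paper's estimator class): draw m independent sets of m units, rank each set, and measure the i-th ranked unit of the i-th set.

```python
import numpy as np

def ranked_set_sample(population, m, rng=None):
    """One cycle of a ranked set sample of size m: m independent sets of
    m units are drawn without replacement within each set, each set is
    ranked, and the i-th ranked unit of the i-th set is measured."""
    rng = rng or np.random.default_rng()
    sample = []
    for i in range(m):
        group = rng.choice(np.asarray(population, float), size=m, replace=False)
        sample.append(np.sort(group)[i])  # keep only the i-th order statistic
    return np.asarray(sample)
```

The sample mean of such a cycle is an unbiased estimator of the population mean and is typically more efficient than the simple-random-sampling mean when ranking is accurate.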
Funding: Supported by the National Natural Science Foundation of China (62033010) and the Aeronautical Science Foundation of China (2019460T5001).
Abstract: Quite often, the theoretical model used in Kalman filtering is not sufficiently accurate for practical applications because the noise covariances are not exactly known. Our previous work reveals that in such scenarios the filter-calculated mean square error (FMSE) and the true mean square error (TMSE) become inconsistent, whereas FMSE and TMSE are consistent in a Kalman filter with an accurate model. This can lead to low credibility of the state estimate regardless of whether Kalman filters or adaptive Kalman filters are used. It is therefore important to study this inconsistency, since it quantifies the influence of inaccurate models. To this end, the concept of credibility is adopted to discuss the inconsistency problem in this paper. To formulate the degree of credibility, a trust factor is constructed from the FMSE and the TMSE. However, the trust factor cannot be computed directly, since the TMSE is unavailable in practical applications. Based on the definition of the trust factor, its estimation is recast as online estimation of the TMSE. More importantly, a necessary and sufficient condition is found, which turns out to be the basis for better design of high-performance Kalman filters. Accordingly, beyond trust factor estimation with the Sage-Husa technique (TFE-SHT), three novel trust factor estimation methods are proposed: a direct numerical solving method (TFE-DNS), the particle swarm optimization method (PSO), and an expectation maximization-particle swarm optimization method (EM-PSO). Analysis and simulation results both show that the proposed TFE-DNS is better than TFE-SHT for the case of a single unknown noise covariance. Meanwhile, the proposed EM-PSO performs better than EM and PSO alone in estimating the credibility degree and the state when both noise covariances must be estimated online.
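The FMSE/TMSE inconsistency can be reproduced in a scalar toy model. The sketch below is an assumption-laden illustration, not the paper's method: a random-walk state with identity dynamics, where the FMSE is the filter's own error covariance P and the TMSE is the Monte-Carlo average squared estimation error. With a correct model the two agree; when the filter underestimates the measurement noise they drift apart.

```python
import numpy as np

def scalar_kf_fmse_tmse(q_true=1.0, r_true=1.0, q_model=1.0, r_model=1.0,
                        n_steps=200, n_runs=500, seed=0):
    """Compare the filter-calculated MSE (FMSE) with the true MSE (TMSE)
    for the scalar model x_{k+1} = x_k + w_k, y_k = x_k + v_k, where the
    filter may assume wrong noise covariances (q_model, r_model)."""
    rng = np.random.default_rng(seed)
    sq_err_last = 0.0
    for _ in range(n_runs):
        x, xhat, P = 0.0, 0.0, 1.0
        for _ in range(n_steps):
            x = x + rng.normal(0.0, np.sqrt(q_true))      # true dynamics
            y = x + rng.normal(0.0, np.sqrt(r_true))      # measurement
            P_pred = P + q_model                          # filter predict
            K = P_pred / (P_pred + r_model)               # Kalman gain
            xhat = xhat + K * (y - xhat)                  # filter update
            P = (1.0 - K) * P_pred                        # filter's own MSE
        sq_err_last += (xhat - x) ** 2
    fmse = P                         # steady-state filter-calculated MSE
    tmse = sq_err_last / n_runs      # empirical (true) MSE at the last step
    return fmse, tmse
```

A trust-factor-style comparison of the two outputs (e.g. their ratio) then quantifies how credible the filter's self-reported accuracy is.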
Abstract: Phasor Measurement Units (PMUs) provide Global Positioning System (GPS) time-stamped synchronized measurements of voltage and current, together with the system phase angle, at certain points along the grid. These synchronized measurements are extracted in the form of amplitude and phase from various locations of the power grid to monitor and control the power system condition. A PMU device is a crucial part of the power equipment from both cost and operational points of view, and ongoing development and improvement of the PMU's principal operation are essential for network operators to enhance grid quality and reduce operating expenses. This paper introduces a proposed method leading to a low-cost and less complex technique to optimize PMU performance using a second-order Kalman filter. It is based on the asynchrophasor technique, resulting in phase-error minimization when receiving the signal from an access point or from the main access point. A MATLAB model has been created to implement the proposed method in the presence of Gaussian and non-Gaussian noise. The results, evaluated using the Mean Square Error (MSE), show that the proposed second-order Kalman filter outperforms the existing model. The proposed second-order Kalman filter replaces the synchronization unit in the PMU structure, clarifying the significance of the proposed new PMU.
Abstract: Mosquitoes are of great concern for occasionally carrying noxious diseases (dengue, malaria, zika, and yellow fever). To control mosquitoes, it is crucial to effectively monitor their behavioral trends and presence. A traditional mosquito repellent works by heating small pads soaked in repellant, which then diffuses over a protected area around you, a great alternative to spraying yourself with insecticide. But such devices have limitations, including their range, having to turn them on manually, and waiting for the protection to kick in while the mosquitoes may find you. This research aims to design a fuzzy-based controller that solves these issues by automatically determining a mosquito repellent's speed and active time, which depend on the repellent cartridge and the number of mosquitoes. The Mamdani model is used in the proposed fuzzy system (FS). The FS consists of identifying crisp inputs, a fuzzification process, rule evaluation, and a defuzzification process to produce crisp outputs. The input variables are the repellent cartridge and the number of mosquitoes, and the speed of the mosquito repellent is the output variable. The whole FS is designed and simulated using MATLAB Simulink R2016b. The proposed FS is executed and verified on a microcontroller using its pulse-width modulation capability. Different simulations of the proposed model are performed over many nonlinear processes. A comparative analysis of the outcomes under similar conditions confirms the higher accuracy of the FS, yielding a maximum relative error of 10%. The experimental outcomes show that the root mean square error is reduced by 67.68% and the mean absolute percentage error by 52.46%. A fuzzy-based mosquito repellent can thus help maintain the repellent's speed and control the energy it uses.
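The Mamdani pipeline named above (fuzzification, min rule firing, max aggregation, centroid defuzzification) can be sketched compactly. The membership shapes, universes, and the four rules below are illustrative placeholders of my own, not the paper's calibrated MATLAB design.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def repellent_speed(cartridge_pct, mosquitoes):
    """Tiny Mamdani controller: min for rule firing strength, max for
    aggregation, centroid defuzzification over the speed universe."""
    z = np.linspace(0.0, 100.0, 1001)              # output universe: speed in %
    low_c = tri(cartridge_pct, -1, 0, 60)          # fuzzify cartridge level
    high_c = tri(cartridge_pct, 40, 100, 101)
    few_m = tri(mosquitoes, -1, 0, 60)             # fuzzify mosquito count
    many_m = tri(mosquitoes, 40, 100, 101)
    slow, fast = tri(z, -1, 0, 60), tri(z, 40, 100, 101)
    rules = [(min(high_c, many_m), fast),          # many mosquitoes -> fast
             (min(low_c, many_m), fast),
             (min(high_c, few_m), slow),           # few mosquitoes -> slow
             (min(low_c, few_m), slow)]
    agg = np.zeros_like(z)
    for strength, consequent in rules:             # clip and aggregate
        agg = np.maximum(agg, np.minimum(strength, consequent))
    return float((z * agg).sum() / agg.sum()) if agg.sum() > 0 else 0.0
```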
Abstract: To research brain problems using MRI, PET, and CT neuroimaging, a correct understanding of brain function is required. This has previously been addressed with the support of traditional algorithms, and deep learning has also been widely considered in such genomics data processing systems. In this research, brain disorders including Alzheimer's disease, schizophrenia, and Parkinson's disease are analyzed, owing to the misdetection of disorders in neuroimaging data examined by means of traditional methods. Moreover, a deep learning approach is incorporated for classification of brain disorders with the aid of Deep Belief Networks (DBN). Images are stored in a secured manner by using a DNA-sequence-based JPEG Zig Zag Encryption algorithm (DBNJZZ). The suggested approach is executed and tested using performance metrics such as accuracy, root mean square error, mean absolute error, and mean absolute percentage error. The proposed DBNJZZ gives better performance than previously available methods.
Funding: National Key Research and Development Program of the Ministry of Science (2018YFB1502801), Hubei Provincial Natural Science Foundation (2022CFD017), and the Innovation and Development Project of the China Meteorological Administration (CXFZ2023J044).
Abstract: This study assesses the predictive capabilities of the CMA-GD model for wind speed prediction at two wind farms located in Hubei Province, China. The wind speeds observed at a height of 70 m at the wind turbines of the two wind farms in Suizhou serve as the observational data for comparison and testing, and the wind speeds predicted by the EC model are included for comparative analysis. The results indicate that the CMA-GD model performs better than the EC model at Wind Farm A: the CMA-GD model exhibits a monthly average correlation coefficient of 0.56, root mean square error of 2.72 m s^(-1), and average absolute error of 2.11 m s^(-1), whereas the EC model shows a monthly average correlation coefficient of 0.51, root mean square error of 2.83 m s^(-1), and average absolute error of 2.21 m s^(-1). Conversely, at Wind Farm B the EC model outperforms the CMA-GD model: the CMA-GD model achieves a monthly average correlation coefficient of 0.55, root mean square error of 2.61 m s^(-1), and average absolute error of 2.13 m s^(-1), while the EC model displays a monthly average correlation coefficient of 0.63, root mean square error of 2.04 m s^(-1), and average absolute error of 1.67 m s^(-1).
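The three verification scores used above are standard and easy to compute. A minimal sketch (function name is my own):

```python
import numpy as np

def verification_stats(obs, pred):
    """Correlation coefficient, RMSE, and average absolute error, the three
    scores used to compare wind-speed forecasts against observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    r = float(np.corrcoef(obs, pred)[0, 1])
    return r, rmse, mae
```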
Abstract: As the scale of software systems expands, maintaining their stable operation has become an extraordinary challenge. System logs are semi-structured text generated by recording functions in the source code and have important research significance in software service anomaly detection. Existing log anomaly detection methods mainly focus on the statistical characteristics of logs, making it difficult to distinguish the semantic differences between normal and abnormal logs, and they perform poorly on real-world industrial log data. In this paper, we propose an unsupervised framework for log anomaly detection based on generative pre-training-2 (GPT-2). We apply our approach to two industrial systems. Experimental results on two datasets show that our approach outperforms state-of-the-art approaches for log anomaly detection.
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds, the Natural Science Foundation of Shanghai under Grant No. 23ZR1407300, and the National Natural Science Foundation of China under Grant No. 61771147.
Abstract: Hybrid beamforming (HBF) has become an attractive and important technology in massive multiple-input multiple-output (MIMO) millimeter-wave (mmWave) systems. Different hybrid architectures arise in HBF depending on the connection strategy of the phase shifter network between the antennas and the radio frequency chains. This paper investigates HBF optimization with different hybrid architectures in broadband point-to-point mmWave MIMO systems. The joint hybrid architecture and beamforming optimization problem is divided into two sub-problems. First, we transform the spectral efficiency maximization problem into an equivalent weighted mean squared error minimization problem, and propose an algorithm based on the manifold optimization method for the hybrid beamformer under a fixed hybrid architecture. The overlapped subarray architecture, which balances well between hardware cost and system performance, is investigated. We further propose an algorithm to dynamically partition antenna subarrays and combine it with the HBF optimization algorithm. Simulation results demonstrate the performance improvement of the proposed algorithms.
Abstract: Local markets in East Africa have been destroyed by raging fires, leading to the loss of life and property in nearby communities. Electrical circuits, arson, and neglected charcoal stoves are the major causes of these fires. Previous methods, i.e., satellites, are expensive to maintain and cause unnecessary delays, while unit smoke detectors are highly prone to false alerts. In this paper, an Interval Type-2 TSK fuzzy model for an intelligent lightweight fire-intensity detection algorithm with decision-making on low-power devices is proposed using a sparse inference rules approach. The free open-source MATLAB/Simulink fuzzy toolbox integrated into MATLAB 2018a is used to investigate the performance of the Interval Type-2 fuzzy model. Two crisp input parameters, FIT and FIG, are used. Results show that the Interval Type-2 model achieved an accuracy (FIO) of 98.2%, with MAE = 1.3010, MSE = 1.6938, and RMSE = 1.3015 in a regression analysis. The study shall assist firefighting personnel in fully understanding and mitigating the current level of fire danger. As a result, the proposed solution can be fully implemented in low-cost, low-power fire detection systems to monitor the state of a fire with improved accuracy and reduced false alerts. Through informed decision-making in low-cost fire detection devices, early warning notifications can be provided to aid the rapid evacuation of people, thereby improving fire safety surveillance, management, and protection for the market community.
Abstract: Network planning is essential for the construction and development of wireless networks, and it cannot proceed without an appropriate propagation model, which is in fact its foundation. Initially used mainly for mobile radio networks, propagation model optimization is becoming essential for efficient network deployment in different types of environment, namely rural, suburban, and urban, especially with the emergence of concepts such as digital terrestrial television, smart cities, and the Internet of Things (IoT), with wide deployment for use cases such as the smart grid and smart metering of electricity, gas, and water. In this paper we use an optimization algorithm inspired by the principles of magnetic field theory, namely the Magnetic Optimization Algorithm (MOA), to tune the COST231-Hata propagation model. The dataset used is the result of drive tests carried out in the field in the town of Limbe in Cameroon. We take into account the standard K-factor model and then use the MOA algorithm to set up a propagation model adapted to the physical environment of the town. The town of Limbe is used as an implementation case, but the proposed method can be used anywhere. The calculation of the root mean square error (RMSE) between the real data from the radio measurements and the prediction data obtained after running MOA validates the results. A comparative study between the RMSE value obtained by the new model and those obtained by optimization using linear regression, by the standard COST231-Hata model, and by the free space model allows us to conclude that the new model obtained using MOA for the city of Limbe is better and more representative of this local environment than the standard COST231-Hata model. The new model can be used for radio planning in the city of Limbe in Cameroon.
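The model being tuned can be sketched from the standard COST231-Hata formulation for a small/medium city (f in MHz, distance in km, antenna heights in m). In K-factor tuning, constants such as the intercept and the frequency coefficient become free parameters fitted to drive-test data; the parameterization below (k1, k2 as the tunable constants) is an illustrative choice of mine, not necessarily the paper's exact K-factor set.

```python
import math

def cost231_hata(d_km, f_mhz, h_b, h_m, c_m=0.0, k1=46.3, k2=33.9):
    """COST231-Hata path loss in dB for a small/medium city.
    k1, k2 default to the standard model constants; a tuning algorithm
    such as MOA would adjust them to fit local measurements."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m \
         - (1.56 * math.log10(f_mhz) - 0.8)      # mobile antenna correction
    return (k1 + k2 * math.log10(f_mhz) - 13.82 * math.log10(h_b) - a_hm
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km) + c_m)

def rmse(measured, predicted):
    """Root mean square error used to validate the tuned model."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))
```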
Abstract: Propagation models are the foundation for radio planning in mobile networks. They are widely used during feasibility studies and initial network deployment, or during network extensions, particularly in new cities. They can be used to calculate the power of the signal received by a mobile terminal, evaluate the coverage radius, and calculate the number of cells required to cover a given area. This paper takes into account the standard K-factor model and then uses the differential evolution algorithm to set up a propagation model adapted to the physical environment of the Cameroonian city of Bertoua. Drive tests were made on the LTE TDD network in the city of Bertoua, and differential evolution is used as the optimization algorithm to deduce a propagation model that fits the environment of the considered town. The calculation of the root mean square error (RMSE) between the actual drive-test data and the prediction data from the implemented model validates the obtained results. A comparative study between the RMSE value obtained by the new model and those obtained by the Okumura-Hata and free space models allows us to conclude that the new model is better and more representative of our local environment than the Okumura-Hata model currently used. The implementation shows that differential evolution can perform well on this kind of optimization problem; the newly obtained model can be used for radio planning in the city of Bertoua in Cameroon.
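The optimizer itself is simple to sketch. Below is a minimal DE/rand/1/bin scheme (my own generic implementation, with assumed hyperparameters, not the paper's tuned settings); in the paper's setting the objective would be the RMSE between drive-test measurements and the K-factor model's predictions, with bounds on each K factor.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, f=0.8, cr=0.9,
                           n_gen=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomially cross with the target, keep the trial if no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random init in bounds
    cost = np.array([objective(ind) for ind in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            choices = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(choices, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), lo, hi)    # DE/rand/1 mutation
            cross = rng.random(dim) < cr                 # binomial crossover
            cross[rng.integers(dim)] = True              # force one gene
            trial = np.where(cross, mutant, pop[i])
            t_cost = objective(trial)
            if t_cost <= cost[i]:                        # greedy selection
                pop[i], cost[i] = trial, t_cost
    best = int(np.argmin(cost))
    return pop[best], float(cost[best])
```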
Funding: The researcher would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.
Abstract: Most remote systems require user authentication to access resources. Text-based passwords are still widely used as a standard method of user authentication. Although conventional text-based passwords are rather hard to remember, users often write their passwords down, thereby compromising security. One of the most complex challenges users may face is posting sensitive data to external data centers that are accessible to others and cannot be controlled directly by the users. Graphical user authentication methods have recently been proposed to verify user identity. However, the fundamental limitation of a graphical password is that it must have a colorful and rich image to provide an adequate password space to maintain security, and when the user clicks and inputs a password between two possible grids, the fault tolerance must be adjusted to handle this situation. This paper proposes an enhanced graphical authentication scheme, which combines the benefits of both recognition- and recall-based graphical techniques with image steganography. The combination of graphical authentication and steganography technologies reduces the amount of sensitive data shared between users and service providers and improves the security of user accounts. To evaluate the effectiveness of the proposed scheme, the peak signal-to-noise ratio and mean squared error parameters have been used.
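The two evaluation parameters named at the end are standard image-quality metrics. A minimal sketch (assuming 8-bit images, so a peak value of 255):

```python
import numpy as np

def image_mse(img_a, img_b):
    """Pixel-wise mean squared error between cover and stego images."""
    a, b = np.asarray(img_a, float), np.asarray(img_b, float)
    return float(np.mean((a - b) ** 2))

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the steganographic
    embedding disturbed the cover image less."""
    m = image_mse(img_a, img_b)
    return float("inf") if m == 0 else float(10.0 * np.log10(max_val ** 2 / m))
```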
Abstract: The study explores the asymptotic consistency of the James-Stein shrinkage estimator obtained by shrinking a maximum likelihood estimator. We use Hansen's approach to show that the James-Stein shrinkage estimator converges asymptotically to a multivariate normal distribution with shrinkage-effect values. We establish that the rate of convergence is of order O(n^(-1/2)), hence the James-Stein shrinkage estimator is √n-consistent. We then visualize its consistency by studying the asymptotic behaviour using simulation plots in R for the mean squared errors of the maximum likelihood estimator and the shrinkage estimator; the latter graphically shows lower mean squared error than that of the maximum likelihood estimator.
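The MSE comparison the abstract visualizes can be reproduced in a few lines. The sketch below uses the classical positive-part James-Stein estimator for a single p-variate normal observation with known unit variance, shrinking toward the origin (a textbook setting of my choosing, not necessarily the exact setup of the paper), and compares the Monte-Carlo total MSE of the MLE and the shrinkage estimator.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator shrinking the MLE x
    (a single p-variate observation, p >= 3) toward the origin."""
    x = np.asarray(x, dtype=float)
    p = x.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(x @ x))
    return shrink * x

def mse_comparison(theta, n_rep=2000, seed=0):
    """Monte-Carlo total MSE of the MLE versus the shrinkage estimator."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    mle_err = js_err = 0.0
    for _ in range(n_rep):
        x = theta + rng.standard_normal(theta.size)   # MLE is x itself
        mle_err += float(np.sum((x - theta) ** 2))
        js_err += float(np.sum((james_stein(x) - theta) ** 2))
    return mle_err / n_rep, js_err / n_rep
```

At the origin the gap is largest; the James-Stein risk stays below the MLE risk everywhere for p ≥ 3, which is what the R plots in the paper display.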
Abstract: In this paper, the estimators of the scale parameter of the exponential distribution obtained by applying four methods, using complete data, are critically examined and compared. These methods are the Maximum Likelihood Estimator (MLE) and the Bayesian estimators under the Squared-Error Loss Function (BSE), the Entropy Loss Function (BEN), and the Composite LINEX Loss Function (BCL). The performance of these four methods was compared based on three criteria: the Mean Square Error (MSE), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC). Using Monte Carlo simulation based on relevant samples, the comparisons in this study suggest that the Bayesian methods are better than the maximum likelihood estimator, offering the smallest values of MSE, AIC, and BIC. Confidence intervals were then assessed to test the performance of the methods by comparing the 95% CIs and average lengths (AL) for all estimation methods, showing that the Bayesian methods still offer the best performance in terms of generating the smallest ALs.
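For concreteness, the MLE and the squared-error-loss Bayes estimator can be sketched in the conjugate rate parameterization (λ = 1/scale, with a Gamma(a, b) prior). This is a standard textbook pairing of my choosing, with illustrative hyperparameters, not the paper's exact prior or loss-function derivations.

```python
import numpy as np

def mle_rate(x):
    """Maximum likelihood estimate of the exponential rate: n / sum(x)."""
    x = np.asarray(x, dtype=float)
    return float(len(x) / np.sum(x))

def bayes_rate_se(x, a=1.0, b=1.0):
    """Bayes estimate of the rate under squared-error loss: the posterior
    mean (a + n) / (b + sum(x)) for a conjugate Gamma(a, b) prior.
    The hyperparameters a, b here are illustrative defaults."""
    x = np.asarray(x, dtype=float)
    return float((a + len(x)) / (b + np.sum(x)))
```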
Funding: This research was supported in part by a Project of the Shiraz University Research Council, Iran (94GCU5M1923).
Abstract: Salinization is a gradual process that should be monitored. Modelling is a suitable alternative technique that saves the time and cost of field monitoring, but the performance of the models should be evaluated against measured data. Therefore, the aim of this study was to evaluate and compare the SALTMED and HYDRUS-1D models using measured soil water content, soil salinity, and wheat yield data under different levels of saline irrigation water and groundwater depth. The field experiment was conducted in 2013 using three controlled groundwater depths, i.e., 60 (CD60), 80 (CD80), and 100 (CD100) cm, and two salinity levels of irrigation water, i.e., 4 (EC4) and 8 (EC8) dS/m, in a completely randomized design with three replications. Soil water content and soil salinity were measured in the soil profile and compared with the values predicted by the SALTMED and HYDRUS-1D models. Both models were calibrated using the measured data of the EC4-CD100 treatment, and the data of the other treatments were used for validation. The statistical parameters, including the normalized root mean square error (NRMSE) and the degree of agreement (d), showed that soil water content and soil salinity were predicted more accurately by the HYDRUS-1D model than by the SALTMED model. The NRMSE and d values of the HYDRUS-1D model were 9.6% and 0.64 for the predicted soil water content and 6.2% and 0.98 for the predicted soil salinity, respectively; for the SALTMED model, these indices were 10.6% and 0.81 for soil water content and 11.0% and 0.97 for soil salinity, respectively. According to the NRMSE and d values for the predicted wheat yield (9.8% and 0.91, respectively) and dry matter (2.9% and 0.99, respectively), we concluded that the SALTMED model predicted wheat yield and dry matter accurately.
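The two goodness-of-fit statistics used above can be sketched directly. NRMSE is here normalized by the observed mean and reported in percent, and d is Willmott's index of agreement; these are the common definitions, which I assume match the paper's usage.

```python
import numpy as np

def nrmse_percent(obs, sim):
    """Normalized RMSE as a percentage of the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * np.sqrt(np.mean((sim - obs) ** 2)) / np.mean(obs))

def willmott_d(obs, sim):
    """Willmott's index of agreement d in [0, 1]; 1 is perfect agreement."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    o_bar = np.mean(obs)
    denom = np.sum((np.abs(sim - o_bar) + np.abs(obs - o_bar)) ** 2)
    return float(1.0 - np.sum((sim - obs) ** 2) / denom)
```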