Purpose: The purpose of this study is to develop and compare model choice strategies in the context of logistic regression. Model choice means the choice of the covariates to be included in the model. Design/methodology/approach: The study is based on Monte Carlo simulations. The methods are compared in terms of three measures of accuracy: specificity and two kinds of sensitivity. A loss function combining sensitivity and specificity is introduced and used for a final comparison. Findings: The choice of method depends on how much the users emphasize sensitivity against specificity. It also depends on the sample size. For a typical logistic regression setting with a moderate sample size and a small to moderate effect size, either BIC, BICc or Lasso seems to be optimal. Research limitations: Numerical simulations cannot cover the whole range of data-generating processes occurring with real-world data. Thus, more simulations are needed. Practical implications: Researchers can refer to these results if they believe that their data-generating process is somewhat similar to some of the scenarios presented in this paper. Alternatively, they could run their own simulations and calculate the loss function. Originality/value: This is a systematic comparison of model choice algorithms and heuristics in the context of logistic regression. The distinction between two types of sensitivity and a comparison based on a loss function are methodological novelties.
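The abstract describes the loss function only as a combination of sensitivity and specificity of covariate selection. A minimal sketch of one such convex combination is given below; the function names, the weighting scheme, and the example numbers are assumptions for illustration, not the paper's actual definition.

```python
def selection_rates(true_support, selected, n_covariates):
    """Sensitivity/specificity of a covariate-selection result.

    true_support: set of indices of covariates truly in the model.
    selected: set of indices chosen by the selection method.
    """
    truly_out = set(range(n_covariates)) - true_support
    sensitivity = len(true_support & selected) / len(true_support)
    specificity = len(truly_out - selected) / len(truly_out)
    return sensitivity, specificity


def selection_loss(sensitivity, specificity, w=0.5):
    """Weighted loss: w penalizes missed true covariates,
    (1 - w) penalizes falsely included ones."""
    return w * (1.0 - sensitivity) + (1.0 - w) * (1.0 - specificity)


# Example: 10 candidate covariates, 3 truly active, method picks {0, 1, 5}.
sens, spec = selection_rates({0, 1, 2}, {0, 1, 5}, 10)
loss = selection_loss(sens, spec, w=0.5)
```

Sweeping `w` between 0 and 1 reproduces the trade-off the abstract emphasizes: users who weight sensitivity more heavily will rank the selection methods differently.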
Concentrate copper grade (CCG) is one of the important production indicators of copper flotation processes, and keeping the CCG at the set value is of great significance to the economic benefit of copper flotation industrial processes. This paper addresses the fluctuation problem of CCG through an operational optimization method. Firstly, a density-based affinity propagation algorithm is proposed so that more ideal working condition categories can be obtained for the complex raw ore properties. Next, a Bayesian network (BN) is applied to explore the relationship between the operational variables and the CCG. Based on the analysis results of the BN, a weighted Gaussian process regression model is constructed to predict the CCG so that higher prediction accuracy can be obtained. To ensure the predicted CCG is close to the set value with a smaller magnitude of operation adjustments and a smaller uncertainty of the prediction results, an index-oriented adaptive differential evolution (IOADE) algorithm is proposed, whose convergence performance is superior to that of the traditional differential evolution and adaptive differential evolution methods. Finally, the effectiveness and feasibility of the proposed methods are verified by experiments on a copper flotation industrial process.
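The paper's IOADE is an adapted differential evolution; its index-oriented adaptations are not specified in the abstract, so the sketch below shows only the classic DE/rand/1/bin baseline it builds on, applied to a toy objective rather than the flotation setpoint problem.

```python
import random


def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=0):
    """Classic DE/rand/1/bin minimizer (IOADE adapts F and CR online)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: three distinct donors, none equal to i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]


# Minimize the 3-D sphere function; the optimum is at the origin.
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        [(-5, 5)] * 3)
```

In the paper's setting, `f` would be replaced by an index combining predicted-CCG deviation from the setpoint, adjustment magnitude, and prediction uncertainty.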
Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management. In this study, Tuna Swarm Optimization (TSO), Whale Optimization Algorithm (WOA), and Cuckoo Search (CS) were used to optimize two hyperparameters in support vector regression (SVR). Based on these methods, three hybrid models to predict peak particle velocity (PPV) for bench blasting were developed. Eighty-eight samples were collected to establish the PPV database, eight initial blasting parameters were chosen as input parameters for the prediction model, and the PPV was the output parameter. As predictive performance evaluation indicators, the coefficient of determination (R²), root mean square error (RMSE), mean absolute error (MAE), and the a10-index were selected. The normalized mutual information value is then used to evaluate the impact of various input parameters on the PPV prediction outcomes. According to the research findings, TSO, WOA, and CS can all enhance the predictive performance of the SVR model. The TSO-SVR model provides the most accurate predictions. The performances of the optimized hybrid SVR models are superior to the unoptimized traditional prediction model. The maximum charge per delay impacts the PPV prediction value the most.
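The four evaluation indicators named above are standard and easy to compute. The a10-index is taken here as the fraction of predictions within ±10% of the observed value, which is one common definition and an assumption about the paper's usage; the example numbers are made up.

```python
import numpy as np


def regression_metrics(y_true, y_pred):
    """R2, RMSE, MAE, and a10-index for a set of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    mae = float(np.mean(np.abs(resid)))
    # a10-index: share of predictions within +/-10% of the observed value.
    a10 = float(np.mean(np.abs(y_pred / y_true - 1.0) <= 0.10))
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "a10": a10}


m = regression_metrics([1.0, 2.0, 4.0, 5.0], [1.05, 1.9, 4.0, 6.0])
```

The hybrid models in the study would be ranked by evaluating these four metrics on a held-out portion of the 88-sample PPV database.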
The extended kernel ridge regression (EKRR) method with odd-even effects was adopted to improve the description of the nuclear charge radius using five commonly used nuclear models. These are: (i) the isospin-dependent A^(1/3) formula, (ii) relativistic continuum Hartree-Bogoliubov (RCHB) theory, (iii) the Hartree-Fock-Bogoliubov (HFB) model HFB25, (iv) the Weizsäcker-Skyrme (WS) model WS*, and (v) the HFB25* model. In the last two models, the charge radii were calculated using a five-parameter formula with the nuclear shell corrections and deformations obtained from the WS and HFB25 models, respectively. For each model, the resultant root-mean-square deviation for the 1014 nuclei with proton number Z ≥ 8 can be significantly reduced to 0.009-0.013 fm after considering the modification with the EKRR method. The best among them was the RCHB model, with a root-mean-square deviation of 0.0092 fm. The extrapolation abilities of the KRR and EKRR methods for the neutron-rich region were examined, and it was found that after considering the odd-even effects, the extrapolation power was improved compared with that of the original KRR method. The strong odd-even staggering of the nuclear charge radii of Ca and Cu isotopes and the abrupt kinks across the N = 126 and 82 neutron shell closures could also be reproduced quite well by calculations using the EKRR method.
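Plain kernel ridge regression (without the paper's odd-even extension) reduces to solving one linear system. The numpy sketch below fits a toy 1-D target standing in for the model residuals of charge radii; the kernel choice, hyperparameters, and data are illustrative assumptions.

```python
import numpy as np


def krr_fit_predict(X_train, y_train, X_test, sigma=1.0, lam=1e-3):
    """Gaussian-kernel ridge regression: solve (K + lam*I) alpha = y,
    then predict with k(x*, X_train) @ alpha."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return kernel(X_test, X_train) @ alpha


# Learn a smooth 1-D function as a stand-in for residuals between
# measured charge radii and a nuclear-model prediction.
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
y_hat = krr_fit_predict(X, y, X, sigma=0.2, lam=1e-6)
```

In the EKRR setting, the inputs would be nucleus descriptors (e.g. Z, N, and an odd-even indicator) and the learned function would correct each nuclear model's radius prediction.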
At present, Human Activity Recognition (HAR) has been of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent on health informatics gathered using HAR augments decision-making quality and significance. Although many research works have been conducted on Smart Healthcare Monitoring, a number of pitfalls remain, such as the time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for Smart Healthcare Monitoring. At first, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the extraction of dimensionality-reduced features. Here, the input dataset (continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics) was acquired from IoT wearable devices. To attain highly accurate Smart Healthcare Monitoring in less time, Partial Least Squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for Smart Healthcare Monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, and false positive rate accuracy over several instances. The quantitatively analyzed results indicate the better performance of the proposed SPR-SVIAL method when compared with two state-of-the-art methods.
The noise that comes from finite element simulation often causes the model to fall into a local optimal solution and overfit during optimization of a generator. Thus, this paper proposes a Gaussian Process Regression (GPR) model based on Conditional Likelihood Lower Bound Search (CLLBS) to optimize the design of the generator, which can filter the noise in the data and search for the global optimum by combining the Conditional Likelihood Lower Bound Search method, taking the efficiency optimization of a 15 kW Permanent Magnet Synchronous Motor as an example. Firstly, this method uses elementary effect analysis to choose the sensitive variables, combining an evolutionary algorithm to design the super Latin cube sampling plan; then the generator-converter system is simulated by establishing a co-simulation platform to obtain data. A Gaussian process regression model combining the conditional likelihood lower bound search method is established, which uses the chi-square test to optimize the accuracy of the model globally. Secondly, after the model reaches the required accuracy, the Pareto frontier is obtained through the NSGA-II algorithm by considering the maximum output torque as a constraint. Last, the constrained optimization is transformed into an unconstrained optimization problem by introducing a maximum constrained improvement expectation (CEI) optimization method based on the re-interpolation model, which cross-validates the optimization results of the Gaussian process regression model. The above method increases the efficiency of the generator by 0.76% and 0.5%, respectively, and can be used for rapid modeling and multi-objective optimization of generator systems.
The Extensible Markup Language (XML) files, widely used for storing and exchanging information on the web, require efficient parsing mechanisms to improve the performance of applications. With the existing Document Object Model (DOM) based parsing, performance degrades due to sequential processing and large memory requirements, thereby requiring an efficient XML parser to mitigate these issues. In this paper, we propose a Parallel XML Tree Generator (PXTG) algorithm for accelerating the parsing of XML files and a Regression-based XML Parsing Framework (RXPF) that analyzes and predicts performance through profiling, regression, and code generation for efficient parsing. The PXTG algorithm is based on dividing the XML file into n parts and producing n trees in parallel. The profiling phase of the RXPF framework produces a dataset by measuring the performance of various parsing models, including StAX, SAX, DOM, JDOM, and PXTG, on different cores using multiple file sizes. The regression phase produces the prediction model, based on which the final code for efficient parsing of XML files is produced through the code generation phase. The RXPF framework has shown a significant improvement in performance, varying from 9.54% to 32.34%, over other existing models used for parsing XML files.
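The PXTG idea of producing n trees in parallel can be approximated in a few lines with the standard library, assuming the file has already been split into n well-formed fragments; the splitting step itself is the nontrivial part of PXTG and is not shown, and the fragment data here is made up.

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor


def parse_fragment(fragment):
    """Parse one well-formed XML fragment into an element subtree."""
    return ET.fromstring(fragment)


def parallel_parse(fragments, workers=4):
    """Parse independent fragments concurrently (one tree per part,
    as in PXTG) and graft the subtrees under a common root."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        subtrees = list(pool.map(parse_fragment, fragments))  # order preserved
    root = ET.Element("root")
    root.extend(subtrees)
    return root


parts = ["<item id='%d'><v>%d</v></item>" % (i, i * i) for i in range(8)]
tree = parallel_parse(parts)
values = [int(item.find("v").text) for item in tree]
```

A production parser would split on element boundaries directly in the byte stream and merge trees without re-serializing, which is where the reported speedups come from.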
In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule and other methods were comprehensively used to build logistic regression, cluster analysis, hyper-parameter test and other models, and SPSS, Python and other tools were used to obtain the classification rules of glass products under different fluxes, sub-classification under different chemical compositions, hyper-parameter K value tests and rationality analysis. This research can provide theoretical support for the protection and restoration of ancient glass relics.
BACKGROUND: The spread of the severe acute respiratory syndrome coronavirus 2 outbreak worldwide has caused concern regarding the mortality rate caused by the infection. The determinants of mortality on a global scale cannot be fully understood due to lack of information. AIM: To identify key factors that may explain the variability in case lethality across countries. METHODS: We identified 21 potential risk factors for the coronavirus disease 2019 (COVID-19) case fatality rate for all countries with available data. We examined univariate relationships of each variable with the case fatality rate (CFR) and all independent variables to identify candidate variables for our final multiple model. Multiple regression analysis was used to assess the strength of these relationships. RESULTS: The mean COVID-19 mortality was 1.52 ± 1.72%. There was a statistically significant inverse correlation of health expenditure and the number of computed tomography scanners per 1 million population with CFR, and a significant direct correlation of literacy and air pollution with CFR. The final model can predict approximately 97% of the changes in CFR. CONCLUSION: The current study identifies some new predictors of the mortality rate and could thus help decision-makers develop health policies to fight COVID-19.
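The multiple-regression step can be reproduced on simulated data with ordinary least squares. The predictor names and coefficients below are hypothetical, chosen only to match the correlation signs reported in the abstract (inverse for health expenditure and CT scanners, direct for literacy and air pollution).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Hypothetical standardized predictors, in order: health expenditure,
# CT scanners per million, literacy, air pollution.
X = rng.normal(size=(n, 4))
# Simulated CFR with the signs reported in the abstract plus small noise.
cfr = (1.5 - 0.6 * X[:, 0] - 0.4 * X[:, 1]
       + 0.5 * X[:, 2] + 0.3 * X[:, 3]
       + 0.05 * rng.normal(size=n))

A = np.column_stack([np.ones(n), X])            # add intercept column
coef, *_ = np.linalg.lstsq(A, cfr, rcond=None)  # OLS fit
r2 = 1 - np.sum((cfr - A @ coef) ** 2) / np.sum((cfr - cfr.mean()) ** 2)
```

With real country-level data the same `lstsq` call would yield the fitted model; the abstract's "predicts approximately 97% of the changes in CFR" corresponds to an R² of about 0.97.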
In this paper, a logistic regression statistical analysis (LR) is presented for a set of variables used in experimental measurements in reversed field pinch (RFP) machines, commonly known as the "slinky mode" (SM), observed to travel around the torus in the Madison Symmetric Torus (MST). The LR analysis is used to utilize the modified Sine-Gordon dynamic equation model to predict with high confidence whether the slinky mode will lock or not lock when compared to the experimentally measured motion of the slinky mode. It is observed that under certain conditions, the slinky mode "locks" at or near the intersection of poloidal and/or toroidal gaps in MST. However, a locked mode ceases to travel around the torus, while an unlocked mode keeps traveling without a change in energy, making it hard to determine an exact set of conditions to predict locking/unlocking behaviour. The significant key model parameters determined by the LR analysis are shown to improve the Sine-Gordon model's ability to determine the locking/unlocking of magnetohydrodynamic (MHD) modes. The LR analysis of measured variables provides high confidence in anticipating locking versus unlocking of the slinky mode, proven by relational comparisons between simulations and the experimentally measured motion of the slinky mode in MST.
Machine Learning (ML) has changed clinical diagnostic procedures drastically. Especially in Cardiovascular Diseases (CVD), the use of ML is indispensable to reducing human errors. Numerous studies have focused on disease prediction, but since it depends on multiple parameters, further investigations are required to upgrade the clinical procedures. Multi-layered implementation of ML, also called Deep Learning (DL), has unfolded new horizons in the field of clinical diagnostics. DL formulates reliable accuracy with big datasets, but the reverse is the case with small datasets. This paper proposes a novel method that deals with the issue of low data dimensionality. Inspired by regression analysis, the proposed method classifies the data in three stages: in the first stage, feature representations are converted into probabilities using multiple regression techniques; the second stage grasps the probability conclusions from the previous stage; and the third stage fabricates the final classifications. Extensive experiments were carried out on the Cleveland heart disease dataset. The results show a significant improvement in classification accuracy. It is evident from the comparative results that the prevailing statistical ML methods are no longer the disease prediction techniques that will be in demand in the future.
In the era of big data, traditional regression models cannot deal with uncertain big data efficiently and accurately. In order to make up for this deficiency, this paper proposes a quantum fuzzy regression model, which uses fuzzy theory to describe the uncertainty in big data sets and uses quantum computing to exponentially improve the efficiency of data set preprocessing and parameter estimation. In this paper, data envelopment analysis (DEA) is used to calculate the degree of importance of each data point. Meanwhile, the Harrow-Hassidim-Lloyd (HHL) algorithm and quantum swap circuits are used to improve the efficiency of high-dimensional data matrix calculation. The application of the quantum fuzzy regression model to small-scale financial data proves that its accuracy is greatly improved compared with the quantum regression model. Moreover, due to the introduction of quantum computing, the speed of dealing with high-dimensional data matrices has an exponential improvement compared with the fuzzy regression model. The quantum fuzzy regression model proposed in this paper combines the advantages of fuzzy theory and quantum computing: it can efficiently calculate high-dimensional data matrices and complete parameter estimation using quantum computing while retaining the uncertainty in big data. Thus, it is a new model for efficient and accurate big data processing in uncertain environments.
Air quality is a critical concern for public health and environmental regulation. The Air Quality Index (AQI), a widely adopted index by the US Environmental Protection Agency (EPA), serves as a crucial metric for reporting site-specific air pollution levels. Accurately predicting air quality, as measured by the AQI, is essential for effective air pollution management. In this study, we aim to identify the most reliable regression model among linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), logistic regression, and K-nearest neighbors (KNN). We conducted four different regression analyses using a machine learning approach to determine the model with the best performance. By employing the confusion matrix and error percentages, we selected the best-performing model, which yielded prediction error rates of 22%, 23%, 20%, and 27%, respectively, for LDA, QDA, logistic regression, and KNN models. The logistic regression model outperformed the other three statistical models in predicting AQI. Understanding these models' performance can help address an existing gap in air quality research and contribute to the integration of regression techniques in AQI studies, ultimately benefiting stakeholders like environmental regulators, healthcare professionals, urban planners, and researchers.
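Selecting among classifiers by confusion matrix and error percentage, as done above, needs only a few lines of plain Python; the AQI category labels and predictions below are illustrative, not the study's data.

```python
def error_rate(y_true, y_pred):
    """Overall misclassification rate of predicted vs. true AQI categories."""
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)


def confusion_matrix(y_true, y_pred, labels):
    """Rows = true category, columns = predicted category."""
    idx = {c: i for i, c in enumerate(labels)}
    M = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        M[idx[t]][idx[p]] += 1
    return M


y_true = ["Good", "Good", "Moderate", "Unhealthy", "Moderate"]
y_pred = ["Good", "Moderate", "Moderate", "Unhealthy", "Good"]
M = confusion_matrix(y_true, y_pred, ["Good", "Moderate", "Unhealthy"])
err = error_rate(y_true, y_pred)
```

Computing `err` for each of LDA, QDA, logistic regression, and KNN on the same test split reproduces the 22%/23%/20%/27% comparison the abstract reports.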
The global pandemic, coronavirus disease 2019 (COVID-19), has significantly affected tourism, especially in Spain, as it was among the first countries to be affected by the pandemic and is among the world's biggest tourist destinations. Stock market values are responding to the evolution of the pandemic, especially in the case of tourist companies. Therefore, being able to quantify this relationship allows us to predict the effect of the pandemic on shares in the tourism sector, thereby improving the response to the crisis by policymakers and investors. Accordingly, a dynamic regression model was developed to predict the behavior of shares in the Spanish tourism sector according to the evolution of the COVID-19 pandemic in the medium term. It has been confirmed that both the number of deaths and the number of cases are good predictors of abnormal stock prices in the tourism sector.
Remaining useful life (RUL) prediction is one of the most crucial components in prognostics and health management (PHM) of aero-engines. This paper proposes an RUL prediction method for aero-engines that considers the randomness of the failure threshold. Firstly, a random-coefficient regression (RCR) model is used to model the degradation process of aero-engines. Then, the RUL distribution based on a fixed failure threshold is derived. The prior parameters of the degradation model are calculated by a two-step maximum likelihood estimation (MLE) method, and the random coefficient is updated in real time under the Bayesian framework. The failure threshold in this paper is defined by the actual degradation process of aero-engines. After that, an expectation maximization (EM) algorithm is proposed to estimate the underlying failure threshold of aero-engines. In addition, the conditional probability is used to satisfy the limitation of the failure threshold. Then, based on the above results, an analytical expression of the RUL distribution of aero-engines based on the RCR model considering a random failure threshold (RFT) is derived in closed form. Finally, a case study of a turbofan engine is used to demonstrate the effectiveness and superiority of the proposed RUL prediction method and the failure-threshold parameter estimation method.
It remains challenging to effectively estimate the remaining capacity of the secondary lithium-ion batteries that have been widely adopted for consumer electronics, energy storage, and electric vehicles. Herein, by integrating regular real-time current short pulse tests with a data-driven Gaussian process regression algorithm, an efficient battery estimation has been successfully developed and validated for batteries with capacity ranging from 100% of the state of health (SOH) to below 50%, reaching an average accuracy as high as 95%. Interestingly, the proposed pulse test strategy for battery capacity measurement could reduce test time by more than 80% compared with regular long charge/discharge tests. The short-term features of the current pulse test were selected for an optimal training process. Data at different voltage stages and states of charge (SOC) are collected and explored to find the most suitable estimation model. In particular, we explore the validity of five different machine-learning methods for estimating capacity driven by pulse features, of which Gaussian process regression with a Matern kernel performs the best, providing guidance for future exploration. The new strategy of combining short pulse tests with machine-learning algorithms could further open a window for efficiently forecasting the remaining capacity of lithium-ion batteries.
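The GP regression posterior mean behind such an estimator is a small linear-algebra exercise. The sketch below uses an RBF kernel for brevity (the paper found a Matern kernel best) on a made-up linear capacity-fade curve; all hyperparameters and data are assumptions.

```python
import numpy as np


def gpr_posterior_mean(X_train, y_train, X_test,
                       length=1.0, sf=1.0, noise=1e-3):
    """GP posterior mean with an RBF (squared-exponential) kernel:
    mean(x*) = k(x*, X) (K + noise*I)^-1 y."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return sf ** 2 * np.exp(-0.5 * d2 / length ** 2)

    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return k(X_test, X_train) @ alpha


# Toy capacity-fade data: SOH declining linearly with cycle count.
cycles = np.linspace(0, 500, 25)
soh = 1.0 - 0.0008 * cycles
pred = gpr_posterior_mean(cycles / 500.0, soh, np.array([0.5]), length=0.3)
```

In the paper's pipeline, the scalar input here would be replaced by a vector of short-term pulse-test features, and the same posterior formula would return the estimated capacity.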
Remaining useful life (RUL) prediction is one of the most crucial elements in prognostics and health management (PHM). Aiming at the problem of imperfect prior information, this paper proposes an RUL prediction method based on a nonlinear random coefficient regression (RCR) model that fuses failure time data. Firstly, some interesting properties of parameter estimation based on the nonlinear RCR model are given. Based on these properties, the failure time data can reasonably be fused as prior information. Specifically, the fixed parameters are calculated from the field degradation data of the evaluated equipment, and the prior information of the random coefficient is estimated by fusing the failure time data of congeneric equipment. Then, the prior information of the random coefficient is updated online under the Bayesian framework, and the probability density function (PDF) of the RUL considering the limitation of the failure threshold is derived. Finally, two case studies are used for experimental verification. Compared with the traditional Bayesian method, the proposed method can effectively reduce the influence of imperfect prior information and improve the accuracy of RUL prediction.
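Updating the random coefficient under the Bayesian framework is closed-form when the model is linear-Gaussian. The sketch below assumes a linear degradation path x_t = θt + ε with known noise variance, which is a simplification of the paper's nonlinear RCR model; all numbers are hypothetical.

```python
import numpy as np


def update_random_coefficient(mu0, var0, times, obs, noise_var):
    """Conjugate normal update for x_t = theta * t + eps,
    with theta ~ N(mu0, var0) and eps ~ N(0, noise_var).
    Returns the posterior mean and variance of theta."""
    t = np.asarray(times, float)
    x = np.asarray(obs, float)
    prec = 1.0 / var0 + np.sum(t * t) / noise_var      # posterior precision
    mean = (mu0 / var0 + np.sum(t * x) / noise_var) / prec
    return mean, 1.0 / prec


# Prior from congeneric-equipment failure data (hypothetical numbers),
# updated online with field degradation observations of this unit.
mu, var = update_random_coefficient(
    mu0=0.10, var0=0.04,
    times=[1, 2, 3, 4],
    obs=[0.11, 0.24, 0.33, 0.46],
    noise_var=1e-4)
```

Each new degradation reading tightens the posterior on θ, which in turn shifts and narrows the RUL distribution obtained from the failure threshold.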
Background: Vegetation distribution maps are of great significance for nature protection and management. In diverse tropical forests, accurate spatial mapping of vegetation types is challenging; the high species diversity and abundance of rare species challenge classification concepts, while remote sensing signals may not vary systematically with species composition, complicating the technical capability for delineating vegetation types in the landscape.
Methods: We used a combination of field-based compositional data and their relations to environmental variables to predict the distribution of forest types in the Wuzhishan National Natural Reserve (WNNR), Hainan Island, China, using multivariate regression trees (MRT). The MRT was based on arboreal vegetation composition in 132 plots of 20 m × 20 m with a regular spacing of 1 km. Apart from the MRT, non-metric multidimensional scaling (NMDS) was used to evaluate vegetation-environment relationships.
Results: The MRT model worked best when using 14 key environmental variables including topography, climate, latitude and soil, although the difference from the simpler model including only topographical variables was small. The full model classified the 132 plots into 3 vegetation types, 6 formation groups, 20 formations and 65 associations at different hierarchical syntaxonomic levels. This model was the basis for forest vegetation maps for the WNNR. MRT and NMDS showed that elevation was the main driving force for the distribution of vegetation types and formation groups. Climate, latitude, and soil (especially available P), together with topographic variables, all influenced the distribution of formations and associations.
Conclusions: While elevation determines forest-type distributions, lower-level syntaxonomic forest classes respond to the topographic diversity typical for mountains. Apart from providing the first detailed forest vegetation map for any part of the WNNR, we show how, in spite of limitations, MRT with existing environmental data can be a useful method for mapping diverse and remote tropical forests.
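A multivariate regression tree splits on an environmental variable so as to minimize the within-group sum of squares of the multivariate species-composition matrix. A single-split sketch on hypothetical elevation and abundance data (the real analysis recurses and uses 14 variables):

```python
import numpy as np


def best_split(env, comp):
    """Find the threshold on one environmental variable that minimizes
    the total within-group sum of squared deviations of the rows of
    `comp` (species abundances) from their group centroid."""
    def ssq(Y):
        return ((Y - Y.mean(axis=0)) ** 2).sum() if len(Y) else 0.0

    order = np.argsort(env)
    env_s, comp_s = env[order], comp[order]
    best = (np.inf, None)
    for i in range(1, len(env_s)):
        if env_s[i] == env_s[i - 1]:
            continue  # cannot split between equal values
        cost = ssq(comp_s[:i]) + ssq(comp_s[i:])
        thr = 0.5 * (env_s[i - 1] + env_s[i])
        if cost < best[0]:
            best = (cost, thr)
    return best[1]


# Two compositional groups clearly separated by elevation (made-up plots:
# rows are plots, columns are abundances of two species).
elev = np.array([200, 250, 300, 1200, 1300, 1400], float)
comp = np.array([[5, 0], [6, 1], [5, 1], [0, 7], [1, 6], [0, 8]], float)
threshold = best_split(elev, comp)
```

Recursively applying such splits to each resulting group, over all candidate environmental variables, yields the hierarchy of vegetation types, formation groups, formations and associations described above.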
This research presents a reputation-based blockchain consensus mechanism called Proof of Intelligent Reputation (PoIR) as an alternative to traditional Proof of Work (PoW). PoIR addresses the limitations of existing reputation-based consensus mechanisms by proposing a more decentralized and fair node selection process. The proposed PoIR consensus combines Bidirectional Long Short-Term Memory (BiLSTM) with the Network Entity Reputation Database (NERD) to generate reputation scores for network entities and select authoritative nodes. NERD records network entity profiles based on various sources, i.e., Warden, Blacklists, DShield, AlienVault Open Threat Exchange (OTX), and MISP (Malware Information Sharing Platform), and summarizes these profile records into a reputation score value. The PoIR consensus mechanism utilizes these reputation scores to select authoritative nodes. The evaluation demonstrates that PoIR exhibits higher centralization resistance than PoS and PoW. Authoritative nodes were selected fairly during the 1000-block proposal round, ensuring a more decentralized blockchain ecosystem. In contrast, malicious nodes successfully monopolized 58% and 32% of transaction processes in PoS and PoW, respectively, but failed to do so in PoIR. The findings also indicate that PoIR offers efficient transaction times of 12 s, outperforming consensus such as PoW and remaining comparable to consensus such as PoS. Furthermore, the model evaluation shows that BiLSTM outperforms other Recurrent Neural Network models, i.e., BiGRU (Bidirectional Gated Recurrent Unit), UniLSTM (Unidirectional Long Short-Term Memory), and UniGRU (Unidirectional Gated Recurrent Unit), with a 0.022 Root Mean Squared Error (RMSE). This study concludes that the PoIR consensus mechanism is more resistant to centralization than PoS and PoW, and that integrating BiLSTM and NERD enhances the fairness and efficiency of blockchain applications.
In this paper, we define the curve rλ = r + λd at a constant distance from the edge of regression on a curve r(s) with arc length parameter s in Galilean 3-space. Here, d is a non-isotropic or isotropic vector defined as a vector tightly fastened to the Frenet trihedron of the curve r(s) in 3-dimensional Galilean space. We build the Frenet frame {Tλ, Nλ, Bλ} of the constructed curve rλ with respect to two types of the vector d, and we indicate the properties related to the curvatures of the curve rλ. Also, for the curve rλ, we give the conditions to be a circular helix. Furthermore, we discuss ruled surfaces of type A generated via the curve rλ and the vector D, which is defined as the tangent of the curve rλ in 3-dimensional Galilean space. The constructed ruled surfaces also appear in two ways. The first is constructed with the curve rλ(s) = r(s) + λT(s) and the non-isotropic vector D. The second is formed by the curve rλ = r(s) + λ2N + λ3B and the non-isotropic vector D. We calculate the distribution parameters of the constructed ruled surfaces and we show that the ruled surfaces are developable. Finally, we provide examples and visuals to back up our research.
Funding: Supported in part by the National Key Research and Development Program of China (2021YFC2902703) and the National Natural Science Foundation of China (62173078, 61773105, 61533007, 61873049, 61873053, 61703085, 61374147).
Abstract: Concentrate copper grade (CCG) is one of the important production indicators of copper flotation processes, and keeping the CCG at the set value is of great significance to the economic benefit of copper flotation industrial processes. This paper addresses the fluctuation problem of CCG through an operational optimization method. Firstly, a density-based affinity propagation algorithm is proposed so that better working-condition categories can be obtained for complex raw ore properties. Next, a Bayesian network (BN) is applied to explore the relationship between the operational variables and the CCG. Based on the analysis results of the BN, a weighted Gaussian process regression model is constructed to predict the CCG so that higher prediction accuracy can be obtained. To ensure that the predicted CCG is close to the set value with a smaller magnitude of operation adjustments and a smaller uncertainty in the prediction results, an index-oriented adaptive differential evolution (IOADE) algorithm is proposed, whose convergence performance is superior to that of the traditional differential evolution and adaptive differential evolution methods. Finally, the effectiveness and feasibility of the proposed methods are verified by experiments on a copper flotation industrial process.
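As a reference point for the IOADE comparison, the classic differential evolution baseline (DE/rand/1/bin) can be sketched as follows; the index-oriented and adaptive elements of IOADE are not reproduced, and the sphere objective is a stand-in for the real grade-tracking objective:

```python
import random

# Minimal DE/rand/1/bin sketch: mutate with a scaled difference vector,
# binomial crossover, greedy selection.

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9,
                           generations=200, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clamp to the search bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fitness[i]:              # greedy one-to-one selection
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Toy objective standing in for distance of predicted grade from the set value.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
print(x_best, f_best)
```

IOADE's claimed advantage lies in adapting `F` and `CR` and in its index-oriented objective, neither of which this fixed-parameter sketch attempts.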
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 42072309), the Fundamental Research Funds for National University, China University of Geosciences (Wuhan) (Grant No. CUGDCJJ202217), the Knowledge Innovation Program of Wuhan-Basic Research (Grant No. 2022020801010199), and the Hubei Key Laboratory of Blasting Engineering Foundation (HKLBEF202002).
Abstract: Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management. In this study, Tuna Swarm Optimization (TSO), the Whale Optimization Algorithm (WOA), and Cuckoo Search (CS) were used to optimize two hyperparameters in support vector regression (SVR). Based on these methods, three hybrid models to predict peak particle velocity (PPV) for bench blasting were developed. Eighty-eight samples were collected to establish the PPV database; eight initial blasting parameters were chosen as input parameters for the prediction model, and the PPV was the output parameter. The coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and a10-index were selected as predictive performance evaluation indicators. The normalized mutual information value was then used to evaluate the impact of the various input parameters on the PPV prediction outcomes. According to the research findings, TSO, WOA, and CS can all enhance the predictive performance of the SVR model, with the TSO-SVR model providing the most accurate predictions. The performance of the optimized hybrid SVR models is superior to that of the unoptimized traditional prediction model. The maximum charge per delay impacts the PPV prediction value the most.
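The four evaluation indicators are standard and easy to state in code. The a10-index definition below (fraction of samples whose predicted/measured ratio falls in [0.9, 1.1]) is the common convention and is assumed here; the numbers are illustrative, not PPV data:

```python
import math

# The four indicators from the abstract, in minimal pure-Python form.

def r2(y, yhat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def a10(y, yhat):
    # Share of predictions within +/-10% of the measured value.
    ok = sum(1 for a, b in zip(y, yhat) if a != 0 and 0.9 <= b / a <= 1.1)
    return ok / len(y)

y    = [2.0, 4.0, 6.0, 8.0]
yhat = [2.1, 3.9, 7.0, 8.0]
print(r2(y, yhat), rmse(y, yhat), mae(y, yhat), a10(y, yhat))
```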
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 11875027, 11975096).
Abstract: The extended kernel ridge regression (EKRR) method with odd-even effects was adopted to improve the description of the nuclear charge radius using five commonly used nuclear models. These are: (i) the isospin-dependent A^(1/3) formula, (ii) relativistic continuum Hartree-Bogoliubov (RCHB) theory, (iii) the Hartree-Fock-Bogoliubov (HFB) model HFB25, (iv) the Weizsäcker-Skyrme (WS) model WS*, and (v) the HFB25* model. In the last two models, the charge radii were calculated using a five-parameter formula with the nuclear shell corrections and deformations obtained from the WS and HFB25 models, respectively. For each model, the resulting root-mean-square deviation for the 1014 nuclei with proton number Z ≥ 8 can be significantly reduced to 0.009-0.013 fm after considering the modification with the EKRR method. The best among them was the RCHB model, with a root-mean-square deviation of 0.0092 fm. The extrapolation abilities of the KRR and EKRR methods for the neutron-rich region were examined, and it was found that after considering the odd-even effects, the extrapolation power was improved compared with that of the original KRR method. The strong odd-even staggering of the nuclear charge radii of Ca and Cu isotopes and the abrupt kinks across the neutron N = 126 and 82 shell closures were also calculated and could be reproduced quite well by calculations using the EKRR method.
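The KRR baseline that EKRR extends can be sketched with an RBF kernel; the odd-even extension and the nuclear-model residuals are not reproduced, and the sine data below are purely illustrative:

```python
import math

# Bare-bones kernel ridge regression: fit alpha = (K + lam*I)^-1 y,
# predict with a weighted sum of kernel evaluations.

def rbf(x, z, sigma=1.0):
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-6, sigma=1.0):
    n = len(xs)
    K = [[rbf(xs[i], xs[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(alpha, xs, x, sigma=1.0):
    return sum(a * rbf(xi, x, sigma) for a, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
alpha = krr_fit(xs, ys)
print(krr_predict(alpha, xs, 1.0))   # close to sin(1.0)
```

In the paper's setting, the training targets would be each nuclear model's charge-radius residuals rather than a sine curve, with the EKRR odd-even correction added on top.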
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R194), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: At present, Human Activity Recognition (HAR) has been of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent in the area of health informatics gathered using HAR augments decision-making quality and significance. Although many research works have been conducted on smart healthcare monitoring, a certain number of pitfalls remain, such as time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for smart healthcare monitoring. At first, the statistical partial regression feature extraction model is used for data preprocessing along with the extraction of dimensionality-reduced features. Here, the input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate smart healthcare monitoring in less time, partial least squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for smart healthcare monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, and false-positive-rate accuracy concerning several instances. The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Funding: Supported in part by the National Key Research and Development Program of China (2019YFB1503700) and the Hunan Natural Science Foundation-Science and Education Joint Project (2019JJ70063).
Abstract: The noise that comes from finite element simulation often causes the model to fall into a local optimal solution and overfit during optimization of a generator. Thus, this paper proposes a Gaussian Process Regression (GPR) model based on Conditional Likelihood Lower Bound Search (CLLBS) to optimize the design of the generator, which can filter the noise in the data and search for the global optimum, taking the efficiency optimization of a 15 kW permanent magnet synchronous motor as an example. Firstly, this method uses elementary effect analysis to choose the sensitive variables, combining an evolutionary algorithm to design the Latin hypercube sampling plan; then the generator-converter system is simulated on an established co-simulation platform to obtain data. A Gaussian process regression model combining the conditional likelihood lower bound search method is established, which uses the chi-square test to optimize the accuracy of the model globally. Secondly, after the model reaches the required accuracy, the Pareto frontier is obtained through the NSGA-II algorithm by considering the maximum output torque as a constraint. Last, the constrained optimization is transformed into an unconstrained optimization problem by introducing a maximum constrained expected improvement (CEI) optimization method based on the re-interpolation model, which cross-validates the optimization results of the Gaussian process regression model. The above method increases the efficiency of the generator by 0.76% and 0.5%, respectively, and can be used for rapid modeling and multi-objective optimization of generator systems.
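The Pareto-frontier step can be illustrated with the first non-dominated front, the core operation NSGA-II applies repeatedly; the toy points below stand in for hypothetical (efficiency loss, torque violation) pairs:

```python
# First non-dominated front for two-objective minimization.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2), (5, 0)]
print(sorted(pareto_front(pts)))
```

NSGA-II peels off such fronts repeatedly and adds crowding-distance sorting within each front, which this sketch omits.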
Abstract: Extensible Markup Language (XML) files, widely used for storing and exchanging information on the web, require efficient parsing mechanisms to improve the performance of applications. With the existing Document Object Model (DOM) based parsing, performance degrades due to sequential processing and large memory requirements, thereby requiring an efficient XML parser to mitigate these issues. In this paper, we propose a Parallel XML Tree Generator (PXTG) algorithm for accelerating the parsing of XML files and a Regression-based XML Parsing Framework (RXPF) that analyzes and predicts performance through profiling, regression, and code generation for efficient parsing. The PXTG algorithm is based on dividing the XML file into n parts and producing n trees in parallel. The profiling phase of the RXPF framework produces a dataset by measuring the performance of various parsing models, including StAX, SAX, DOM, JDOM, and PXTG, on different cores and with multiple file sizes. The regression phase produces the prediction model, based on which the final code for efficient parsing of XML files is produced through the code generation phase. The RXPF framework has shown a significant improvement in performance, varying from 9.54% to 32.34%, over other existing models used for parsing XML files.
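The divide-and-parse idea behind PXTG can be sketched with the Python standard library; this toy version assumes the records are already cleanly separated (a real splitter must respect element boundaries), and Python threads will not yield a genuine parallel speedup under the GIL:

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# Synthetic flat record list standing in for an XML file's contents.
records = ["<item id='%d'><v>%d</v></item>" % (i, i * i) for i in range(100)]

def parse_chunk(chunk):
    # Wrap each chunk in a synthetic root so it parses as one subtree.
    root = ET.fromstring("<root>" + "".join(chunk) + "</root>")
    return [int(item.find("v").text) for item in root]

def parallel_parse(records, n_parts=4):
    # Split into n parts and parse each part concurrently; map() keeps order.
    size = (len(records) + n_parts - 1) // n_parts
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        parts = pool.map(parse_chunk, chunks)
    return [v for part in parts for v in part]

values = parallel_parse(records)
print(len(values), values[:5])
```

The paper's framework targets Java parsers (StAX, SAX, DOM, JDOM), where the n-way split can exploit real thread parallelism.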
Abstract: In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule, and other methods were comprehensively used to build logistic regression, cluster analysis, and hyper-parameter test models, and SPSS, Python, and other tools were used to obtain the classification rules of glass products under different fluxes, sub-classification under different chemical compositions, hyper-parameter K value tests, and rationality analysis. This research can provide theoretical support for the protection and restoration of ancient glass relics.
Abstract: BACKGROUND: The worldwide spread of the severe acute respiratory syndrome coronavirus 2 outbreak has caused concern regarding the mortality rate caused by the infection. The determinants of mortality on a global scale cannot be fully understood due to lack of information. AIM: To identify key factors that may explain the variability in case lethality across countries. METHODS: We identified 21 potential risk factors for the coronavirus disease 2019 (COVID-19) case fatality rate (CFR) for all countries with available data. We examined the univariate relationship of each variable with the CFR, and all independent variables, to identify candidate variables for our final multiple model. Multiple regression analysis was used to assess the strength of the relationships. RESULTS: The mean COVID-19 mortality was 1.52 ± 1.72%. There was a statistically significant inverse correlation of health expenditure and the number of computed tomography scanners per 1 million population with the CFR, and a significant direct correlation of literacy and air pollution with the CFR. The final model can predict approximately 97% of the changes in the CFR. CONCLUSION: The current study identifies some new predictors that help explain the mortality rate. Thus, it could help decision-makers develop health policies to fight COVID-19.
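The univariate screening stage amounts to computing a correlation of each candidate variable with the CFR; the figures below are invented for illustration, not the study's country data:

```python
import math

# Pearson correlation of one candidate predictor with the case fatality rate.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cfr          = [0.5, 1.0, 1.5, 2.0, 4.0]   # % CFR, hypothetical
health_spend = [9.0, 8.0, 6.5, 5.0, 3.0]   # % of GDP, hypothetical
print(pearson(health_spend, cfr))          # strongly negative, as the
                                           # abstract's inverse correlation
```

Variables that survive such screening would then enter the multiple regression model as candidates.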
Abstract: In this paper, a logistic regression (LR) statistical analysis is presented for a set of variables used in experimental measurements in reversed field pinch (RFP) machines, commonly known as the "slinky mode" (SM), observed to travel around the torus in the Madison Symmetric Torus (MST). The LR analysis is used with the modified Sine-Gordon dynamic equation model to predict, with high confidence, whether the slinky mode will lock or not lock when compared to the experimentally measured motion of the slinky mode. It is observed that under certain conditions, the slinky mode "locks" at or near the intersection of poloidal and/or toroidal gaps in MST. However, a locked mode ceases to travel around the torus, while an unlocked mode keeps traveling without a change in energy, making it hard to determine an exact set of conditions to predict locking/unlocking behaviour. The significant key model parameters determined by the LR analysis are shown to improve the Sine-Gordon model's ability to determine the locking/unlocking of magnetohydrodynamic (MHD) modes. The LR analysis of measured variables provides high confidence in anticipating locking versus unlocking of the slinky mode, proven by relational comparisons between simulations and the experimentally measured motion of the slinky mode in MST.
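A minimal logistic regression fitted by gradient ascent illustrates the kind of LR scoring used for the lock/no-lock outcome; the single synthetic feature below is an assumption, not an MST measurement:

```python
import math

# Fit logistic regression by full-batch gradient ascent on the
# log-likelihood; y = 1 could stand for "mode locks", 0 for "does not".

def fit_logistic(X, y, lr=0.5, steps=2000):
    w = [0.0] * (len(X[0]) + 1)            # bias + one weight per feature
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            grad[0] += err
            for j, a in enumerate(xi):
                grad[j + 1] += err * a
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# One synthetic feature separating the two outcomes.
X = [[0.0], [0.5], [1.0], [3.0], [3.5], [4.0]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
print([round(predict(w, xi), 2) for xi in X])
```

In the paper's setting, the inputs would be the measured SM variables and the fitted probabilities would feed the lock/no-lock prediction alongside the Sine-Gordon model.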
Abstract: Machine Learning (ML) has changed clinical diagnostic procedures drastically. Especially in Cardiovascular Diseases (CVD), the use of ML is indispensable for reducing human error. Numerous studies have focused on disease prediction, but because they depend on multiple parameters, further investigations are required to upgrade clinical procedures. Multi-layered implementation of ML, also called Deep Learning (DL), has unfolded new horizons in the field of clinical diagnostics. DL achieves reliable accuracy with big datasets, but the reverse is the case with small datasets. This paper proposes a novel method that deals with the issue of low data dimensionality. Inspired by regression analysis, the proposed method classifies the data by going through three different stages. In the first stage, feature representation is converted into probabilities using multiple regression techniques; the second stage grasps the probability conclusions from the previous stage; and the third stage fabricates the final classifications. Extensive experiments were carried out on the Cleveland heart disease dataset. The results show significant improvement in classification accuracy. The comparative results of the paper make it evident that statistical ML methods are by no means stagnant disease prediction techniques and will remain in demand in the future.
Funding: This work is supported by the National Natural Science Foundation of China (No. 62076042), the Key Research and Development Project of Sichuan Province (Nos. 2021YFSY0012, 2020YFG0307, 2021YFG0332), the Science and Technology Innovation Project of Sichuan (No. 2020017), the Key Research and Development Project of Chengdu (No. 2019-YF05-02028-GX), the Innovation Team of Quantum Security Communication of Sichuan Province (No. 17TD0009), and the Academic and Technical Leaders Training Funding Support Projects of Sichuan Province (No. 2016120080102643).
Abstract: In the era of big data, traditional regression models cannot deal with uncertain big data efficiently and accurately. To make up for this deficiency, this paper proposes a quantum fuzzy regression model, which uses fuzzy theory to describe the uncertainty in big data sets and uses quantum computing to exponentially improve the efficiency of data set preprocessing and parameter estimation. In this paper, data envelopment analysis (DEA) is used to calculate the degree of importance of each data point. Meanwhile, the Harrow-Hassidim-Lloyd (HHL) algorithm and quantum swap circuits are used to improve the efficiency of high-dimensional data matrix calculation. The application of the quantum fuzzy regression model to small-scale financial data proves that its accuracy is greatly improved compared with the quantum regression model. Moreover, due to the introduction of quantum computing, the speed of dealing with high-dimensional data matrices improves exponentially compared with the fuzzy regression model. The proposed quantum fuzzy regression model combines the advantages of fuzzy theory and quantum computing: it can efficiently calculate high-dimensional data matrices and complete parameter estimation while retaining the uncertainty in big data. Thus, it is a new model for efficient and accurate big data processing in uncertain environments.
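The role of the DEA importance degrees can be illustrated classically by feeding per-point weights into weighted least squares; this is only a stand-in sketch, and neither the fuzzy-set representation nor the HHL quantum step is reproduced:

```python
# Weighted least squares for one predictor: each point carries an
# importance weight (the paper derives such weights via DEA).

def weighted_least_squares(x, y, w):
    sw  = sum(w)
    mx  = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my  = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    slope = num / den
    return slope, my - slope * mx

x = [1.0, 2.0, 3.0, 4.0, 10.0]
y = [2.1, 3.9, 6.1, 8.0, 0.0]        # last point is an outlier
w = [1.0, 1.0, 1.0, 1.0, 0.01]       # its importance is down-weighted
slope, intercept = weighted_least_squares(x, y, w)
print(slope, intercept)
```

With the outlier down-weighted, the fit stays close to the trend of the first four points instead of being dragged toward zero.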
Abstract: Air quality is a critical concern for public health and environmental regulation. The Air Quality Index (AQI), an index widely adopted by the US Environmental Protection Agency (EPA), serves as a crucial metric for reporting site-specific air pollution levels. Accurately predicting air quality, as measured by the AQI, is essential for effective air pollution management. In this study, we aim to identify the most reliable model among linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), logistic regression, and K-nearest neighbors (KNN). We conducted four different analyses using a machine learning approach to determine the model with the best performance. By employing the confusion matrix and error percentages, we selected the best-performing model, which yielded prediction error rates of 22%, 23%, 20%, and 27% for the LDA, QDA, logistic regression, and KNN models, respectively. The logistic regression model outperformed the other three statistical models in predicting the AQI. Understanding these models' performance can help address an existing gap in air quality research and contribute to the integration of regression techniques in AQI studies, ultimately benefiting stakeholders like environmental regulators, healthcare professionals, urban planners, and researchers.
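The confusion-matrix and error-percentage comparison can be sketched directly; the AQI category labels and predictions below are illustrative, not EPA data:

```python
# Confusion matrix (rows = actual, columns = predicted) and the
# prediction error percentage used to rank the four classifiers.

def confusion_matrix(actual, predicted, labels):
    idx = {c: i for i, c in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for a, p in zip(actual, predicted):
        m[idx[a]][idx[p]] += 1
    return m

def error_rate(actual, predicted):
    wrong = sum(1 for a, p in zip(actual, predicted) if a != p)
    return 100.0 * wrong / len(actual)

labels    = ["Good", "Moderate", "Unhealthy"]
actual    = ["Good", "Good", "Moderate", "Unhealthy", "Moderate"]
predicted = ["Good", "Moderate", "Moderate", "Unhealthy", "Unhealthy"]
print(confusion_matrix(actual, predicted, labels))
print(error_rate(actual, predicted))   # 40.0
```

Computing this error rate for each of the four fitted models reproduces the comparison behind the 22%/23%/20%/27% figures.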
Abstract: The global pandemic, coronavirus disease 2019 (COVID-19), has significantly affected tourism, especially in Spain, as it was among the first countries to be affected by the pandemic and is among the world's biggest tourist destinations. Stock market values respond to the evolution of the pandemic, especially in the case of tourist companies. Therefore, being able to quantify this relationship allows us to predict the effect of the pandemic on shares in the tourism sector, thereby improving the response to the crisis by policymakers and investors. Accordingly, a dynamic regression model was developed to predict the behavior of shares in the Spanish tourism sector according to the evolution of the COVID-19 pandemic in the medium term. It has been confirmed that both the number of deaths and the number of cases are good predictors of abnormal stock prices in the tourism sector.
Funding: Supported by the National Natural Science Foundation of China (61703410, 61873175, 62073336, 61873273, 61773386, 61922089) and the Basic Research Plan of Shaanxi Natural Science Foundation of China (2022JM-376).
Abstract: Remaining useful life (RUL) prediction is one of the most crucial components in the prognostics and health management (PHM) of aero-engines. This paper proposes an RUL prediction method for aero-engines that considers the randomness of the failure threshold. Firstly, a random-coefficient regression (RCR) model is used to model the degradation process of aero-engines. Then, the RUL distribution based on a fixed failure threshold is derived. The prior parameters of the degradation model are calculated by a two-step maximum likelihood estimation (MLE) method, and the random coefficient is updated in real time under the Bayesian framework. The failure threshold in this paper is defined by the actual degradation process of aero-engines. After that, an expectation maximization (EM) algorithm is proposed to estimate the underlying failure threshold of aero-engines. In addition, the conditional probability is used to satisfy the limitation of the failure threshold. Then, based on the above results, an analytical expression for the RUL distribution of aero-engines based on the RCR model considering a random failure threshold (RFT) is derived in closed form. Finally, a case study of a turbofan engine is used to demonstrate the effectiveness and superiority of the proposed RUL prediction method and the failure-threshold parameter estimation method.
Funding: Support from the Shenzhen Municipal Development and Reform Commission (Grant Number: SDRC[2016]172), the Shenzhen Science and Technology Program (Grant No. KQTD20170810150821146), the Interdisciplinary Research and Innovation Fund of Tsinghua Shenzhen International Graduate School, and Shanghai Shun Feng Machinery Co., Ltd.
Abstract: It remains challenging to effectively estimate the remaining capacity of the secondary lithium-ion batteries that have been widely adopted for consumer electronics, energy storage, and electric vehicles. Herein, by integrating regular real-time current short pulse tests with a data-driven Gaussian process regression algorithm, an efficient battery estimation has been successfully developed and validated for batteries with capacity ranging from 100% of the state of health (SOH) to below 50%, reaching an average accuracy as high as 95%. Interestingly, the proposed pulse test strategy for battery capacity measurement could reduce test time by more than 80% compared with regular long charge/discharge tests. The short-term features of the current pulse test were selected for an optimal training process. Data at different voltage stages and states of charge (SOC) are collected and explored to find the most suitable estimation model. In particular, we explore the validity of five different machine-learning methods for estimating capacity driven by pulse features, of which Gaussian process regression with a Matern kernel performs the best, providing guidance for future exploration. The new strategy of combining short pulse tests with machine-learning algorithms could further open a window for efficiently forecasting the remaining capacity of lithium-ion batteries.
Funding: Supported by the National Natural Science Foundation of China (61703410, 61873175, 62073336, 61873273, 61773386, 61922089).
Abstract: Remaining useful life (RUL) prediction is one of the most crucial elements in prognostics and health management (PHM). Aiming at imperfect prior information, this paper proposes an RUL prediction method based on a nonlinear random coefficient regression (RCR) model that fuses failure time data. Firstly, some useful properties of parameter estimation based on the nonlinear RCR model are given. Based on these properties, the failure time data can reasonably be fused as prior information. Specifically, the fixed parameters are calculated from the field degradation data of the evaluated equipment, and the prior information of the random coefficient is estimated by fusing the failure time data of congeneric equipment. Then, the prior information of the random coefficient is updated online under the Bayesian framework, and the probability density function (PDF) of the RUL is derived with consideration of the failure-threshold limitation. Finally, two case studies are used for experimental verification. Compared with the traditional Bayesian method, the proposed method can effectively reduce the influence of imperfect prior information and improve the accuracy of RUL prediction.
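The online Bayesian update of the random coefficient can be sketched for the linear special case x_t = θ·t + noise with a Gaussian prior on θ (the paper's model is nonlinear; the prior, noise level, and data below are invented for illustration):

```python
# Normal-normal conjugate update: each new degradation measurement x at
# time t tightens the posterior over the random coefficient theta.

def update_theta(mu, var, t, x, noise_var):
    precision = 1.0 / var + t * t / noise_var
    post_var = 1.0 / precision
    post_mu = post_var * (mu / var + t * x / noise_var)
    return post_mu, post_var

mu, var = 1.0, 1.0            # prior fused from congeneric equipment
noise_var = 0.04
for t, x in [(1, 0.52), (2, 0.97), (3, 1.55), (4, 1.98)]:
    mu, var = update_theta(mu, var, t, x, noise_var)
print(mu, var)                # posterior concentrates near the true slope
```

The posterior over θ then induces the RUL distribution via the first time x_t crosses the failure threshold, which the paper additionally treats as random.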
Funding: Financially supported by the National Key R&D Program of China (2021YFD220040403 and 2021YFD220040304) and the China Scholarship Council (202107565021).
Abstract: Background: Vegetation distribution maps are of great significance for nature protection and management. In diverse tropical forests, accurate spatial mapping of vegetation types is challenging: the high species diversity and abundance of rare species challenge classification concepts, while remote sensing signals may not vary systematically with species composition, complicating the technical capability for delineating vegetation types in the landscape. Methods: We used a combination of field-based compositional data and their relations to environmental variables to predict the distribution of forest types in the Wuzhishan National Natural Reserve (WNNR), Hainan Island, China, using multivariate regression trees (MRT). The MRT was based on arboreal vegetation composition in 132 plots of 20 m × 20 m with a regular spacing of 1 km. Apart from the MRT, non-metric multidimensional scaling (NMDS) was used to evaluate vegetation-environment relationships. Results: The MRT model worked best when using 14 key environmental variables including topography, climate, latitude and soil, although the difference from the simpler model including only topographical variables was small. The full model classified the 132 plots into 3 vegetation types, 6 formation groups, 20 formations and 65 associations at different hierarchical syntaxonomic levels. This model was the basis for forest vegetation maps for the WNNR. MRT and NMDS showed that elevation was the main driving force for the distribution of vegetation types and formation groups. Climate, latitude, and soil (especially available P), together with topographic variables, all influenced the distribution of formations and associations. Conclusions: While elevation determines forest-type distributions, lower-level syntaxonomic forest classes respond to the topographic diversity typical for mountains.
Apart from providing the first detailed forest vegetation map for any part of WNNR, we show how, in spite of limitations, MRT with existing environmental data can be a useful method for mapping diverse and remote tropical forests.
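The elementary step that MRT applies recursively is a single split that minimizes the summed squared error of the two child groups; the elevation values and response scores below are illustrative, not WNNR data:

```python
# One regression-tree split on a single environmental variable.

def sse(values):
    if not values:
        return 0.0
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_split(x, y):
    best = (None, float("inf"))
    for threshold in sorted(set(x))[1:]:
        left  = [yi for xi, yi in zip(x, y) if xi <  threshold]
        right = [yi for xi, yi in zip(x, y) if xi >= threshold]
        cost = sse(left) + sse(right)
        if cost < best[1]:
            best = (threshold, cost)
    return best

elevation = [200, 300, 400, 1200, 1400, 1600]       # m, hypothetical
response  = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]          # composition score
threshold, cost = best_split(elevation, response)
print(threshold, cost)
```

In real MRT the response is multivariate (species composition per plot), the cost sums squared distances to the multivariate group mean, and the best split is chosen across all candidate environmental variables, then applied recursively, which is consistent with the paper finding elevation as the dominant first-level driver.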
Funding: Funded by the Ministry of Education, Culture, Research, and Technology (Kemendikbudristek) of Indonesia under a PDD Grant (Grant Number NKB1016/UN2.RST/HKP.05.00/2022).
Abstract: This research presents a reputation-based blockchain consensus mechanism called Proof of Intelligent Reputation (PoIR) as an alternative to traditional Proof of Work (PoW). PoIR addresses the limitations of existing reputation-based consensus mechanisms by proposing a more decentralized and fair node selection process. The proposed PoIR consensus combines Bidirectional Long Short-Term Memory (BiLSTM) with the Network Entity Reputation Database (NERD) to generate reputation scores for network entities and select authoritative nodes. NERD records network entity profiles based on various sources, i.e., Warden, Blacklists, DShield, AlienVault Open Threat Exchange (OTX), and MISP (Malware Information Sharing Platform), and summarizes these profile records into a reputation score value. The PoIR consensus mechanism utilizes these reputation scores to select authoritative nodes. The evaluation demonstrates that PoIR exhibits higher centralization resistance than PoS and PoW: authoritative nodes were selected fairly during the 1000-block proposal round, ensuring a more decentralized blockchain ecosystem, whereas malicious nodes successfully monopolized 58% and 32% of transaction processes in PoS and PoW, respectively, but failed to do so in PoIR. The findings also indicate that PoIR offers efficient transaction times of 12 s, outperforming PoW and remaining comparable to reputation-based consensus mechanisms such as PoS. Furthermore, the model evaluation shows that BiLSTM outperforms other recurrent neural network models, i.e., BiGRU (Bidirectional Gated Recurrent Unit), UniLSTM (Unidirectional Long Short-Term Memory), and UniGRU (Unidirectional Gated Recurrent Unit), with 0.022 Root Mean Squared Error (RMSE). This study concludes that the PoIR consensus mechanism is more resistant to centralization than PoS and PoW, and that integrating BiLSTM and NERD enhances the fairness and efficiency of blockchain applications.
Abstract: In this paper, we define the curve rλ = r + λd at a constant distance from the edge of regression on a curve r(s) with arc length parameter s in Galilean 3-space. Here, d is a non-isotropic or isotropic vector defined as a vector rigidly attached to the Frenet trihedron of the curve r(s) in 3-dimensional Galilean space. We build the Frenet frame {Tλ, Nλ, Bλ} of the constructed curve rλ with respect to the two types of the vector d, and we describe the properties related to the curvatures of the curve rλ. Also, for the curve rλ, we give the conditions for it to be a circular helix. Furthermore, we discuss ruled surfaces of type A generated via the curve rλ and the vector D, which is defined as the tangent of the curve rλ in 3-dimensional Galilean space. The constructed ruled surfaces also appear in two ways. The first is constructed with the curve rλ(s) = r(s) + λT(s) and the non-isotropic vector D. The second is formed by the curve rλ = r(s) + λ2N + λ3B and the non-isotropic vector D. We calculate the distribution parameters of the constructed ruled surfaces and show that the ruled surfaces are developable. Finally, we provide examples and visuals to support our research.