In this paper, auxiliary information is used to determine an estimator of finite population total using nonparametric regression under stratified random sampling. To achieve this, a model-based approach is adopted by ...In this paper, auxiliary information is used to determine an estimator of finite population total using nonparametric regression under stratified random sampling. To achieve this, a model-based approach is adopted by making use of the local polynomial regression estimation to predict the nonsampled values of the survey variable y. The performance of the proposed estimator is investigated against some design-based and model-based regression estimators. The simulation experiments show that the resulting estimator exhibits good properties. Generally, good confidence intervals are seen for the nonparametric regression estimators, and use of the proposed estimator leads to relatively smaller values of RE compared to other estimators.展开更多
In this paper, we try to find numerical solution of y'(x)= p(x)y(x)+g(x)+λ∫ba K(x, t)y(t)dt, y(a)=α. a≤x≤b, a≤t≤b or y'(x)= p(x)y(x)+g(x)+λ∫xa K(x, t)y(t)dt, y(a)=α. a≤x≤b, a≤t≤b by using Local p...In this paper, we try to find numerical solution of y'(x)= p(x)y(x)+g(x)+λ∫ba K(x, t)y(t)dt, y(a)=α. a≤x≤b, a≤t≤b or y'(x)= p(x)y(x)+g(x)+λ∫xa K(x, t)y(t)dt, y(a)=α. a≤x≤b, a≤t≤b by using Local polynomial regression (LPR) method. The numerical solution shows that this method is powerful in solving integro-differential equations. The method will be tested on three model problems in order to demonstrate its usefulness and accuracy.展开更多
In this study, a multivariate local quadratic polynomial regression(MLQPR) method is proposed to design a model for the sludge volume index(SVI). In MLQPR, a quadratic polynomial regression function is established to ...In this study, a multivariate local quadratic polynomial regression(MLQPR) method is proposed to design a model for the sludge volume index(SVI). In MLQPR, a quadratic polynomial regression function is established to describe the relationship between SVI and the relative variables, and the important terms of the quadratic polynomial regression function are determined by the significant test of the corresponding coefficients. Moreover, a local estimation method is introduced to adjust the weights of the quadratic polynomial regression function to improve the model accuracy. Finally, the proposed method is applied to predict the SVI values in a real wastewater treatment process(WWTP). The experimental results demonstrate that the proposed MLQPR method has faster testing speed and more accurate results than some existing methods.展开更多
The toxicity of heroin (the drug) which affects the health of mice was studied by using the regression analysis method based on the results of experiment.We found that after the heroin was injected into the mice,the b...The toxicity of heroin (the drug) which affects the health of mice was studied by using the regression analysis method based on the results of experiment.We found that after the heroin was injected into the mice,the blood leucocyte number and body weight were decreased significantly,the bleeding time was prolonged,the activity of glutamic pyruvic tronsaminase(GPT) in mice hepatic tissue and the weight of heart raised with increasing of herion dose.展开更多
Fermat’s Last Theorem is a famous theorem in number theory which is difficult to prove.However,it is known that the version of polynomials with one variable of Fermat’s Last Theorem over C can be proved very concisely...Fermat’s Last Theorem is a famous theorem in number theory which is difficult to prove.However,it is known that the version of polynomials with one variable of Fermat’s Last Theorem over C can be proved very concisely.The aim of this paper is to study the similar problems about Fermat’s Last Theorem for multivariate(skew)-polynomials with any characteristic.展开更多
Purpose:The purpose of this study is to develop and compare model choice strategies in context of logistic regression.Model choice means the choice of the covariates to be included in the model.Design/methodology/appr...Purpose:The purpose of this study is to develop and compare model choice strategies in context of logistic regression.Model choice means the choice of the covariates to be included in the model.Design/methodology/approach:The study is based on Monte Carlo simulations.The methods are compared in terms of three measures of accuracy:specificity and two kinds of sensitivity.A loss function combining sensitivity and specificity is introduced and used for a final comparison.Findings:The choice of method depends on how much the users emphasize sensitivity against specificity.It also depends on the sample size.For a typical logistic regression setting with a moderate sample size and a small to moderate effect size,either BIC,BICc or Lasso seems to be optimal.Research limitations:Numerical simulations cannot cover the whole range of data-generating processes occurring with real-world data.Thus,more simulations are needed.Practical implications:Researchers can refer to these results if they believe that their data-generating process is somewhat similar to some of the scenarios presented in this paper.Alternatively,they could run their own simulations and calculate the loss function.Originality/value:This is a systematic comparison of model choice algorithms and heuristics in context of logistic regression.The distinction between two types of sensitivity and a comparison based on a loss function are methodological novelties.展开更多
The Extensible Markup Language(XML)files,widely used for storing and exchanging information on the web require efficient parsing mechanisms to improve the performance of the applications.With the existing Document Obj...The Extensible Markup Language(XML)files,widely used for storing and exchanging information on the web require efficient parsing mechanisms to improve the performance of the applications.With the existing Document Object Model(DOM)based parsing,the performance degrades due to sequential processing and large memory requirements,thereby requiring an efficient XML parser to mitigate these issues.In this paper,we propose a Parallel XML Tree Generator(PXTG)algorithm for accelerating the parsing of XML files and a Regression-based XML Parsing Framework(RXPF)that analyzes and predicts performance through profiling,regression,and code generation for efficient parsing.The PXTG algorithm is based on dividing the XML file into n parts and producing n trees in parallel.The profiling phase of the RXPF framework produces a dataset by measuring the performance of various parsing models including StAX,SAX,DOM,JDOM,and PXTG on different cores by using multiple file sizes.The regression phase produces the prediction model,based on which the final code for efficient parsing of XML files is produced through the code generation phase.The RXPF framework has shown a significant improvement in performance varying from 9.54%to 32.34%over other existing models used for parsing XML files.展开更多
Video watermarking plays a crucial role in protecting intellectual property rights and ensuring content authenticity.This study delves into the integration of Galois Field(GF)multiplication tables,especially GF(2^(4))...Video watermarking plays a crucial role in protecting intellectual property rights and ensuring content authenticity.This study delves into the integration of Galois Field(GF)multiplication tables,especially GF(2^(4)),and their interaction with distinct irreducible polynomials.The primary aim is to enhance watermarking techniques for achieving imperceptibility,robustness,and efficient execution time.The research employs scene selection and adaptive thresholding techniques to streamline the watermarking process.Scene selection is used strategically to embed watermarks in the most vital frames of the video,while adaptive thresholding methods ensure that the watermarking process adheres to imperceptibility criteria,maintaining the video's visual quality.Concurrently,careful consideration is given to execution time,crucial in real-world scenarios,to balance efficiency and efficacy.The Peak Signal-to-Noise Ratio(PSNR)serves as a pivotal metric to gauge the watermark's imperceptibility and video quality.The study explores various irreducible polynomials,navigating the trade-offs between computational efficiency and watermark imperceptibility.In parallel,the study pays careful attention to the execution time,a paramount consideration in real-world scenarios,to strike a balance between efficiency and efficacy.This comprehensive analysis provides valuable insights into the interplay of GF multiplication tables,diverse irreducible polynomials,scene selection,adaptive thresholding,imperceptibility,and execution time.The evaluation of the proposed algorithm's robustness was conducted using PSNR and NC metrics,and it was subjected to assessment under the impact of five distinct attack scenarios.These findings contribute to the development of watermarking strategies that balance imperceptibility,robustness,and processing efficiency,enhancing the field's practicality and effectiveness.展开更多
In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, elbow rule and other methods were comprehensively used to build logical regression, cluste...In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, elbow rule and other methods were comprehensively used to build logical regression, cluster analysis, hyper-parameter test and other models, and SPSS, Python and other tools were used to obtain the classification rules of glass products under different fluxes, sub classification under different chemical compositions, hyper-parameter K value test and rationality analysis. Research can provide theoretical support for the protection and restoration of ancient glass relics.展开更多
Concentrate copper grade(CCG)is one of the important production indicators of copper flotation processes,and keeping the CCG at the set value is of great significance to the economic benefit of copper flotation indust...Concentrate copper grade(CCG)is one of the important production indicators of copper flotation processes,and keeping the CCG at the set value is of great significance to the economic benefit of copper flotation industrial processes.This paper addresses the fluctuation problem of CCG through an operational optimization method.Firstly,a density-based affinity propagationalgorithm is proposed so that more ideal working condition categories can be obtained for the complex raw ore properties.Next,a Bayesian network(BN)is applied to explore the relationship between the operational variables and the CCG.Based on the analysis results of BN,a weighted Gaussian process regression model is constructed to predict the CCG that a higher prediction accuracy can be obtained.To ensure the predicted CCG is close to the set value with a smaller magnitude of the operation adjustments and a smaller uncertainty of the prediction results,an index-oriented adaptive differential evolution(IOADE)algorithm is proposed,and the convergence performance of IOADE is superior to the traditional differential evolution and adaptive differential evolution methods.Finally,the effectiveness and feasibility of the proposed methods are verified by the experiments on a copper flotation industrial process.展开更多
Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management.In this study,Tuna Swarm Optimization(TSO),Whale Optimization Algorithm(WOA),and Cuckoo Search(CS)were u...Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management.In this study,Tuna Swarm Optimization(TSO),Whale Optimization Algorithm(WOA),and Cuckoo Search(CS)were used to optimize two hyperparameters in support vector regression(SVR).Based on these methods,three hybrid models to predict peak particle velocity(PPV)for bench blasting were developed.Eighty-eight samples were collected to establish the PPV database,eight initial blasting parameters were chosen as input parameters for the predictionmodel,and the PPV was the output parameter.As predictive performance evaluation indicators,the coefficient of determination(R2),rootmean square error(RMSE),mean absolute error(MAE),and a10-index were selected.The normalizedmutual information value is then used to evaluate the impact of various input parameters on the PPV prediction outcomes.According to the research findings,TSO,WOA,and CS can all enhance the predictive performance of the SVR model.The TSO-SVR model provides the most accurate predictions.The performances of the optimized hybrid SVR models are superior to the unoptimized traditional prediction model.The maximum charge per delay impacts the PPV prediction value the most.展开更多
A certain variety of non-switched polynomials provides a uni-figure representation for a wide range of linear functional equations. This is properly adapted for the calculations. We reinterpret from this point of view...A certain variety of non-switched polynomials provides a uni-figure representation for a wide range of linear functional equations. This is properly adapted for the calculations. We reinterpret from this point of view a number of algorithms.展开更多
In this present time,Human Activity Recognition(HAR)has been of considerable aid in the case of health monitoring and recovery.The exploitation of machine learning with an intelligent agent in the area of health infor...In this present time,Human Activity Recognition(HAR)has been of considerable aid in the case of health monitoring and recovery.The exploitation of machine learning with an intelligent agent in the area of health informatics gathered using HAR augments the decision-making quality and significance.Although many research works conducted on Smart Healthcare Monitoring,there remain a certain number of pitfalls such as time,overhead,and falsification involved during analysis.Therefore,this paper proposes a Statistical Partial Regression and Support Vector Intelligent Agent Learning(SPR-SVIAL)for Smart Healthcare Monitoring.At first,the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the dimensionality-reduced features extraction process.Here,the input dataset the continuous beat-to-beat heart data,triaxial accelerometer data,and psychological characteristics were acquired from IoT wearable devices.To attain highly accurate Smart Healthcare Monitoring with less time,Partial Least Square helps extract the dimensionality-reduced features.After that,with these resulting features,SVIAL is proposed for Smart Healthcare Monitoring with the help of Machine Learning and Intelligent Agents to minimize both analysis falsification and overhead.Experimental evaluation is carried out for factors such as time,overhead,and false positive rate accuracy concerning several instances.The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.展开更多
BACKGROUND The spread of the severe acute respiratory syndrome coronavirus 2 outbreak worldwide has caused concern regarding the mortality rate caused by the infection.The determinants of mortality on a global scale c...BACKGROUND The spread of the severe acute respiratory syndrome coronavirus 2 outbreak worldwide has caused concern regarding the mortality rate caused by the infection.The determinants of mortality on a global scale cannot be fully understood due to lack of information.AIM To identify key factors that may explain the variability in case lethality across countries.METHODS We identified 21 Potential risk factors for coronavirus disease 2019(COVID-19)case fatality rate for all the countries with available data.We examined univariate relationships of each variable with case fatality rate(CFR),and all independent variables to identify candidate variables for our final multiple model.Multiple regression analysis technique was used to assess the strength of relationship.RESULTS The mean of COVID-19 mortality was 1.52±1.72%.There was a statistically significant inverse correlation between health expenditure,and number of computed tomography scanners per 1 million with CFR,and significant direct correlation was found between literacy,and air pollution with CFR.This final model can predict approximately 97%of the changes in CFR.CONCLUSION The current study recommends some new predictors explaining affect mortality rate.Thus,it could help decision-makers develop health policies to fight COVID-19.展开更多
The noise that comes from finite element simulation often causes the model to fall into the local optimal solution and over fitting during optimization of generator.Thus,this paper proposes a Gaussian Process Regressi...The noise that comes from finite element simulation often causes the model to fall into the local optimal solution and over fitting during optimization of generator.Thus,this paper proposes a Gaussian Process Regression(GPR)model based on Conditional Likelihood Lower Bound Search(CLLBS)to optimize the design of the generator,which can filter the noise in the data and search for global optimization by combining the Conditional Likelihood Lower Bound Search method.Taking the efficiency optimization of 15 kW Permanent Magnet Synchronous Motor as an example.Firstly,this method uses the elementary effect analysis to choose the sensitive variables,combining the evolutionary algorithm to design the super Latin cube sampling plan;Then the generator-converter system is simulated by establishing a co-simulation platform to obtain data.A Gaussian process regression model combing the method of the conditional likelihood lower bound search is established,which combined the chi-square test to optimize the accuracy of the model globally.Secondly,after the model reaches the accuracy,the Pareto frontier is obtained through the NSGA-II algorithm by considering the maximum output torque as a constraint.Last,the constrained optimization is transformed into an unconstrained optimizing problem by introducing maximum constrained improvement expectation(CEI)optimization method based on the re-interpolation model,which cross-validated the optimization results of the Gaussian process regression model.The above method increase the efficiency of generator by 0.76%and 0.5%respectively;And this method can be used for rapid modeling and multi-objective optimization of generator systems.展开更多
In this paper, a logistical regression statistical analysis (LR) is presented for a set of variables used in experimental measurements in reversed field pinch (RFP) machines, commonly known as “slinky mode” (SM), ob...In this paper, a logistical regression statistical analysis (LR) is presented for a set of variables used in experimental measurements in reversed field pinch (RFP) machines, commonly known as “slinky mode” (SM), observed to travel around the torus in Madison Symmetric Torus (MST). The LR analysis is used to utilize the modified Sine-Gordon dynamic equation model to predict with high confidence whether the slinky mode will lock or not lock when compared to the experimentally measured motion of the slinky mode. It is observed that under certain conditions, the slinky mode “locks” at or near the intersection of poloidal and/or toroidal gaps in MST. However, locked mode cease to travel around the torus;while unlocked mode keeps traveling without a change in the energy, making it hard to determine an exact set of conditions to predict locking/unlocking behaviour. The significant key model parameters determined by LR analysis are shown to improve the Sine-Gordon model’s ability to determine the locking/unlocking of magnetohydrodyamic (MHD) modes. The LR analysis of measured variables provides high confidence in anticipating locking versus unlocking of slinky mode proven by relational comparisons between simulations and the experimentally measured motion of the slinky mode in MST.展开更多
Studies on drug combinations are becoming more and more popular in the past few decades, with the development of computer and algorithms. One of the most common methods in optimizing drug combinations is regression of...Studies on drug combinations are becoming more and more popular in the past few decades, with the development of computer and algorithms. One of the most common methods in optimizing drug combinations is regression of a polynomial model based on certain number of experimental observations. In this paper, we study how to determine the degree of polynomials in different circumstances of drug combination optimization. Using cross-validation, we have found that in most cases, a high degree results in failures of accurate prediction, named overfitting. An anti-noise test has also revealed that polynomial model with high degree tends to be less resistant to random errors in the observations.展开更多
A method based on polynomial regression algorithm(PRA) is proposed in this paper to compensate the nonlinear phase noise in optical frequency domain reflection(OFDR) systems. In this method, the nonlinear phase of OFD...A method based on polynomial regression algorithm(PRA) is proposed in this paper to compensate the nonlinear phase noise in optical frequency domain reflection(OFDR) systems. In this method, the nonlinear phase of OFDR systems is represented by the polynomial phase function, and then the coefficients of the polynomial phase function are estimated by PRA. Finally, the nonlinearity is compensated by match Fourier transform(MFT). Simulation results demonstrate that the proposed algorithm has good performance in compensating both weak and strong nonlinear phase noises of OFDR systems.展开更多
In this paper, the Schwarz Information Criterion (SIC) is used to detect the change points in polynomial regression models. Switching quadratic regression models with same amount of model deviation and switching polyn...In this paper, the Schwarz Information Criterion (SIC) is used to detect the change points in polynomial regression models. Switching quadratic regression models with same amount of model deviation and switching polynomial regression models with different amount of model deviation for different segments of regression are considered. The number of separate regimes and their corresponding regression orders are assume to be known. The method is then applied to cable data sets and the change points are successfully detected.展开更多
The global pandemic,coronavirus disease 2019(COVID-19),has significantly affected tourism,especially in Spain,as it was among the first countries to be affected by the pandemic and is among the world’s biggest touris...The global pandemic,coronavirus disease 2019(COVID-19),has significantly affected tourism,especially in Spain,as it was among the first countries to be affected by the pandemic and is among the world’s biggest tourist destinations.Stock market values are responding to the evolution of the pandemic,especially in the case of tourist companies.Therefore,being able to quantify this relationship allows us to predict the effect of the pandemic on shares in the tourism sector,thereby improving the response to the crisis by policymakers and investors.Accordingly,a dynamic regression model was developed to predict the behavior of shares in the Spanish tourism sector according to the evolution of the COVID-19 pandemic in the medium term.It has been confirmed that both the number of deaths and cases are good predictors of abnormal stock prices in the tourism sector.展开更多
文摘In this paper, auxiliary information is used to determine an estimator of finite population total using nonparametric regression under stratified random sampling. To achieve this, a model-based approach is adopted by making use of the local polynomial regression estimation to predict the nonsampled values of the survey variable y. The performance of the proposed estimator is investigated against some design-based and model-based regression estimators. The simulation experiments show that the resulting estimator exhibits good properties. Generally, good confidence intervals are seen for the nonparametric regression estimators, and use of the proposed estimator leads to relatively smaller values of RE compared to other estimators.
文摘In this paper, we try to find numerical solution of y'(x)= p(x)y(x)+g(x)+λ∫ba K(x, t)y(t)dt, y(a)=α. a≤x≤b, a≤t≤b or y'(x)= p(x)y(x)+g(x)+λ∫xa K(x, t)y(t)dt, y(a)=α. a≤x≤b, a≤t≤b by using Local polynomial regression (LPR) method. The numerical solution shows that this method is powerful in solving integro-differential equations. The method will be tested on three model problems in order to demonstrate its usefulness and accuracy.
文摘In this study, a multivariate local quadratic polynomial regression(MLQPR) method is proposed to design a model for the sludge volume index(SVI). In MLQPR, a quadratic polynomial regression function is established to describe the relationship between SVI and the relative variables, and the important terms of the quadratic polynomial regression function are determined by the significant test of the corresponding coefficients. Moreover, a local estimation method is introduced to adjust the weights of the quadratic polynomial regression function to improve the model accuracy. Finally, the proposed method is applied to predict the SVI values in a real wastewater treatment process(WWTP). The experimental results demonstrate that the proposed MLQPR method has faster testing speed and more accurate results than some existing methods.
文摘The toxicity of heroin (the drug) which affects the health of mice was studied by using the regression analysis method based on the results of experiment.We found that after the heroin was injected into the mice,the blood leucocyte number and body weight were decreased significantly,the bleeding time was prolonged,the activity of glutamic pyruvic tronsaminase(GPT) in mice hepatic tissue and the weight of heart raised with increasing of herion dose.
基金supported by the National Natural Science Foundation of China(12131015,12071422).
文摘Fermat’s Last Theorem is a famous theorem in number theory which is difficult to prove.However,it is known that the version of polynomials with one variable of Fermat’s Last Theorem over C can be proved very concisely.The aim of this paper is to study the similar problems about Fermat’s Last Theorem for multivariate(skew)-polynomials with any characteristic.
文摘Purpose:The purpose of this study is to develop and compare model choice strategies in context of logistic regression.Model choice means the choice of the covariates to be included in the model.Design/methodology/approach:The study is based on Monte Carlo simulations.The methods are compared in terms of three measures of accuracy:specificity and two kinds of sensitivity.A loss function combining sensitivity and specificity is introduced and used for a final comparison.Findings:The choice of method depends on how much the users emphasize sensitivity against specificity.It also depends on the sample size.For a typical logistic regression setting with a moderate sample size and a small to moderate effect size,either BIC,BICc or Lasso seems to be optimal.Research limitations:Numerical simulations cannot cover the whole range of data-generating processes occurring with real-world data.Thus,more simulations are needed.Practical implications:Researchers can refer to these results if they believe that their data-generating process is somewhat similar to some of the scenarios presented in this paper.Alternatively,they could run their own simulations and calculate the loss function.Originality/value:This is a systematic comparison of model choice algorithms and heuristics in context of logistic regression.The distinction between two types of sensitivity and a comparison based on a loss function are methodological novelties.
文摘The Extensible Markup Language(XML)files,widely used for storing and exchanging information on the web require efficient parsing mechanisms to improve the performance of the applications.With the existing Document Object Model(DOM)based parsing,the performance degrades due to sequential processing and large memory requirements,thereby requiring an efficient XML parser to mitigate these issues.In this paper,we propose a Parallel XML Tree Generator(PXTG)algorithm for accelerating the parsing of XML files and a Regression-based XML Parsing Framework(RXPF)that analyzes and predicts performance through profiling,regression,and code generation for efficient parsing.The PXTG algorithm is based on dividing the XML file into n parts and producing n trees in parallel.The profiling phase of the RXPF framework produces a dataset by measuring the performance of various parsing models including StAX,SAX,DOM,JDOM,and PXTG on different cores by using multiple file sizes.The regression phase produces the prediction model,based on which the final code for efficient parsing of XML files is produced through the code generation phase.The RXPF framework has shown a significant improvement in performance varying from 9.54%to 32.34%over other existing models used for parsing XML files.
文摘Video watermarking plays a crucial role in protecting intellectual property rights and ensuring content authenticity.This study delves into the integration of Galois Field(GF)multiplication tables,especially GF(2^(4)),and their interaction with distinct irreducible polynomials.The primary aim is to enhance watermarking techniques for achieving imperceptibility,robustness,and efficient execution time.The research employs scene selection and adaptive thresholding techniques to streamline the watermarking process.Scene selection is used strategically to embed watermarks in the most vital frames of the video,while adaptive thresholding methods ensure that the watermarking process adheres to imperceptibility criteria,maintaining the video's visual quality.Concurrently,careful consideration is given to execution time,crucial in real-world scenarios,to balance efficiency and efficacy.The Peak Signal-to-Noise Ratio(PSNR)serves as a pivotal metric to gauge the watermark's imperceptibility and video quality.The study explores various irreducible polynomials,navigating the trade-offs between computational efficiency and watermark imperceptibility.In parallel,the study pays careful attention to the execution time,a paramount consideration in real-world scenarios,to strike a balance between efficiency and efficacy.This comprehensive analysis provides valuable insights into the interplay of GF multiplication tables,diverse irreducible polynomials,scene selection,adaptive thresholding,imperceptibility,and execution time.The evaluation of the proposed algorithm's robustness was conducted using PSNR and NC metrics,and it was subjected to assessment under the impact of five distinct attack scenarios.These findings contribute to the development of watermarking strategies that balance imperceptibility,robustness,and processing efficiency,enhancing the field's practicality and effectiveness.
文摘In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, elbow rule and other methods were comprehensively used to build logical regression, cluster analysis, hyper-parameter test and other models, and SPSS, Python and other tools were used to obtain the classification rules of glass products under different fluxes, sub classification under different chemical compositions, hyper-parameter K value test and rationality analysis. Research can provide theoretical support for the protection and restoration of ancient glass relics.
基金supported in part by the National Key Research and Development Program of China(2021YFC2902703)the National Natural Science Foundation of China(62173078,61773105,61533007,61873049,61873053,61703085,61374147)。
文摘Concentrate copper grade(CCG)is one of the important production indicators of copper flotation processes,and keeping the CCG at the set value is of great significance to the economic benefit of copper flotation industrial processes.This paper addresses the fluctuation problem of CCG through an operational optimization method.Firstly,a density-based affinity propagationalgorithm is proposed so that more ideal working condition categories can be obtained for the complex raw ore properties.Next,a Bayesian network(BN)is applied to explore the relationship between the operational variables and the CCG.Based on the analysis results of BN,a weighted Gaussian process regression model is constructed to predict the CCG that a higher prediction accuracy can be obtained.To ensure the predicted CCG is close to the set value with a smaller magnitude of the operation adjustments and a smaller uncertainty of the prediction results,an index-oriented adaptive differential evolution(IOADE)algorithm is proposed,and the convergence performance of IOADE is superior to the traditional differential evolution and adaptive differential evolution methods.Finally,the effectiveness and feasibility of the proposed methods are verified by the experiments on a copper flotation industrial process.
基金financially supported by the NationalNatural Science Foundation of China(Grant No.42072309)the Fundamental Research Funds for National University,China University of Geosciences(Wuhan)(Grant No.CUGDCJJ202217)+1 种基金the Knowledge Innovation Program of Wuhan-Basic Research(Grant No.2022020801010199)the Hubei Key Laboratory of Blasting Engineering Foundation(HKLBEF202002).
文摘Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management.In this study,Tuna Swarm Optimization(TSO),Whale Optimization Algorithm(WOA),and Cuckoo Search(CS)were used to optimize two hyperparameters in support vector regression(SVR).Based on these methods,three hybrid models to predict peak particle velocity(PPV)for bench blasting were developed.Eighty-eight samples were collected to establish the PPV database,eight initial blasting parameters were chosen as input parameters for the predictionmodel,and the PPV was the output parameter.As predictive performance evaluation indicators,the coefficient of determination(R2),rootmean square error(RMSE),mean absolute error(MAE),and a10-index were selected.The normalizedmutual information value is then used to evaluate the impact of various input parameters on the PPV prediction outcomes.According to the research findings,TSO,WOA,and CS can all enhance the predictive performance of the SVR model.The TSO-SVR model provides the most accurate predictions.The performances of the optimized hybrid SVR models are superior to the unoptimized traditional prediction model.The maximum charge per delay impacts the PPV prediction value the most.
文摘A certain variety of non-switched polynomials provides a uni-figure representation for a wide range of linear functional equations. This is properly adapted for the calculations. We reinterpret from this point of view a number of algorithms.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R194)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘In this present time,Human Activity Recognition(HAR)has been of considerable aid in the case of health monitoring and recovery.The exploitation of machine learning with an intelligent agent in the area of health informatics gathered using HAR augments the decision-making quality and significance.Although many research works conducted on Smart Healthcare Monitoring,there remain a certain number of pitfalls such as time,overhead,and falsification involved during analysis.Therefore,this paper proposes a Statistical Partial Regression and Support Vector Intelligent Agent Learning(SPR-SVIAL)for Smart Healthcare Monitoring.At first,the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the dimensionality-reduced features extraction process.Here,the input dataset the continuous beat-to-beat heart data,triaxial accelerometer data,and psychological characteristics were acquired from IoT wearable devices.To attain highly accurate Smart Healthcare Monitoring with less time,Partial Least Square helps extract the dimensionality-reduced features.After that,with these resulting features,SVIAL is proposed for Smart Healthcare Monitoring with the help of Machine Learning and Intelligent Agents to minimize both analysis falsification and overhead.Experimental evaluation is carried out for factors such as time,overhead,and false positive rate accuracy concerning several instances.The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
文摘BACKGROUND The spread of the severe acute respiratory syndrome coronavirus 2 outbreak worldwide has caused concern regarding the mortality rate caused by the infection.The determinants of mortality on a global scale cannot be fully understood due to lack of information.AIM To identify key factors that may explain the variability in case lethality across countries.METHODS We identified 21 Potential risk factors for coronavirus disease 2019(COVID-19)case fatality rate for all the countries with available data.We examined univariate relationships of each variable with case fatality rate(CFR),and all independent variables to identify candidate variables for our final multiple model.Multiple regression analysis technique was used to assess the strength of relationship.RESULTS The mean of COVID-19 mortality was 1.52±1.72%.There was a statistically significant inverse correlation between health expenditure,and number of computed tomography scanners per 1 million with CFR,and significant direct correlation was found between literacy,and air pollution with CFR.This final model can predict approximately 97%of the changes in CFR.CONCLUSION The current study recommends some new predictors explaining affect mortality rate.Thus,it could help decision-makers develop health policies to fight COVID-19.
基金supported in part by the National Key Research and Development Program of China(2019YFB1503700)the Hunan Natural Science Foundation-Science and Education Joint Project(2019JJ70063)。
文摘The noise that comes from finite element simulation often causes the model to fall into the local optimal solution and over fitting during optimization of generator.Thus,this paper proposes a Gaussian Process Regression(GPR)model based on Conditional Likelihood Lower Bound Search(CLLBS)to optimize the design of the generator,which can filter the noise in the data and search for global optimization by combining the Conditional Likelihood Lower Bound Search method.Taking the efficiency optimization of 15 kW Permanent Magnet Synchronous Motor as an example.Firstly,this method uses the elementary effect analysis to choose the sensitive variables,combining the evolutionary algorithm to design the super Latin cube sampling plan;Then the generator-converter system is simulated by establishing a co-simulation platform to obtain data.A Gaussian process regression model combing the method of the conditional likelihood lower bound search is established,which combined the chi-square test to optimize the accuracy of the model globally.Secondly,after the model reaches the accuracy,the Pareto frontier is obtained through the NSGA-II algorithm by considering the maximum output torque as a constraint.Last,the constrained optimization is transformed into an unconstrained optimizing problem by introducing maximum constrained improvement expectation(CEI)optimization method based on the re-interpolation model,which cross-validated the optimization results of the Gaussian process regression model.The above method increase the efficiency of generator by 0.76%and 0.5%respectively;And this method can be used for rapid modeling and multi-objective optimization of generator systems.
文摘In this paper, a logistical regression statistical analysis (LR) is presented for a set of variables used in experimental measurements in reversed field pinch (RFP) machines, commonly known as “slinky mode” (SM), observed to travel around the torus in Madison Symmetric Torus (MST). The LR analysis is used to utilize the modified Sine-Gordon dynamic equation model to predict with high confidence whether the slinky mode will lock or not lock when compared to the experimentally measured motion of the slinky mode. It is observed that under certain conditions, the slinky mode “locks” at or near the intersection of poloidal and/or toroidal gaps in MST. However, locked mode cease to travel around the torus;while unlocked mode keeps traveling without a change in the energy, making it hard to determine an exact set of conditions to predict locking/unlocking behaviour. The significant key model parameters determined by LR analysis are shown to improve the Sine-Gordon model’s ability to determine the locking/unlocking of magnetohydrodyamic (MHD) modes. The LR analysis of measured variables provides high confidence in anticipating locking versus unlocking of slinky mode proven by relational comparisons between simulations and the experimentally measured motion of the slinky mode in MST.
文摘Studies on drug combinations are becoming more and more popular in the past few decades, with the development of computer and algorithms. One of the most common methods in optimizing drug combinations is regression of a polynomial model based on certain number of experimental observations. In this paper, we study how to determine the degree of polynomials in different circumstances of drug combination optimization. Using cross-validation, we have found that in most cases, a high degree results in failures of accurate prediction, named overfitting. An anti-noise test has also revealed that polynomial model with high degree tends to be less resistant to random errors in the observations.
基金supported by the National Natural Science Foundation of China(No.51077037)the Natural Science Foundation of Tianjin,China(No.15JCYBJC17000)the Science and Technology Research Project of Hebei Higher Education,China(No.ZD2017021).
文摘A method based on polynomial regression algorithm(PRA) is proposed in this paper to compensate the nonlinear phase noise in optical frequency domain reflection(OFDR) systems. In this method, the nonlinear phase of OFDR systems is represented by the polynomial phase function, and then the coefficients of the polynomial phase function are estimated by PRA. Finally, the nonlinearity is compensated by match Fourier transform(MFT). Simulation results demonstrate that the proposed algorithm has good performance in compensating both weak and strong nonlinear phase noises of OFDR systems.
文摘In this paper, the Schwarz Information Criterion (SIC) is used to detect the change points in polynomial regression models. Switching quadratic regression models with same amount of model deviation and switching polynomial regression models with different amount of model deviation for different segments of regression are considered. The number of separate regimes and their corresponding regression orders are assume to be known. The method is then applied to cable data sets and the change points are successfully detected.
文摘The global pandemic,coronavirus disease 2019(COVID-19),has significantly affected tourism,especially in Spain,as it was among the first countries to be affected by the pandemic and is among the world’s biggest tourist destinations.Stock market values are responding to the evolution of the pandemic,especially in the case of tourist companies.Therefore,being able to quantify this relationship allows us to predict the effect of the pandemic on shares in the tourism sector,thereby improving the response to the crisis by policymakers and investors.Accordingly,a dynamic regression model was developed to predict the behavior of shares in the Spanish tourism sector according to the evolution of the COVID-19 pandemic in the medium term.It has been confirmed that both the number of deaths and cases are good predictors of abnormal stock prices in the tourism sector.