It is quite common in statistical modeling to select a model and make inference as if the model had been known in advance, i.e., ignoring model selection uncertainty. The resulting estimator is called the post-model selection estimator (PMSE), whose properties are hard to derive. Conditioning on the data at hand (as is usually the case), Bayesian model selection is free of this phenomenon. This paper is concerned with the properties of the Bayesian estimator obtained after model selection when the frequentist (long-run) performance of the resulting estimator is of interest. The proposed method, based on Bayesian decision theory, builds on the well-known machinery of Bayesian model averaging (BMA) and outperforms both the PMSE and BMA. It is shown that if the unconditional model selection probability equals the model prior, then the proposed approach reduces to BMA. The method is illustrated using Bernoulli trials.
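The Bernoulli illustration can be sketched with standard Bayesian model averaging, which the abstract uses as its baseline. This is a hedged toy example, not the paper's construction: the two candidate models (a point null M0: p = 0.5 versus M1: p uniform on (0, 1)) and the equal prior are assumptions for illustration.

```python
import math

# Toy BMA for a Bernoulli success probability p, averaging over
# M0: p = 0.5 (point null) and M1: p ~ Uniform(0, 1).
def bma_estimate(successes, n, prior_m0=0.5):
    # Marginal likelihood under M0: p is fixed at 0.5.
    m0 = 0.5 ** n
    # Marginal likelihood under M1: integral of p^s (1-p)^(n-s) dp
    # equals the Beta function B(s+1, n-s+1).
    m1 = (math.factorial(successes) * math.factorial(n - successes)
          / math.factorial(n + 1))
    # Posterior probability of M0 given the data.
    w0 = prior_m0 * m0 / (prior_m0 * m0 + (1 - prior_m0) * m1)
    # Average the posterior means: 0.5 under M0, (s+1)/(n+2) under M1.
    return w0 * 0.5 + (1 - w0) * (successes + 1) / (n + 2)
```

With a balanced sample (5 successes in 10 trials) both models' posterior means coincide at 0.5, so the average is exactly 0.5 regardless of the model weights.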
To solve the medium- and long-term power load forecasting problem, the combination forecasting method is further expanded and a weighted combination forecasting model for power load is put forward. This model is divided into two stages: forecasting model selection and weighted combination forecasting. Based on Markov chain conversion and the cloud model, forecasting model selection is implemented and several outstanding models are selected for the combination forecast. For the weighted combination forecast, a fuzzy scale joint evaluation method is proposed to determine the weight of each selected forecasting model. The percentage error and mean absolute percentage error of the weighted combination forecast of power consumption in a certain area of China are 0.7439% and 0.3198%, respectively, while the maximum values of these two indexes among the single forecasting models are 5.2278% and 1.9497%. This shows that the forecasting indexes of the proposed model are improved significantly compared with the single forecasting models.
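The second (combination) stage and the error indexes above can be sketched as follows. The weights here are illustrative placeholders, not the fuzzy-scale values derived in the paper:

```python
import numpy as np

# Combine several model forecasts with fixed weights and score the
# result with mean absolute percentage error (MAPE).
def weighted_combination(forecasts, weights):
    """forecasts: (n_models, n_periods); returns combined (n_periods,)."""
    w = np.asarray(weights, dtype=float)
    return np.asarray(forecasts, dtype=float).T @ (w / w.sum())

def mape(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * float(np.mean(np.abs((actual - predicted) / actual)))
```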
Regional climate change impact assessments are becoming increasingly important for developing adaptation strategies in a future that is uncertain with respect to hydro-climatic extremes. A number of Global Climate Models (GCMs) and emission scenarios provide predictions of future changes in climate. As a result, there is uncertainty associated with the decision of which climate models to use for the assessment of climate change impacts. The IPCC has recommended using as many global climate model scenarios as possible; however, this approach may be impractical for regional assessments that are computationally demanding. Methods have been developed to select climate model scenarios, generally consisting of selecting a model with the highest skill (validation), creating an ensemble, or selecting one or more extremes. Validation methods limit analyses to models with higher skill in simulating historical climate; ensemble methods typically take multi-model means, medians, or percentiles; and extremes methods tend to use scenarios that bound the projected changes in precipitation and temperature. In this paper a quantile-regression-based validation method is developed and applied to generate a reduced set of GCM scenarios for analyzing daily maximum streamflow uncertainty in the Upper Thames River Basin, Canada, with extremes and percentile ensemble approaches also used for comparison. Results indicate that the validation method was able to effectively rank and reduce the set of scenarios, while the extremes and percentile ensemble methods did not necessarily correlate well with the range of extreme flows for all calendar months and return periods.
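The ranking idea behind quantile-based validation can be sketched in a much-simplified form: score each candidate model run by how far its simulated quantiles sit from the observed ones, then rank. This is a hedged stand-in for the underlying idea, not the paper's quantile regression procedure:

```python
import numpy as np

# Score a simulated series against observations at a few quantiles;
# lower mismatch means higher skill.
def quantile_mismatch(observed, simulated, qs=(0.1, 0.5, 0.9)):
    obs_q = np.quantile(np.asarray(observed, dtype=float), qs)
    sim_q = np.quantile(np.asarray(simulated, dtype=float), qs)
    return float(np.mean(np.abs(sim_q - obs_q)))

def rank_scenarios(observed, runs):
    """runs: dict name -> simulated series; returns names, best first."""
    scores = {name: quantile_mismatch(observed, sim) for name, sim in runs.items()}
    return sorted(scores, key=scores.get)
```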
We focus on the development of model selection criteria in linear mixed models. In particular, we propose model selection criteria following Mallows' Conceptual Predictive Statistic (Cp) [1] [2] in linear mixed models. When correlation exists between the observations in the data, the normal Gauss discrepancy in the univariate case is not appropriate for measuring the distance between the true model and a candidate model. Instead, we define a marginal Gauss discrepancy which takes the correlation into account in the mixed models. The model selection criterion, marginal Cp, called MCp, serves as an asymptotically unbiased estimator of the expected marginal Gauss discrepancy. An improvement of MCp, called IMCp, is then derived and proved to be a more accurate estimator of the expected marginal Gauss discrepancy than MCp. The performance of the proposed criteria is investigated in a simulation study. The simulation results show that in small samples the proposed criteria outperform the Akaike Information Criterion (AIC) [3] [4] and the Bayesian Information Criterion (BIC) [5] in selecting the correct model; in large samples, their performance is competitive. Further, the proposed criteria perform significantly better for highly correlated response data than for weakly correlated data.
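The classical univariate Mallows' Cp that the marginal Cp generalizes can be computed directly; this sketch ignores the mixed-model correlation structure and uses the standard definition Cp = SSE_p / s² − n + 2p, with s² estimated from the largest candidate model:

```python
import numpy as np

def mallows_cp(X_full, y, cols):
    """Cp for the submodel using the columns in `cols` of X_full."""
    n = len(y)

    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    # Error variance estimated under the full candidate model.
    sigma2 = sse(X_full) / (n - X_full.shape[1])
    p = len(cols)
    return sse(X_full[:, cols]) / sigma2 - n + 2 * p
```

A useful sanity check: for the full model itself, SSE/s² = n − k, so Cp reduces exactly to k, the number of predictors.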
The spatial and spatiotemporal autoregressive conditional heteroscedasticity (STARCH) models have received increasing attention. In this paper, we introduce a spatiotemporal autoregressive (STAR) model with STARCH errors, which can capture the spatiotemporal dependence in the mean and variance simultaneously. Bayesian estimation and model selection are considered for our model. Monte Carlo simulations show that the Bayesian estimator performs better than the corresponding maximum-likelihood estimator, and that Bayesian model selection selects the true model most of the time. Finally, two empirical examples are given to illustrate the superiority of our models in fitting these data.
The optimal selection of a radar clutter model is the premise of target detection, tracking, recognition, and cognitive waveform design in a clutter background. Clutter characterization models are usually derived by mathematical simplification or empirical data fitting. However, the lack of standard model labels is a challenge in the optimal selection process. To solve this problem, a general three-level evaluation system for model selection performance is proposed, comprising a model selection accuracy index based on simulation data, goodness-of-fit indexes based on the optimally selected model, and an evaluation index based on the supporting performance for third-party tasks. The three-level evaluation system can describe the selection performance of the radar clutter model more comprehensively and accurately in different ways, and can be generalized and applied to the evaluation of other, similar characterization model selection problems.
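A simulation-based "selection accuracy" index of the kind described in the first level can be sketched as follows. The candidate set (Rayleigh versus exponential amplitude models) and the likelihood-based selection rule are assumptions for illustration, not the paper's evaluation system:

```python
import numpy as np

# Log-likelihood of each candidate model at its own MLE.
def rayleigh_loglik(x):
    sigma2 = float(np.mean(x ** 2)) / 2.0          # MLE of the Rayleigh scale
    return float(np.sum(np.log(x / sigma2) - x ** 2 / (2.0 * sigma2)))

def exponential_loglik(x):
    lam = 1.0 / float(np.mean(x))                  # MLE of the exponential rate
    return float(np.sum(np.log(lam) - lam * x))

# Simulate clutter amplitudes from the known (labeled) model and report
# how often the true model wins the likelihood comparison.
def selection_accuracy(trials=200, n=400, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = rng.rayleigh(scale=1.0, size=n)
        hits += rayleigh_loglik(x) > exponential_loglik(x)
    return hits / trials
```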
Parkinson's disease (PD) is a neurodegenerative disorder characterized by motor and non-motor symptoms that significantly impact an individual's quality of life. Voice changes have shown promise as early indicators of PD, making voice analysis a valuable tool for early detection and intervention. This study aims to assess and detect the severity of PD through voice analysis using the mobile device voice recordings dataset. The dataset consists of recordings from PD patients at different stages of the disease and from healthy control subjects. A novel approach was employed, incorporating a voice activity detection algorithm for speech segmentation and the wavelet scattering transform for feature extraction. A Bayesian optimization technique was used to fine-tune the hyperparameters of seven commonly used classifiers and optimize their performance for PD severity detection. Among the classifiers, AdaBoost and K-nearest neighbors consistently demonstrated superior performance across various evaluation metrics. Furthermore, a weighted majority voting (WMV) technique was implemented, leveraging the predictions of multiple models to achieve a near-perfect accuracy of 98.62%. The results highlight the promising potential of voice analysis in PD diagnosis and monitoring. Integrating advanced signal processing techniques and machine learning models provides reliable and accessible tools for PD assessment, facilitating early intervention and improving patient outcomes. This study contributes to the field by demonstrating the effectiveness of the proposed methodology and the significant role of WMV in enhancing classification accuracy for PD severity detection.
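Weighted majority voting itself is simple to sketch. The classifier weights below are illustrative (e.g. validation accuracies), not the study's tuned values:

```python
import numpy as np

def weighted_majority_vote(predictions, weights):
    """predictions: (n_classifiers, n_samples) integer class labels.
    Each classifier's vote counts with its weight; the class with the
    largest weighted vote wins per sample."""
    preds = np.asarray(predictions)
    w = np.asarray(weights, dtype=float)
    n_classes = int(preds.max()) + 1
    votes = np.zeros((n_classes, preds.shape[1]))
    for clf_preds, weight in zip(preds, w):
        for cls in range(n_classes):
            votes[cls] += weight * (clf_preds == cls)
    return votes.argmax(axis=0)
```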
In a competitive digital age where data volumes increase over time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on the extracted knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remain a major challenge for classification algorithms due to their high computational cost and storage requirements. The publicly available 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), used as the data source for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among the available DM and ML models. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model. The higher score of HMM(m, r) = 0.47 indicates an overall significant model that encompasses almost all of the user's requirements, unlike classical metrics that use a single criterion to select the most appropriate model. On the other hand, the artificial neural networks (ANNs) were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class balance of the dataset (unbalanced data) have a significant impact on the efficiency of the model, especially on the computational cost, and can hamper the interpretability of the model's parameters. The efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) that incorporate domain-expert knowledge, as understanding of the business domain has a significant impact.
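The general idea of a hybrid evaluation metric — blending predictive quality with resource cost into one score — can be illustrated with a minimal sketch. The linear weighting below is an assumption for illustration and is not the paper's HMM(m, r) definition:

```python
# Blend accuracy with a normalized training-cost term; `w_acc` trades off
# predictive quality against cheapness (both terms lie in [0, 1]).
def hybrid_score(accuracy, train_seconds, budget_seconds, w_acc=0.7):
    cost_term = 1.0 - min(train_seconds / budget_seconds, 1.0)  # cheaper is better
    return w_acc * accuracy + (1.0 - w_acc) * cost_term
```

Under such a metric, two models with equal accuracy are separated by their training cost, which matches the abstract's point about computationally intensive ANNs.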
Soybean frogeye leaf spot (FLS) is a global disease affecting soybean yield, especially in the soybean-growing areas of Heilongjiang Province. In order to realize genomic selection breeding for FLS resistance in soybean, least absolute shrinkage and selection operator (LASSO) regression and stepwise regression were combined, and a genomic selection model was established relating 40,002 SNP markers covering the soybean genome to the relative lesion area of soybean FLS. As a result, 68 molecular markers controlling soybean FLS were detected accurately, and the phenotypic contribution rate of these markers reached 82.45%. The model established in this study can be used directly to evaluate soybean resistance to FLS and to select excellent offspring. This research method can also provide ideas and methods for disease resistance breeding in other plants.
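The LASSO marker-screening step can be sketched with a tiny numpy-only coordinate descent: coefficients shrunk exactly to zero drop the corresponding SNPs. This shows only the LASSO part on toy data, not the paper's combined LASSO-plus-stepwise pipeline:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: minimize ||y - Xb||^2 / 2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j, then soft-threshold.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = float(X[:, j] @ r)
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```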
A powerful investigative tool in biology is to consider not a single mathematical model but a collection of models designed to explore different working hypotheses, and to select the best model in that collection. In these lecture notes, the usual workflow of using mathematical models to investigate a biological problem is described, and the use of a collection of models is motivated. Models depend on parameters that must be estimated from observations, and when a collection of models is considered, the best model then has to be identified from the available observations. Hence model calibration and selection, which are intrinsically linked, are essential steps of the workflow. Here, some procedures for model calibration and a criterion for model selection based on experimental data, the Akaike Information Criterion, are described. A rough derivation, a practical computation technique, and the use of this criterion are detailed.
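The selection rule the notes build up to is compact enough to state directly: AIC = 2k − 2 ln(L̂), and the model with the smallest AIC in the collection is preferred.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion from the maximized log-likelihood."""
    return 2 * n_params - 2 * log_likelihood

def best_model(candidates):
    """candidates: dict name -> (max log-likelihood, n_params)."""
    return min(candidates, key=lambda name: aic(*candidates[name]))
```

A small likelihood gain does not justify extra parameters: a model 0.5 log-likelihood units better but with four more parameters still loses.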
We study the law of the iterated logarithm (LIL) for maximum likelihood estimation of the parameters (as a convex optimization problem) in generalized linear models with independent or weakly dependent (ρ-mixing) responses under mild conditions. The LIL is useful for deriving asymptotic bounds on the discrepancy between the empirical process of the log-likelihood function and the true log-likelihood. The strong consistency of some penalized likelihood-based model selection criteria can be shown as an application of the LIL. Under some regularity conditions, the model selection criterion will select the simplest correct model almost surely when the penalty term increases with the model dimension and has an order higher than O(log log n) but lower than O(n). Simulation studies are implemented to verify the selection consistency of the Bayesian information criterion.
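BIC is the concrete instance of such a penalty: its k·ln(n) term grows faster than O(log log n) but slower than O(n), the regime in which the consistency result applies. A minimal sketch:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k * ln(n) - 2 * ln(L_hat)."""
    return n_params * math.log(n_obs) - 2 * log_likelihood
```

For large n the penalty dominates a fixed likelihood gain, so an overparameterized model with only a slightly better fit is rejected.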
Motivated by the problem of detecting the number of signals, this paper provides a systematic empirical investigation of the model selection performance of several classical criteria and recently developed methods, including Akaike's information criterion (AIC), Schwarz's Bayesian information criterion, Bozdogan's consistent AIC, the Hannan-Quinn information criterion, Minka's (MK) principal component analysis (PCA) criterion, Kritchman & Nadler's hypothesis tests (KN), Perry & Wolfe's minimax rank estimation thresholding algorithm (MM), and Bayesian Ying-Yang (BYY) harmony learning, by varying the signal-to-noise ratio (SNR) and the training sample size N. A family of model selection indifference curves is defined by the contour lines of model selection accuracy, so that we can examine the joint effect of N and SNR rather than merely the effect of one with the other fixed, as is usually done in the literature. The indifference curves visually reveal that all methods demonstrate clear relative advantages within a region of moderate N and SNR. Moreover, the importance of studying this region is confirmed by an alternative reference criterion that maximizes the testing likelihood. Extensive simulations show that AIC and BYY harmony learning, as well as MK, KN, and MM, are relatively more robust than the others against decreasing N and SNR, and that BYY is superior for small sample sizes.
Three Bayesian-related approaches, namely variational Bayesian (VB), minimum message length (MML) and Bayesian Ying-Yang (BYY) harmony learning, have been applied to automatically determining an appropriate number of components when learning a Gaussian mixture model (GMM). This paper provides a comparative investigation of these approaches with not only a Jeffreys prior but also a conjugate Dirichlet-Normal-Wishart (DNW) prior on the GMM. In addition to adopting the existing algorithms either directly or with some modifications, the algorithm for VB with the Jeffreys prior and the algorithm for BYY with the DNW prior are developed in this paper to fill the missing gap. The performance of automatic model selection is evaluated through extensive experiments, with several empirical findings: 1) With priors placed merely on the mixing weights, each of the three approaches makes biased mistakes, while placing priors on all the parameters of the GMM reduces the bias and improves the performance of each approach. 2) As the Jeffreys prior is replaced by the DNW prior, all three approaches improve their performance. Moreover, the Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3) As the hyperparameters of the DNW prior are further optimized by each approach's own learning principle, BYY improves its performance, while VB and MML deteriorate when there are too many free hyperparameters. In fact, VB and MML lack a good guide for optimizing the hyperparameters of the DNW prior. 4) BYY considerably outperforms both VB and MML for any type of prior, whether or not the hyperparameters are optimized. Unlike VB and MML, which rely on appropriate priors to perform model selection, BYY does not depend highly on the type of prior: it has model selection ability even without priors, performs already very well with the Jeffreys prior, and improves further as the Jeffreys prior is replaced by the DNW prior. Finally, all algorithms are applied to the Berkeley segmentation database of real-world images. Again, BYY considerably outperforms both VB and MML, especially in detecting objects of interest against a confusing background.
Selecting the optimal scheme from among similar ones is a paramount task in equipment design. In consideration of the similarity of schemes and the repetition of characteristic indices, the theory of set pair analysis (SPA) is introduced and an optimal selection model is established. In order to improve accuracy and flexibility, the model is modified by the contribution degree. Finally, the model is validated with an example, and the result demonstrates that the method is feasible and valuable for practical use.
This article attempts to construct a multi-factor quantitative stock selection model, analyze the financial indicators and transaction data of listed companies in detail via big data statistical testing, and find the alpha excess return relative to the market when short stock index futures are used as a hedge in the Chinese market.
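A toy version of multi-factor ranking can be sketched as follows: z-score each factor cross-sectionally, average, and rank the stocks. The equal weighting and the choice of factors are assumptions for illustration, not the article's tested factor set:

```python
import numpy as np

def rank_stocks(factor_matrix):
    """factor_matrix: (n_stocks, n_factors). Returns stock indices,
    best composite score first."""
    f = np.asarray(factor_matrix, dtype=float)
    z = (f - f.mean(axis=0)) / f.std(axis=0)   # cross-sectional z-scores
    composite = z.mean(axis=1)                  # equal-weighted composite
    return np.argsort(-composite)
```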
This study focuses on meeting the challenges of big data visualization by using data reduction methods based on feature selection, so as to reduce the volume of big data and minimize model training time (Tt) while maintaining data quality. We contribute to meeting these challenges by using the embedded "Select From Model (SFM)" method based on the "Random Forest Importance (RFI)" algorithm, comparing it with the filter-based "Select Percentile (SP)" method using the chi-square ("Chi2") score, for selecting the most important features. The selected features are then fed into a classification process using the logistic regression (LR) algorithm and the k-nearest neighbor (KNN) algorithm. The classification accuracy (AC) of LR is also compared to that of KNN in Python on eight data sets to determine which method produces the best results when feature selection methods are applied. The study concludes that feature selection methods have a significant impact on the analysis and visualization of data once repetitive data and data that do not affect the goal are removed. After making several comparisons, the study suggests (SFMLR) using SFM based on the RFI algorithm for feature selection, with the LR algorithm for classification. The proposal proved its efficacy through comparison of its results with the recent literature.
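The percentile-style filter step can be illustrated with a numpy-only stand-in: score each feature against the label and keep the top fraction. Absolute correlation replaces the Chi2 and RFI scores used in the study, purely to keep the sketch self-contained:

```python
import numpy as np

def select_top_features(X, y, keep=0.5):
    """Keep the `keep` fraction of features with the highest absolute
    correlation with the label; returns sorted column indices."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    k = max(1, int(round(keep * X.shape[1])))
    return sorted(np.argsort(-scores)[:k].tolist())
```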
Widely used deep neural networks currently face limitations in achieving optimal performance for purchase intention prediction due to constraints on data volume and hyperparameter selection. To address this issue, this paper proposes a novel Deep Adaptive Evolutionary Ensemble (DAEE) model based on the deep forest algorithm, further integrating evolutionary ensemble learning methods. This model introduces model diversity into the cascade layer, allowing it to adaptively adjust its structure to accommodate complex and evolving purchasing behavior patterns. Moreover, this paper optimizes the methods for obtaining feature vectors, enhancement vectors, and prediction results within the deep forest algorithm to enhance the model's predictive accuracy. Results demonstrate that the improved deep forest model not only possesses higher robustness but also shows an increase of 5.02% in AUC value compared to the baseline model. Furthermore, its training runtime is 6 times faster than that of deep models, and compared to other improved models, its accuracy is enhanced by 0.9%.
In several instances of statistical practice, it is not uncommon to use the same data for both model selection and inference, without taking account of the variability induced by the model selection step. This is usually referred to as post-model selection inference. The shortcomings of such practice are widely recognized, and finding a general solution is extremely challenging. We propose a model averaging alternative that takes into account both the model selection probability and the likelihood in assigning the weights. The approach is applied to Bernoulli trials and outperforms Akaike weights model averaging and post-model selection estimators.
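The baseline the proposal is compared against, Akaike weights model averaging, has a standard closed form: w_m ∝ exp(−Δ_m / 2), where Δ_m is model m's AIC minus the minimum AIC over the collection.

```python
import math

def akaike_weights(aic_values):
    """Akaike weights from a list of AIC values; returns weights summing to 1."""
    best = min(aic_values)
    raw = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(raw)
    return [r / total for r in raw]
```

A two-point difference in AIC gives the better model roughly 73% of the weight, illustrating how quickly the weights concentrate.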
Time-series-based forecasting is essential for determining how past events affect future events. This paper compares the predictive accuracy of different time-series models for oil prices. Three types of univariate models are discussed: exponential smoothing (ES), Holt-Winters (HW) and autoregressive integrated moving average (ARIMA) models. To determine the best model, six different strategies were applied as selection criteria to quantify these models' prediction accuracy. This comparison should help policy makers and industry marketing strategists select the best forecasting method for the oil market. The three models were compared by applying them to the time series of regular oil prices for West Texas Intermediate (WTI) crude. The comparison indicated that the HW model performed better than the ES model for prediction at a 95% confidence level. However, the ARIMA(2, 1, 2) model yielded the best results, leading us to conclude that this sophisticated and robust model outperformed the other simple yet flexible models for the oil market.
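The simplest of the three baselines, exponential smoothing, can be sketched directly: the one-step-ahead forecast is a recursively updated level. The smoothing constant below is a generic choice, not the value fitted in the paper:

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: level_t = alpha*x_t + (1-alpha)*level_{t-1};
    returns the one-step-ahead forecast after the last observation."""
    level = float(series[0])
    for x in series[1:]:
        level = alpha * float(x) + (1.0 - alpha) * level
    return level
```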
Evaluation of numerical earthquake forecasting models needs to consider two issues of equal importance: the application scenario of the simulation and the complexity of the model. Criteria for evaluation-based model selection face some interesting problems in need of discussion.
Funding for the STAR model with STARCH errors study: supported by the National Natural Science Foundation of China (No. 12271206), the Natural Science Foundation of Jilin Province (No. 20210101143JC), and the Science and Technology Research Planning Project of the Jilin Provincial Department of Education (No. JJKH20231122KJ).
Funding: Supported by the National Natural Science Foundation of China (Nos. 61871384, 61921001).
Abstract: The optimal selection of a radar clutter model is a prerequisite for target detection, tracking, recognition, and cognitive waveform design in a clutter background. Clutter characterization models are usually derived by mathematical simplification or empirical data fitting. However, the lack of standard model labels is a challenge in the optimal selection process. To solve this problem, a general three-level evaluation system for model selection performance is proposed, comprising a model selection accuracy index based on simulation data, goodness-of-fit indices based on the optimally selected model, and an evaluation index based on the model's supporting performance for third-party tasks. The three-level evaluation system describes the selection performance of the radar clutter model more comprehensively and accurately from different perspectives, and can be generalized to the evaluation of other, similar characterization-model selection problems.
Abstract: Parkinson's disease (PD) is a neurodegenerative disorder characterized by motor and non-motor symptoms that significantly impact an individual's quality of life. Voice changes have shown promise as early indicators of PD, making voice analysis a valuable tool for early detection and intervention. This study aims to assess and detect the severity of PD through voice analysis using the mobile device voice recordings dataset. The dataset consists of recordings from PD patients at different stages of the disease and from healthy control subjects. A novel approach was employed, incorporating a voice activity detection algorithm for speech segmentation and the wavelet scattering transform for feature extraction. A Bayesian optimization technique was used to fine-tune the hyperparameters of seven commonly used classifiers and optimize their performance for PD severity detection. Among the classifiers, AdaBoost and K-nearest neighbors consistently demonstrated superior performance across the evaluation metrics. Furthermore, a weighted majority voting (WMV) technique was implemented, leveraging the predictions of multiple models to achieve a near-perfect accuracy of 98.62%. The results highlight the promising potential of voice analysis in PD diagnosis and monitoring. Integrating advanced signal processing techniques and machine learning models provides reliable and accessible tools for PD assessment, facilitating early intervention and improving patient outcomes. This study contributes to the field by demonstrating the effectiveness of the proposed methodology and the significant role of WMV in enhancing classification accuracy for PD severity detection.
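The weighted majority voting step can be sketched generically: each classifier votes for its predicted label with a weight (for instance, its validation accuracy), and the label with the largest total weight wins. The model names, labels, and weights below are hypothetical, not taken from the study.

```python
def weighted_majority_vote(model_preds, weights):
    """Combine per-model label predictions; each model votes with its weight."""
    n_samples = len(model_preds[0])
    combined = []
    for i in range(n_samples):
        score = {}
        for preds, w in zip(model_preds, weights):
            score[preds[i]] = score.get(preds[i], 0.0) + w
        combined.append(max(score, key=score.get))
    return combined

# Three hypothetical classifiers, weighted by (say) validation accuracy.
# On the second sample, two weaker models outvote the single stronger one:
preds = [["PD", "PD", "HC"],   # model 1, weight 0.9
         ["PD", "HC", "HC"],   # model 2, weight 0.6
         ["HC", "HC", "HC"]]   # model 3, weight 0.5
labels = weighted_majority_vote(preds, weights=[0.9, 0.6, 0.5])
```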
Abstract: In a competitive digital age where data volumes are increasing with time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on that knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remains a major challenge for classification algorithms due to its high computational cost and storage requirements. The publicly available 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), used as the data source for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among those available in DM and ML. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model. The high score of HMM(m, r) = 0.47 indicates an overall significant model that encompasses almost all of the user's requirements, unlike classical metrics that use a single criterion to select the most appropriate model. On the other hand, the ANNs were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class balance of the dataset (unbalanced data) have a significant impact on the efficiency of the model, especially on its computational cost, and can hamper the interpretability of the model's parameters. Finally, the efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) developed with domain experts, since understanding of the business domain has a significant impact.
Funding: Supported by the National Key Research and Development Program of China (2021YFD1201103-01-05).
Abstract: Soybean frogeye leaf spot (FLS) is a global disease affecting soybean yield, especially in the soybean growing areas of Heilongjiang Province. In order to realize genomic selection breeding for FLS resistance in soybean, least absolute shrinkage and selection operator (LASSO) regression and stepwise regression were combined, and a genomic selection model was established linking 40 002 SNP markers covering the soybean genome to the relative lesion area of soybean FLS. As a result, 68 molecular markers controlling soybean FLS were detected accurately, and the phenotypic contribution rate of these markers reached 82.45%. The model established in this study can be used directly to evaluate soybean FLS resistance and to select excellent offspring. This research method could also provide ideas and methods for disease-resistance breeding in other plants.
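The LASSO half of the marker-selection pipeline can be sketched with plain coordinate descent and soft thresholding. This is a toy two-feature version on centered, orthogonal columns of our own construction, not the study's 40 002-SNP pipeline; in the study, stepwise regression is then applied on top of the LASSO screening.

```python
def soft_threshold(z, g):
    """Shrink z toward zero by g; the update rule the L1 penalty induces."""
    return z - g if z > g else (z + g if z < -g else 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent LASSO on centered data (no intercept)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual ignoring feature j:
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Two centered columns: a strong "marker" and a nearly irrelevant one.
X = [[-3.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [3.0, 1.0]]
y = [2.0 * row[0] + 0.1 * row[1] for row in X]
beta = lasso_cd(X, y, lam=1.0)  # the weak feature's coefficient hits zero
```

The exact-zero coefficients produced by soft thresholding are what makes LASSO a marker-selection device rather than just a shrinkage estimator.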
Funding: SP is supported by a Discovery Grant of the Natural Sciences and Engineering Research Council of Canada (RGPIN-2018-04967).
Abstract: A powerful investigative tool in biology is to consider not a single mathematical model but a collection of models designed to explore different working hypotheses, and to select the best model in that collection. In these lecture notes, the usual workflow of using mathematical models to investigate a biological problem is described, and the use of a collection of models is motivated. Models depend on parameters that must be estimated using observations; and when a collection of models is considered, the best model then has to be identified based on the available observations. Hence, model calibration and selection, which are intrinsically linked, are essential steps of the workflow. Here, some procedures for model calibration and a model selection criterion based on experimental data, the Akaike Information Criterion, are described. A rough derivation, practical computation techniques and the use of this criterion are detailed.
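The computation the notes describe is small enough to sketch directly: AIC = 2k − 2 ln L̂, with a common Gaussian least-squares shortcut, and the model with the lowest score is selected. The log-likelihood values below are hypothetical, chosen only to show the penalty at work.

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln L-hat (lower is better)."""
    return 2 * k - 2 * log_likelihood

def aic_least_squares(n, sse, k):
    """Gaussian least-squares form, up to an additive constant: n ln(SSE/n) + 2k."""
    return n * math.log(sse / n) + 2 * k

# Hypothetical fits: a 3-parameter model vs. a richer 6-parameter model
# whose extra parameters buy only a small log-likelihood gain:
scores = {"simple": aic(-100.0, 3), "rich": aic(-98.0, 6)}
best = min(scores, key=scores.get)  # the penalty favors the simpler model
```

The richer model improves the fit (higher log-likelihood) but not by enough to pay for its three extra parameters, so AIC selects the simpler candidate.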
Abstract: We study the law of the iterated logarithm (LIL) for maximum likelihood estimation of the parameters (as a convex optimization problem) in generalized linear models with independent or weakly dependent (ρ-mixing) responses under mild conditions. The LIL is useful for deriving asymptotic bounds on the discrepancy between the empirical process of the log-likelihood function and the true log-likelihood. The strong consistency of some penalized likelihood-based model selection criteria can be shown as an application of the LIL. Under some regularity conditions, the model selection criterion will select the simplest correct model almost surely when the penalty term increases with the model dimension and has an order higher than O(log log n) but lower than O(n). Simulation studies are implemented to verify the selection consistency of the Bayesian information criterion.
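BIC's penalty k·ln n sits exactly in the window the consistency result above requires: it grows faster than O(log log n) but slower than O(n). A minimal sketch with hypothetical log-likelihood values:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln n - 2 ln L-hat (lower is better)."""
    return k * math.log(n) - 2 * log_likelihood

n = 1000
# True sparse model vs. an overfit one with a marginal likelihood gain:
bic_true = bic(-500.0, k=3, n=n)
bic_over = bic(-499.0, k=10, n=n)

# The penalty-order window required for strong consistency:
assert math.log(math.log(n)) < math.log(n) < n
```

With n = 1000 the per-parameter penalty is ln 1000 ≈ 6.9, so seven extra parameters cost far more than the two-unit deviance improvement they deliver.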
Funding: The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong SAR (No. CUHK4177/07E).
Abstract: Based on the problem of detecting the number of signals, this paper provides a systematic empirical investigation of the model selection performance of several classical criteria and recently developed methods (including Akaike's information criterion (AIC), Schwarz's Bayesian information criterion, Bozdogan's consistent AIC, the Hannan-Quinn information criterion, Minka's (MK) principal component analysis (PCA) criterion, Kritchman & Nadler's hypothesis tests (KN), Perry & Wolfe's minimax rank estimation thresholding algorithm (MM), and Bayesian Ying-Yang (BYY) harmony learning), by varying the signal-to-noise ratio (SNR) and training sample size N. A family of model selection indifference curves is defined by the contour lines of model selection accuracy, so that we can examine the joint effect of N and SNR rather than merely the effect of one with the other fixed, as is usually done in the literature. The indifference curves visually reveal that all methods demonstrate their relative advantages within a region of moderate N and SNR. Moreover, the importance of studying this region is also confirmed by an alternative reference criterion that maximizes the testing likelihood. It has been shown via extensive simulations that AIC and BYY harmony learning, as well as MK, KN, and MM, are relatively more robust than the others against decreasing N and SNR, and that BYY is superior for small sample sizes.
Funding: The work described in this paper was supported by a General Research Fund (GRF) grant from the Research Grants Council of the Hong Kong SAR (Project No. CUHK418011E).
Abstract: Three Bayesian-related approaches, namely variational Bayesian (VB), minimum message length (MML) and Bayesian Ying-Yang (BYY) harmony learning, have been applied to automatically determining an appropriate number of components while learning a Gaussian mixture model (GMM). This paper provides a comparative investigation of these approaches with not only a Jeffreys prior but also a conjugate Dirichlet-Normal-Wishart (DNW) prior on the GMM. In addition to adopting the existing algorithms either directly or with some modifications, the algorithm for VB with the Jeffreys prior and the algorithm for BYY with the DNW prior are developed in this paper to fill the missing gap. The performance of automatic model selection is evaluated through extensive experiments, with several empirical findings: 1) With priors merely on the mixing weights, each of the three approaches makes biased mistakes, while placing priors on all the parameters of the GMM reduces the bias of each approach and also improves its performance. 2) As the Jeffreys prior is replaced by the DNW prior, all three approaches improve their performance. Moreover, the Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3) As the hyperparameters of the DNW prior are further optimized under each approach's own learning principle, BYY improves its performance, while VB and MML deteriorate when there are too many free hyperparameters; in fact, VB and MML lack a good guide for optimizing the hyperparameters of the DNW prior. 4) BYY considerably outperforms both VB and MML for any type of prior and whether or not hyperparameters are optimized. Unlike VB and MML, which rely on appropriate priors to perform model selection, BYY does not depend strongly on the type of prior: it has model selection ability even without priors, already performs very well with the Jeffreys prior, and improves incrementally as the Jeffreys prior is replaced by the DNW prior. Finally, all algorithms are applied to the Berkeley segmentation database of real-world images. Again, BYY considerably outperforms both VB and MML, especially in detecting the objects of interest against a confusing background.
Abstract: Selecting the optimal one from among similar schemes is a paramount task in equipment design. In consideration of the similarity of schemes and the repetition of characteristic indices, the theory of set pair analysis (SPA) is applied and an optimal selection model is established. In order to improve its accuracy and flexibility, the model is modified by the contribution degree. Finally, the model is validated by an example, and the result demonstrates that the method is feasible and valuable for practical use.
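The core SPA quantity is the connection degree μ = a + b·i + c·j, where a, b and c are the identity, discrepancy and contrary degrees (a + b + c = 1). A minimal sketch follows; the threshold rule, tolerance value and ranking score below are our own illustrative choices, not the paper's (which additionally weights indices by contribution degree).

```python
def connection_degree(scheme, ideal, tol=0.1):
    """Classify each index against the ideal scheme by relative deviation:
    within tol -> identity (a); beyond 2*tol -> contrary (c); else discrepancy (b).
    Returns (a, b, c) as fractions, so a + b + c = 1."""
    a = b = c = 0
    for s, t in zip(scheme, ideal):
        d = abs(s - t) / abs(t)
        if d <= tol:
            a += 1
        elif d >= 2 * tol:
            c += 1
        else:
            b += 1
    n = len(scheme)
    return a / n, b / n, c / n

def rank_score(degree):
    """Set the coefficients i = 0 and j = -1: rank by identity minus contrary."""
    a, b, c = degree
    return a - c

ideal = [1.0, 1.0, 1.0]
scheme_1 = connection_degree([1.00, 0.95, 0.70], ideal)  # one contrary index
scheme_2 = connection_degree([0.95, 0.92, 0.91], ideal)  # all indices close
```

Scheme 2, uniformly close to the ideal on every index, outranks scheme 1, which excels on two indices but is contrary on the third.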
Funding: Supported by the National Natural Science Foundation of China (11961005) and the Guangdong Province General University Characteristic Innovation Project (2018KTSCX253).
Abstract: This article attempts to construct a multi-factor quantitative stock selection model, analyze the financial indicators and transaction data of listed companies in detail via big-data statistical testing, and identify the alpha (excess return relative to the market) obtainable in the Chinese market when shorting stock index futures as a hedge.
Abstract: This study focuses on meeting the challenges of big data visualization by using data reduction methods based on feature selection, in order to reduce the volume of big data and minimize model training time (Tt) while maintaining data quality. We contributed to meeting these challenges using the embedded "Select from model" (SFM) method with the random forest importance (RFI) algorithm, comparing it with the filter method using the "Select percentile" (SP) method based on the chi-square (Chi2) test, for selecting the most important features. The selected features are then fed into a classification process using the logistic regression (LR) algorithm and the k-nearest neighbor (KNN) algorithm. The classification accuracy (AC) of LR is also compared with that of the KNN approach in Python on eight data sets, to see which method produces the best results when feature selection methods are applied. The study concluded that feature selection methods have a significant impact on the analysis and visualization of the data once repetitive data and data that do not affect the goal are removed. After making several comparisons, the study proposes SFMLR: SFM based on the RFI algorithm for feature selection, combined with the LR algorithm for classification. The proposal proved its efficacy by comparison of its results with the recent literature.
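The chi-square filter scoring behind the SP baseline can be sketched without any library for categorical features (in practice the study uses scikit-learn's `chi2` with `SelectPercentile`; this is only the underlying statistic, and the toy data are ours).

```python
def chi2_score(feature, labels):
    """Chi-square statistic between a categorical feature and the class label:
    sum over contingency-table cells of (observed - expected)^2 / expected.
    Higher scores mean stronger association, so the feature is kept."""
    f_vals, l_vals = sorted(set(feature)), sorted(set(labels))
    n = len(feature)
    obs = {(f, l): 0 for f in f_vals for l in l_vals}
    for f, l in zip(feature, labels):
        obs[(f, l)] += 1
    score = 0.0
    for f in f_vals:
        row = sum(obs[(f, l)] for l in l_vals)
        for l in l_vals:
            col = sum(obs[(g, l)] for g in f_vals)
            expected = row * col / n
            score += (obs[(f, l)] - expected) ** 2 / expected
    return score

informative = chi2_score([0, 0, 1, 1], [0, 0, 1, 1])  # perfectly predictive
irrelevant = chi2_score([0, 1, 0, 1], [0, 0, 1, 1])   # independent of label
```

A percentile filter simply keeps the top fraction of features ranked by this score; the embedded SFM/RFI route instead ranks features by how much a fitted random forest relies on them.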
Funding: Supported by the Ningxia Key R&D Program (Key) Project (2023BDE02001); the Ningxia Key R&D Program (Talent Introduction Special) Project (2022YCZX0013); the North Minzu University 2022 school-level research platform "Digital Agriculture Empowering Ningxia Rural Revitalization Innovation Team" (Project No. 2022PT_S10); the Yinchuan City school-enterprise joint innovation project (2022XQZD009); and the "Innovation Team for Imaging and Intelligent Information Processing" of the National Ethnic Affairs Commission.
Abstract: Widely used deep neural networks currently face limitations in achieving optimal performance for purchase intention prediction due to constraints on data volume and hyperparameter selection. To address this issue, this paper proposes a novel Deep Adaptive Evolutionary Ensemble (DAEE) model based on the deep forest algorithm, further integrating evolutionary ensemble learning methods. The model introduces model diversity into the cascade layer, allowing it to adaptively adjust its structure to accommodate complex and evolving purchasing behavior patterns. Moreover, this paper optimizes the methods for obtaining feature vectors, enhancement vectors, and prediction results within the deep forest algorithm to enhance the model's predictive accuracy. Results demonstrate that the improved deep forest model not only possesses higher robustness but also shows an increase of 5.02% in AUC value compared to the baseline model. Furthermore, its training runtime is six times faster than that of deep models, and its accuracy is 0.9% higher than that of other improved models.
Abstract: In several instances of statistical practice, it is not uncommon to use the same data for both model selection and inference, without taking account of the variability induced by the model selection step. This is usually referred to as post-model selection inference. The shortcomings of such practice are widely recognized, but finding a general solution is extremely challenging. We propose a model averaging alternative that takes into account both the model selection probability and the likelihood in assigning the weights. The approach is applied to Bernoulli trials and outperforms Akaike-weights model averaging and post-model selection estimators.
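The Akaike-weights baseline that the proposal is compared against can be sketched as follows; the proposed method replaces these likelihood-only weights with ones that also reflect the model selection probability. The AIC values and per-model estimates below are hypothetical.

```python
import math

def akaike_weights(aic_values):
    """w_k ∝ exp(-Δ_k / 2), where Δ_k = AIC_k - min AIC; weights sum to 1."""
    a_min = min(aic_values)
    raw = [math.exp(-(a - a_min) / 2) for a in aic_values]
    total = sum(raw)
    return [r / total for r in raw]

def model_average(estimates, weights):
    """Weighted average of per-model estimates of the same quantity."""
    return sum(e * w for e, w in zip(estimates, weights))

weights = akaike_weights([100.0, 102.0, 110.0])
theta_bar = model_average([0.40, 0.46, 0.55], weights)
```

Averaging over all candidates, rather than conditioning on the single selected model, is exactly what shields the estimator from the model-selection variability the abstract describes.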
Abstract: Time-series-based forecasting is essential for determining how past events affect future events. This paper compares the prediction accuracy of different time-series models for oil prices. Three types of univariate models are discussed: exponential smoothing (ES), Holt-Winters (HW) and autoregressive integrated moving average (ARIMA) models. To determine the best model, six different strategies were applied as selection criteria to quantify these models' prediction accuracy. This comparison should help policy makers and industry marketing strategists select the best forecasting method for the oil market. The three models were compared by applying them to the time series of regular oil prices for West Texas Intermediate (WTI) crude. The comparison indicated that the HW model performed better than the ES model for prediction with a 95% confidence interval. However, the ARIMA(2, 1, 2) model yielded the best results, leading us to conclude that this sophisticated and robust model outperformed the other simple yet flexible models in the oil market.
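The simplest of the three candidates, exponential smoothing, can be sketched in a few lines, together with one of the usual accuracy criteria. This is a minimal single-parameter version on made-up prices; Holt-Winters adds trend and seasonal components, and ARIMA generalizes further.

```python
def exp_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    Returns the one-step-ahead forecast after processing the whole series."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def mape(actual, forecast):
    """Mean absolute percentage error, a common selection criterion."""
    return sum(abs((a - f) / a)
               for a, f in zip(actual, forecast)) / len(actual) * 100

# Hypothetical price series; with alpha = 0.5 the forecast is 12.5:
next_price = exp_smoothing_forecast([10.0, 12.0, 14.0], alpha=0.5)
```

The smoothing constant alpha controls how quickly old observations are discounted; model comparison then reduces to computing criteria such as MAPE on held-out prices for each fitted candidate.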
Funding: Supported by the National Natural Science Foundation of China (NSFC, grant No. U2039207).
Abstract: Evaluation of numerical earthquake forecasting models needs to consider two issues of equal importance: the application scenario of the simulation and the complexity of the model. The criteria underlying evaluation-based model selection raise some interesting problems in need of discussion.