Journal Literature
74 articles found.
1. Classification of aviation incident causes using LGBM with improved cross-validation
Authors: NI Xiaomei, WANG Huawei, CHEN Lingzi, LIN Ruiguan. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 2, pp. 396-405.
Aviation accidents are currently one of the leading causes of significant injuries and deaths worldwide. This entices researchers to investigate aircraft safety using data analysis approaches based on advanced machine learning algorithms. To assess aviation safety and identify the causes of incidents, a classification model using the light gradient boosting machine (LGBM), based on the Aviation Safety Reporting System (ASRS), has been developed. It is improved by k-fold cross-validation with a hybrid sampling model (HSCV), which may boost classification performance while maintaining data balance. The results show that employing the LGBM-HSCV model can significantly improve accuracy while alleviating data imbalance. The comparative approach comprises vertical comparison with other cross-validation (CV) methods and lateral comparison with different numbers of folds. In addition, two further CV approaches based on the improved method are discussed: one with a different sampling and folding order, and the other with more CV. According to the assessment indices of the different methods, the LGBM-HSCV model proposed here is effective at detecting incident causes. The proposed improved model for imbalanced data categorization may serve as a point of reference for similar data processing, and the model's accurate identification of civil aviation incident causes can help improve civil aviation safety.
Keywords: aviation safety; imbalanced data; light gradient boosting machine (LGBM); cross-validation (CV)
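The HSCV idea in this abstract (k-fold cross-validation combined with hybrid over- and under-sampling of each training split) is not published as code here. The following is a minimal pure-Python sketch of the general idea, with the LGBM classifier itself omitted; the function names `stratified_kfold` and `hybrid_sample` are our own assumptions, not the paper's.

```python
import random
from collections import Counter

def stratified_kfold(labels, k, seed=0):
    """Yield (train_idx, val_idx) pairs with class proportions preserved per fold."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        val = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, val

def hybrid_sample(train_idx, labels, seed=0):
    """Balance the training split only: undersample the majority class and
    oversample the minority class toward the mean class count."""
    rng = random.Random(seed)
    groups = {}
    for i in train_idx:
        groups.setdefault(labels[i], []).append(i)
    target = sum(len(g) for g in groups.values()) // len(groups)
    out = []
    for idxs in groups.values():
        if len(idxs) >= target:                      # undersample without replacement
            out += rng.sample(idxs, target)
        else:                                        # oversample with replacement
            out += idxs + [rng.choice(idxs) for _ in range(target - len(idxs))]
    rng.shuffle(out)
    return out

labels = [0] * 90 + [1] * 10   # a 9:1 imbalanced toy label set
for train, val in stratified_kfold(labels, k=5):
    balanced = hybrid_sample(train, labels)
    counts = Counter(labels[i] for i in balanced)
    assert counts[0] == counts[1]   # each fold now trains on balanced data
```

The key design point is that resampling happens inside each fold's training split only, so the validation split keeps the original imbalance and the performance estimate stays honest.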
2. A cross-validation-based adaptive wavelet denoising method (Cited by: 4)
Authors: HUANG Wenqing, DAI Yuxing, LI Jiasheng. Journal of Hunan University (Natural Sciences) (EI, CAS, CSCD), 2008, No. 11, pp. 40-43.
In wavelet denoising, the choice of threshold is critical. An adaptive threshold selection algorithm is proposed. The algorithm first uses cross-validation to split the noisy signal into two sub-signals, one for thresholding and one used as a reference; a steepest-gradient search is then applied to find an optimal denoising threshold. Simulation and experimental results show that, in the mean-square-error sense, the proposed algorithm denoises better than the VisuShrink and SureShrink algorithms of Donoho et al., and that it requires no prior information about the noisy signal, making it suitable for denoising real signals.
Keywords: wavelet transform; cross-validation; adaptive filtering; threshold
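The split-and-compare idea in this abstract (split the noisy signal into two interleaved sub-signals, threshold one, score it against the other) can be illustrated with a toy sketch. Note the real method thresholds wavelet coefficients and searches the threshold with a steepest-gradient method; this simplified version soft-thresholds raw samples over a candidate grid, and both function names are illustrative.

```python
def soft_threshold(x, t):
    """Soft-threshold one value: shrink it toward zero by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def cv_threshold(signal, candidates):
    """Pick the threshold whose denoised even-indexed sub-signal best
    predicts the odd-indexed sub-signal (squared-error sense)."""
    even = signal[0::2]   # sub-signal to be thresholded
    odd = signal[1::2]    # held-out reference sub-signal
    n = min(len(even), len(odd))
    best_t, best_err = None, float("inf")
    for t in candidates:
        denoised = [soft_threshold(v, t) for v in even[:n]]
        err = sum((d - o) ** 2 for d, o in zip(denoised, odd[:n]))
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```

On a pure-noise signal the cross-validated choice is a large threshold (shrink everything), while on a strong smooth signal it is a small one, which is exactly the adaptive behavior the paper is after.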
3. Cross-Validation, Shrinkage and Variable Selection in Linear Regression Revisited (Cited by: 3)
Authors: Hans C. van Houwelingen, Willi Sauerbrei. Open Journal of Statistics, 2013, No. 2, pp. 79-102.
In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches have been proposed to handle some of the critical issues. In order to assess and compare several strategies, we conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R2 of 0.50 and 0.71, we consider four scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We discuss the value of cross-validation, shrinkage and backward elimination (BE) with varying significance levels. We assess whether 2-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models, and compare the results to models derived with the LASSO procedure. Besides MSE, we use model sparsity and further criteria for model assessment. The amount of information in the data has an influence on the selected models and on the comparison of the procedures. None of the approaches was best in all scenarios. The performance of backward elimination with a suitably chosen significance level was no worse than that of the LASSO, and the BE models selected were much sparser, an important advantage for interpretation and transportability. Compared to global shrinkage, PWSF had better performance. Provided that the amount of information is not too small, we conclude that BE followed by PWSF is a suitable approach when variable selection is a key part of data analysis.
Keywords: cross-validation; LASSO; shrinkage; simulation study; variable selection
4. On the Consistency of Cross-Validation in Nonlinear Wavelet Regression Estimation
Authors: ZHANG Shuanglin, ZHENG Zhongguo. Acta Mathematica Scientia (SCIE, CSCD), 2000, No. 1, pp. 1-11.
For the nonparametric regression model Yni = g(Xni) + εni, i = 1, ..., n, with regularly spaced nonrandom design, the authors study the behavior of the nonlinear wavelet estimator of g(x). When the threshold and truncation parameters are chosen by cross-validation on the average squared error, strong consistency for the case of dyadic sample size and moment consistency for arbitrary sample size are established under some regularity conditions.
Keywords: consistency; cross-validation; nonparametric regression; threshold; truncation; wavelet estimator
5. Using Multiple Risk Factors and Generalized Linear Mixed Models with 5-Fold Cross-Validation Strategy for Optimal Carotid Plaque Progression Prediction
Authors: Qingyu Wang, Dalin Tang, Liang Wang, Gador Canton, Zheyang Wu, Thomas S. Hatsukami, Kristen L. Billiar, Chun Yuan. Journal of Medical Biomechanics (EI, CAS, CSCD), 2019, No. A01, pp. 74-75.
Background: Cardiovascular diseases are closely linked to atherosclerotic plaque development and rupture. Plaque progression prediction is of fundamental significance to cardiovascular research and to disease diagnosis, prevention, and treatment. The generalized linear mixed model (GLMM) is an extension of the linear model for categorical responses that accounts for correlation among observations. Methods: Magnetic resonance imaging (MRI) data of carotid atherosclerotic plaques were acquired from 20 patients with consent obtained, and 3D thin-layer models were constructed to calculate plaque stress and strain for plaque progression prediction. Data for ten morphological and biomechanical risk factors, including wall thickness (WT), lipid percent (LP), minimum cap thickness (MinCT), plaque area (PA), plaque burden (PB), lumen area (LA), maximum plaque wall stress (MPWS), maximum plaque wall strain (MPWSn), average plaque wall stress (APWS), and average plaque wall strain (APWSn), were extracted from all slices for analysis. Wall thickness increase (WTI), plaque burden increase (PBI) and plaque area increase (PAI) were chosen as three measures of plaque progression. GLMM with a 5-fold cross-validation strategy was used to calculate the prediction accuracy of each predictor and to identify the optimal predictor, with prediction accuracy defined as the sum of sensitivity and specificity. All 201 MRI slices were randomly divided into 4 training subgroups and 1 verification subgroup. The training subgroups were used for model fitting, and the verification subgroup was used to evaluate the model. All combinations (1023 in total) of the 10 risk factors were fed to the GLMM, and the prediction accuracy of each predictor was taken from the point on the ROC (receiver operating characteristic) curve with the highest sum of specificity and sensitivity. Results: LA was the best single predictor for PBI, with the highest prediction accuracy (1.3601) and an area under the ROC curve (AUC) of 0.6540, followed by APWSn (1.3363) with AUC = 0.6342. The optimal predictor among all combinations for PBI was LA, PA, LP, WT, MPWS and MPWSn, with prediction accuracy = 1.4146 (AUC = 0.7158). LA was once again the best single predictor for PAI, with the highest prediction accuracy (1.1846) and AUC = 0.6064, followed by MPWSn (1.1832) with AUC = 0.6084. The combination of PA, PB, WT, MPWS, MPWSn and APWSn gave the best prediction accuracy (1.3025) for PAI, with an AUC of 0.6657. PA was the best single predictor for WTI, with the highest prediction accuracy (1.2887) and AUC = 0.6415, followed by WT (1.2540) with AUC = 0.6097. The combination of PA, PB, WT, LP, MinCT, MPWS and MPWSn was the best predictor for WTI, with prediction accuracy of 1.3140 and AUC = 0.6552. This indicated that PBI was a more predictable measure than WTI and PAI. The combinational predictors improved prediction accuracy by 9.95%, 4.01% and 1.96% over the best single predictors for PAI, PBI and WTI (AUC values improved by 9.78%, 9.45%, and 2.14%), respectively. Conclusions: The use of GLMM with a 5-fold cross-validation strategy combining both morphological and biomechanical risk factors could potentially improve the accuracy of carotid plaque progression prediction. This study suggests that a linear combination of multiple predictors can provide potential improvements over existing plaque assessment schemes.
Keywords: multiple risk factors; generalized linear mixed models; 5-fold cross-validation strategy; AUC
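The "prediction accuracy" used in this study, the highest sum of sensitivity and specificity along the ROC curve, is easy to state in code. A minimal sketch, with the function name assumed rather than taken from the paper, and assuming both classes are present:

```python
def best_sens_plus_spec(scores, labels):
    """Scan score thresholds and return (best threshold, sensitivity + specificity),
    the 'prediction accuracy' used in the study (a value in [0, 2])."""
    pos = sum(labels)              # number of positives (labels are 0/1)
    neg = len(labels) - pos        # number of negatives; both assumed nonzero
    best_t, best_acc = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        acc = tp / pos + tn / neg  # sensitivity + specificity at threshold t
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

For a perfectly separating score (e.g. scores `[0.1, 0.2, 0.8, 0.9]` with labels `[0, 0, 1, 1]`) the maximum is 2.0, which is why the combined predictors in the abstract, with accuracies around 1.3 to 1.4, sit well below the ceiling.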
6. Credit evaluation based on V-fold cross-validation and Elman neural networks (Cited by: 20)
Authors: WU Desheng, LIANG Liang. Systems Engineering - Theory & Practice (EI, CSCD), 2004, No. 4, pp. 92-98.
This paper reviews the state of research on corporate credit evaluation and points out the shortcomings of conventional neural networks in this field. On this basis, a set of screening principles is proposed for selecting the key credit-scoring indicators; a credit evaluation model for Chinese enterprises is then built on these indicators using an Elman recurrent neural network. The model's scoring performance is evaluated empirically using the V-fold cross-validation technique.
Keywords: Elman neural network; V-fold cross-validation; credit scoring
7. On Splitting Training and Validation Set: A Comparative Study of Cross-Validation, Bootstrap and Systematic Sampling for Estimating the Generalization Performance of Supervised Learning (Cited by: 8)
Authors: Yun Xu, Royston Goodacre. Journal of Analysis and Testing (EI), 2018, No. 3, pp. 249-262.
Model validation is the most important part of building a supervised model. To build a model with good generalization performance one must have a sensible data splitting strategy, and this is crucial for model validation. In this study, we conducted a comparative study of various reported data splitting methods. The MixSim model was employed to generate nine simulated datasets with different probabilities of misclassification and variable sample sizes. Then partial least squares for discriminant analysis and support vector machines for classification were applied to these datasets. The data splitting methods tested included variants of cross-validation, bootstrapping, bootstrapped Latin partition, the Kennard-Stone algorithm (K-S) and sample set partitioning based on joint X-Y distances (SPXY). These methods were employed to split the data into training and validation sets. The estimated generalization performances from the validation sets were then compared with those obtained from blind test sets, which were generated from the same distribution but were unseen by the training/validation procedure used in model construction. The results showed that the size of the data is the deciding factor for the quality of the generalization performance estimated from the validation set. We found a significant gap between the performance estimated from the validation set and that from the test set for all the data splitting methods employed on small datasets. This disparity decreased when more samples were available for training/validation, because the models were then moving towards approximations of the central limit theorem for the simulated datasets used. We also found that having too many or too few samples in the training set had a negative effect on the estimated model performance, suggesting that a good balance between the sizes of the training and validation sets is necessary for a reliable estimate of model performance. We also found that systematic sampling methods such as K-S and SPXY generally gave very poor estimates of model performance, most likely because they are designed to take the most representative samples first, leaving a rather poorly representative sample set for model performance estimation.
Keywords: cross-validation; bootstrapping; bootstrapped Latin partition; Kennard-Stone algorithm; SPXY; model selection; model validation; partial least squares for discriminant analysis; support vector machines
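The Kennard-Stone algorithm criticized in this abstract picks the most representative samples first: it seeds the training set with the two most distant points, then repeatedly adds the point whose nearest selected neighbour is farthest away. A minimal pure-Python sketch (Euclidean distances; not the authors' code) makes it easy to see why the held-out remainder ends up unrepresentative:

```python
def kennard_stone(points, n_train):
    """Maximin (Kennard-Stone) selection of n_train samples.
    Returns (selected indices, remaining indices); the remainder would
    serve as the validation set."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n = len(points)
    # Seed with the two mutually most-distant points.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: d2(points[p[0]], points[p[1]]))
    selected = [i0, j0]
    remaining = [i for i in range(n) if i not in selected]
    while len(selected) < n_train:
        # Add the point farthest from its nearest already-selected neighbour.
        nxt = max(remaining,
                  key=lambda r: min(d2(points[r], points[s]) for s in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected, remaining
```

Because the extreme and well-spread points are consumed by the training set, the validation set is left with near-duplicates of already-selected samples, which is consistent with the poor performance estimates the study reports for K-S and SPXY.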
8. Convergence rate of cross-validation in nonlinear wavelet regression estimation (Cited by: 1)
Authors: ZHANG Shuanglin, ZHENG Zhongguo. Chinese Science Bulletin (SCIE, EI, CAS), 1999, No. 10, pp. 898-901.
The cross-validation method is used to choose the three smoothing parameters in nonlinear wavelet regression estimators. The strong consistency and convergence rate of the cross-validated nonlinear wavelet regression estimators are obtained.
Keywords: wavelet estimation; nonparametric regression; estimators; cross-validation; strong consistency
9. Artificial neural network with a cross-validation approach to blast-induced ground vibration propagation modeling
Authors: Gustavo Paneiro, Manuel Rafael. Underground Space (SCIE, EI), 2021, No. 3, pp. 281-289.
Given their technical and economic advantages, explosive substances are widely used in rock mass excavation. However, because of serious environmental restraints, there has been an increasing need for sophisticated tools to control the environmental effects of blast-induced ground vibrations. In the present study, an artificial neural network (ANN) with k-fold cross-validation was applied to a dataset of 1114 observations obtained from published results; both quantitative and qualitative parameters were considered for ground vibration amplitude prediction. The best ANN model obtained has a maximum coefficient of determination of 0.840 and a mean absolute error of 5.59, and it comprises 17 input parameters, 12 neurons in a single hidden layer, and a sigmoid transfer function. Compared with traditional models, the model obtained using the proposed methodology demonstrated better generalization ability. Furthermore, the proposed methodology offers an ANN model with higher prediction ability.
Keywords: rock blasting; excavation; ground vibrations; artificial neural network; k-fold cross-validation; modeling
10. PPP-RTK considering the ionosphere uncertainty with cross-validation
Authors: Pan Li, Bobin Cui, Jiahuan Hu, Xuexi Liu, Xiaohong Zhang, Maorong Ge, Harald Schuh. Satellite Navigation, 2022, No. 1, pp. 34-46.
With high-precision satellite orbit and clock products, uncalibrated phase delays, and atmospheric delay corrections, Precise Point Positioning (PPP) based on a Real-Time Kinematic (RTK) network can rapidly achieve centimeter-level positioning accuracy. In the ionosphere-weighted PPP-RTK model, not only the a priori value of the ionosphere but also its precision affects the convergence and accuracy of positioning. This study proposes a method to determine the precision of the interpolated slant ionospheric delay by cross-validation. The new method takes the high temporal and spatial variation into consideration. A distance-dependent function is built to represent the stochastic model of the slant ionospheric delay derived from each reference station, and an error model is built for each reference station on a five-minute piecewise basis. The user can interpolate the ionospheric delay correction and the corresponding precision with an error function related to the distance and time of each reference station. With the European Reference Frame (EUREF) Permanent GNSS (Global Navigation Satellite Systems) network (EPN) and SONEL (Système d'Observation du Niveau des Eaux Littorales) GNSS stations covering most of Europe, the effectiveness of our wide-area ionosphere constraint method for PPP-RTK is validated and compared with a method using a fixed ionosphere precision threshold. It is shown that although the Root Mean Square (RMS) of the interpolated ionosphere error is within 5 cm in most areas, it exceeds 10 cm in some areas with sparse reference stations during some periods. With the proposed method, the convergence time of the 90th percentile is 4.0 and 20.5 min for the horizontal and vertical directions, respectively, using the Global Positioning System (GPS) kinematic solution. This convergence is faster than with fixed ionosphere precision values of 1, 8, and 30 cm; the improvement with respect to these three solutions ranges from 10 to 60%. After integrating the Galileo navigation satellite system (Galileo), the convergence time of the 90th percentile for combined kinematic solutions is 2.0 and 9.0 min, an improvement of 50.0% and 56.1% for the horizontal and vertical directions, respectively, compared with the GPS-only solution. The average convergence times of GPS PPP-RTK for the horizontal and vertical directions are 2.0 and 5.0 min, and those of GPS+Galileo PPP-RTK are 1.4 and 3.0 min, respectively.
Keywords: PPP-RTK; ionosphere precision; cross-validation; rapid ambiguity resolution
11. Multi-environment BSA-seq using large F3 populations is able to achieve reliable QTL mapping with high power and resolution: An experimental demonstration in rice
Authors: Yan Zheng, Ei Ei Khine, Khin Mar Thi, Ei Ei Nyein, Likun Huang, Lihui Lin, Xiaofang Xie, Min Htay Wai Lin, Khin Than Oo, Myat Myat Moe, San San Aye, Weiren Wu. The Crop Journal (SCIE, CSCD), 2024, No. 2, pp. 549-557.
Bulked-segregant analysis by deep sequencing (BSA-seq) is a widely used method for mapping QTL (quantitative trait loci) due to its simplicity, speed, cost-effectiveness, and efficiency. However, the ability of BSA-seq to detect QTL is often limited by inappropriate experimental designs, as evidenced by numerous practical studies. Most BSA-seq studies have used small to medium-sized populations, with F2 populations being the most common choice. Nevertheless, theoretical studies have shown that using a large population with an appropriate pool size can significantly enhance the power and resolution of QTL detection in BSA-seq, with F3 populations offering notable advantages over F2 populations. To provide an experimental demonstration, we tested the power of BSA-seq to identify QTL controlling days from sowing to heading (DTH) in a 7200-plant rice F3 population in two environments, with a pool size of approximately 500. Each experiment identified 34 QTL, an order of magnitude more than reported in most BSA-seq experiments; 23 were detected in both experiments, with 17 of these located near 41 previously reported QTL and eight cloned genes known to control DTH in rice. These results indicate that QTL mapping by BSA-seq in large F3 populations and multi-environment experiments can achieve high power, resolution, and reliability.
Keywords: BSA-seq; QTL mapping; large F3 population; multi-environment experiment; cross-validation
12. Adaptive Random Effects/Coefficients Modeling
Authors: George J. Knafl. Open Journal of Statistics, 2024, No. 2, pp. 179-206.
Adaptive fractional polynomial modeling of general correlated outcomes is formulated to address nonlinearity in means, variances/dispersions, and correlations. Means and variances/dispersions are modeled using generalized linear models in fixed effects/coefficients. Correlations are modeled using random effects/coefficients. Nonlinearity is addressed using power transforms of primary (untransformed) predictors. Parameter estimation is based on extended linear mixed modeling, generalizing both generalized estimating equations and linear mixed modeling. Models are evaluated using likelihood cross-validation (LCV) scores and are generated adaptively using a heuristic search controlled by LCV scores. The cases covered include linear, Poisson, logistic, exponential, and discrete regression of correlated continuous, count/rate, dichotomous, positive continuous, and discrete numeric outcomes treated as normally, Poisson, Bernoulli, exponentially, and discrete numerically distributed, respectively. Example analyses are generated for these five cases to compare adaptive random effects/coefficients modeling of correlated outcomes with previously developed adaptive modeling based on directly specified covariance structures. Adaptive random effects/coefficients modeling substantially outperforms direct covariance modeling in the linear, exponential, and discrete regression example analyses; it generates equivalent results in the logistic regression example analyses, and it is substantially outperformed in the Poisson regression case. Random effects/coefficients modeling of correlated outcomes can provide substantial improvements in model selection compared to directly specified covariance modeling. However, directly specified covariance modeling can generate competitive or substantially better results in some cases, while usually requiring less computation time.
Keywords: adaptive regression; correlated outcomes; extended linear mixed modeling; fractional polynomials; likelihood cross-validation; random effects/coefficients
13. OPT-BAG Model for Predicting Student Employability
Authors: Minh-Thanh Vo, Trang Nguyen, Tuong Le. Computers, Materials & Continua (SCIE, EI), 2023, No. 8, pp. 1555-1568.
The use of machine learning to predict student employability is important for analysing a student's capability to get a job. Based on the results of this type of analysis, university managers can improve the employability of their students, which can help in attracting students in the future. In addition, learners can focus during their studies on the essential skills identified through this analysis, to increase their employability. An effective method called OPT-BAG (OPTimisation of BAGging classifiers) was therefore developed to model the problem of predicting the employability of students. This model can predict the employability of students based on their competencies and can reveal weaknesses that need to be improved. First, we analyse the relationships between several variables and the outcome variable using a correlation heatmap for a student employability dataset. Next, a standard scaler function is applied in the preprocessing module to normalise the variables. The training set is then input to our model to identify the optimal parameters for the bagging classifier using a grid search cross-validation technique. Finally, the OPT-BAG model, based on a bagging classifier with the optimal parameters found in the previous step, is trained on the training dataset to predict student employability. The empirical outcomes in terms of accuracy, precision, recall, and F1 indicate that the OPT-BAG approach outperforms other cutting-edge machine learning models in predicting student employability. In this study, we also analyse the factors affecting employers' recruitment process, and find that general appearance, mental alertness, and communication skills are the most important. This indicates that educational institutions should focus on these factors during the learning process to improve student employability.
Keywords: ensemble classifier; grid search cross-validation; OPT-BAG; student employability
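Grid search with cross-validation, as used here to tune the bagging classifier, reduces to exhaustively scoring every hyperparameter combination with a cross-validated evaluator and keeping the best. A generic sketch; the hyperparameter names and the toy evaluator below are illustrative, not the paper's:

```python
from itertools import product

def grid_search_cv(grid, evaluate):
    """Score every combination in a {name: [values]} grid with a
    user-supplied evaluate(params) -> mean CV score; keep the best."""
    names = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)       # evaluate() hides the k-fold CV loop
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Illustrative grid for a bagging-style classifier; the evaluator is a
# stand-in for "mean CV accuracy" that simply peaks at a known optimum.
grid = {"n_estimators": [10, 50, 100], "max_samples": [0.5, 0.8, 1.0]}
toy_cv_accuracy = lambda p: -(p["n_estimators"] - 50) ** 2 \
                            - (p["max_samples"] - 0.8) ** 2
best, _ = grid_search_cv(grid, toy_cv_accuracy)
assert best == {"n_estimators": 50, "max_samples": 0.8}
```

The cost is the product of the grid sizes times the number of folds, which is why published grids, like the one tuned here, tend to stay small.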
14. Functional magnetic resonance imaging study of group independent components underpinning item responses to paranoid-depressive scale
Authors: Drozdstoy Stoyanov, Rositsa Paunova, Julian Dichev, Sevdalina Kandilarova, Vladimir Khorev, Semen Kurkin. World Journal of Clinical Cases (SCIE), 2023, No. 36, pp. 8458-8474.
BACKGROUND: Our study expands upon a large body of evidence in the field of neuropsychiatric imaging with cognitive, affective and behavioral tasks adapted for the functional magnetic resonance imaging (fMRI) experimental environment. There is sufficient evidence that common networks underpin activations in task-based fMRI across different mental disorders. AIM: To investigate whether specific neural circuits underpin differential item responses to depressive, paranoid and neutral items (DN) in patients with schizophrenia (SCZ) and major depressive disorder (MDD), respectively. METHODS: 60 patients with SCZ or MDD were recruited. All patients were scanned on a 3T magnetic resonance tomography platform with an fMRI paradigm comprising a block design, with blocks of items from a diagnostic paranoid (DP) scale, a depression-specific (DS) scale, and DN from a general interest scale. We performed a two-sample t-test between the two groups, SCZ patients and depressive patients. Our purpose was to observe the different brain networks activated during each specific condition of the task (DS, DP, DN). RESULTS: Several significant results emerged from the comparison between the SCZ and depressive groups while performing this task. We identified one component that is task-related and independent of condition (shared between all three conditions), composed of regions within the temporal (right superior and middle temporal gyri), frontal (left middle and inferior frontal gyri) and limbic/salience systems (right anterior insula). Another component is related to both diagnostic-specific conditions (DS and DP), i.e., it is shared between the depressive and SCZ groups, and includes frontal motor/language and parietal areas. One specific component is modulated preferentially by the DP condition and relates mainly to prefrontal regions, whereas two other components are significantly modulated by the DS condition and include clusters within the default mode network, such as the posterior cingulate and precuneus, several occipital areas, including the lingual and fusiform gyri, as well as the parahippocampal gyrus. Finally, component 12 appeared to be unique to the neutral condition. In addition, circuits across components were identified that are either common or distinct in the preferential processing of the sub-scales of the task. CONCLUSION: This study delivers further evidence in support of the model of trans-disciplinary cross-validation in psychiatry.
Keywords: paranoid-depressive scale; functional magnetic resonance imaging; cross-validation; group independent component analysis; schizophrenia; depression
15. SCADA Data-Based Support Vector Machine for False Alarm Identification for Wind Turbine Management
Authors: Ana María Peco Chacón, Isaac Segovia Ramírez, Fausto Pedro García Márquez. Intelligent Automation & Soft Computing (SCIE), 2023, No. 9, pp. 2595-2608.
Maintenance operations have a critical influence on the power generated by wind turbines (WT). Advanced algorithms must analyze large volumes of data from condition monitoring systems (CMS) to determine the actual working conditions and avoid false alarms. This paper proposes different support vector machine (SVM) algorithms for the prediction and detection of false alarms. K-fold cross-validation (CV) is applied to evaluate the classification reliability of these algorithms. Supervisory Control and Data Acquisition (SCADA) data from an operating WT are used to test the proposed approach. The quadratic SVM achieved an accuracy rate of 98.6%. Misclassifications are analyzed against the confusion matrix, alarm log and maintenance records to obtain quantitative information and determine whether each case is a false alarm. The classifier reduces the number of false alarms (misclassifications) by 25%. These results demonstrate that the proposed approach offers high reliability and accuracy in false alarm identification.
Keywords: machine learning classification; support vector machine; false alarm; wind turbine; cross-validation
16. An Adaptive Approach for Hazard Regression Modeling
Authors: George J. Knafl. Open Journal of Statistics, 2023, No. 3, pp. 300-315.
Regression models for survival time data involve estimating the hazard rate as a function of predictor variables and associated slope parameters. An adaptive approach is formulated for such hazard regression modeling. The hazard rate is modeled using fractional polynomials, that is, linear combinations of products of power transforms of time together with other available predictors. These fractional polynomial models are restricted to generating positive-valued hazard rates and decreasing survival functions. Exponentially distributed survival times are a special case. Parameters are estimated using maximum likelihood estimation, allowing for right-censored survival times. Models are evaluated and compared using likelihood cross-validation (LCV) scores. LCV scores and tolerance parameters are used to control an adaptive search through alternative fractional polynomial hazard rate models to identify effective models for the underlying survival time data. These methods are demonstrated using two survival time data sets: survival times for lung cancer patients and for multiple myeloma patients. For the lung cancer data, the hazard rate depends distinctly on time. However, controlling for cell type provides a distinct improvement, after which the hazard rate depends only on cell type and no longer on time. Furthermore, Cox regression is unable to identify a cell type effect. For the multiple myeloma data, the hazard rate also depends distinctly on time. Moreover, consideration of hemoglobin at diagnosis provides a distinct improvement: the hazard rate still depends distinctly on time, and hemoglobin distinctly moderates the effect of time on the hazard rate. These results indicate that adaptive hazard rate modeling can provide unique insights into survival time data.
Keywords: adaptive regression; fractional polynomials; hazard rate; likelihood cross-validation; survival times
17. Credit evaluation based on ANFIS and Elman networks (Cited by: 8)
Authors: LIANG Liang, WU Desheng, WANG Zhiqiang, XIONG Li, WANG Guohua. Journal of Industrial Engineering and Engineering Management (CSSCI), 2005, No. 1, pp. 69-73.
BP neural networks achieve good results for credit-grade classification but perform poorly when required to output a continuous credit score. To address this deficiency, this paper applies an adaptive neuro-fuzzy inference system (ANFIS) and an Elman network to corporate credit scoring. A set of screening criteria is proposed for building a credit-scoring indicator system suitable for Chinese enterprises; credit evaluation models based on the Elman network and ANFIS are then built on this indicator system. Using the V-fold cross-validation technique, the scoring performance of the models is tested empirically on actual indicator data from sample companies.
Keywords: credit scoring; adaptive neuro-fuzzy inference; Elman network; V-fold cross-validation; principal component analysis
18. Histogram theory and optimal histogram construction (Cited by: 26)
Authors: ZHANG Jianfang, WANG Xiuxiang. Chinese Journal of Applied Probability and Statistics (CSCD), 2009, No. 2, pp. 201-214.
The histogram is one of the most common tools for density estimation and data analysis. In histogram theory and construction, the choice of bin width and the placement of bin edges are particularly important. However, many researchers still select these two parameters empirically, and most statistical software defaults to rough formulas when determining the number of bins. This paper surveys recent results on histogram theory and optimal histogram construction, emphasizing sample-driven methods for building optimal histograms.
Keywords: histogram; Sturges' rule; Scott's rule; cross-validation; histogram-kernel error; sum of squared errors
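For reference, the two rough default rules this survey contrasts with sample-driven methods, plus a leave-one-out cross-validation criterion for bin width, can be sketched in a few lines. The CV formula below is Rudemo's standard criterion; that this exact form matches the one in the survey is our assumption.

```python
import math

def sturges_bins(n):
    """Sturges' rule: k = ceil(log2 n) + 1 bins."""
    return math.ceil(math.log2(n)) + 1

def scott_width(data):
    """Scott's rule: h = 3.49 * s * n^(-1/3), with s the sample std deviation."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 3.49 * s * n ** (-1.0 / 3.0)

def cv_score(data, h, lo):
    """Rudemo's leave-one-out CV criterion for bin width h (smaller is better):
    J(h) = 2/((n-1)h) - (n+1)/(n^2 (n-1) h) * sum of squared bin counts,
    with bins [lo + j*h, lo + (j+1)*h)."""
    n = len(data)
    counts = {}
    for x in data:
        j = int((x - lo) / h)
        counts[j] = counts.get(j, 0) + 1
    ss = sum(c * c for c in counts.values())
    return 2.0 / ((n - 1) * h) - (n + 1) * ss / (n * n * (n - 1) * h)
```

Minimizing `cv_score` over a grid of candidate widths is the "sample-oriented" alternative to the fixed Sturges/Scott defaults that the survey emphasizes.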
19. A comparative study of different models for credit evaluation (Cited by: 8)
Authors: WU Desheng, LIANG Liang, YANG Li. Forecasting (CSSCI), 2004, No. 2, pp. 73-76, 69.
This paper compares the strengths and weaknesses of different models for enterprise credit evaluation. Given the characteristics of the credit-scoring problem, Elman recurrent neural networks and BP networks are used for modeling. After establishing a credit-scoring indicator system suitable for Chinese enterprises, the two methods are applied in an empirical study and their diagnostic behavior is compared; to overcome the drawbacks of modeling with small samples, the V-fold cross-validation technique is introduced.
Keywords: Elman neural network; BP neural network; V-fold cross-validation; credit scoring
20. Feature selection for machinery fault diagnosis based on support vector machines (Cited by: 4)
Authors: WANG Xinfeng, QIU Jing, LIU Guanjun. Mechanical Science and Technology for Aerospace Engineering (CSCD), 2005, No. 9, pp. 1122-1125.
In machinery fault diagnosis, processing machine-condition signals yields a fault feature set, which usually contains redundant features that degrade diagnosis. Feature selection removes redundant features from the original set, improving diagnostic accuracy and efficiency. This paper adopts a support vector machine (SVM) as the decision classifier and evaluates feature quality using SVM error upper bounds, such as the radius-margin bound, instead of the learning error rate, with a genetic algorithm searching over feature subsets. Because the SVM needs to be trained only once, this method is more computationally efficient than the commonly used cross-validation (rotation) approach. In numerical simulations and in a bearing fault feature selection experiment on a gear reducer, the method was applied to the generated feature sets and compared with the cross-validation approach, showing better selection performance and efficiency.
Keywords: feature selection; cross-validation; support vector machine (SVM); radius-margin bound; genetic algorithm