Time series forecasting has become an important aspect of data analysis and has many real-world applications. However, undesirable missing values are often encountered, which may adversely affect many forecasting tasks. In this study, we evaluate and compare the effects of imputation methods for estimating missing values in a time series. Our approach does not include a simulation to generate pseudo-missing data; instead, we perform imputation on actual missing data and measure the performance of the forecasting models created therefrom. In the experiment, therefore, several time series forecasting models are trained using different training datasets, each prepared with a different imputation method. Subsequently, the performance of the imputation methods is evaluated by comparing the accuracy of the forecasting models. The results obtained from a total of four experimental cases show that the k-nearest neighbor technique is the most effective in reconstructing missing data and contributes most positively to time series forecasting compared with the other imputation methods.
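As a concrete illustration, k-nearest-neighbor imputation of the kind evaluated above can be sketched in a few lines. This generic version (complete rows as donors, Euclidean distance over the observed columns) is an assumption of ours, not the study's exact implementation.

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each row using the k most similar complete rows.

    Similarity is Euclidean distance over the columns both rows observe.
    A generic sketch of KNN imputation, not the paper's exact variant.
    """
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]          # donor rows with no gaps
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        row = X[i]
        obs = ~np.isnan(row)
        d = np.sqrt(((complete[:, obs] - row[obs]) ** 2).sum(axis=1))
        donors = complete[np.argsort(d)[:k]]        # k nearest complete rows
        row[~obs] = donors[:, ~obs].mean(axis=0)    # average donor values
    return X
```

For forecasting, each row would typically be a lag window of the series, so a gap is filled from the most similar historical windows.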
This research was an effort to select the best imputation method for missing upper-air temperature data over 24 standard pressure levels. We implemented four imputation techniques: inverse distance weighting, and bilinear, natural, and nearest-neighbor interpolation. The performance indicators adopted in this research were the root mean square error (RMSE), the absolute mean error (AME), the correlation coefficient, and the coefficient of determination (R²). We randomly withheld 30% of the total samples (324 in all) and predicted them from the remaining 70%. Although all four interpolation methods performed well for air temperature imputation (RMSE and AME below 1), the bilinear method was the most accurate, with the smallest errors. Its RMSE remained below 0.01 at all pressure levels except 1000 hPa, where it was 0.6, and its AME was below 0.1 at every level. A very strong correlation (>0.99) was found between the actual and predicted air temperatures with this method, and the high coefficient of determination (0.99) indicates the best fit to the surface. Natural interpolation gave similar results, but after inspecting monthly scatter plots, its imputations appeared somewhat less accurate than the bilinear method in certain months.
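The four performance indicators above are standard and can be restated in code; this helper is our own restatement of the textbook definitions, not the authors' scripts.

```python
import numpy as np

def scores(actual, predicted):
    """RMSE, absolute mean error (AME), and R^2 between the actual
    values and the imputed/predicted values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - actual
    rmse = np.sqrt(np.mean(err ** 2))
    ame = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)                       # residual sum of squares
    ss_tot = np.sum((actual - actual.mean()) ** 2)  # total sum of squares
    return rmse, ame, 1.0 - ss_res / ss_tot
```

A perfect imputation gives RMSE = AME = 0 and R² = 1, matching the near-ideal figures reported for the bilinear method.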
In this study, we investigate the effects of missing data when estimating HIV/TB co-infection. We revisit the concept of missing data and examine three available approaches for dealing with missingness. The main objective is to identify the best method for correcting for missing data in the TB/HIV co-infection setting. We employ both empirical data analysis and an extensive simulation study to examine the effects of missing data on accuracy, sensitivity, specificity, and training and test error for the different approaches. The novelty of this work hinges on the use of modern statistical learning algorithms when treating missingness. In the empirical analysis, imputations were performed on both the HIV data and the TB-HIV co-infection data, with the missing values imputed using the different approaches. In the simulation study, 0% (complete case), 10%, 30%, 50%, and 80% of the data were drawn randomly and replaced with missing values. Results show that complete cases alone gave a co-infection rate (95% confidence interval) of 29% (25%, 33%), the weighted method 27% (23%, 31%), the likelihood-based approach 26% (24%, 28%), and the multiple imputation approach 21% (20%, 22%). In conclusion, MI remains the best approach for dealing with missing data, and failure to apply it results in an overestimation of the HIV/TB co-infection rate by 8%.
In his 1987 classic book on multiple imputation (MI), Rubin used the fraction of missing information, γ, to define the relative efficiency (RE) of MI as RE = (1 + γ/m)^(−1/2), where m is the number of imputations, leading to the conclusion that a small m (≤5) would be sufficient for MI. However, evidence has been accumulating that many more imputations are needed. Why would the apparently sufficient m deduced from the RE actually be too small? The answer may lie with γ. In this research, γ was determined at fractions of missing data (δ) of 4%, 10%, 20%, and 29% using the 2012 Physician Workflow Mail Survey of the National Ambulatory Medical Care Survey (NAMCS). The γ values were strikingly small, ranging from the order of 10^(−6) to 0.01. As δ increased, γ usually increased but sometimes decreased. How the data were analysed had the dominating effect on γ, overshadowing the effect of δ. The results suggest that it is impossible to predict γ from δ and that it may not be appropriate to use the γ-based RE to determine a sufficient m.
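Rubin's relative-efficiency formula is simple enough to state directly in code; the numbers below merely illustrate its behavior and are not drawn from the NAMCS analysis.

```python
def relative_efficiency(gamma, m):
    """Rubin's relative efficiency of multiple imputation with m
    imputations: RE = (1 + gamma/m) ** -0.5, where gamma is the
    fraction of missing information."""
    return (1.0 + gamma / m) ** -0.5
```

When γ is tiny, as the study found, RE is essentially 1 even for very small m, which is exactly why the γ-based RE can make a too-small m look sufficient.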
The prevalence of a disease in a population is defined as the proportion of people who are infected. Selection bias in disease prevalence estimates occurs if non-participation in testing is correlated with disease status. Missing data are commonly encountered in most medical research; unfortunately, they are often neglected or not properly handled during analysis, which may substantially bias the results of a study, reduce its power, and lead to invalid conclusions. The goal of this study is to illustrate how to estimate prevalence in the presence of missing data. We consider the case where the variable of interest (the response) is binary with some observations missing, and assume that all covariates are fully observed. For binary data, the statistic of interest is usually the prevalence. We develop a two-stage approach to improve prevalence estimates: in the first stage, we use a logistic regression model to predict the missing binary observations, and in the second stage, we recalculate the prevalence using the observed data together with the imputed values. Such a model is of particular interest in HIV/AIDS research, in which people often refuse to donate blood for testing yet are willing to provide other covariates. The prevalence estimation method is illustrated using simulated data and applied to HIV/AIDS data from the Kenya AIDS Indicator Survey, 2007.
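A minimal sketch of the two-stage approach follows. The tiny gradient-ascent logistic fit is an illustrative stand-in for a proper solver, and the function name and defaults are our assumptions, not the authors' estimator.

```python
import numpy as np

def two_stage_prevalence(x, y, lr=0.1, steps=2000):
    """Stage one: fit a logistic regression of the observed binary
    responses on the covariate. Stage two: impute the missing
    responses from the fitted model, then recompute the proportion
    over all subjects.

    y holds 0/1 with np.nan for missing; x is fully observed.
    """
    X = np.column_stack([np.ones(len(x)), x])       # design with intercept
    obs = ~np.isnan(y)
    w = np.zeros(X.shape[1])
    for _ in range(steps):                          # gradient ascent on the
        p = 1 / (1 + np.exp(-X[obs] @ w))           # observed-data likelihood
        w += lr * X[obs].T @ (y[obs] - p) / obs.sum()
    y_full = y.copy()
    p_miss = 1 / (1 + np.exp(-X[~obs] @ w))
    y_full[~obs] = (p_miss > 0.5).astype(float)     # impute missing 0/1
    return y_full.mean()
```

Using only the observed responses would estimate prevalence from a potentially biased subsample; the second stage restores the full denominator.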
The absence of some data values in any observed dataset has been a real hindrance to achieving valid results in statistical research. This paper is aimed at the widespread missing-data problem faced by analysts and statisticians in academia and professional environments. Several data-driven methods were studied to obtain accurate data. Projects that rely heavily on data face this missing-data problem, and since machine learning models are only as good as the data used to train them, it has a real impact on the solutions developed for real-world problems. Therefore, this dissertation attempts to solve the problem using different mechanisms, by testing the effectiveness of both traditional and modern imputation techniques and determining the loss of statistical power when each approach is used. By the end of the research, it should be clear which methods are best for handling the problem. Using Multivariate Imputation by Chained Equations (MICE) is recommended as the best approach to dealing with MAR missingness.
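A bare-bones, single-chain sketch of the chained-equations idea behind MICE: each column with gaps is regressed on the others and refilled, cycling for several sweeps. Real MICE draws imputations from posterior distributions and produces multiple completed datasets; this deterministic loop is only illustrative.

```python
import numpy as np

def mice_sketch(X, n_iter=10):
    """Single deterministic chained-equations pass: each column with
    missing values is regressed (ordinary least squares) on the other
    columns and its gaps refilled, for n_iter sweeps."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # start from mean fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[~miss[:, j]], X[~miss[:, j], j],
                                       rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta   # refill from regression
    return X
```

On data with an exactly linear relationship between columns, the chain recovers the missing entry exactly, which is the intuition behind why MICE outperforms naive mean filling under MAR.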
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise, yet most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices from missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulations are performed. The results show that, for missing samples with sub-Gaussian noise, the hard thresholding estimator outperforms the traditional estimation method when the true covariance matrix is sparse.
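The hard-thresholding idea can be sketched as follows. Note that this generic version thresholds the plain sample covariance and omits the paper's corrections for missingness and noise.

```python
import numpy as np

def hard_threshold_cov(X, lam):
    """Hard-thresholded sample covariance: off-diagonal entries with
    absolute value below lam are zeroed, variances are kept. A generic
    sketch of the estimator class; the paper's version also corrects
    the sample covariance for missing and noisy observations."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= lam, S, 0.0)          # kill small entries
    np.fill_diagonal(T, np.diag(S))                 # never threshold variances
    return T
```

When the true covariance is sparse, thresholding removes the many small noise-driven entries while retaining the large true ones.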
Background: Missing data frequently occur in clinical studies. With the development of precision medicine, there is increased interest in N-of-1 trials, and Bayesian models are one of the main statistical methods for analyzing their data. This simulation study aimed to compare two statistical methods for handling missing values of quantitative data in Bayesian N-of-1 trials. Methods: Simulated N-of-1 trial data with different coefficients of autocorrelation, effect sizes, and missing ratios were generated with the SAS 9.1 system. The missing values were filled with mean filling and regression filling, respectively, under each combination of autocorrelation coefficient, effect size, and missing ratio using SPSS 25.0. Bayesian models were built to estimate the posterior means with WinBUGS 14. Results: When the missing ratio is relatively small, e.g. 5%, missing values have relatively little effect on the results. Therapeutic effects may be underestimated when the coefficient of autocorrelation increases and no filling is used. However, they may be overestimated when mean or regression filling is used, and the results after mean filling are closer to the actual effect than those after regression filling. With a moderate missing ratio, the estimated effect after mean filling is again closer to the actual effect. When a large missing ratio (20%) occurs, missing data can lead to significant underestimation of the effect; in this case, the estimated effect after regression filling is closer to the actual effect than after mean filling. Conclusion: Missing data can affect the therapeutic effects estimated with Bayesian models in N-of-1 trials. The present study suggests that mean filling can be used when the missing ratio is ≤10%; otherwise, regression filling may be preferable.
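The two filling strategies compared above can be stated directly. The time-trend regression here is an assumed simple stand-in for the regression filling used in the study, which would regress on the trial's own covariates.

```python
import numpy as np

def mean_fill(y):
    """Mean filling: replace each gap with the mean of observed values."""
    y = y.astype(float).copy()
    y[np.isnan(y)] = np.nanmean(y)
    return y

def regression_fill(y):
    """Regression filling (illustrative): regress the series on its
    time index with OLS and replace gaps with fitted values."""
    y = y.astype(float).copy()
    t = np.arange(len(y))
    obs = ~np.isnan(y)
    b1, b0 = np.polyfit(t[obs], y[obs], 1)          # slope, intercept
    y[~obs] = b1 * t[~obs] + b0
    return y
```

Mean filling ignores any structure in the series, which is why regression filling tends to win once the missing ratio grows large.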
This article deals with some new chain imputation methods using two auxiliary variables under the missing completely at random (MCAR) mechanism. The proposed generalized classes of chain imputation methods are tested for optimality in terms of MSE. They can be considered an efficient extension of the work of Singh and Horn (Metrika 51:267-276, 2000), Singh and Deo (Stat Pap 44:555-579, 2003), Singh (Stat A J Theor Appl Stat 43(5):499-511, 2009), Kadilar and Cingi (Commun Stat Theory Methods 37:2226-2236, 2008), and Diana and Perri (Commun Stat Theory Methods 39:3245-3251, 2010). The performance of the proposed chain imputation methods is investigated relative to the conventional chain-type imputation methods. The theoretical results are derived and a comparative study is conducted; the results are quite encouraging, providing an improvement over the cited work.
High-quality datasets are of paramount importance for the operation and planning of wind farms. However, the datasets collected by the supervisory control and data acquisition (SCADA) system may contain missing data due to factors such as sensor failure and communication congestion. In this paper, a data-driven approach is proposed to fill missing wind farm data based on a context encoder (CE), which consists of an encoder, a decoder, and a discriminator. Through deep convolutional neural networks, the proposed method automatically explores the complex nonlinear characteristics of the datasets, which are difficult to model explicitly. The method not only makes full use of the surrounding context information via the reconstruction loss, but also makes the filled data look realistic via the adversarial loss. In addition, the correlation among multiple missing attributes is taken into account by adjusting the format of the input data. Simulation results show that the CE outperforms traditional methods on wind farm attributes with hallmark characteristics such as large peaks, large valleys, and fast ramps. Moreover, the CE shows stronger generalization across different missing-data scales than traditional methods such as the auto-encoder, K-means, k-nearest neighbor, back-propagation neural network, cubic interpolation, and the conditional generative adversarial network.
A control valve is one of the most widely used machines in hydraulic systems. However, it often works in harsh environments, and failures occur from time to time. An intelligent and robust control valve fault diagnosis is therefore important for the operation of the system. In this study, a fault diagnosis based on mathematical model (MM) imputation and a modified deep residual shrinkage network (MDRSN) is proposed to address the problem that data-driven models for control valves are susceptible to changing operating conditions and missing data. Multiple fault time-series samples of the control valve at different openings were collected to verify the effectiveness of the proposed method, and its effects on missing-data imputation and fault diagnosis were analyzed. Compared with random and k-nearest neighbor (KNN) imputation, the accuracy of MM-based imputation improved by 17.87% and 21.18%, respectively, at a 20.00% data missing rate and valve openings from 10% to 28%. Furthermore, the results show that the proposed MDRSN maintains high fault diagnosis accuracy with missing data.
Traditional methods for solving linear systems have quickly become impractical due to the increasing size of available data. Utilizing massive amounts of data is further complicated when the data are incomplete or have missing entries. In this work, we address the obstacles presented when working with large and incomplete data simultaneously. In particular, we propose to adapt the stochastic gradient descent method to address missing data in linear systems. Our proposed algorithm, the Stochastic Gradient Descent for Missing Data method (mSGD), is introduced, and theoretical convergence guarantees are provided. In addition, we include numerical experiments on simulated and real-world data that demonstrate the usefulness of our method.
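A naive sketch of row-wise stochastic gradient descent when some entries of the system matrix are unobserved: each step uses only the observed coordinates of the sampled row, with missing entries treated as zero. The actual mSGD method derives an unbiased gradient correction that this zero-fill variant deliberately omits; the function name and defaults are ours.

```python
import numpy as np

def sgd_missing(A, b, mask, lr=0.05, epochs=2000, seed=0):
    """Row-wise SGD for the least-squares problem ||Ax - b||^2 when
    mask marks which entries of A were observed. Missing entries are
    zero-filled, so the gradient is biased; mSGD's unbiased correction
    is not reproduced here."""
    rng = np.random.default_rng(seed)
    Az = np.where(mask, A, 0.0)                     # zero-fill missing entries
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        i = rng.integers(len(b))                    # sample one row
        g = (Az[i] @ x - b[i]) * Az[i]              # row-wise gradient
        x -= lr * g
    return x
```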
Next Generation Sequencing (NGS) provides an effective basis for estimating the survival time of cancer patients, but it also poses the problem of high data dimensionality. In addition, some patients drop out of the study, leaving the data incomplete, so a method is needed for estimating the mean of a response variable with missing values in ultra-high-dimensional datasets. In this paper, we propose a two-stage ultra-high-dimensional variable screening method, RF-SIS, based on random forest regression, which effectively addresses the difficulty of estimating missing values when the data dimension is excessive. After dimension reduction with RF-SIS, mean imputation is performed on the missing responses. Results on simulated data show that, compared with directly deleting missing observations, the RF-SIS-MI estimates have significant advantages in terms of interval coverage proportion, average interval length, and average absolute deviation.
The generalized linear model is an indispensable tool for analyzing non-Gaussian response data, with both canonical and non-canonical link functions in common use. When missing values are present, many existing methods in the literature depend heavily on an unverifiable assumption about the missing-data mechanism, and they fail when the assumption is violated. This paper proposes a missing-data mechanism that is as generally applicable as possible, covering both ignorable and nonignorable cases as well as missing values in both the response and the covariates. Under this general mechanism, the authors adopt an approximate conditional likelihood method to estimate the unknown parameters. They rigorously establish the regularity conditions under which the parameters are identifiable under the approximate conditional likelihood approach and, for identifiable parameters, prove the asymptotic normality of the estimators obtained by maximizing it. Simulation studies evaluate the finite-sample performance of the proposed estimators and of estimators from some existing methods. Finally, a biomarker analysis in a prostate cancer study illustrates the proposed method.
In wireless sensor networks (WSNs), the performance of applications is highly dependent on the quality of the data collected. Unfortunately, missing data are almost inevitable during data acquisition and transmission. Existing methods often rely on prior information, such as low-rank structure or spatiotemporal correlation, when recovering missing WSN data. In realistic application scenarios, however, such prior information is very difficult to obtain from incomplete datasets. We therefore aim to recover missing WSN data effectively without depending on prior information. By designing a measurement matrix that captures the positions of the missing data together with a sparse representation matrix, a compressive sensing (CS) based recovery model is established. We then design a comparison standard to select the best sparse representation basis and introduce the average cross-correlation to examine the rationality of the model. Furthermore, an improved fast matching pursuit algorithm is proposed to solve it. Simulation results show that the proposed method can effectively recover missing WSN data.
In wireless sensor networks, missing data are a common problem caused by sensor faults, time synchronization, malicious attacks, and communication malfunctions, and they may degrade the network's performance or lead to poor decisions. It is therefore necessary to estimate the missing data effectively. A double weighted least squares support vector machine (DWLS-SVM) model for missing-data estimation in wireless sensor networks is proposed in this paper. The algorithm first applies a weighted LS-SVM (WLS-SVM) to estimate the missing data in the temporal and spatial domains separately, and then uses a weighted average of these two candidates as the final estimate. DWLS-SVM allows for outliers in the dataset and fully exploits the spatio-temporal dependencies among sensor nodes, making the estimates more robust and precise. Experimental results on a real-world dataset demonstrate that the proposed algorithm is robust to outliers and estimates the missing values accurately.
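The final fusion step, a weighted average of the temporal and spatial candidates, can be stated directly. In the paper the weights reflect the two models' reliability; here they are plain inputs to keep the sketch self-contained.

```python
def fuse_estimates(temporal, spatial, w_t, w_s):
    """Weighted average of a temporal-domain estimate and a
    spatial-domain estimate of the same missing reading. The weight
    selection rule of DWLS-SVM is not reproduced here."""
    return (w_t * temporal + w_s * spatial) / (w_t + w_s)
```

Equal weights reduce to the simple mean; raising one weight pulls the fused value toward the more trusted domain.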
Background: In discrete-time event history analysis, subjects are measured once each time period until they experience the event, prematurely drop out, or the study concludes. This implies that a subject's event status in each time period determines whether he or she should be measured in subsequent periods. For that reason, intermittently missing event status causes a problem because, unlike in other repeated-measurement designs, it does not make sense to simply ignore the corresponding missing event status in the analysis (as long as the dropout is ignorable). Method: We used Monte Carlo simulation to evaluate and compare various alternatives for dealing with missing event status, including event-occurrence recall, assumed event (non-)occurrence, case deletion, period deletion, and single and multiple imputation. We also demonstrate the methods' performance in the analysis of an empirical example on relapse to drug use. Result: The strategies assuming event (non-)occurrence and the recall strategy performed worst, with substantial parameter bias and a sharp decrease in coverage rate. Deletion methods suffered from either loss of power or undercoverage resulting from a biased standard error. Single imputation removed the bias but still showed undercoverage. Multiple imputation performed reasonably, with a negligible standard-error bias leading to a gradual decrease in power. Conclusion: On the basis of the simulation results and the real example, we provide practical guidance to researchers on the best ways to deal with missing event history data.
This paper presents some installation and data-analysis issues from an ongoing urban air temperature and humidity measurement campaign in Hangzhou and Ningbo, China. The location of the measurement sites, the positioning of the sensors, and the harsh conditions of an urban environment can result in missing values and observations that are unrepresentative of the local urban microclimate. Missing data and erroneous values in micro-scale weather time series can bias the data analysis and produce false correlations and wrong conclusions when deriving the specific local weather patterns. A methodology is presented for identifying values that could be false and for determining whether they are "noise". Seven statistical methods were evaluated on their performance in replacing missing and erroneous values in urban weather time series. The two methods that propose replacement with the mean values from sensors at locations with a Sky View Factor similar to that of the target sensor, and from the sensors closest to the target's location, performed well in all Day-Night and Cold-Warm day scenarios. However, during night time in warm weather, replacement with the mean air temperature of the nearest locations outperformed all other methods. The results give some initial evidence of the distinctive development of the urban microclimate in time and space under different regional weather forcings.
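The nearest-locations replacement strategy amounts to averaging the corresponding readings of neighboring sensors; a minimal sketch, assuming the series are aligned on the same timestamps (the function name is ours).

```python
import numpy as np

def fill_from_neighbors(target, neighbor_series):
    """Replace missing values in the target sensor's series with the
    mean of the corresponding readings at neighboring sites, as in the
    nearest-locations strategy evaluated above."""
    target = target.astype(float).copy()
    stacked = np.vstack(neighbor_series)
    neighbor_mean = np.nanmean(stacked, axis=0)     # per-timestamp mean
    gaps = np.isnan(target)
    target[gaps] = neighbor_mean[gaps]
    return target
```

The Sky-View-Factor variant is identical except that the neighbor set is chosen by microclimate similarity rather than distance.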
Blast furnace data processing is prone to problems such as outliers. To overcome these problems and identify an improved method for processing blast furnace data, we conducted an in-depth study of such data. Based on data samples from selected iron and steel companies, the data were classified into types according to their characteristics, and appropriate methods were selected to process each type in order to remedy the deficiencies and outliers of the original data. Linear interpolation was used to fill gaps in the divided continuation data, the K-nearest neighbor (KNN) algorithm was used to fill correlation data with an internal law, and periodic statistical data were filled with averages. The filling error rate was low, and the goodness of fit exceeded 85%. For outlier screening, indicator parameters were added according to the continuity, relevance, and periodicity of the different data types, and a variety of algorithms were applied. Analysis of the screening results shows that a large amount of useful information in the data was retained while ineffective outliers were eliminated. Standardized processing of blast furnace big data, as the basis of applied research on it, can serve as an important means to improve data quality and retain data value.
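The linear-interpolation fill used for the continuation-type data can be sketched with NumPy's `interp`; this assumes at least one observed value on each side of interior gaps.

```python
import numpy as np

def linear_interp_fill(y):
    """Fill gaps in a continuation-type series by linear interpolation
    between the nearest observed neighbors, as done for the divided
    continuation data above."""
    y = y.astype(float).copy()
    t = np.arange(len(y))
    obs = ~np.isnan(y)
    y[~obs] = np.interp(t[~obs], t[obs], y[obs])    # straight-line fill
    return y
```

The KNN fill for correlation data follows the same pattern as the KNN imputer shown earlier in this listing, and the periodic data simply take per-period averages.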
Rhododendron is famous for its high ornamental value. However, the genus is taxonomically difficult, and the relationships within Rhododendron remain unresolved. In addition, the origin of key morphological characters of high horticultural value needs to be explored. Both problems largely hinder the utilization of germplasm resources. Most studies that attempted to disentangle the phylogeny of Rhododendron used only a few genomic markers and lacked large-scale sampling, resulting in low clade support and contradictory phylogenetic signals. Here, we used restriction-site associated DNA sequencing (RAD-seq) data and morphological traits for 144 species of Rhododendron, representing all subgenera and most sections and subsections of this species-rich genus, to decipher its intricate evolutionary history and reconstruct ancestral states. Our results provide high resolution at the subgenus and section levels of Rhododendron based on RAD-seq data. Both the optimal phylogenetic tree and the split tree recovered five lineages within Rhododendron. Subg. Therorhodion (clade I) formed the basal lineage. Subg. Tsutsusi and Azaleastrum formed clade II and are sister groups. Clade III included all scaly rhododendron species. Subg. Pentanthera (clade IV) formed a sister group to Subg. Hymenanthes (clade V). Ancestral state reconstruction showed that the Rhododendron ancestor was a deciduous woody plant with terminal inflorescences, ten stamens, leaf blades without scales, and a broadly funnelform corolla of pink or purple color. This study shows that phylogenetic trees constructed from RAD-seq data, with their high clade support, can resolve the evolutionary history of Rhododendron. It also provides an example of resolving discordant signals in phylogenetic trees and demonstrates the feasibility of applying RAD-seq with large amounts of missing data to decipher intricate evolutionary relationships. Additionally, the reconstructed ancestral states of six important characters provide insights into the innovation of key characters in Rhododendron.
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (Grant Number 2020R1A6A1A03040583).
文摘Time series forecasting has become an important aspect of data analysis and has many real-world applications.However,undesirable missing values are often encountered,which may adversely affect many forecasting tasks.In this study,we evaluate and compare the effects of imputationmethods for estimating missing values in a time series.Our approach does not include a simulation to generate pseudo-missing data,but instead perform imputation on actual missing data and measure the performance of the forecasting model created therefrom.In an experiment,therefore,several time series forecasting models are trained using different training datasets prepared using each imputation method.Subsequently,the performance of the imputation methods is evaluated by comparing the accuracy of the forecasting models.The results obtained from a total of four experimental cases show that the k-nearest neighbor technique is the most effective in reconstructing missing data and contributes positively to time series forecasting compared with other imputation methods.
Abstract: This research was an effort to select the best imputation method for missing upper-air temperature data over 24 standard pressure levels. We implemented four imputation techniques for the missing data: inverse distance weighting, bilinear, natural, and nearest-neighbor interpolation. The performance indicators adopted in this research were the root mean square error (RMSE), absolute mean error (AME), correlation coefficient, and coefficient of determination (R²). We randomly withheld 30% of the total samples (324 in all) and predicted them from the remaining 70%. Although all four interpolation methods seemed adequate for imputing air temperature data (RMSE and AME below 1), the bilinear method was the most accurate, with the smallest errors. RMSE for the bilinear method remained below 0.01 at all pressure levels except 1000 hPa, where it was 0.6. AME was low (below 0.1) at all pressure levels under bilinear imputation. A very strong correlation (above 0.99) was found between the actual and predicted air temperature data with this method, and the high coefficient of determination (0.99) for bilinear interpolation indicates the best fit to the surface. We found similar results for imputation with the natural interpolation method, but after inspecting scatter plots for each month, imputations with that method appeared slightly less accurate in certain months than the bilinear method.
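Bilinear interpolation and the reported error metrics can be reproduced in miniature with SciPy. The grid and field below are synthetic stand-ins, not the study's radiosonde data; for a field that is exactly bilinear, the interpolant recovers withheld points with essentially zero RMSE and AME.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A regular 2-D grid with a bilinear "temperature-like" field.
x = np.linspace(0, 10, 11)
y = np.linspace(0, 10, 11)
xx, yy = np.meshgrid(x, y, indexing="ij")
field = 2.0 * xx + 3.0 * yy               # bilinear interpolation is exact here

interp = RegularGridInterpolator((x, y), field, method="linear")
pts = np.array([[2.5, 7.3], [4.1, 0.6]])  # "missing" locations to estimate
pred = interp(pts)
truth = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]

# The three error metrics used in the study.
rmse = np.sqrt(np.mean((pred - truth) ** 2))
ame = np.mean(np.abs(pred - truth))
r2 = 1 - np.sum((pred - truth) ** 2) / np.sum((truth - truth.mean()) ** 2)
```

On real temperature fields the surface is only approximately bilinear, so the errors are small but nonzero, as the abstract reports.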
Abstract: In this study, we investigate the effects of missing data when estimating HIV/TB co-infection. We revisit the concept of missing data and examine three available approaches for dealing with missingness. The main objective is to identify the best method for correcting missing data in the TB/HIV co-infection setting. We employ both empirical data analysis and an extensive simulation study to examine the effects of missing data on accuracy, sensitivity, specificity, and train and test error for the different approaches. The novelty of this work hinges on the use of modern statistical learning algorithms when treating missingness. In the empirical analysis, imputations were performed on both the HIV data and the TB-HIV co-infection data, with the missing values imputed using the different approaches. In the simulation study, 0% (complete case), 10%, 30%, 50%, and 80% of the data were drawn randomly and replaced with missing values. Results show that complete-case analysis gave a co-infection rate (95% confidence interval) of 29% (25%, 33%), the weighted method 27% (23%, 31%), the likelihood-based approach 26% (24%, 28%), and the multiple imputation (MI) approach 21% (20%, 22%). In conclusion, MI remains the best approach for dealing with missing data, and failure to apply it results in overestimation of the HIV/TB co-infection rate by 8%.
Abstract: In his 1987 classic book on multiple imputation (MI), Rubin used the fraction of missing information, γ, to define the relative efficiency (RE) of MI as RE = (1 + γ/m)^(−1/2), where m is the number of imputations, leading to the conclusion that a small m (≤5) would be sufficient for MI. However, evidence has been accumulating that many more imputations are needed. Why would the apparently sufficient m deduced from the RE actually be too small? The answer may lie with γ. In this research, γ was determined at fractions of missing data (δ) of 4%, 10%, 20%, and 29% using the 2012 Physician Workflow Mail Survey of the National Ambulatory Medical Care Survey (NAMCS). The γ values were strikingly small, ranging from the order of 10^(−6) to 0.01. As δ increased, γ usually increased but sometimes decreased. How the data were analysed had the dominant effect on γ, overshadowing the effect of δ. The results suggest that it is impossible to predict γ from δ and that it may not be appropriate to use the γ-based RE to determine a sufficient m.
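Rubin's relative-efficiency formula is simple enough to evaluate directly, which makes the abstract's point concrete: with γ as small as reported here (10^(−6) to 0.01), even very few imputations are already near-perfectly efficient by this criterion.

```python
# Rubin's relative efficiency of MI in standard-error units:
# RE = (1 + gamma/m)^(-1/2), with gamma the fraction of missing
# information and m the number of imputations.
def relative_efficiency(gamma: float, m: int) -> float:
    return (1.0 + gamma / m) ** -0.5

# Tiny gamma (as found in the NAMCS analysis) gives RE ~ 1 even for m = 2,
# while a large gamma still looks "efficient enough" at m = 5 by this
# measure, which is exactly why the RE can be misleading.
re_small_gamma = relative_efficiency(0.01, 2)
re_large_gamma = relative_efficiency(0.5, 5)
```

The formula is monotone in m, so RE alone never flags the cases where accumulating evidence says more imputations are needed.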
Abstract: The prevalence of a disease in a population is defined as the proportion of people who are infected. Selection bias in disease prevalence estimates occurs if non-participation in testing is correlated with disease status. Missing data are commonly encountered in most medical research. Unfortunately, they are often neglected or not properly handled during analytic procedures, which may substantially bias the results of the study, reduce the study power, and lead to invalid conclusions. The goal of this study is to illustrate how to estimate prevalence in the presence of missing data. We consider a case where the variable of interest (the response variable) is binary, some of the observations are missing, and all the covariates are fully observed. In most such cases, the statistic of interest for binary data is the prevalence. We develop a two-stage approach to improve the prevalence estimates: in the first stage, we use a logistic regression model to predict the missing binary observations, and in the second stage we recalculate the prevalence using the observed data together with the imputed values. Such a model would be of great interest in research studies involving HIV/AIDS, in which people often refuse to donate blood for testing yet are willing to provide other covariates. The prevalence estimation method is illustrated using simulated data and applied to HIV/AIDS data from the Kenya AIDS Indicator Survey, 2007.
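The two-stage idea can be sketched with scikit-learn. All data below are simulated and the variable names are illustrative; in stage 2 the missing binary responses are replaced by predicted probabilities rather than hard 0/1 labels, a common variant assumed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(35, 10, n)                       # fully observed covariate
p_true = 1 / (1 + np.exp(-(-3 + 0.06 * age)))     # true infection probability
status = rng.binomial(1, p_true)                  # binary response
tested = rng.random(n) > 0.3                      # ~30% refuse testing

# Stage 1: fit a logistic model on complete cases only.
model = LogisticRegression().fit(age[tested].reshape(-1, 1), status[tested])
p_hat = model.predict_proba(age[~tested].reshape(-1, 1))[:, 1]

# Stage 2: recompute prevalence over observed plus predicted responses.
prevalence = (status[tested].sum() + p_hat.sum()) / n
```

Under non-random refusal the complete-case mean would be biased, while the model-based stage-2 estimate uses the covariates of non-responders to correct it.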
Abstract: The absence of some data values in any observed dataset has been a real hindrance to achieving valid results in statistical research. This paper aimed at the widespread missing-data problem faced by analysts and statisticians in academia and professional environments. Several data-driven methods were studied with the goal of obtaining accurate data. Projects that rely heavily on data face this missing data problem, and since machine learning models are only as good as the data used to train them, the missing data problem has a real impact on the solutions developed for real-world problems. Therefore, this dissertation attempts to solve the problem using different mechanisms: the effectiveness of both traditional and modern data imputation techniques is tested by determining the loss of statistical power when the different approaches are used to tackle the missing data problem. By the end of this research, it should be easy to establish which methods are best for handling the research problem. It is recommended that Multivariate Imputation by Chained Equations (MICE) for MAR missingness is the best approach to dealing with missing data.
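A minimal MICE-style sketch is possible with scikit-learn's IterativeImputer, which cycles a conditional regression over the incomplete columns. True MICE draws multiple completed datasets and pools the analyses; a single completion is shown here for brevity, and the two-column dataset is an illustrative assumption.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)
n = 500
x1 = rng.normal(size=n)
x2 = 2 * x1 + rng.normal(scale=0.1, size=n)     # strongly related columns
X = np.column_stack([x1, x2])

X_miss = X.copy()
X_miss[rng.random(n) < 0.2, 1] = np.nan         # random gaps in the second column

# Each incomplete column is modeled conditionally on the others.
X_filled = IterativeImputer(random_state=0).fit_transform(X_miss)
```

Because x2 is nearly determined by x1, the chained-equations fill recovers the missing values far more accurately than an unconditional mean fill would.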
Abstract: The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise, yet most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the proposed estimator is rate-optimal. Finally, a numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
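A toy version of a hard-thresholding covariance estimator from incomplete samples can be sketched as follows. The pairwise-complete covariance computation and the threshold value are illustrative choices, not the paper's exact generalized sample covariance or tuning.

```python
import numpy as np

def thresholded_cov(X_miss: np.ndarray, tau: float) -> np.ndarray:
    """Pairwise-complete sample covariance with hard thresholding."""
    p = X_miss.shape[1]
    S = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            # Use only rows where both coordinates are observed.
            ok = ~np.isnan(X_miss[:, i]) & ~np.isnan(X_miss[:, j])
            xi, xj = X_miss[ok, i], X_miss[ok, j]
            S[i, j] = np.mean((xi - xi.mean()) * (xj - xj.mean()))
    off = ~np.eye(p, dtype=bool)
    S[off & (np.abs(S) < tau)] = 0.0          # zero small off-diagonal entries
    return S

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))                 # true covariance: identity (sparse)
X[rng.random(X.shape) < 0.15] = np.nan        # 15% missing completely at random
S_hat = thresholded_cov(X, tau=0.2)
```

For a sparse true covariance, thresholding suppresses the off-diagonal sampling noise that pairwise deletion amplifies, which is the intuition behind the estimator's rate-optimality.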
Funding: Supported by the National Natural Science Foundation of China (No. 81973705).
Abstract: Background: Missing data frequently occur in clinical studies. With the development of precision medicine, there is increased interest in N-of-1 trials, and Bayesian models are one of the main statistical methods for analyzing their data. This simulation study aimed to compare two statistical methods for handling missing values of quantitative data in Bayesian N-of-1 trials. Methods: Simulated N-of-1 trial data with different coefficients of autocorrelation, effect sizes, and missing ratios were generated with the SAS 9.1 system. The missing values were filled with mean filling and regression filling, respectively, under the different coefficients of autocorrelation, effect sizes, and missing ratios using SPSS 25.0. Bayesian models were built to estimate the posterior means with WinBUGS 14. Results: When the missing ratio is relatively small, e.g., 5%, missing values have relatively little effect on the results. Therapeutic effects may be underestimated when the coefficient of autocorrelation increases and no filling is used. However, effects may be overestimated when mean or regression filling is used, and the results after mean filling are closer to the actual effect than those after regression filling. With a moderate missing ratio, the estimated effect after mean filling is closer to the actual effect than that after regression filling. When a large missing ratio (20%) occurs, missing data can lead to significant underestimation of the effect; in this case, the estimated effect after regression filling is closer to the actual effect than that after mean filling. Conclusion: Missing data can affect the therapeutic effects estimated with Bayesian models in N-of-1 trials. The present study suggests that mean filling can be used when the missing ratio is ≤10%; otherwise, regression filling may be preferable.
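The two filling strategies compared above can be sketched side by side on a simulated series. The trending series and 10% missing ratio are illustrative assumptions, not the study's SAS-generated trial data.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(60, dtype=float)
y = 0.5 * t + rng.normal(scale=1.0, size=60)   # series with a trend
miss = rng.choice(60, size=6, replace=False)   # 10% missing ratio
y_obs = y.copy()
y_obs[miss] = np.nan
ok = ~np.isnan(y_obs)

# Mean filling: replace every gap with the observed mean.
mean_fill = y_obs.copy()
mean_fill[~ok] = np.nanmean(y_obs)

# Regression filling: fit a linear model on observed points, predict gaps.
slope, intercept = np.polyfit(t[ok], y_obs[ok], 1)
reg_fill = y_obs.copy()
reg_fill[~ok] = slope * t[~ok] + intercept
```

For strongly trending (highly autocorrelated) data, regression filling tracks the local level while mean filling pulls gaps toward the global mean, which is consistent with the study's finding that the preferable method depends on the missing ratio and autocorrelation.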
Abstract: This article deals with some new chain imputation methods using two auxiliary variables under the missing completely at random (MCAR) mechanism. The proposed generalized classes of chain imputation methods are tested for optimality in terms of MSE. They can be considered an efficient extension of the work of Singh and Horn (Metrika 51:267-276, 2000), Singh and Deo (Stat Pap 44:555-579, 2003), Singh (Stat A J Theor Appl Stat 43(5):499-511, 2009), Kadilar and Cingi (Commun Stat Theory Methods 37:2226-2236, 2008), and Diana and Perri (Commun Stat Theory Methods 39:3245-3251, 2010). The performance of the proposed chain imputation methods is investigated relative to conventional chain-type imputation methods. Theoretical results are derived and a comparative study is conducted; the results are quite encouraging, providing an improvement over the cited work.
Funding: This work was supported by the China Scholarship Council.
Abstract: High-quality datasets are of paramount importance for the operation and planning of wind farms. However, the datasets collected by the supervisory control and data acquisition (SCADA) system may contain missing data due to various factors such as sensor failure and communication congestion. In this paper, a data-driven approach is proposed to fill the missing data of wind farms based on a context encoder (CE), which consists of an encoder, a decoder, and a discriminator. Through deep convolutional neural networks, the proposed method is able to automatically explore the complex nonlinear characteristics of the datasets that are difficult to model explicitly. The method not only makes full use of the surrounding context information through the reconstruction loss, but also makes the filled data look realistic through the adversarial loss. In addition, the correlation among multiple missing attributes is taken into account by adjusting the format of the input data. The simulation results show that the CE performs better than traditional methods for wind farm attributes with hallmark characteristics such as large peaks, large valleys, and fast ramps. Moreover, the CE shows stronger generalization ability than traditional methods such as the auto-encoder, K-means, k-nearest neighbor, back-propagation neural network, cubic interpolation, and conditional generative adversarial network across different missing data scales.
Funding: Supported by the National Natural Science Foundation of China (No. 51875113), the Natural Science Joint Guidance Foundation of Heilongjiang Province of China (No. LH2019E027), and the PhD Student Research and Innovation Fund of the Fundamental Research Funds for the Central Universities (No. XK2070021009), China.
Abstract: A control valve is one of the most widely used machines in hydraulic systems. However, it often works in harsh environments, and failures occur from time to time. An intelligent and robust fault diagnosis for control valves is therefore important for the operation of the system. In this study, a fault diagnosis based on mathematical model (MM) imputation and a modified deep residual shrinkage network (MDRSN) is proposed to solve the problem that data-driven models for control valves are susceptible to changing operating conditions and missing data. Multiple fault time-series samples of the control valve at different openings were collected to verify the effectiveness of the proposed method, and its performance in missing data imputation and fault diagnosis was analyzed. Compared with random and k-nearest neighbor (KNN) imputation, the accuracy of MM-based imputation improved by 17.87% and 21.18%, respectively, at a 20.00% data missing rate with valve openings from 10% to 28%. Furthermore, the results show that the proposed MDRSN can maintain high fault diagnosis accuracy with missing data.
Funding: Needell was partially supported by NSF CAREER Grant No. 1348721, NSF BIGDATA Grant No. 1740325, and the Alfred P. Sloan Fellowship. Ma was supported in part by NSF CAREER Grant No. 1348721, the CSRC Intellisis Fellowship, and the Edison International Scholarship.
Abstract: Traditional methods for solving linear systems have quickly become impractical due to the increasing size of available data. Utilizing massive amounts of data is further complicated when the data are incomplete or have missing entries. In this work, we address the obstacles presented when working with large and incomplete data simultaneously. In particular, we propose to adapt the stochastic gradient descent method to address missing data in linear systems. Our proposed algorithm, the Stochastic Gradient Descent for Missing Data method (mSGD), is introduced and theoretical convergence guarantees are provided. In addition, we include numerical experiments on simulated and real-world data that demonstrate the usefulness of our method.
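A rough sketch of the mSGD idea, under stated assumptions: solve a least-squares system when each entry of a sampled row is observed independently with known probability q, using a bias-corrected stochastic gradient so that its expectation matches the full-data gradient. The problem sizes, step-size schedule, and correction below are illustrative choices in the spirit of the method, not the paper's exact algorithm or tuned constants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, q = 200, 10, 0.8
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

mask = (rng.random((n, d)) < q).astype(float)  # which entries survive
A_obs = A * mask                               # zeros where data is missing

x = np.zeros(d)
for it in range(50000):
    i = rng.integers(n)
    a = A_obs[i]
    # With E[a a^T] = q^2 A_i A_i^T + (q - q^2) diag(A_i^2), the terms
    # below give an unbiased estimate of (A_i A_i^T) x and of b_i A_i.
    Gx = (a @ x) * a / q**2 + (1 / q - 1 / q**2) * (a * a) * x
    grad = Gx - (b[i] / q) * a
    x -= 1.0 / (100 + it) * grad               # decaying step size
```

Because each stochastic gradient is unbiased despite the missing entries, the iterates drift toward the true solution even though no row of A is ever fully observed.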
Abstract: Next Generation Sequencing (NGS) provides an effective basis for estimating the survival time of cancer patients, but it also poses the problem of high data dimensionality; in addition, some patients drop out of the study, leaving the data incomplete. A method is therefore needed for estimating the mean of a response variable with missing values in ultra-high dimensional datasets. In this paper, we propose a two-stage ultra-high dimensional variable screening method, RF-SIS, based on random forest regression, which effectively addresses the estimation of missing values under excessive data dimension. After dimension reduction with RF-SIS, mean imputation is performed on the missing responses. Results on simulated data show that, compared with directly deleting missing observations, the RF-SIS-MI estimates have significant advantages in terms of interval coverage proportion, average interval length, and mean absolute deviation.
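The screening stage can be sketched in the spirit of RF-SIS: fit a random forest on all p features, rank them by impurity importance, and keep the top d for the downstream imputation step. The sizes, signal strength, and use of plain impurity importance are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n, p, d = 300, 50, 5
X = rng.normal(size=(n, p))
y = 5 * X[:, 0] + 4 * X[:, 1] + 0.1 * rng.normal(size=n)  # 2 active features

# Rank all features by random-forest importance and keep the top d.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:d]
```

After screening, any imputation or mean estimation only needs to work in the reduced d-dimensional space, which is what makes the missing-response step tractable for ultra-high dimensional data.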
Funding: Supported by the Chinese 111 Project B14019, the US National Science Foundation under Grant Nos. DMS-1305474 and DMS-1612873, and US National Institutes of Health Award UL1TR001412.
Abstract: The generalized linear model is an indispensable tool for analyzing non-Gaussian response data, with both canonical and non-canonical link functions in widespread use. When missing values are present, many existing methods in the literature depend heavily on an unverifiable assumption about the missing data mechanism, and they fail when the assumption is violated. This paper proposes a missing data mechanism that is as generally applicable as possible, covering both ignorable and nonignorable missing data, as well as missing values in either the response or the covariates. Under this general missing data mechanism, the authors adopt an approximate conditional likelihood method to estimate the unknown parameters. The authors rigorously establish the regularity conditions under which the unknown parameters are identifiable under the approximate conditional likelihood approach. For the identifiable parameters, they prove the asymptotic normality of the estimators obtained by maximizing the approximate conditional likelihood. Simulation studies are conducted to evaluate the finite sample performance of the proposed estimators as well as estimators from some existing methods. Finally, the authors present a biomarker analysis from a prostate cancer study to illustrate the proposed method.
Funding: Supported by the National Natural Science Foundation of China (No. 61871400) and the Natural Science Foundation of Jiangsu Province of China (No. BK20171401).
Abstract: In wireless sensor networks (WSNs), the performance of related applications is highly dependent on the quality of the collected data. Unfortunately, missing data are almost inevitable during data acquisition and transmission. Existing methods often rely on prior information, such as low-rank structure or spatiotemporal correlation, when recovering missing WSN data; however, in realistic application scenarios it is very difficult to obtain such prior information from incomplete datasets. We therefore aim to recover missing WSN data effectively without depending on prior information. By designing a measurement matrix that captures the positions of missing data together with a sparse representation matrix, a compressive sensing (CS) based missing data recovery model is established. We then design a criterion for selecting the best sparse representation basis and introduce the average cross-correlation to examine the rationality of the established model. Furthermore, an improved fast matching pursuit algorithm is proposed to solve the model. Simulation results show that the proposed method can effectively recover missing WSN data.
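The CS recovery principle can be illustrated with a standard matching-pursuit solver: a signal that is sparse in some representation is recovered from far fewer random measurements than its length. This uses scikit-learn's orthogonal matching pursuit as a stand-in for the paper's improved fast matching pursuit, and the signal, measurement matrix, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n, m, k = 128, 60, 5                      # signal length, measurements, sparsity
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3  # sparse coefficients

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ s                                 # compressed observations

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
s_hat = omp.fit(Phi, y).coef_
```

In the WSN setting, the rows of Phi would be tied to which readings actually arrived, so the same machinery fills gaps without any low-rank or correlation prior.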
Funding: Supported by the Basic Research Foundation of Beijing Institute of Technology (20070542009).
Abstract: In wireless sensor networks, missing data are a common problem caused by sensor faults, time synchronization issues, malicious attacks, and communication malfunctions, and they may degrade network performance or lead to inefficient decisions. It is therefore necessary to estimate the missing data effectively. A double weighted least squares support vector machines (DWLS-SVM) model for missing data estimation in wireless sensor networks is proposed in this paper. The algorithm first applies the weighted LS-SVM (WLS-SVM) to estimate the missing data in the temporal and spatial domains separately, and then uses the weighted average of these two candidates as the final estimate. DWLS-SVM accounts for possible outliers in the dataset and fully exploits the spatio-temporal dependencies among sensor nodes, which makes the estimates more robust and precise. Experimental results on a real-world dataset demonstrate that the proposed algorithm is robust to outliers and can estimate the missing values accurately.
Abstract: Background: In discrete-time event history analysis, subjects are measured once each time period until they experience the event, prematurely drop out, or the study concludes. This implies that a subject's event status in each time period determines whether he or she should be measured in subsequent periods. For that reason, intermittently missing event status causes a problem because, unlike in other repeated-measurement designs, it does not make sense to simply ignore the corresponding missing event status in the analysis (as long as the dropout is ignorable). Method: We used Monte Carlo simulation to evaluate and compare various alternatives, including event occurrence recall, assumed event (non-)occurrence, case deletion, period deletion, and single and multiple imputation methods, for dealing with missing event status. We also show the methods' performance in the analysis of an empirical example on relapse to drug use. Result: The strategies assuming event (non-)occurrence and the recall strategy performed worst, with substantial parameter bias and a sharp decrease in coverage rate. Deletion methods suffered from either loss of power or undercoverage resulting from a biased standard error. Single imputation removed the bias but showed undercoverage. Multiple imputation performed reasonably, with a negligible standard error bias leading to a gradual decrease in power. Conclusion: On the basis of the simulation results and the real example, we provide practical guidance to researchers on the best ways to deal with missing event history data.
Funding: L.B. would like to thank the Liveable Cities Project for funding a visit to Hangzhou and Ningbo in China to research the urban micro-climate and to collaborate with the Centre for Sustainable Energy Technologies at the University of Nottingham Ningbo (EPSRC funded: EP/J017698/1). The installation of the sensor network in Hangzhou and Ningbo is supported by the Ningbo Natural Science Foundation (No. 2012A610173) and the Ningbo Housing and Urban-Rural Development Committee (No. 201206).
Abstract: This paper presents installation and data analysis issues from an ongoing urban air temperature and humidity measurement campaign in Hangzhou and Ningbo, China. The location of the measurement sites, the positioning of the sensors, and the harsh conditions of an urban environment can result in missing values and observations that are unrepresentative of the local urban microclimate. Missing and erroneous values in micro-scale weather time series can bias the data analysis, produce false correlations, and lead to wrong conclusions when deriving local weather patterns. A methodology is presented for identifying potentially false values and for determining whether they are noise. Seven statistical methods were evaluated on their performance in replacing missing and erroneous values in urban weather time series. Two methods, replacement with the mean values from sensors at locations with a Sky View Factor similar to that of the target sensor, and replacement with the mean values from the sensors closest to the target's location, performed well for all day-night and cold-warm day scenarios. However, during night time in warm weather, replacement with the mean air temperature of the nearest locations outperformed all other methods. The results give initial evidence of the distinctive development of the urban microclimate in time and space under different regional weather forcings.
Funding: This work is financially supported by the National Natural Science Foundation of China (No. 52004096), the Hebei Province High-End Iron and Steel Metallurgical Joint Research Fund Project, China (No. E2019209314), the Scientific Research Program Project of the Hebei Education Department, China (No. QN2019200), and the Tangshan Science and Technology Planning Project, China (No. 19150241E).
Abstract: Blast furnace data processing is prone to problems such as outliers. To overcome these problems and identify an improved method for processing blast furnace data, we conducted an in-depth study of such data. Based on data samples from selected iron and steel companies, data types were classified according to their characteristics, and appropriate methods were then selected to address the deficiencies and outliers of the original blast furnace data. Linear interpolation was used to fill gaps in continuation data, the K-nearest neighbor (KNN) algorithm was used to fill correlated data with an internal law, and periodic statistical data were filled with averages. The filling error rate was low, and the goodness of fit was over 85%. For outlier screening, indicator parameters were added according to the continuity, correlation, and periodicity of the different data types, and a variety of algorithms were used for processing. Analysis of the screening results shows that a large amount of useful information in the data was retained while ineffective outliers were eliminated. Standardized processing of blast furnace big data, as the basis of applied research on such data, can serve as an important means of improving data quality and retaining data value.
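The three filling rules above can be sketched on a toy frame: linear interpolation for a continuation-type tag, KNN for a tag correlated with another, and the mean for a periodic statistic. The column names and distributions are illustrative, not real plant tags.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

rng = np.random.default_rng(9)
n = 100
temp = np.linspace(1100, 1200, n) + rng.normal(0, 2, n)
df = pd.DataFrame({
    "hot_blast_temp": temp,                             # continuation data
    "perm_index": 0.01 * temp + rng.normal(0, 0.2, n),  # correlated data
    "daily_output": rng.normal(9000, 300, n),           # periodic statistic
})
for col in df.columns:                                  # knock out 5 values each
    df.loc[rng.choice(n, 5, replace=False), col] = np.nan

# Rule 1: linear interpolation for smoothly continuing signals.
df["hot_blast_temp"] = df["hot_blast_temp"].interpolate(
    method="linear", limit_direction="both")
# Rule 2: KNN fill using the correlated column as context.
df["perm_index"] = KNNImputer(n_neighbors=5).fit_transform(
    df[["hot_blast_temp", "perm_index"]])[:, 1]
# Rule 3: mean fill for periodic statistics.
df["daily_output"] = df["daily_output"].fillna(df["daily_output"].mean())
```

Matching the filling rule to the data type, rather than applying one method everywhere, is the core of the standardized processing the study describes.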
Funding: Supported by the Ten Thousand Talent Program of Yunnan Province (Grant No. YNWR-QNBJ-2018-174), the Key Basic Research Program of Yunnan Province, China (Grant No. 202101BC070003), the National Natural Science Foundation of China (Grant No. 31901237), the Conservation Program for Plant Species with Extremely Small Populations in Yunnan Province (Grant No. 2022SJ07X-03), Key Technologies Research for the Germplasm of Important Woody Flowers in Yunnan Province (Grant No. 202302AE090018), and the Natural Science Foundation of Guizhou Province (Grant Nos. Qiankehejichu-ZK2021yiban 089 and Qiankehejichu-ZK2023yiban 035).
Abstract: Rhododendron is famous for its high ornamental value. However, the genus is taxonomically difficult, and the relationships within Rhododendron remain unresolved. In addition, the origin of key morphological characters of high horticultural value needs to be explored. Both problems largely hinder the utilization of germplasm resources. Most studies attempting to disentangle the phylogeny of Rhododendron used only a few genomic markers and lacked large-scale sampling, resulting in low clade support and contradictory phylogenetic signals. Here, we used restriction-site associated DNA sequencing (RAD-seq) data and morphological traits for 144 species of Rhododendron, representing all subgenera and most sections and subsections of this species-rich genus, to decipher its intricate evolutionary history and reconstruct ancestral states. Our results provided high resolution at the subgenus and section levels of Rhododendron based on RAD-seq data. Both the optimal phylogenetic tree and the split tree recovered five lineages within Rhododendron. Subg. Therorhodion (clade Ⅰ) formed the basal lineage. Subg. Tsutsusi and Azaleastrum formed clade Ⅱ and had a sister relationship. Clade Ⅲ included all scaly rhododendron species. Subg. Pentanthera (clade Ⅳ) formed a sister group to Subg. Hymenanthes (clade Ⅴ). Ancestral state reconstruction showed that the Rhododendron ancestor was a deciduous woody plant with terminal inflorescences, ten stamens, leaf blades without scales, and a broadly funnelform corolla of pink or purple color. This study demonstrates strong power to resolve the evolutionary history of Rhododendron, based on the high clade support of the phylogenetic tree constructed from RAD-seq data. It also provides an example of resolving discordant signals in phylogenetic trees and demonstrates the feasibility of applying RAD-seq with large amounts of missing data to decipher intricate evolutionary relationships. Additionally, the reconstructed ancestral states of six important characters provide insights into the innovation of key characters in Rhododendron.