Funding: This work is partially supported by the National Natural Science Foundation of China (10071090 and 10271013).
Abstract: This article proposes a statistical method for constructing reliability sampling plans under Type I censoring for items whose failure times follow either a normal or a lognormal distribution. The quality statistic is a method-of-moments estimator of a monotonic function of the unreliability. An approach for choosing the truncation time is recommended. The sample size and acceptability constant are determined approximately by using the Cornish-Fisher expansion for the quantiles of the distribution. Simulation results show that the method given in this article is feasible.
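The Cornish-Fisher step can be illustrated with a short, hedged sketch. The snippet below is not the authors' sampling-plan procedure; it only shows the standard fourth-order Cornish-Fisher correction, which adjusts a normal quantile for skewness and excess kurtosis and is the kind of quantile approximation the abstract refers to. The skewness and kurtosis values used are placeholders, not the cumulants of the paper's quality statistic.

```python
from scipy.stats import norm

def cornish_fisher_quantile(p, skew, ex_kurt):
    """Standard Cornish-Fisher approximation to the p-quantile of a
    standardized (zero-mean, unit-variance) statistic with the given
    skewness and excess kurtosis.  Illustrative only; the paper's
    quality statistic and its cumulants are not reproduced here."""
    z = norm.ppf(p)  # standard normal quantile
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ex_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)

# Example: 95th percentile of a mildly skewed, heavy-tailed statistic
print(cornish_fisher_quantile(0.95, skew=0.3, ex_kurt=0.2))
```

In a sampling plan, a corrected quantile of this kind would feed into the producer's- and consumer's-risk equations that fix the sample size and the acceptability constant; those equations are specific to the paper and are not reproduced here.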
Abstract: A composite random variable is a product (or sum of products) of statistically distributed quantities. Such a variable can represent the solution to a multi-factor quantitative problem submitted to a large, diverse, independent, anonymous group of non-expert respondents (the “crowd”). The objective of this research is to examine the statistical distribution of solutions from a large crowd to a quantitative problem involving image analysis and object counting. Theoretical analysis by the author, covering a range of conditions and types of factor variables, predicts that composite random variables are distributed log-normally to an excellent approximation. If the factors in a problem are themselves distributed log-normally, then their product is rigorously log-normal. A crowdsourcing experiment devised by the author and implemented with the assistance of a BBC (British Broadcasting Corporation) television show yielded a sample of approximately 2000 responses consistent with a log-normal distribution. The sample mean was within ~12% of the true count. However, a Monte Carlo simulation (MCS) of the experiment, employing either normal or log-normal random variables as factors to model the processes by which a crowd of 1 million might arrive at their estimates, resulted in a visually perfect log-normal distribution with a mean response within ~5% of the true count. The results of this research suggest that a well-modeled MCS, by simulating a sample of responses from a large, rational, and incentivized crowd, can provide a more accurate solution to a quantitative problem than might be attainable by direct sampling of a smaller crowd or an uninformed crowd, irrespective of size, that guesses randomly.
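A minimal Monte Carlo sketch of the composite-variable idea is given below, assuming hypothetical lognormal factors; the actual factors, the true count, and the incentive model used in the paper are not reproduced. Each simulated respondent estimates an object count as the true count times a product of a few multiplicative factors, and because a product of lognormal variables is itself lognormal, the crowd's estimates follow a lognormal distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_crowd(n_respondents, true_count=1000, n_factors=3, sigma=0.2):
    """Each respondent's estimate is the true count multiplied by a product
    of median-1 lognormal factors, so the estimates are lognormal.
    The factor count and spread are illustrative assumptions, not values
    taken from the paper."""
    factors = rng.lognormal(mean=0.0, sigma=sigma,
                            size=(n_respondents, n_factors))
    return true_count * factors.prod(axis=1)

estimates = simulate_crowd(1_000_000)
print("arithmetic mean of estimates:", estimates.mean())
print("median (geometric centre):   ", np.median(estimates))
```

With median-1 factors the median of the simulated estimates sits at the true count while the arithmetic mean is pulled slightly above it, which is the characteristic asymmetry of a lognormal response distribution.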
Funding: This project was supported and funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada through an NSERC Postgraduate Scholarship - Doctoral (PGS-D).
Abstract: In digital soil mapping (DSM), a fundamental assumption is that the spatial variability of the target variable can be explained by the predictors or environmental covariates. Strategies to adequately sample the predictors have been well documented, with the conditioned Latin hypercube sampling (cLHS) algorithm receiving the most attention in the DSM community. Despite advances in sampling design, a critical gap remains in determining the number of samples required for DSM projects. We propose a simple workflow and function coded in the R language to determine the minimum sample size for the cLHS algorithm based on histograms of the predictor variables, using the Freedman-Diaconis rule to determine the optimal bin width. Data preprocessing was included to correct for multimodal and non-normally distributed data, as these can affect sample size determination from the histogram. Based on a user-selected quantile range (QR) for the sample plan, the densities of the histogram bins at the upper and lower bounds of the QR were used as a scaling factor to determine the minimum sample size. This technique was applied to a field-scale set of environmental covariates for a well-sampled agricultural study site near Guelph, Ontario, Canada, and tested across a range of QRs. The results showed increasing minimum sample size with an increase in the QR selected. Minimum sample size increased from 44 to 83 when the QR increased from 50% to 95%, and then increased exponentially to 194 for the 99% QR. This technique provides an estimate of minimum sample size that can be used as an input to the cLHS algorithm.
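The histogram step can be sketched as follows. This is a hedged Python illustration of the Freedman-Diaconis rule and of reading off histogram bin densities at the bounds of a chosen quantile range (QR); it is not the authors' published R function, and the final conversion of these densities into a minimum sample size is paper-specific and deliberately left out.

```python
import numpy as np

def fd_bin_width(x):
    """Freedman-Diaconis rule: bin width = 2 * IQR / n^(1/3)."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 2 * iqr / len(x) ** (1 / 3)

def qr_bin_densities(x, qr=0.95):
    """Histogram a covariate with Freedman-Diaconis bins and return the
    bin densities at the lower and upper bounds of the quantile range
    `qr`.  Turning these densities into a minimum sample size follows
    the paper's workflow and is not reproduced here."""
    x = np.asarray(x, dtype=float)
    width = fd_bin_width(x)
    edges = np.arange(x.min(), x.max() + width, width)
    density, edges = np.histogram(x, bins=edges, density=True)
    lo, hi = np.quantile(x, [(1 - qr) / 2, (1 + qr) / 2])
    lo_bin = min(np.searchsorted(edges, lo, side="right") - 1, len(density) - 1)
    hi_bin = min(np.searchsorted(edges, hi, side="right") - 1, len(density) - 1)
    return density[lo_bin], density[hi_bin]

# Stand-in covariate; a real DSM application would loop over all covariates
covariate = np.random.default_rng(0).normal(size=5000)
print(qr_bin_densities(covariate, qr=0.95))
```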
Abstract: Under the condition of normally distributed strength and stress with unknown distribution parameters, where a complete sample is available for each, this paper compares the errors of several kinds of approximate limits for structural reliability against the exact limits presented. All results in this paper can also be applied conveniently under the lognormal distribution.
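For the normal stress-strength setting described, the underlying reliability has a standard closed form, and a plug-in estimate from the two complete samples can be sketched as below. This shows only the textbook point estimate; the exact and approximate confidence limits compared in the paper are not reproduced, and the sample data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def reliability_point_estimate(strength, stress):
    """Plug-in estimate of R = P(strength > stress) when both are normal:
    R = Phi((mu_strength - mu_stress) / sqrt(var_strength + var_stress)).
    Sample means and variances replace the unknown parameters; the
    interval limits studied in the paper are not computed here."""
    strength = np.asarray(strength, dtype=float)
    stress = np.asarray(stress, dtype=float)
    delta = strength.mean() - stress.mean()
    sd = np.sqrt(strength.var(ddof=1) + stress.var(ddof=1))
    return norm.cdf(delta / sd)

# Illustrative complete samples of strength and stress (hypothetical data)
rng = np.random.default_rng(1)
print(reliability_point_estimate(rng.normal(60, 5, 30), rng.normal(45, 4, 30)))
```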