Journal Articles: 23 articles found
1. A Ratio Test Based on Subsampling for Trend Change Points in Heavy-Tailed AR(p) Series
Authors: Wang Aimin, Jin Hao, Song Xueli. 《统计与决策》 (PKU Core), 2023, No. 10, pp. 34-38 (5 pages).
This paper considers the problem of testing for a trend change point in heavy-tailed AR(p) series. First, motivated by existing work, a ratio statistic is constructed to test for a trend change point. Second, the limiting distribution of the statistic under the null hypothesis is shown to be a functional of a Lévy process, and consistency of the statistic is established under the alternative. Third, to avoid parameter estimation, the subsampling method is used to obtain more accurate critical values. Numerical simulations show that, in large samples, the subsampling-based ratio test controls the empirical size well and attains good empirical power. Finally, an empirical data set further illustrates the validity and feasibility of the theory.
Keywords: trend change point; ratio test; heavy tail; subsampling
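The subsampling recipe shared by this and several of the papers below (evaluate the statistic on overlapping blocks, then read the critical value off the empirical quantiles, so the heavy-tail index never has to be estimated) can be sketched in a few lines of stdlib Python. The ratio statistic here is a simplified stand-in, not the paper's exact construction:

```python
import random

def ratio_stat(x):
    # A simplified ratio-type change-point statistic: for each candidate
    # break k, compare the largest cumulative deviation before k with the
    # largest cumulative deviation after k (illustrative only).
    n = len(x)
    S = [0.0]
    for v in x:
        S.append(S[-1] + v)
    best = 0.0
    for k in range(2, n - 1):
        num = max(abs(S[j] - j * S[k] / k) for j in range(1, k + 1))
        den = max(abs((S[k + j] - S[k]) - j * (S[n] - S[k]) / (n - k))
                  for j in range(1, n - k + 1))
        if den > 0:
            best = max(best, num / den)
    return best

def subsampling_critical_value(x, b, alpha=0.05):
    # Core subsampling idea: evaluate the statistic on every overlapping
    # block of length b and take the empirical (1 - alpha) quantile as
    # the critical value, sidestepping any tail-index estimation.
    stats = sorted(ratio_stat(x[i:i + b]) for i in range(len(x) - b + 1))
    return stats[int((1 - alpha) * (len(stats) - 1))]

rng = random.Random(1)
series = [rng.gauss(0.0, 1.0) for _ in range(200)]
cv = subsampling_critical_value(series, b=40)
```

The block length b is a tuning choice; the theory in these papers requires b to grow with the sample size while b/n shrinks.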
2. A Subsampling Test for a Heavy-Tailed Unit Root with a Change Point (Cited: 1)
Authors: Qin Ruibing, Tian Zheng. 《工程数学学报》 (CSCD, PKU Core), 2010, No. 3, pp. 429-440 (12 pages).
This paper studies unit root testing for series whose trend contains a change point and whose innovations form an infinite-variance heavy-tailed process. A DF-type test is constructed and its asymptotic distribution is derived. To avoid estimating the tail index that appears in the asymptotic distribution, a subsampling scheme is constructed to determine the percentiles of the asymptotic distribution, and the consistency of the subsampling scheme is proved. Finally, Monte Carlo simulations confirm the feasibility of the proposed statistic and of the subsampling scheme.
Keywords: infinite-variance process; change point; unit root test; subsampling method
3. A Subsampling Cointegration Test for Heavy-Tailed Processes
Authors: Liu Weiqi, Duan Liya, Qin Ruibing. 《纺织高校基础科学学报》 (CAS), 2015, No. 3, pp. 316-323, 342 (9 pages).
Because the asymptotic distribution of the cointegration test statistic for heavy-tailed processes involves the heavy-tail index α, which cannot be estimated directly, this paper constructs a subsampling algorithm that computes the critical values of the test statistic without estimating α, and proves the theoretical validity of the algorithm. Finally, Monte Carlo simulations demonstrate the effectiveness of the method.
Keywords: heavy-tailed process; cointegration test; subsampling algorithm
4. A Subsampling Test for a Mean Change Point in Stable-Distribution ARCH Models (Cited: 2)
Authors: Liu Jiandong, Jin Hao. 《统计与信息论坛》 (CSSCI, PKU Core), 2018, No. 6, pp. 14-18 (5 pages).
This paper discusses mean change-point testing for ARCH models driven by stable distributions with characteristic exponent k ∈ (1, 2). Based on a cumulative sum of squared residuals statistic, the subsampling method is used to determine the critical values of the asymptotic distribution, thereby avoiding estimation of the characteristic exponent k. Monte Carlo simulations and an empirical analysis show that the subsampling method is feasible and effective, so the subsampling-based residual-squares CUSUM test remains an effective method for detecting mean change points in stable-distribution ARCH models.
Keywords: stable distribution; change point; residual-squares CUSUM test; subsampling
5. Subsampling Method for Robust Estimation of Regression Models (Cited: 1)
Authors: Min Tsao, Xiao Ling. 《Open Journal of Statistics》, 2012, No. 3, pp. 281-296 (16 pages).
We propose a subsampling method for robust estimation of regression models which is built on classical methods such as the least squares method. It makes use of the non-robust nature of the underlying classical method to find a good sample from regression data contaminated with outliers, and then applies the classical method to the good sample to produce robust estimates of the regression model parameters. The subsampling method is a computational method rooted in the bootstrap methodology which trades analytical treatment for intensive computation; it finds the good sample through repeated fitting of the regression model to many random subsamples of the contaminated data instead of through an analytical treatment of the outliers. The subsampling method can be applied to all regression models for which non-robust classical methods are available. In the present paper, we focus on the basic formulation and robustness property of the subsampling method that are valid for all regression models. We also discuss variations of the method and apply it to three examples involving three different regression models.
Keywords: subsampling algorithm; robust regression; outliers; bootstrap; goodness-of-fit
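A minimal sketch of the idea for simple linear regression (the function names and tuning constants here are mine, not the authors'): fit least squares to many random subsamples, score each fit by its median absolute residual over the full data, and refit the classical method on the points the best fit deems "good":

```python
import random

def ols_fit(pts):
    # Least-squares line y = a + b*x through the given (x, y) points.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

def subsample_robust_fit(pts, m=12, n_trials=300, cutoff=2.5, seed=7):
    # Repeatedly fit the non-robust classical method to random subsamples;
    # an outlier-free subsample yields a fit with a small median absolute
    # residual over the full data -- that identifies the "good sample".
    rng = random.Random(seed)
    best_fit, best_score = None, float("inf")
    for _ in range(n_trials):
        a, b = ols_fit(rng.sample(pts, m))
        res = sorted(abs(y - a - b * x) for x, y in pts)
        score = res[len(res) // 2]          # median absolute residual
        if score < best_score:
            best_fit, best_score = (a, b), score
    a, b = best_fit
    good = [(x, y) for x, y in pts if abs(y - a - b * x) <= cutoff * best_score]
    return ols_fit(good)                    # classical fit on the good sample

rng = random.Random(1)
pts = [(x, 1.0 + 2.0 * x + rng.gauss(0, 0.1))
       for x in (rng.uniform(0, 10) for _ in range(80))]
pts += [(rng.uniform(0, 10), 30.0) for _ in range(20)]   # gross outliers
a_hat, b_hat = subsample_robust_fit(pts)
```

With 20% gross contamination, plain OLS on the full data is badly tilted, while the subsample-then-refit estimate recovers the clean line y = 1 + 2x closely.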
6. Responses of different biodiversity indices to subsampling efforts in lotic macroinvertebrate assemblages
Authors: WANG Jun, LI Zhengfei, SONG Zhuoyan, ZHANG Yun, JIANG Xiaoming, XIE Zhicai. 《Journal of Oceanology and Limnology》 (SCIE, CAS, CSCD), 2019, No. 1, pp. 122-133 (12 pages).
As a less time-consuming procedure, subsampling technology has been widely used in biological monitoring and assessment programs. It is clear that subsampling counts affect the value of traditional biodiversity indices, but their effect on taxonomic distinctness (TD) indices is less well studied. Here, we examined the responses of traditional (species richness, Shannon-Wiener diversity) and TD (average taxonomic distinctness: Δ+, and variation in taxonomic distinctness: Λ+) indices to subsample counts using a random subsampling procedure from 50 to 400 individuals, based on macroinvertebrate datasets from three different river systems in China. At the regional scale, taxa richness increased asymptotically with fixed-count size, with ≥250-300 individuals needed to express 95% of the information in the raw data. In contrast, TD indices were less sensitive to the subsampling procedure. At the local scale, TD indices were more stable and had less deviation than species richness and the Shannon-Wiener index, even at low subsample counts, with ≥100 individuals needed to estimate 95% of the information of the actual Δ+ and Λ+ in the three river basins. We also found that abundance had a certain effect on diversity indices during the subsampling procedure, with the required subsampling counts for species richness and TD indices varying by region. Therefore, we suggest that TD indices are suitable for biodiversity assessment and environmental monitoring. Meanwhile, pilot analyses are necessary when determining the appropriate subsample counts for bioassessment in a new region or habitat type.
Keywords: subsampling; macroinvertebrates; taxonomic distinctness indices; taxa richness; Shannon-Wiener index
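The fixed-count random subsampling step behind such analyses is straightforward to sketch. The taxa names and abundances below are hypothetical, not the paper's data:

```python
import math
import random
from collections import Counter

def rarefy(community, count, rng):
    # Fixed-count subsampling: draw `count` individuals without
    # replacement from the pooled sample, then compute taxa richness
    # and the Shannon-Wiener index on the subsample.
    pool = [sp for sp, n in community.items() for _ in range(n)]
    tally = Counter(rng.sample(pool, count))
    richness = len(tally)
    shannon = -sum((c / count) * math.log(c / count) for c in tally.values())
    return richness, shannon

# Hypothetical abundances for five taxa (200 individuals in total).
community = {"Baetis": 120, "Chironomus": 60, "Hydropsyche": 15,
             "Gammarus": 4, "Elmis": 1}
rng = random.Random(3)
r50, h50 = rarefy(community, 50, rng)     # low subsample count
r200, h200 = rarefy(community, 200, rng)  # the full pooled sample
```

Repeating the draw many times per count and plotting the mean index against the count reproduces the kind of rarefaction curves the study compares across indices.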
7. Optimal decorrelated score subsampling for generalized linear models with massive data
Authors: Junzhuo Gao, Lei Wang, Heng Lian. 《Science China Mathematics》 (SCIE, CSCD), 2024, No. 2, pp. 405-430 (26 pages).
In this paper, we consider unified optimal subsampling estimation and inference on the low-dimensional parameter of main interest in the presence of a nuisance parameter for low/high-dimensional generalized linear models (GLMs) with massive data. We first present a general subsampling decorrelated score function to reduce the influence of the less accurate nuisance parameter estimation with its slow convergence rate. The consistency and asymptotic normality of the resulting subsample estimator from a general decorrelated score subsampling algorithm are established, and two optimal subsampling probabilities are derived under the A- and L-optimality criteria to downsize the data volume and reduce the computational burden. The proposed optimal subsampling probabilities provably improve the asymptotic efficiency of the subsampling schemes in low-dimensional GLMs and perform better than the uniform subsampling scheme in high-dimensional GLMs. A two-step algorithm is further proposed for implementation, and the asymptotic properties of the corresponding estimators are also given. Simulations show satisfactory performance of the proposed estimators, and two applications to the census income and Fashion-MNIST datasets also demonstrate its practical applicability.
Keywords: A-optimality; decorrelated score subsampling; high-dimensional inference; L-optimality; massive data
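The general flavor of score-based optimal subsampling weights can be illustrated for plain logistic regression. This is the familiar A-/L-optimality shape computed from a pilot fit, not the paper's decorrelated-score weights, and every name below is mine:

```python
import math
import random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def score_norm_probs(X, y, beta_pilot):
    # Subsampling probabilities p_i proportional to |y_i - phat_i| * ||x_i||:
    # observations whose score contribution is large under the pilot fit
    # are sampled more often (the common A-/L-optimality shape).
    raw = []
    for xi, yi in zip(X, y):
        phat = logistic(sum(b * v for b, v in zip(beta_pilot, xi)))
        raw.append(abs(yi - phat) * math.sqrt(sum(v * v for v in xi)))
    total = sum(raw)
    return [r / total for r in raw]

rng = random.Random(0)
X = [(1.0, rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]
beta_true = (0.5, 1.0, -1.0)
y = [1 if rng.random() < logistic(sum(b * v for b, v in zip(beta_true, x))) else 0
     for x in X]
probs = score_norm_probs(X, y, (0.0, 0.5, -0.5))  # crude pilot estimate
```

In a two-step scheme, a small uniform pilot subsample supplies `beta_pilot`, and the second-stage subsample drawn with these probabilities is combined with inverse-probability weights.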
8. Optimal Poisson Subsampling for Softmax Regression (Cited: 1)
Authors: YAO Yaqiong, ZOU Jiahui, WANG Haiying. 《Journal of Systems Science & Complexity》 (SCIE, EI, CSCD), 2023, No. 4, pp. 1609-1625 (17 pages).
Softmax regression, also called multinomial logistic regression, is widely used in various fields for modeling the relationship between covariates and categorical responses with multiple levels. Increasing data volumes bring new challenges for parameter estimation in softmax regression, and optimal subsampling is an effective way to address them. However, optimal subsampling with replacement requires access to all the sampling probabilities simultaneously to draw a subsample, and the resulting subsample can contain duplicate observations. In this paper, the authors consider Poisson subsampling for its higher estimation accuracy and its applicability when the data exceed the memory limit. The authors derive the asymptotic properties of the general Poisson subsampling estimator and obtain optimal subsampling probabilities by minimizing the asymptotic variance-covariance matrix under both the A- and L-optimality criteria. The optimal subsampling probabilities involve unknown quantities from the full dataset, so the authors suggest an approximately optimal Poisson subsampling algorithm with two sampling steps, the first serving as a pilot phase. The performance of the optimal Poisson subsampling algorithm is demonstrated through numerical simulations and real data examples.
Keywords: multinomial logistic regression; optimality criterion; optimal subsampling
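The mechanics of Poisson subsampling (independent per-record inclusion decisions, so the data can be streamed once and no duplicate observations arise) can be shown on a simpler target than softmax regression. Here it is used to estimate a mean with Horvitz-Thompson weights; the probabilities are an arbitrary size-biased choice for illustration, not the paper's optimal ones:

```python
import random

def poisson_subsample(data, probs, rng):
    # Poisson subsampling: record i enters the subsample independently
    # with probability probs[i]. No simultaneous access to all draws is
    # needed, so this works in a single streaming pass.
    return [(x, 1.0 / p) for x, p in zip(data, probs) if rng.random() < p]

def ht_mean(sample, n):
    # Horvitz-Thompson estimate of the full-data mean: each kept record
    # is reweighted by the inverse of its inclusion probability.
    return sum(w * x for x, w in sample) / n

rng = random.Random(42)
data = [rng.uniform(0.0, 1.0) for _ in range(50000)]
# Size-biased inclusion probabilities, scaled to an expected 20% rate.
raw = [0.5 + x for x in data]
scale = 0.2 * len(data) / sum(raw)
probs = [min(1.0, scale * r) for r in raw]
sample = poisson_subsample(data, probs, rng)
est = ht_mean(sample, len(data))
```

Unlike sampling with replacement, the realized subsample size is random here, which is the price paid for the one-pass property.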
9. Closed-Form Models of Accuracy Loss due to Subsampling in SVD Collaborative Filtering
Authors: Samin Poudel, Marwan Bikdash. 《Big Data Mining and Analytics》 (EI, CSCD), 2023, No. 1, pp. 72-84 (13 pages).
We postulate and analyze a nonlinear subsampling accuracy loss (SSAL) model based on the root mean square error (RMSE) and two SSAL models based on the mean square error (MSE), suggested by extensive preliminary simulations. The SSAL models predict accuracy loss in terms of subsampling parameters like the fraction of users dropped (FUD) and the fraction of items dropped (FID). We investigate whether the models depend on the characteristics of the dataset in a constant way across datasets when using the SVD collaborative filtering (CF) algorithm. The dataset characteristics considered include various densities of the rating matrix and the numbers of users and items. Extensive simulations and rigorous regression analysis led to empirical symmetrical SSAL models in terms of FID and FUD whose coefficients depend only on the data characteristics. The SSAL models turned out to be multi-linear in terms of the odds ratios of dropping a user (or an item) vs. not dropping it. Moreover, one MSE deterioration model turned out to be linear in the FID and FUD odds, with a zero coefficient on their interaction term. Most importantly, the models are constant in the sense that they are written in closed form using the considered data characteristics (densities and the numbers of users and items). The models are validated through extensive simulations based on 850 synthetically generated primary (pre-subsampling) matrices derived from the 25M MovieLens dataset. Nearly 460,000 subsampled rating matrices were then simulated and subjected to the singular value decomposition (SVD) CF algorithm. Further validation was conducted using the 1M MovieLens and Yahoo! Music Rating datasets. The models were constant and significant across all three datasets.
Keywords: collaborative filtering; subsampling accuracy loss models; performance loss; recommendation system; simulation; rating matrix; root mean square error
10. Subsampling bias and the best-discrepancy systematic cross validation (Cited: 1)
Authors: Liang Guo, Jianya Liu, Ruodan Lu. 《Science China Mathematics》 (SCIE, CSCD), 2021, No. 1, pp. 197-210 (14 pages).
Statistical machine learning models should be evaluated and validated before being put to work. The conventional k-fold Monte Carlo cross-validation (MCCV) procedure uses a pseudo-random sequence to partition instances into k subsets, which usually causes subsampling bias, inflates generalization errors, and jeopardizes the reliability and effectiveness of cross-validation. Based on ordered systematic sampling theory in statistics and low-discrepancy sequence theory in number theory, we propose a new k-fold cross-validation procedure that replaces the pseudo-random sequence with a best-discrepancy sequence, which ensures low subsampling bias and leads to more precise expected-prediction-error (EPE) estimates. Experiments with 156 benchmark datasets and three classifiers (logistic regression, decision tree, and naïve Bayes) show that, in general, our cross-validation procedure reduces the subsampling bias of the MCCV, lowering the EPE by around 7.18% and the variances by around 26.73%. In comparison, stratified MCCV reduces the EPE and variances of the MCCV by around 1.58% and 11.85%, respectively. Leave-one-out (LOO) can lower the EPE by around 2.50%, but its variances are much higher than those of any other cross-validation (CV) procedure. The computational time of our cross-validation procedure is just 8.64% of that of the MCCV, 8.67% of the stratified MCCV, and 16.72% of the LOO. Experiments also show that our approach is more beneficial for datasets characterized by relatively small size and large aspect ratio, which makes it particularly pertinent for bioscience classification problems. Our systematic subsampling technique could be generalized to other machine learning algorithms that involve a random subsampling mechanism.
Keywords: subsampling bias; cross validation; systematic sampling; low-discrepancy sequence; best-discrepancy sequence
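The two ingredients named in the abstract, a low-discrepancy sequence and an ordered systematic partition, can each be sketched in a few lines. This is a simplified illustration (using the classical van der Corput sequence and a sort-and-deal fold assignment), not the paper's exact best-discrepancy construction:

```python
import random

def van_der_corput(n, base=2):
    # n-th term of the base-b van der Corput sequence, the canonical
    # low-discrepancy sequence on [0, 1): reverse the base-b digits of n
    # around the radix point.
    q, denom = 0.0, base
    while n > 0:
        n, rem = divmod(n, base)
        q += rem / denom
        denom *= base
    return q

def systematic_folds(values, k):
    # Ordered systematic partition: sort instances by some variable and
    # deal them into k folds in turn, so every fold covers the whole
    # range of that variable instead of a possibly biased random slice.
    order = sorted(range(len(values)), key=lambda i: values[i])
    folds = [[] for _ in range(k)]
    for pos, idx in enumerate(order):
        folds[pos % k].append(idx)
    return folds

rng = random.Random(0)
y = [rng.gauss(0, 1) for _ in range(103)]
folds = systematic_folds(y, 5)
```

Each fold then serves once as the validation set, exactly as in ordinary k-fold CV; only the partitioning step changes.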
11. Combined subsampling and analytical integration for efficient large-scale GW calculations for 2D systems (Cited: 1)
Authors: Weiyi Xia, Weiwei Gao, Gabriel Lopez-Candales, Yabei Wu, Wei Ren, Wenqing Zhang, Peihong Zhang. 《npj Computational Materials》 (SCIE, EI, CSCD), 2020, No. 1, pp. 660-668 (9 pages).
Accurate and efficient predictions of the quasiparticle properties of complex materials remain a major challenge due to convergence issues and the unfavorable scaling of the computational cost with respect to the system size. Quasiparticle GW calculations for two-dimensional (2D) materials are especially difficult. The unusual analytical behaviors of the dielectric screening and the electron self-energy of 2D materials make the conventional Brillouin zone (BZ) integration approach rather inefficient, requiring an extremely dense k-grid to properly converge the calculated quasiparticle energies. In this work, we present a combined nonuniform subsampling and analytical integration method that can drastically improve the efficiency of the BZ integration in 2D GW calculations.
Keywords: analytical integration; subsampling
12. Optimal Dependence of Performance and Efficiency of Collaborative Filtering on Random Stratified Subsampling (Cited: 1)
Authors: Samin Poudel, Marwan Bikdash. 《Big Data Mining and Analytics》 (EI), 2022, No. 3, pp. 192-205 (14 pages).
Dropping fractions of users or items judiciously can reduce the computational cost of collaborative filtering (CF) algorithms. The effect of this subsampling on the computing time and accuracy of CF is not fully understood, and clear guidelines for selecting optimal or even appropriate subsampling levels are not available. In this paper, we present a Density-based Random Stratified Subsampling using Clustering (DRSC) algorithm in which the desired Fraction of Users Dropped (FUD) and Fraction of Items Dropped (FID) are specified and the overall density is maintained during subsampling. Subsequently, we develop simple models of the Training Time Improvement (TTI) and the Accuracy Loss (AL) as functions of FUD and FID, based on extensive simulations of seven standard CF algorithms applied to various primary matrices from MovieLens, Yahoo Music Rating, and Amazon Automotive data. Simulations show that both TTI and a scaled AL are bi-linear in FID and FUD for all seven methods. The TTI linear regression of a CF method appears to be the same across datasets. Extensive simulations illustrate that TTI can be estimated reliably from FUD and FID alone, but AL requires considering additional dataset characteristics. The derived models are then used to optimize the levels of subsampling, addressing the tradeoff between TTI and AL. A simple sub-optimal approximation was found, in which the optimal AL is proportional to the optimal Training Time Reduction Factor (TTRF) for higher values of TTRF, and the optimal subsampling levels, like the optimal FID/(1-FID), are proportional to the square root of the TTRF.
Keywords: Collaborative Filtering (CF); subsampling; Training Time Improvement (TTI); performance loss; Recommendation System (RS); optimal solutions; rating matrix
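The basic FUD/FID subsampling operation on a rating matrix is easy to sketch. This plain uniform version omits the density-maintaining stratification that distinguishes DRSC; all names are mine:

```python
import random

def drop_users_items(ratings, fud, fid, seed=0):
    # Drop the requested Fraction of Users (FUD) and Fraction of Items
    # (FID) uniformly at random, keeping only ratings whose user and
    # item both survive. `ratings` maps (user, item) -> value.
    rng = random.Random(seed)
    users = sorted({u for u, _ in ratings})
    items = sorted({i for _, i in ratings})
    keep_u = set(rng.sample(users, round(len(users) * (1 - fud))))
    keep_i = set(rng.sample(items, round(len(items) * (1 - fid))))
    return {k: v for k, v in ratings.items()
            if k[0] in keep_u and k[1] in keep_i}

rng = random.Random(1)
ratings = {(u, i): rng.randint(1, 5)
           for u in range(10) for i in range(8) if rng.random() < 0.6}
small = drop_users_items(ratings, fud=0.2, fid=0.25)
```

Training a CF model on `small` instead of `ratings` trades some accuracy (AL) for a shorter training time (TTI), which is exactly the tradeoff the paper models.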
13. Deep learning for predictive mechanical properties of hot-rolled strip in complex manufacturing systems
Authors: Feifei Li, Anrui He, Yong Song, Zheng Wang, Xiaoqing Xu, Shiwei Zhang, Yi Qiang, Chao Liu. 《International Journal of Minerals, Metallurgy and Materials》 (SCIE, EI, CAS, CSCD), 2023, No. 6, pp. 1093-1103 (11 pages).
Higher requirements for model accuracy accompany the transformation and upgrading of the iron and steel sector toward intelligent production, and the usual prediction models for the mechanical properties of hot-rolled strip have struggled to meet the needs of the field. Insufficient data and difficult parameter tuning limit deep learning models based on multi-layer networks in practical applications; moreover, the limited discrete process parameters typically used cannot effectively depict the actual strip processing process. To solve these problems, this research proposes a new sampling approach for the mechanical-property input data of hot-rolled strip based on the multi-grained cascade forest (gcForest) framework. Given the complex process flow and the acute sensitivity of product quality to the process path and parameters in hot-rolled strip production, a three-dimensional continuous time-series process data sampling method based on time-temperature-deformation was designed. The basic information of the strip (chemical composition and typical process parameters) is fused with the local process information collected by multi-grained scanning, so that the input to the next stage has both local and global features. Furthermore, within the multi-grained scanning structure, a subsampling scheme with a variable window was designed, so that input data with different dimensions yield output features of the same dimension after passing through the multi-grained scanning structure, allowing the cascade forest structure to be trained normally. Finally, actual production data for three steel grades were used for experimental evaluation. The results revealed that the gcForest-based mechanical property prediction model outperforms the competition in terms of comprehensive performance, ease of parameter tuning, and ability to sustain high prediction accuracy with fewer samples.
Keywords: hot-rolled strip; prediction of mechanical properties; deep learning; multi-grained cascade forest; time series feature extraction; variable-window subsampling
14. CMOS analog and mixed-signal phase-locked loops: An overview (Cited: 2)
Author: Zhao Zhang. 《Journal of Semiconductors》 (EI, CAS, CSCD), 2020, No. 11, pp. 13-30 (18 pages).
CMOS analog and mixed-signal phase-locked loops (AMS-PLLs) are widely used in a variety of systems-on-chip (SoCs) as clock generators or frequency synthesizers. This paper presents an overview of the AMS-PLL, including: 1) a brief introduction to the basics of the charge-pump based PLL (CPPLL), which is the most widely used AMS-PLL architecture due to its simplicity and robustness; 2) a summary of the design issues of the basic CPPLL architecture; 3) a systematic introduction to techniques for enhancing the performance of the CPPLL; 4) a brief overview of ultra-low-jitter AMS-PLL architectures that can achieve lower jitter (<100 fs) with lower power consumption than the CPPLL, including the injection-locked PLL (ILPLL), the subsampling PLL (SSPLL), and the sampling PLL (SPLL); and 5) a discussion of AMS-PLL architecture selection, which can help designers meet their performance requirements.
Keywords: phase-locked loop (PLL); charge-pump based PLL (CPPLL); ultra-low-jitter PLL; injection-locked PLL (ILPLL); subsampling PLL (SSPLL); sampling PLL (SPLL)
15. Conversion of adverse data corpus to shrewd output using sampling metrics
Authors: Shahzad Ashraf, Sehrish Saleem, Tauqeer Ahmed, Zeeshan Aslam, Durr Muhammad. 《Visual Computing for Industry, Biomedicine, and Art》, 2020, No. 1, pp. 202-214 (13 pages).
In an imbalanced dataset, at least one class is typically heavily outnumbered by the others. A machine learning algorithm (classifier) trained on an imbalanced dataset predicts the majority class (frequently occurring) more often than the minority classes (rarely occurring). Training on an imbalanced dataset thus poses challenges for classifiers; however, applying suitable techniques for reducing class imbalance can enhance classifier performance. In this study, we consider an imbalanced dataset from an educational context. Initially, we examine the shortcomings of classification with an imbalanced dataset. Then, we apply data-level algorithms for class balancing and compare classifier performance. Performance is measured using the underlying information in the classifiers' confusion matrices, namely accuracy, precision, recall, and F-measure. The results show that classification with an imbalanced dataset may produce high accuracy but low precision and recall for the minority class. The analysis confirms that undersampling and oversampling are both effective for balancing datasets, but the latter dominates.
Keywords: classification; machine learning; spread subsampling; class imbalance
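The undersampling side of the comparison is the simplest data-level balancing technique to sketch (a generic random undersampler, not the study's specific spread-subsampling variant):

```python
import random
from collections import Counter

def undersample(X, y, seed=0):
    # Random undersampling: keep every example of the smallest class and
    # a random subset of each larger class, so all classes end up with
    # the same number of examples.
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    m = min(len(v) for v in by_class.values())
    Xb, yb = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, m):
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

X = list(range(110))
y = [0] * 100 + [1] * 10          # 10:1 class imbalance
Xb, yb = undersample(X, y)
```

Oversampling does the opposite (replicating or synthesizing minority examples), which is why it tends to preserve more information from the majority class, consistent with the study's finding that it dominates.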
16. HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series (Cited: 1)
Author: Ross R. McKitrick. 《Open Journal of Statistics》, 2014, No. 7, pp. 527-535 (9 pages).
The IPCC has drawn attention to an apparent leveling-off of globally-averaged temperatures over the past 15 years or so. Measuring the duration of the hiatus has implications for determining whether the underlying trend has changed and for evaluating climate models. Here, I propose a method for estimating the duration of the hiatus that is robust to unknown forms of heteroskedasticity and autocorrelation (HAC) in the temperature series and to cherry-picking of endpoints. For the specific case of global average temperatures, I also add the requirement of spatial consistency between hemispheres. The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator, which is valid as long as the underlying series is trend-stationary, as is the case for the data used herein. Application of the method shows that there is now a trendless interval of 19 years' duration at the end of the HadCRUT4 surface temperature series, and of 16-26 years in the lower troposphere. A simple AR1 trend model suggests a shorter hiatus of 14-20 years but is likely unreliable.
Keywords: global warming; trend; HAC-robust; trendless subsample
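The scan over trailing subsamples can be sketched as follows. This toy version uses plain OLS standard errors in place of the Vogelsang-Franses HAC-robust variance estimator that the paper actually requires, and synthetic data in place of HadCRUT4:

```python
import random

def trend_slope_t(y):
    # OLS slope of y on time and its naive t-statistic. (The paper uses
    # the HAC-robust Vogelsang-Franses variance here; the plain OLS
    # standard error below is only a stand-in.)
    n = len(y)
    tbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((t - tbar) ** 2 for t in range(n))
    b = sum((t - tbar) * (y[t] - ybar) for t in range(n)) / sxx
    a = ybar - b * tbar
    sse = sum((y[t] - a - b * t) ** 2 for t in range(n))
    se = (sse / (n - 2) / sxx) ** 0.5
    return b, b / se

def trendless_duration(y, tcrit=1.96, min_len=10):
    # Longest trailing subsample whose trend is statistically
    # indistinguishable from zero; scanning every trailing window,
    # rather than one hand-picked start date, guards against
    # endpoint cherry-picking.
    best = 0
    for m in range(min_len, len(y) + 1):
        _, tstat = trend_slope_t(y[-m:])
        if abs(tstat) < tcrit:
            best = m
    return best

rng = random.Random(5)
series = [0.2 * t + rng.gauss(0, 0.05) for t in range(100)]   # warming
series += [20.0 + rng.gauss(0, 0.05) for _ in range(50)]      # flat tail
d = trendless_duration(series)
```

On this synthetic series the detected trendless duration sits close to the 50-observation flat tail, as intended.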
17. Tests for Two-Sample Location Problem Based on Subsample Quantiles
Authors: Parameshwar V. Pandit, Savitha Kumari, S. B. Javali. 《Open Journal of Statistics》, 2014, No. 1, pp. 70-74 (5 pages).
This paper presents a new class of test procedures for the two-sample location problem based on subsample quantiles. The class includes the Mann-Whitney test as a special case. The asymptotic normality of the proposed class of tests is established, and the asymptotic relative performance of the proposed class with respect to the optimal member of Xie and Priebe (2000) is studied in terms of Pitman efficiency for various underlying distributions.
Keywords: U-statistic; class of tests; two-sample location problem; asymptotic normality; Pitman ARE; subsample quantiles
18. Forecasting Realized Volatility Using Subsample Averaging
Authors: Huiyu Huang, Tae-Hwy Lee. 《Open Journal of Statistics》, 2013, No. 5, pp. 379-383 (5 pages).
When the observed price process is the true underlying price process plus microstructure noise, it is known that realized volatility (RV) estimates are overwhelmed by the noise as the sampling frequency approaches infinity. It may therefore be optimal to sample less frequently, and averaging over the less frequently sampled subsamples can improve estimation of the quadratic variation. In this paper, we extend this idea to forecasting daily realized volatility. While subsample averaging has been proposed and used in estimating RV, this paper is the first to use subsample averaging for forecasting RV. The subsample averaging method we examine incorporates the high-frequency data at different levels of systematic sampling: it first pools the high-frequency data into several subsamples, then generates a forecast from each subsample, and finally combines these forecasts. We find that in forecasting daily S&P 500 return realized volatility, subsample averaging generates better forecasts than those using only one subsample.
Keywords: subsample averaging; forecast combination; high-frequency data; realized volatility; ARFIMA model; HAR model
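The estimation step the abstract builds on, averaging RV across systematic subsamples of the high-frequency grid, can be sketched directly. The random-walk prices below are synthetic and noise-free, so the averaged estimate should agree with the full-grid RV:

```python
import random

def rv(prices):
    # Realized volatility: sum of squared price increments on a grid.
    return sum((prices[i + 1] - prices[i]) ** 2
               for i in range(len(prices) - 1))

def subsample_averaged_rv(prices, k):
    # Average the RVs computed on k sparser grids (every k-th price,
    # starting at offsets 0..k-1). Each sparse grid "samples less
    # frequently", which dampens microstructure noise on real data,
    # and averaging the k estimates recovers efficiency.
    return sum(rv(prices[j::k]) for j in range(k)) / k

rng = random.Random(11)
p, prices = 0.0, [0.0]
for _ in range(20000):                  # noise-free random walk
    p += rng.gauss(0.0, 0.01)
    prices.append(p)
full = rv(prices)
avg5 = subsample_averaged_rv(prices, 5)
```

For forecasting, the paper fits a model (e.g., HAR or ARFIMA) to each subsample's RV series and combines the resulting forecasts rather than the estimates.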
19. SSCC: A Novel Computational Framework for Rapid and Accurate Clustering of Large-scale Single-Cell RNA-seq Data (Cited: 3)
Authors: Xianwen Ren, Liangtao Zheng, Zemin Zhang. 《Genomics, Proteomics & Bioinformatics》 (SCIE, CAS, CSCD), 2019, No. 2, pp. 201-210 (10 pages).
Clustering is a prevalent analytical means of analyzing single-cell RNA sequencing (scRNA-seq) data, but the rapidly expanding data volume can make this process computationally challenging. New methods for both accurate and efficient clustering are of pressing need. Here we propose Spearman subsampling-clustering-classification (SSCC), a new clustering framework based on random projection and feature construction, for large-scale scRNA-seq data. SSCC greatly improves clustering accuracy, robustness, and computational efficacy for various state-of-the-art algorithms benchmarked on multiple real datasets. On a dataset with 68,578 human blood cells, SSCC achieved a 20% improvement in clustering accuracy and 50-fold acceleration while consuming only 66% of the memory, compared with the widely used software package SC3. Compared with k-means, the accuracy improvement of SSCC can reach 3-fold. An R implementation of SSCC is available at https://github.com/Japrin/sscClust.
Keywords: single-cell RNA-seq; clustering; subsampling; classification
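The subsampling-clustering-classification pattern itself is simple to sketch: cluster only a random subsample of cells, then assign every cell to the nearest subsample-derived centroid. This toy version uses plain Euclidean k-means as a stand-in for SSCC's Spearman-based random-projection features:

```python
import random

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def centroid(pts):
    return tuple(sum(p[i] for p in pts) / len(pts)
                 for i in range(len(pts[0])))

def kmeans(pts, k, rng, iters=25):
    # Plain Lloyd's k-means: alternate nearest-center assignment and
    # centroid updates for a fixed number of iterations.
    centers = rng.sample(pts, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [centroid(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def subsample_cluster_classify(cells, k, frac=0.2, seed=0):
    # Cluster a random subsample, then "classify" every cell to its
    # nearest subsample-derived centroid; only frac of the data ever
    # enters the expensive clustering step.
    rng = random.Random(seed)
    sub = rng.sample(cells, max(k, int(frac * len(cells))))
    centers = kmeans(sub, k, rng)
    return [min(range(k), key=lambda j: dist2(c, centers[j])) for c in cells]

rng = random.Random(2)
cells = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(300)]
cells += [(rng.gauss(10, 0.5), rng.gauss(10, 0.5)) for _ in range(300)]
labels = subsample_cluster_classify(cells, k=2)
```

Because only the subsample is clustered, the expensive step scales with the subsample size rather than the full cell count, which is where the reported 50-fold acceleration comes from.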
20. Most Likely Optimal Subsampled Markov Chain Monte Carlo (Cited: 1)
Authors: HU Guanyu, WANG Haiying. 《Journal of Systems Science & Complexity》 (SCIE, EI, CSCD), 2021, No. 3, pp. 1121-1134 (14 pages).
Markov chain Monte Carlo (MCMC) requires evaluating the full-data likelihood at different parameter values iteratively and is often computationally infeasible for large datasets. This paper proposes approximating the log-likelihood with subsamples taken according to nonuniform subsampling probabilities, and derives the most likely optimal (MLO) subsampling probabilities for better approximation. Compared with an existing subsampled MCMC algorithm using equal subsampling probabilities, the MLO subsampled MCMC has higher estimation efficiency at the same subsampling ratio. The authors also derive a formula, based on the asymptotic distribution of the subsampled log-likelihood, for the subsample size required in each MCMC iteration to attain a given level of precision, and use it to develop an adaptive version of the MLO subsampled MCMC algorithm. Numerical experiments demonstrate that the proposed method outperforms uniform subsampled MCMC.
Keywords: big data; MCMC; Metropolis-Hastings algorithm; nonuniform subsampling
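The core move, replacing the full-data log-likelihood in the Metropolis-Hastings acceptance test with a nonuniformly weighted subsample estimate, can be sketched for a toy N(theta, 1) model. The subsampling probabilities below are an ad-hoc nonuniform choice for illustration, not the paper's MLO probabilities, and evaluating both parameter values on the same subsample is a variance-reduction convenience rather than the paper's exact algorithm:

```python
import math
import random

def loglik_diff(data, idx, wts, th_new, th_old):
    # Weighted subsample estimate of the full-data log-likelihood
    # difference for a N(theta, 1) model; using the same subsample for
    # both parameter values keeps the estimation noise manageable.
    d = 0.0
    for i, w in zip(idx, wts):
        x = data[i]
        d += w * (0.5 * (x - th_old) ** 2 - 0.5 * (x - th_new) ** 2)
    return d

def subsampled_mh(data, probs, r=500, iters=2000, step=0.05, seed=4):
    # Metropolis-Hastings where each acceptance test draws a fresh
    # nonuniform subsample of r points (with replacement); the weight
    # 1/(r * p_i) makes the log-likelihood estimate unbiased.
    rng = random.Random(seed)
    theta, chain = 0.0, []
    idx_range = range(len(data))
    for _ in range(iters):
        prop = theta + rng.gauss(0.0, step)
        idx = rng.choices(idx_range, weights=probs, k=r)
        wts = [1.0 / (r * probs[i]) for i in idx]
        if math.log(rng.random()) < loglik_diff(data, idx, wts, prop, theta):
            theta = prop
        chain.append(theta)
    return chain

rng = random.Random(0)
data = [rng.gauss(1.0, 1.0) for _ in range(2000)]
raw = [1.0 + abs(x) for x in data]       # ad-hoc nonuniform weights,
total = sum(raw)                         # not the paper's MLO choice
probs = [v / total for v in raw]
chain = subsampled_mh(data, probs)
est = sum(chain[500:]) / len(chain[500:])
```

The paper's contribution is precisely the choice of `probs` (and of the per-iteration subsample size r) that makes this approximation most accurate for a given subsampling ratio.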