The lottery has long captivated the imagination of players worldwide, offering the tantalizing possibility of life-changing wins. Winning the lottery is largely a matter of chance, since lottery drawings are typically random and unpredictable. Some players let the lottery terminal generate numbers for them at random; others choose numbers that hold personal significance, such as birthdays, anniversaries, or other important dates; and some enthusiasts have turned to statistical analysis, examining past winning numbers to identify patterns or frequencies. In this paper, we use order statistics to estimate the probability of a specific order of numbers or number combination being drawn in future drawings.
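To make the order-statistics viewpoint concrete, here is a minimal sketch (the 6-of-49 format and the function name are illustrative assumptions, not taken from the paper): if n numbers are drawn without replacement from {1, ..., N}, the k-th smallest drawn number equals m with probability C(m-1, k-1) C(N-m, n-k) / C(N, n).

```python
from math import comb

def order_stat_pmf(N, n, k, m):
    """P(the k-th smallest of n numbers drawn from 1..N equals m)."""
    return comb(m - 1, k - 1) * comb(N - m, n - k) / comb(N, n)

# Example: in a 6-of-49 lottery, the chance the smallest drawn number is 1.
p_min_is_1 = order_stat_pmf(49, 6, 1, 1)   # equals 6/49
```

Summing the pmf over all m for fixed k recovers 1, a quick sanity check on the formula.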
At the present time, Human Activity Recognition (HAR) has been of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent on the health informatics gathered using HAR augments the quality and significance of decision-making. Although many research works have been conducted on Smart Healthcare Monitoring, a number of pitfalls remain, such as the time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for Smart Healthcare Monitoring. First, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the extraction of dimensionality-reduced features. Here, the input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate Smart Healthcare Monitoring in less time, Partial Least Squares helps extract the dimensionality-reduced features. With these resulting features, SVIAL is then proposed for Smart Healthcare Monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, false-positive rate, and accuracy over several instances. The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction of software bugs remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics by measuring their similarity with the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity by 3%, 3%, 2%, and 3%, and minimum time and space by 13% and 15%, when compared with the two state-of-the-art methods.
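The dice coefficient used in the metric-selection step has a compact set formulation, Dice(A, B) = 2|A ∩ B| / (|A| + |B|); the sketch below (the metric names are invented for illustration, not from the paper) shows it on two sets of software metrics.

```python
def dice_coefficient(a, b):
    """Dice similarity between two sets of items, e.g. software metrics."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical modules sharing two of three metrics each.
sim = dice_coefficient({"loc", "cyclomatic", "fanout"},
                       {"loc", "fanout", "depth"})   # 2*2/(3+3) = 2/3
```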
In basketball, each player's skill level is the key to a team's success or failure, and skill level is affected by many personal and environmental factors, so physics-informed AI statistics has become extremely important. In this article, a complex non-linear process is considered by taking into account each player's average points per game, playing time, shooting percentage, and other factors. The physics-informed approach constructs a multiple linear regression model with physics-informed neural networks. Based on official data provided by the American Basketball League, combined with specific methods of analysis in R, the regression model for a player's average points per game is verified, and the key factors affecting a player's average points per game are finally elucidated. The paper provides a novel window for coaches to make meaningful in-game adjustments to team members.
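A multiple linear regression of the kind described can be sketched with ordinary least squares (the paper works in R on league data; this is a numpy illustration on synthetic player statistics, so every number below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic player stats: minutes played, field-goal %, attempts per game.
X = rng.uniform([10, 0.3, 5], [40, 0.6, 25], size=(200, 3))
true_beta = np.array([0.4, 10.0, 0.5])          # assumed "true" effects
points = X @ true_beta + 1.0 + rng.normal(0, 0.5, 200)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
beta_hat, *_ = np.linalg.lstsq(A, points, rcond=None)
```

The fitted coefficients `beta_hat[1:]` recover the assumed effects of minutes, shooting percentage, and attempts, which is the same inference the paper performs on real data.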
Electrical impedance tomography (EIT) aims to reconstruct the conductivity distribution from voltage potentials measured on the boundary. Traditional regularization-based methods suffer from error propagation due to the iteration process. The statistical inverse problem method instead uses statistical inference to estimate the unknown parameters. In this article, we develop a nonlinear weighted anisotropic total variation (NWATV) prior density function based on the recently proposed NWATV regularization method. We calculate the corresponding posterior density function, i.e., the solution of the EIT inverse problem in the statistical sense, via a modified Markov chain Monte Carlo (MCMC) sampling. Numerical experiments are performed to validate the proposed approach.
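The MCMC step can be illustrated in one dimension with a generic random-walk Metropolis sampler (this is not the paper's modified sampler or the NWATV prior, just the underlying mechanism of sampling a posterior known only up to a constant):

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampling of an unnormalized log-posterior."""
    random.seed(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)          # propose a move
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x = prop                                # accept; else keep x
        samples.append(x)
    return samples

# Target: standard normal posterior, log p(x) = -x^2/2 up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)[2000:]  # drop burn-in
mean = sum(draws) / len(draws)
```

The empirical mean and variance of `draws` approach those of the target, which is the same verification one does (per pixel) for the EIT posterior.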
Statistical literacy is crucial for cultivating well-rounded thinkers. The integration of evidence-based strategies in teaching and learning is pivotal for enhancing students' statistical literacy. This research specifically focuses on the use of Share and Model Concepts and Nurturing Metacognition as evidence-based strategies aimed at improving the statistical literacy of learners. The study employed a quasi-experimental design, specifically the nonequivalent control group design, wherein students answered pre-test and post-test instruments and researcher-made questionnaires. The study included 50 first-year Bachelor in Secondary Education majors in Mathematics and Science for the academic year 2023-2024. The results revealed a significant difference in the scores of student respondents, indicating that the use of evidence-based strategies helped students enhance their statistical literacy. This signifies a noteworthy increase in their performance, ranging from very low to very high proficiency, in understanding statistical concepts, insights into the application of statistical concepts, numeracy, graph skills, interpretation capabilities, and visualization and communication skills. Furthermore, the study showed a significant difference in the post-test scores of the two groups in understanding statistical concepts and in visualization and communication skills. However, no significant difference was found in the post-test scores of the two groups concerning insights into the application of statistical concepts, numeracy and graph skills, and interpretation capabilities. Additionally, students acknowledged that the implementation of evidence-based strategies significantly contributed to the improvement of their statistical literacy.
Background: Survival from birth to slaughter is an important economic trait in commercial pig production. Increasing survival can improve both economic efficiency and animal welfare. The aim of this study is to explore the impact of genotyping strategies and statistical models on the accuracy of genomic prediction for survival in pigs during the total growing period from birth to slaughter. Results: We simulated pig populations with different direct and maternal heritabilities and used a linear mixed model, a logit model, and a probit model to predict genomic breeding values of pig survival based on individual survival records with binary outcomes (0, 1). The results show that when only alive animals have genotype data, unbiased genomic predictions can be achieved using variances estimated from a pedigree-based model. Models using genomic information achieved up to 59.2% higher accuracy of estimated breeding values compared to the pedigree-based model, depending on the genotyping scenario. The scenario of genotyping all individuals, both dead and alive, obtained the highest accuracy. When an equal number of individuals (80%) were genotyped, genotyping a random sample of individuals achieved higher accuracy than genotyping only alive individuals. The linear, logit, and probit models achieved similar accuracy. Conclusions: Our conclusion is that genomic prediction of pig survival is feasible when only alive pigs have genotypes, but genomic information from dead individuals can increase the accuracy of genomic prediction by 2.06% to 6.04%.
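The logit model for binary survival records can be sketched on synthetic data (the single covariate, sample size, and plain gradient-ascent fitting below are illustrative assumptions, not the paper's genomic model):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic per-animal score and binary survival outcome (1 = alive).
x = rng.normal(size=1000)
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 1.2 * x)))   # logit link
y = rng.binomial(1, p_true)

# Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood.
b0, b1 = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += 0.01 * np.mean(y - p)
    b1 += 0.01 * np.mean((y - p) * x)
```

With enough records, `b0` and `b1` recover the assumed link parameters; a probit fit differs only in replacing the logistic function with the normal CDF.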
Alfvén ion cyclotron waves (ACWs) and kinetic Alfvén waves (KAWs) are found to exist at <0.3 au, as observed by Parker Solar Probe in Alfvénic slow solar winds. To examine the statistical properties of the background parameters and related wave disturbances for ACWs and KAWs, both types of wave events observed by Parker Solar Probe are selected and analyzed. The results show that there are obvious differences in the background and disturbance parameters between ACWs and KAWs. ACW events have a relatively higher occurrence rate but a total duration slightly shorter than that of KAW events. The median background magnetic field magnitude and the related background solar wind speed of KAW events are larger than those of ACWs. The distributions of the relative disturbances of the proton velocity, proton temperature, proton number density, and β cover wider ranges for ACW events than for KAW events. These results may be important for understanding the nature and characteristics of Alfvénic slow solar wind fluctuations at ion scales near the Sun, and they provide information on the background field and plasma parameters and the wave disturbances of ACWs and KAWs for further theoretical modeling or numerical simulations.
Choosing appropriate statistical tests is crucial, but deciding which test to use can be challenging. Different tests suit different types of data and research questions, so it is important to choose the right one: knowing how to select an appropriate test leads to more accurate results, while invalid results and misleading conclusions may be drawn from a study if an incorrect statistical test is used. Because a wide variety of tests is available, it is essential to understand the nature of the data, the research question, and the assumptions of the tests before selecting one. This paper provides a step-by-step approach to selecting the right statistical test for any study, with an explanation of when each test is appropriate and relevant examples of each. Furthermore, this guide provides a comprehensive overview of the assumptions of each test and what to do if these assumptions are violated.
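The step-by-step selection logic can be caricatured as a small decision function (the rules below cover only a few common cases and are far less complete than the paper's guide):

```python
def choose_test(outcome, groups, paired, normal):
    """Toy decision rules for comparing groups on one outcome variable."""
    if outcome == "categorical":
        return "chi-squared test"
    if groups == 2:
        if normal:  # normality assumption holds: parametric tests
            return "paired t-test" if paired else "independent t-test"
        return "Wilcoxon signed-rank test" if paired else "Mann-Whitney U test"
    # Three or more groups.
    return "one-way ANOVA" if normal else "Kruskal-Wallis test"

test = choose_test("continuous", groups=2, paired=False, normal=False)
# -> "Mann-Whitney U test"
```

The `normal` branch encodes the assumption-checking step: when the parametric test's assumptions are violated, the nonparametric counterpart is chosen instead.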
Convolutional neural networks (CNNs) have been widely studied and found to obtain favorable results in statistical downscaling to derive high-resolution climate variables from large-scale coarse general circulation models (GCMs). However, there is a lack of research exploring predictor selection for CNN modeling. This paper presents an effective and efficient greedy elimination algorithm to address this problem. The algorithm has three main steps: predictor importance attribution, predictor removal, and CNN retraining, which are performed sequentially and iteratively. The importance of individual predictors is measured by a gradient-based importance metric computed with a CNN backpropagation technique that was initially proposed for CNN interpretation. The algorithm is tested on CNN-based statistical downscaling of monthly precipitation with 20 candidate predictors and compared with a correlation analysis-based approach. Linear models are implemented as benchmarks. The experiments illustrate that the predictor selection solution can reduce the number of input predictors by more than half, improve the accuracy of both linear and CNN models, and outperform the correlation analysis method. Although the RMSE (root-mean-square error) is reduced by only 0.8%, only 9 out of 20 predictors are used to build the CNN, and the FLOPs (floating point operations) decrease by 20.4%. The results imply that the algorithm can find subset predictors that correlate more, in a nonlinear way, with the monthly precipitation of the target area and seasons. It is worth mentioning that the algorithm is compatible with other CNN models with stacked variables as input and has potential for nonlinear predictor selection.
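The greedy elimination loop can be sketched with a linear benchmark model, using the RMSE increase on removal as a stand-in importance score for the paper's gradient-based CNN metric (data, stopping threshold, and model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))                 # 6 candidate predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 300)  # only 0 and 3 matter

def fit_rmse(cols):
    """Refit a linear model on a predictor subset and return training RMSE."""
    A = np.column_stack([np.ones(len(X)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ beta - y) ** 2))

keep = list(range(6))
while len(keep) > 1:
    # Importance proxy: RMSE after removing each remaining predictor.
    scores = [fit_rmse([c for c in keep if c != j]) for j in keep]
    best = int(np.argmin(scores))             # cheapest predictor to drop
    if scores[best] > 1.05 * fit_rmse(keep):
        break                                 # any removal hurts: stop
    keep.remove(keep[best])                   # drop it and iterate
```

The loop discards the four irrelevant predictors one by one and stops once only the informative ones remain, mirroring the attribute-remove-retrain cycle.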
The aim of this paper is to present a generalization of the Shapiro-Wilk W-test or Shapiro-Francia W'-test for application to two or more variables. It consists of calculating all the unweighted linear combinations of the variables and their W- or W'-statistics with Royston's log-transformation and standardization, z_ln(1-W) or z_ln(1-W'). Because the probability of z_ln(1-W) or z_ln(1-W') is calculated in the right tail, negative values are truncated to 0 before taking their sum of squares. Independence in the sequence of these half-normally distributed values is required for the test statistic to follow a chi-square distribution; this assumption is checked using the robust Ljung-Box test. One degree of freedom is lost for each cancelled value. Having defined the new test with its two variants (Q-test or Q'-test), 50 random samples with 4 variables and 20 participants were generated, 20% following a multivariate normal distribution and 80% deviating from this distribution. The new test was compared with Mardia's, runs, and Royston's tests. Central tendency differences in type II error and statistical power were tested using Friedman's test, with pairwise comparisons using Wilcoxon's test. Differences in the frequency of successes in statistical decision making were compared using Cochran's Q test, with pairwise comparisons using McNemar's test. Sensitivity, specificity, and efficiency proportions were compared using McNemar's Z test. The 50 generated samples were classified into five ordered categories of deviation from multivariate normality; the correlation between this variable and the p-value of each test was calculated using Spearman's coefficient, and these correlations were compared. Family-wise error rate corrections were applied. The new test and Royston's test were the best choices, with a very slight advantage of the Q-test over the Q'-test. Based on these promising results, further study and use of this new sensitive, specific, and efficient test are suggested.
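The combination step, truncating negative z-values to zero, summing squares, and dropping one degree of freedom per truncation, can be sketched directly (the z-values below are invented inputs; computing them from W is omitted):

```python
def q_statistic(z_values):
    """Sum of squared right-tail z-values; negatives are truncated to 0,
    and one degree of freedom is lost for each truncated value."""
    truncated = [max(z, 0.0) for z in z_values]
    q = sum(z * z for z in truncated)
    df = len(z_values) - sum(1 for z in z_values if z < 0)
    return q, df

# Four hypothetical z_ln(1-W) values, one of them negative.
q, df = q_statistic([1.8, -0.4, 2.1, 0.7])   # q = 8.14, df = 3
```

Under the independence assumption checked by the Ljung-Box test, `q` is then referred to a chi-square distribution with `df` degrees of freedom.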
Phase-matching quantum key distribution is a promising scheme for remote quantum key distribution, breaking through the traditional linear key-rate bound. In practical applications, finite data size can cause system performance to deteriorate significantly when the data size is below 10^10. In this work, an improved statistical fluctuation analysis method is applied for the first time to two-decoy-state phase-matching quantum key distribution, offering new insight and potential solutions for improving the key generation rate and the maximum transmission distance while maintaining security. Moreover, we compare the influence on system performance of the proposed improved statistical fluctuation analysis method with those of the Gaussian approximation and Chernoff-Hoeffding boundary methods. The simulation results show that the proposed scheme significantly improves the key generation rate and maximum transmission distance in comparison with the Chernoff-Hoeffding approach, and approaches the results obtained when the Gaussian approximation is employed. At the same time, the proposed scheme retains the same security level as the Chernoff-Hoeffding method and is even more secure than the Gaussian approximation.
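The role of finite-size statistical fluctuation can be illustrated with the plain Hoeffding deviation bound (a generic bound, not the paper's improved analysis): with n trials and failure probability eps, the observed mean deviates from the true mean by at most sqrt(ln(1/eps) / (2n)).

```python
import math

def hoeffding_delta(n, eps):
    """Hoeffding bound: the observed mean differs from the true mean by
    more than this delta with probability at most eps."""
    return math.sqrt(math.log(1.0 / eps) / (2.0 * n))

# The fluctuation term shrinks as the data block grows.
d_small = hoeffding_delta(10**6, 1e-10)    # ~3.4e-3
d_large = hoeffding_delta(10**10, 1e-10)   # ~3.4e-5
```

This scaling is why performance deteriorates below about 10^10 samples: the deviation terms subtracted from the key-rate estimate stop being negligible.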
In economics, buyers and sellers are usually the main sides in a market. Game theory can model the decisions behind each "player" and calculate an outcome that benefits both sides. However, the use of game theory is not limited to economics. In this paper, I introduce the mathematical model of the general-sum game, solutions and theorems surrounding game theory, and its real-life applications in many different scenarios.
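A basic solution concept for a general-sum game, the pure-strategy Nash equilibrium, can be found by brute force over a bimatrix (the prisoner's dilemma payoffs below are a standard textbook example, not taken from the paper):

```python
def pure_nash(payoff_a, payoff_b):
    """All cells where neither player gains by deviating unilaterally."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_col = all(payoff_b[i][j] >= payoff_b[i][k] for k in range(cols))
            if best_row and best_col:
                eq.append((i, j))
    return eq

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
eqs = pure_nash(A, B)  # -> [(1, 1)]: mutual defection
```

The equilibrium (defect, defect) is worse for both players than (cooperate, cooperate), which is exactly the tension general-sum analysis is designed to expose.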
A method to remove stripes from remote sensing images is proposed based on statistics and a new image enhancement method. The overall processing steps for improving the quality of remote sensing images are introduced to provide a general baseline. Due to differences between satellite sensors when producing images, subtle but inherent stripes can appear at the stitching positions between the sensors. These stitching stripes cannot be eliminated by conventional relative radiometric calibration, and they cause difficulties in downstream tasks such as segmentation, classification, and interpretation of remote sensing images. Therefore, a method to remove the stripes based on statistics and a new image enhancement approach is proposed in this paper. First, the inconsistency in grayscale around the stripes is eliminated with the statistical method. Second, the pixels within the stripes are weighted and averaged based on updated pixel values to enhance the uniformity of the overall image radiation quality. Finally, the details of the images are highlighted by the new image enhancement method, which makes the whole image clearer. Comprehensive experiments are performed, and the results indicate that the proposed method outperforms the baseline approach in terms of visual quality and radiation correction accuracy.
In this paper, we consider a reconfigurable intelligent surface (RIS)-assisted multiple-input multiple-output (MIMO) secure communication system, where only the legitimate user's (Bob's) statistical channel state information (CSI) can be obtained at the transmitter (Alice), while the eavesdropper's (Eve's) CSI is unknown. First, the analytical expression for the achievable ergodic rate at Bob is obtained. Then, by exploiting Bob's statistical CSI, we jointly design the transmit covariance matrix at Alice and the phase shift matrix at the RIS to minimize the transmit power of the information signal under Bob's quality-of-service (QoS) constraint. Finally, we propose an artificial noise (AN)-aided method that does not require Eve's CSI to enhance the security of the system and use the residual power to design the transmit covariance for the AN. Simulation results verify the convergence of the proposed method and show that there is a trade-off between the secrecy rate and Bob's QoS.
Quantum computers promise to solve finite-temperature properties of quantum many-body systems, which is generally challenging for classical computers due to high computational complexity. Here, we report experimental preparations of Gibbs states and excited states of Heisenberg XX and XXZ models using a 5-qubit programmable superconducting processor. In the experiments, we apply a hybrid quantum-classical algorithm to generate finite-temperature states with classical probability models and variational quantum circuits. We reveal that the Hamiltonians can be fully diagonalized with optimized quantum circuits, which enables us to prepare excited states at arbitrary energy density. We demonstrate that the approach has a self-verifying feature and can estimate fundamental thermal observables with a small statistical error. Based on numerical results, we further show that the time complexity of our approach scales polynomially in the number of qubits, revealing its potential for solving large-scale problems.
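For small systems, the Gibbs states targeted by such experiments can be computed exactly on a classical machine, which is how hardware results are typically benchmarked; the sketch below diagonalizes a two-site XX Hamiltonian (the system size and temperature are my choices, not the paper's):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
H = np.kron(X, X) + np.kron(Y, Y)        # two-site Heisenberg XX model

beta = 1.0                               # inverse temperature
evals, evecs = np.linalg.eigh(H)
weights = np.exp(-beta * evals)
rho = (evecs * (weights / weights.sum())) @ evecs.conj().T   # Gibbs state
energy = np.trace(rho @ H).real          # thermal observable <H>
```

Estimates of `energy` from the quantum device can be checked against this exact value, which is the self-verification idea restricted to classically tractable sizes.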
In this paper, we introduce a new four-parameter version of the traditional Weibull distribution. It is able to provide seven shapes of hazard rate: constant, decreasing, increasing, unimodal, bathtub, unimodal then bathtub, and bathtub then unimodal. Some basic characteristics of the proposed model are studied, including moments, entropies, mean deviations, and order statistics, and its parameters are estimated using the maximum likelihood approach. Based on the asymptotic properties of the estimators, approximate confidence intervals are also considered in addition to the point estimators. We examine the effectiveness of the maximum likelihood estimators of the model's parameters through simulation research. Based on the simulation findings, it can be concluded that the provided estimators are consistent and that asymptotic normality is a good method for obtaining interval estimates. Three real data sets, for COVID-19, engineering, and blood cancer, are used to empirically demonstrate the new distribution's usefulness in modeling real-world data. The analysis demonstrates the proposed distribution's ability to model many forms of data, as opposed to some of its well-known sub-models such as the alpha power Weibull distribution.
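For orientation, the baseline two-parameter Weibull hazard h(t) = (k/λ)(t/λ)^(k-1) already produces three of the seven shapes, decreasing (k < 1), constant (k = 1), and increasing (k > 1); the four-parameter extension adds the remaining ones. A minimal sketch:

```python
def weibull_hazard(t, shape, scale):
    """Hazard h(t) = (k/lam) * (t/lam)**(k-1) of a Weibull(k, lam)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

ts = (1.0, 2.0, 3.0)
h_dec = [weibull_hazard(t, 0.5, 1.0) for t in ts]  # k < 1: decreasing
h_con = [weibull_hazard(t, 1.0, 1.0) for t in ts]  # k = 1: constant
h_inc = [weibull_hazard(t, 2.0, 1.0) for t in ts]  # k > 1: increasing
```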
Predicting potential risks associated with the fatigue of key structural components is crucial in engineering design. However, fatigue often involves entangled complexities of material microstructures and service conditions, making diagnosis and prognosis of fatigue damage challenging. We report a statistical learning framework to predict the growth of fatigue cracks and the life-to-failure of components under loading conditions with uncertainties. Digital libraries of fatigue crack patterns and remaining life are constructed by high-fidelity physical simulations. Dimensionality reduction and neural network architectures are then used to learn the history dependence and nonlinearity of fatigue crack growth. Path-slicing and re-weighting techniques are introduced to handle statistical noise and rare events. The predicted fatigue crack patterns are self-updated and self-corrected by the evolving crack patterns. The end-to-end approach is validated by representative examples with fatigue cracks in plates, which showcase the digital-twin scenario in real-time structural health monitoring and fatigue life prediction for maintenance management decision-making.
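High-fidelity crack-growth simulations of this kind are commonly built on Paris-law kinetics, da/dN = C (ΔK)^m with ΔK = Δσ √(πa); a minimal cycle-block integration (all material constants and geometry below are illustrative assumptions, not the paper's models) looks like:

```python
import math

def cycles_to_critical(a0, a_crit, d_sigma, C=1e-12, m=3.0, dN=1000):
    """Integrate Paris-law growth da/dN = C*(dK)^m, in blocks of dN cycles,
    until the crack length a reaches the critical size a_crit."""
    a, n = a0, 0
    while a < a_crit:
        dK = d_sigma * math.sqrt(math.pi * a)   # stress-intensity range
        a += C * dK ** m * dN                   # crack extension this block
        n += dN
    return n

# Life of a 1 mm crack growing to 10 mm under a 100-unit stress range.
life = cycles_to_critical(a0=1e-3, a_crit=1e-2, d_sigma=100.0)
```

Running such integrations over many sampled loading histories is what populates a digital library of crack paths and remaining lives for the learning stage.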
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R194), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
文摘In this present time,Human Activity Recognition(HAR)has been of considerable aid in the case of health monitoring and recovery.The exploitation of machine learning with an intelligent agent in the area of health informatics gathered using HAR augments the decision-making quality and significance.Although many research works conducted on Smart Healthcare Monitoring,there remain a certain number of pitfalls such as time,overhead,and falsification involved during analysis.Therefore,this paper proposes a Statistical Partial Regression and Support Vector Intelligent Agent Learning(SPR-SVIAL)for Smart Healthcare Monitoring.At first,the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the dimensionality-reduced features extraction process.Here,the input dataset the continuous beat-to-beat heart data,triaxial accelerometer data,and psychological characteristics were acquired from IoT wearable devices.To attain highly accurate Smart Healthcare Monitoring with less time,Partial Least Square helps extract the dimensionality-reduced features.After that,with these resulting features,SVIAL is proposed for Smart Healthcare Monitoring with the help of Machine Learning and Intelligent Agents to minimize both analysis falsification and overhead.Experimental evaluation is carried out for factors such as time,overhead,and false positive rate accuracy concerning several instances.The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics by measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are then predicted using Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve the associated nonlinear least-squares problem. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity by 3%, 3%, 2%, and 3%, and minimum time and space by 13% and 15%, when compared with the two state-of-the-art methods.
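The Nelder–Mead step can be sketched in isolation: a derivative-free minimization of a toy sum-of-squares error (SciPy; the model and synthetic data below are illustrative, not the paper's):

```python
# Hedged sketch: Nelder-Mead applied to a nonlinear least-squares problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * x) + rng.normal(0, 0.01, 50)  # noisy synthetic observations

def sse(params):
    a, b = params
    return np.sum((y - a * np.exp(-b * x)) ** 2)      # sum of squared residuals

res = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x                                  # near the true (2.0, 1.5)
```

Nelder–Mead needs no gradients, which is why it is a common choice when the error surface of a learned model is not easily differentiable.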
Abstract: In basketball, each player's skill level is key to a team's success or failure, and that skill level is affected by many personal and environmental factors. Physics-informed AI statistics has therefore become extremely important. In this article, a complex nonlinear process is considered by taking into account each player's average points per game, playing time, shooting percentage, and other factors. The physics-informed statistical approach constructs a multiple linear regression model with physics-informed neural networks. Based on official data provided by the American Basketball League, combined with specific methods of analysis in the R program, the regression model for a player's average points per game is verified, and the key factors affecting average points per game are finally elucidated. The paper provides a novel window for coaches to make meaningful in-game adjustments to team members.
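A minimal multiple-regression sketch of this kind of model (the paper works in R on league data; here plain NumPy least squares on synthetic, hypothetical coefficients):

```python
# Hedged sketch: OLS regression of points per game on playing time and shooting %.
import numpy as np

rng = np.random.default_rng(2)
minutes = rng.uniform(10, 38, 100)        # playing time per game (synthetic)
fg_pct = rng.uniform(0.35, 0.60, 100)     # shooting percentage (synthetic)
points = 0.6 * minutes + 20 * fg_pct + rng.normal(0, 1.5, 100)

X = np.column_stack([np.ones(100), minutes, fg_pct])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, points, rcond=None)     # OLS coefficient estimates
print(beta)                                           # roughly [0, 0.6, 20]
```

The fitted coefficients indicate how strongly each factor drives scoring, which is the kind of information the abstract says coaches can act on.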
Abstract: Electrical impedance tomography (EIT) aims to reconstruct the conductivity distribution from the voltage potential measured at the boundary. Traditional regularization-based methods suffer from error propagation due to the iterative process. The statistical inverse problem method instead uses statistical inference to estimate the unknown parameters. In this article, we develop a nonlinear weighted anisotropic total variation (NWATV) prior density function based on the recently proposed NWATV regularization method. We calculate the corresponding posterior density function, i.e., the solution of the EIT inverse problem in the statistical sense, via a modified Markov chain Monte Carlo (MCMC) sampling. Numerical experiments validate the proposed approach.
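MCMC posterior sampling in general can be illustrated with a minimal random-walk Metropolis sketch (a toy 1-D Gaussian posterior, not the paper's modified sampler or NWATV prior):

```python
# Hedged sketch: random-walk Metropolis sampling of a toy posterior (mean 3).
import numpy as np

def log_post(theta):
    return -0.5 * (theta - 3.0) ** 2      # toy log-posterior, Gaussian around 3

rng = np.random.default_rng(3)
theta, samples = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(0, 1.0)     # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                      # accept; otherwise keep current state
    samples.append(theta)

est = np.mean(samples[5000:])             # posterior-mean estimate after burn-in
```

The estimate `est` converges to the posterior mean; in the EIT setting the state is the discretized conductivity field rather than a scalar.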
Abstract: Statistical literacy is crucial for cultivating well-rounded thinkers. The integration of evidence-based strategies in teaching and learning is pivotal for enhancing students' statistical literacy. This research specifically focuses on the utilization of Share and Model Concepts and Nurturing Metacognition as evidence-based strategies aimed at improving the statistical literacy of learners. The study employed a quasi-experimental design, specifically the nonequivalent control group, wherein students answered pre-test and post-test instruments and researcher-made questionnaires. The study included 50 first-year Bachelor in Secondary Education majors in Mathematics and Science for the academic year 2023-2024. The results of the study revealed a significant difference in the scores of student respondents, indicating that the use of evidence-based strategies helped students enhance their statistical literacy. This signifies a noteworthy increase in their performance, ranging from very low to very high proficiency in understanding statistical concepts, insights into the application of statistical concepts, numeracy, graph skills, interpretation capabilities, and visualization and communication skills. Furthermore, the study showed a significant difference in the post-test performance of the two groups in understanding statistical concepts and in visualization and communication skills. However, no significant difference was found in the post-test scores of the two groups concerning insights into the application of statistical concepts, numeracy and graph skills, and interpretation capabilities. Additionally, students acknowledged that the implementation of evidence-based strategies significantly contributed to the improvement of their statistical literacy.
Funding: funded by the "Genetic improvement of pig survival" project from the Danish Pig Levy Foundation (Aarhus, Denmark), and the China Scholarship Council (CSC) for providing a scholarship to the first author.
Abstract: Background: Survival from birth to slaughter is an important economic trait in commercial pig production. Increasing survival can improve both economic efficiency and animal welfare. The aim of this study is to explore the impact of genotyping strategies and statistical models on the accuracy of genomic prediction for survival in pigs during the total growing period from birth to slaughter. Results: We simulated pig populations with different direct and maternal heritabilities and used a linear mixed model, a logit model, and a probit model to predict genomic breeding values of pig survival based on individual survival records with binary outcomes (0, 1). The results show that, in the case of only alive animals having genotype data, unbiased genomic predictions can be achieved when using variances estimated from a pedigree-based model. Models using genomic information achieved up to 59.2% higher accuracy of estimated breeding values compared to the pedigree-based model, depending on the genotyping scenario. The scenario of genotyping all individuals, both dead and alive, obtained the highest accuracy. When an equal number of individuals (80%) were genotyped, genotyping a random sample of individuals achieved higher accuracy than genotyping only alive individuals. The linear model, logit model, and probit model achieved similar accuracy. Conclusions: Genomic prediction of pig survival is feasible when only alive pigs have genotypes, but genomic information from dead individuals can increase the accuracy of genomic prediction by 2.06% to 6.04%.
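The idea of predicting a binary survival trait from genotypes can be sketched with a toy logistic (logit-link) model (scikit-learn on simulated SNPs; marker counts and effect sizes are hypothetical, not the study's simulation design):

```python
# Hedged sketch: logit-model prediction of a binary survival outcome from SNPs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
genotypes = rng.integers(0, 3, size=(500, 100)).astype(float)  # SNPs coded 0/1/2
effects = rng.normal(0, 0.2, 100)                              # true marker effects
liability = genotypes @ effects
survived = (liability + rng.normal(0, 1, 500) > np.median(liability)).astype(int)

model = LogisticRegression(max_iter=1000).fit(genotypes[:400], survived[:400])
acc = model.score(genotypes[400:], survived[400:])             # validation accuracy
```

A probit model differs only in the link function; the abstract reports that linear, logit, and probit variants gave similar accuracy.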
Funding: the National Natural Science Foundation of China (NSFC, grant Nos. 41874201, 12250014, 11790302, 42174195, and 11873018) and the Specialized Research Fund for State Key Laboratories.
Abstract: Alfvén ion cyclotron waves (ACWs) and kinetic Alfvén waves (KAWs) are found to exist at <0.3 au, observed by Parker Solar Probe in Alfvénic slow solar winds. To examine the statistical properties of the background parameters and related wave disturbances for ACWs and KAWs, wave events of both types observed by Parker Solar Probe are selected and analyzed. The results show obvious differences in the background and disturbance parameters between ACWs and KAWs. ACW events have a relatively higher occurrence rate but a total duration slightly shorter than that of KAW events. The median background magnetic field magnitude and the related background solar wind speed of KAW events are larger than those of ACWs. The distributions of the relative disturbances of the proton velocity, proton temperature, proton number density, and β cover wider ranges for ACW events than for KAW events. These results may be important for understanding the nature and characteristics of Alfvénic slow solar wind fluctuations at ion scales near the Sun, and they provide the background field and plasma parameters and the wave disturbances of ACWs and KAWs for further theoretical modeling or numerical simulations.
Abstract: Choosing appropriate statistical tests is crucial, but deciding which test to use can be challenging because a wide variety of tests are available. Different tests suit different types of data and research questions, so it is important to choose the right one: knowing how to select an appropriate test leads to more accurate results, whereas invalid results and misleading conclusions may be drawn from a study if an incorrect statistical test is used. To avoid this, it is essential to understand the nature of the data, the research question, and the assumptions of the tests before selecting one. This paper provides a step-by-step approach to selecting the right statistical test for any study, with an explanation of when each test is appropriate and relevant examples of each statistical test. Furthermore, this guide provides a comprehensive overview of the assumptions of each test and what to do if these assumptions are violated.
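One common branch of such a decision procedure, checking an assumption before picking a test, can be sketched as follows (SciPy; the alpha threshold and the normal-vs-nonparametric rule are a simplified illustration, not the paper's full flowchart):

```python
# Hedged sketch: pick Welch's t-test if both groups pass a normality check,
# otherwise fall back to the nonparametric Mann-Whitney U test.
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "welch-t", stats.ttest_ind(a, b, equal_var=False).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(5)
method, p = compare_groups(rng.normal(0, 1, 30), rng.normal(0.8, 1, 30))
print(method, p)
```

Violated assumptions thus change which test runs, which is exactly the failure mode the guide warns about when a test is chosen blindly.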
基金supported by the following grants: National Basic R&D Program of China (2018YFA0606203)Strategic Priority Research Program of Chinese Academy of Sciences (XDA23090102 and XDA20060501)+2 种基金Guangdong Major Project of Basic and Applied Basic Research (2020B0301030004)Special Fund of China Meteorological Administration for Innovation and Development (CXFZ2021J026)Special Fund for Forecasters of China Meteorological Administration (CMAYBY2020094)。
Abstract: Convolutional neural networks (CNNs) have been widely studied and found to obtain favorable results in statistical downscaling to derive high-resolution climate variables from large-scale coarse general circulation models (GCMs). However, there is a lack of research exploring predictor selection for CNN modeling. This paper presents an effective and efficient greedy elimination algorithm to address this problem. The algorithm has three main steps: predictor importance attribution, predictor removal, and CNN retraining, which are performed sequentially and iteratively. The importance of individual predictors is measured by a gradient-based importance metric computed with a CNN backpropagation technique that was initially proposed for CNN interpretation. The algorithm is tested on the CNN-based statistical downscaling of monthly precipitation with 20 candidate predictors and compared with a correlation analysis-based approach. Linear models are implemented as benchmarks. The experiments illustrate that the predictor selection solution can reduce the number of input predictors by more than half, improve the accuracy of both linear and CNN models, and outperform the correlation analysis method. Although the RMSE (root-mean-square error) is reduced by only 0.8%, only 9 out of 20 predictors are used to build the CNN, and the FLOPs (floating point operations) decrease by 20.4%. The results imply that the algorithm can find subset predictors that correlate more, in a nonlinear way, with the monthly precipitation of the target area and seasons. It is worth mentioning that the algorithm is compatible with other CNN models with stacked variables as input and has the potential for nonlinear-correlation predictor selection.
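The attribute-remove-retrain loop can be sketched with a much simpler stand-in: a linear model whose coefficient magnitudes play the role of the paper's gradient-based CNN importance metric (synthetic data; only predictors 0 and 3 are informative by construction):

```python
# Hedged sketch: greedy backward elimination of the least important predictor,
# retraining after each removal (coefficient magnitude as a stand-in importance).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 8))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 300)  # 2 useful predictors

keep = list(range(8))
while len(keep) > 2:
    model = LinearRegression().fit(X[:, keep], y)          # retrain on survivors
    worst = keep[int(np.argmin(np.abs(model.coef_)))]      # least important one
    keep.remove(worst)                                     # eliminate and repeat

print(sorted(keep))                                        # the informative pair
```

In the paper the importance scores come from CNN backpropagation rather than coefficients, but the elimination loop has the same attribute-remove-retrain structure.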
Abstract: The aim of this paper is to present a generalization of the Shapiro-Wilk W-test or Shapiro-Francia W'-test for application to two or more variables. It consists of calculating all the unweighted linear combinations of the variables and their W- or W'-statistics with Royston's log-transformation and standardization, z_ln(1-W) or z_ln(1-W'). Because the probability of z_ln(1-W) or z_ln(1-W') is calculated on the right tail, negative values are truncated to 0 before taking their sum of squares. Independence in the sequence of these half-normally distributed values is required for the test statistic to follow a chi-square distribution. This assumption is checked using the robust Ljung-Box test. One degree of freedom is lost for each cancelled value. Having defined the new test with its two variants (Q-test or Q'-test), 50 random samples with 4 variables and 20 participants were generated, 20% following a multivariate normal distribution and 80% deviating from this distribution. The new test was compared with Mardia's, runs, and Royston's tests. Central tendency differences in type II error and statistical power were tested using Friedman's test, with pairwise comparisons using Wilcoxon's test. Differences in the frequency of successes in statistical decision making were compared using Cochran's Q test, with pairwise comparisons using McNemar's test. Sensitivity, specificity, and efficiency proportions were compared using McNemar's Z test. The 50 generated samples were classified into five ordered categories of deviation from multivariate normality; the correlation between this variable and the p-value of each test was calculated using Spearman's coefficient, and these correlations were compared. Family-wise error rate corrections were applied. The new test and Royston's test were the best choices, with a very slight advantage of the Q-test over the Q'-test. Based on these promising results, further study and use of this new sensitive, specific, and effective test are suggested.
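The core construction, testing normality of all unweighted (sign ±1) linear combinations, can be sketched as follows (SciPy's Shapiro-Wilk; Royston's log-transformation and the chi-square aggregation are omitted for brevity):

```python
# Hedged sketch: Shapiro-Wilk W applied to every unweighted (+/-1) linear
# combination of 4 variables across 20 participants.
import numpy as np
from itertools import product
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(size=(20, 4))                   # 20 participants, 4 variables

pvals = []
for signs in product([1, -1], repeat=4):          # all 16 sign patterns
    combo = data @ np.array(signs, dtype=float)   # unweighted linear combination
    pvals.append(stats.shapiro(combo).pvalue)

print(min(pvals))                                 # smallest p over combinations
```

The paper then transforms each W to z_ln(1-W), truncates negatives to 0, and sums squares to form the Q (or Q') chi-square statistic.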
Abstract: Phase-matching quantum key distribution is a promising scheme for remote quantum key distribution, breaking through the traditional linear key-rate bound. In practical applications, finite data size can cause system performance to deteriorate significantly when the data size is below 10^10. In this work, an improved statistical fluctuation analysis method is applied for the first time to two-decoy-state phase-matching quantum key distribution, offering new insight and potential solutions for improving the key generation rate and the maximum transmission distance while maintaining security. Moreover, we compare the influence on system performance of the proposed improved statistical fluctuation analysis method with those of the Gaussian approximation and Chernoff-Hoeffding bound methods. The simulation results show that the proposed scheme significantly improves the key generation rate and maximum transmission distance in comparison with the Chernoff-Hoeffding approach, and approaches the results obtained when the Gaussian approximation is employed. At the same time, the proposed scheme retains the same security level as the Chernoff-Hoeffding method, and is even more secure than the Gaussian approximation.
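The finite-size fluctuation analysis the abstract refers to amounts to bounding a true rate from an observed count. A Hoeffding-style interval (standard-library math only; the counts and failure probability are illustrative, not the paper's parameters) looks like this:

```python
# Hedged sketch: Hoeffding-bound confidence interval for an observed frequency,
# the kind of finite-data bound used in QKD statistical fluctuation analysis.
import math

def hoeffding_interval(k, n, eps=1e-10):
    # Observed frequency k/n; by Hoeffding's inequality the true mean lies in
    # [p - delta, p + delta] except with probability at most 2*eps.
    p = k / n
    delta = math.sqrt(math.log(1 / eps) / (2 * n))
    return max(0.0, p - delta), min(1.0, p + delta)

lo, hi = hoeffding_interval(4_000, 1_000_000)
print(lo, hi)
```

Tighter bounds (Chernoff-type or the paper's improved analysis) shrink this interval for the same security parameter, which is what raises the key rate at small data sizes.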
Abstract: In economics, buyers and sellers are usually the main sides in a market. Game theory can perfectly model the decisions behind each "player" and calculate an outcome that benefits both sides. However, the use of game theory is not limited to economics. In this paper, I introduce the mathematical model of the general-sum game, solutions and theorems surrounding game theory, and its real-life applications in many different scenarios.
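A minimal concrete instance of a general-sum game and its solution: enumerating pure-strategy Nash equilibria of a 2x2 bimatrix game (the payoffs below are the classic prisoner's dilemma, used purely as an illustration):

```python
# Hedged sketch: find pure-strategy Nash equilibria of a 2x2 general-sum game.
# Each cell holds (row player's payoff, column player's payoff).
A = [[(-1, -1), (-3, 0)],
     [(0, -3), (-2, -2)]]

equilibria = []
for i in range(2):
    for j in range(2):
        row_best = all(A[i][j][0] >= A[k][j][0] for k in range(2))  # no row deviation
        col_best = all(A[i][j][1] >= A[i][k][1] for k in range(2))  # no column deviation
        if row_best and col_best:
            equilibria.append((i, j))

print(equilibria)  # [(1, 1)]: mutual defection is the unique pure equilibrium
```

A cell is an equilibrium exactly when neither player can gain by deviating unilaterally, which is the check performed for each of the four cells.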
Abstract: A method to remove stripes from remote sensing images is proposed based on statistics and a new image enhancement method. The overall processing steps for improving the quality of remote sensing images are introduced to provide a general baseline. Due to differences between satellite sensors when producing images, subtle but inherent stripes can appear at the stitching positions between the sensors. These stitching stripes cannot be eliminated by conventional relative radiometric calibration, and they cause difficulties in downstream tasks such as the segmentation, classification, and interpretation of remote sensing images. Therefore, a method to remove the stripes based on statistics and a new image enhancement approach is proposed in this paper. First, the inconsistency in grayscale around stripes is eliminated with the statistical method. Second, the pixels within stripes are weighted and averaged based on updated pixel values to enhance the uniformity of the overall image radiation quality. Finally, the details of the images are highlighted by a new image enhancement method, which makes the whole image clearer. Comprehensive experiments are performed, and the results indicate that the proposed method outperforms the baseline approach in terms of visual quality and radiation correction accuracy.
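A toy version of statistical destriping: matching each column's mean and standard deviation to the image's global statistics (simple moment matching on a synthetic stripe; a stand-in for, not a reproduction of, the paper's method):

```python
# Hedged sketch: remove a vertical stripe by column-wise moment matching.
import numpy as np

rng = np.random.default_rng(8)
img = rng.normal(100, 10, size=(64, 64))
img[:, 32] += 25                         # simulate a bright stitching stripe

g_mean, g_std = img.mean(), img.std()    # global radiometric statistics
corrected = (img - img.mean(axis=0)) / img.std(axis=0) * g_std + g_mean

stripe_dev = abs(corrected[:, 32].mean() - g_mean)
print(stripe_dev)                        # ~0: the stripe's mean offset is gone
```

Real stitching stripes are subtler than this synthetic one, which is why the paper follows the statistical step with weighted averaging and enhancement stages.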
基金supported in part by the National Key Research and Development Program of China under Grant 2020YFB1804900in part by the National Natural Science Foundation of China under Grant 92067201,U1805262,62071247,62071249,62171240+2 种基金in part by the Jiangsu Provincial Key Research and Development Program of China under Grant BE2020084-5in part by Special Funds of the Central Government Guiding Local Science and Technology Development under Grant 2021L3010in part by Key provincial scientific and technological innovation projects under Grant 2021G02006.
Abstract: In this paper, we consider a reconfigurable intelligent surface (RIS)-assisted multiple-input multiple-output (MIMO) secure communication system, where only the legitimate user's (Bob's) statistical channel state information (CSI) can be obtained at the transmitter (Alice), while the eavesdropper's (Eve's) CSI is unknown. Firstly, the analytical expression of the achievable ergodic rate at Bob is obtained. Then, by exploiting Bob's statistical CSI, we jointly design the transmit covariance matrix at Alice and the phase shift matrix at the RIS to minimize the transmit power of the information signal under the quality-of-service (QoS) constraint of Bob. Finally, we propose an artificial noise (AN)-aided method without Eve's CSI to enhance the security of this system and use the residual power to design the transmit covariance for the AN. Simulation results verify the convergence of the proposed method and also show that there exists a trade-off between the secrecy rate and the QoS of Bob.
基金Project supported by the State Key Development Program for Basic Research of China(Grant No.2017YFA0304300)the National Natural Science Foundation of China(Grant Nos.11934018,11747601,and 11975294)+4 种基金Strategic Priority Research Program of Chinese Academy of Sciences(Grant No.XDB28000000)Scientific Instrument Developing Project of Chinese Academy of Sciences(Grant No.YJKYYQ20200041)Beijing Natural Science Foundation(Grant No.Z200009)the Key-Area Research and Development Program of Guangdong Province,China(Grant No.2020B0303030001)Chinese Academy of Sciences(Grant No.QYZDB-SSW-SYS032)。
Abstract: Quantum computers promise to solve finite-temperature properties of quantum many-body systems, which is generally challenging for classical computers due to high computational complexity. Here, we report experimental preparations of Gibbs states and excited states of Heisenberg XX and XXZ models by using a 5-qubit programmable superconducting processor. In the experiments, we apply a hybrid quantum-classical algorithm to generate finite-temperature states with classical probability models and variational quantum circuits. We reveal that the Hamiltonians can be fully diagonalized with optimized quantum circuits, which enables us to prepare excited states at arbitrary energy density. We demonstrate that the approach has a self-verifying feature and can estimate fundamental thermal observables with a small statistical error. Based on numerical results, we further show that the time complexity of our approach scales polynomially in the number of qubits, revealing its potential for solving large-scale problems.
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project under Grant No. G-102-130-1443.
Abstract: In this paper, we introduce a new four-parameter version of the traditional Weibull distribution. It is able to provide seven shapes of hazard rate, including constant, decreasing, increasing, unimodal, bathtub, unimodal-then-bathtub, and bathtub-then-unimodal shapes. Some basic characteristics of the proposed model are studied, including moments, entropies, mean deviations, and order statistics, and its parameters are estimated using the maximum likelihood approach. Based on the asymptotic properties of the estimators, approximate confidence intervals are also considered in addition to the point estimators. We examine the effectiveness of the maximum likelihood estimators of the model's parameters through simulation research. Based on the simulation findings, it can be concluded that the provided estimators are consistent and that asymptotic normality is a good method to obtain the interval estimates. Three real data sets for COVID-19, engineering, and blood cancer are used to empirically demonstrate the new distribution's usefulness in modeling real-world data. The analysis demonstrates the proposed distribution's ability to model many forms of data as opposed to some of its well-known sub-models, such as the alpha power Weibull distribution.
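The maximum-likelihood estimation step can be illustrated on the ordinary two-parameter Weibull, a sub-model of the proposed distribution (SciPy; the true parameter values are arbitrary choices for the simulation):

```python
# Hedged sketch: maximum-likelihood fit of a two-parameter Weibull distribution
# (a simple sub-model stand-in for the paper's four-parameter version).
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
data = stats.weibull_min.rvs(c=1.8, scale=2.0, size=500, random_state=rng)

c_hat, loc_hat, scale_hat = stats.weibull_min.fit(data, floc=0)  # MLE, location fixed at 0
print(c_hat, scale_hat)                                          # near 1.8 and 2.0
```

Consistency of the MLE shows up here as the estimates tightening around the true values as the sample size grows, matching the simulation findings in the abstract.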
Funding: the National Natural Science Foundation of China (Grant Nos. 52090032 and 11825203).
Abstract: Predicting potential risks associated with the fatigue of key structural components is crucial in engineering design. However, fatigue often involves entangled complexities of material microstructures and service conditions, making diagnosis and prognosis of fatigue damage challenging. We report a statistical learning framework to predict the growth of fatigue cracks and the life-to-failure of components under loading conditions with uncertainties. Digital libraries of fatigue crack patterns and the remaining life are constructed by high-fidelity physical simulations. Dimensionality reduction and neural network architectures are then used to learn the history dependence and nonlinearity of fatigue crack growth. Path-slicing and re-weighting techniques are introduced to handle statistical noise and rare events. The predicted fatigue crack patterns are self-updated and self-corrected by the evolving crack patterns. The end-to-end approach is validated by representative examples with fatigue cracks in plates, which showcase the digital-twin scenario in real-time structural health monitoring and fatigue life prediction for maintenance management decision-making.