Rock failure can cause serious geological disasters, and the non-extensive statistical features of electric potential (EP) are expected to provide valuable information for disaster prediction. In this paper, uniaxial compression experiments with EP monitoring were carried out on fine sandstone, marble and granite samples under four displacement rates. The Tsallis entropy q value of EPs is used to analyze the self-organization evolution of rock failure. Then the influence of displacement rate and rock type on the q value is explored through mineral structure and fracture modes. A self-organized critical prediction method based on the q value is proposed. The results show that the probability density function (PDF) of EPs follows the q-Gaussian distribution. The displacement rate is positively correlated with the q value. As the displacement rate increases, the fracture mode changes, the damage degree intensifies, and the microcrack network becomes denser. The influence of rock type on the q value is related to the burst intensity of energy release and the crack fracture mode. The q value of EPs can be used as an effective prediction index for rock failure, like the b value of acoustic emission (AE). The results provide a useful reference and method for the monitoring and early warning of geological disasters.
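The q-Gaussian form referenced above can be sketched numerically. The snippet below is an illustration, not the authors' code: it evaluates the q-Gaussian shape exp_q(-beta*x^2) and normalizes it by numerical integration, and all parameter values are hypothetical.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    pos = base > 0
    out[pos] = base[pos] ** (1.0 / (1.0 - q))
    return out

def q_gaussian_pdf(x, q, beta):
    """q-Gaussian density, normalized by numerical integration on a grid."""
    grid = np.linspace(-50.0, 50.0, 200001)
    norm = np.sum(q_exponential(-beta * grid * grid, q)) * (grid[1] - grid[0])
    return q_exponential(-beta * x * x, q) / norm

x = np.linspace(-5.0, 5.0, 1001)
# q -> 1 recovers the ordinary Gaussian with variance 1/(2*beta)
pdf_q1 = q_gaussian_pdf(x, 1.0, 0.5)
gauss = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
print(np.max(np.abs(pdf_q1 - gauss)))  # close to 0
```

In the limit q -> 1 the q-Gaussian reduces to the ordinary Gaussian, which the final comparison checks; q > 1 gives the heavier tails associated with non-extensive statistics.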
In the present paper, we mostly focus on P_p^2-statistical convergence. We will look into uniform integrability via the power series method and its characterizations for double sequences. Also, the notions of P_p^2-statistically Cauchy sequence, P_p^2-statistical boundedness and core for double sequences will be described in addition to these findings.
This paper contributes a sophisticated statistical method for the assessment of performance of salient Mobile Ad Hoc Network (MANET) routing protocols: Destination Sequenced Distance Vector (DSDV), Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zone Routing Protocol (ZRP). In this paper, the evaluation is carried out using a complete set of statistical tests such as Kruskal-Wallis, Mann-Whitney, and Friedman. It articulates a systematic evaluation of how the performance of these protocols varies with the number of nodes and the mobility patterns. The study is premised upon the Quality of Service (QoS) metrics of throughput, packet delivery ratio, and end-to-end delay to gain an adequate understanding of the operational efficiency of each protocol under different network scenarios. The findings revealed significant differences in the performance of the routing protocols; as a result, decisions for the selection and optimization of routing protocols can be made effectively according to different network requirements. This paper is a step forward in the general understanding of the routing dynamics of MANETs and contributes significantly to the strategic deployment of robust and efficient network infrastructures.
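The three nonparametric tests named above are available in scipy.stats. The following sketch applies them to synthetic throughput samples; the protocol means and spreads are invented for illustration and do not come from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical throughput samples (kbps) for three protocols over 30 runs each
dsdv = rng.normal(420.0, 30.0, 30)
aodv = rng.normal(450.0, 30.0, 30)
dsr = rng.normal(455.0, 30.0, 30)

# Kruskal-Wallis: do the three samples come from the same distribution?
h, p_kw = stats.kruskal(dsdv, aodv, dsr)

# Mann-Whitney U: pairwise comparison of two protocols
u, p_mw = stats.mannwhitneyu(dsdv, aodv, alternative="two-sided")

# Friedman: repeated measures across the same 30 scenarios
chi2, p_fr = stats.friedmanchisquare(dsdv, aodv, dsr)

print(p_kw, p_mw, p_fr)
```

A small p-value in the Kruskal-Wallis test would justify the pairwise Mann-Whitney follow-ups; the Friedman test applies when the same scenarios are measured under each protocol.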
In the present time, Human Activity Recognition (HAR) has been of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent in the area of health informatics gathered using HAR augments the decision-making quality and significance. Although many research works have been conducted on Smart Healthcare Monitoring, a certain number of pitfalls remain, such as time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for Smart Healthcare Monitoring. At first, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the extraction of dimensionality-reduced features. Here, the input dataset (continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics) was acquired from IoT wearable devices. To attain highly accurate Smart Healthcare Monitoring in less time, Partial Least Squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for Smart Healthcare Monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, false positive rate, and accuracy concerning several instances. The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
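Partial Least Squares is used above purely as a dimensionality reducer. As a hedged sketch (a minimal PLS1/NIPALS implementation, not the paper's SPR-SVIAL pipeline), the following reduces 12 hypothetical sensor features to 3 latent scores:

```python
import numpy as np

def pls_components(X, y, n_comp):
    """Minimal PLS1 (NIPALS) returning the reduced score matrix T."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    scores = []
    for _ in range(n_comp):
        w = X.T @ y
        w = w / np.linalg.norm(w)      # weight vector
        t = X @ w                      # score vector
        p = X.T @ t / (t @ t)          # X loading
        q = y @ t / (t @ t)            # y loading
        X = X - np.outer(t, p)         # deflate X
        y = y - q * t                  # deflate y
        scores.append(t)
    return np.column_stack(scores)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))          # 12 hypothetical sensor features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)
T = pls_components(X, y, 3)             # 12 features -> 3 latent scores
print(T.shape)  # (200, 3)
```

Unlike PCA, the latent scores are chosen for covariance with the response, which is why PLS works well as a supervised pre-processing step.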
Within the framework of quantum statistical mechanics, we have proposed an exact analytical solution to the problem of Bose-Einstein condensation (BEC) of harmonically trapped two-dimensional (2D) ideal photons. We utilize this analytical solution to investigate the statistical properties of ideal photons in a 2D dye-filled spherical cap cavity. The results of numerical calculation of the analytical solution agree completely with the foregoing experimental results on the BEC of harmonically trapped 2D ideal photons. The analytical expressions of the critical temperature and the condensate fraction are derived in the thermodynamic limit. It is found that the 2D critical photon number is larger than the one-dimensional (1D) critical photon number by two orders of magnitude. The spectral radiance of a 2D spherical cap cavity has a sharp peak at the frequency of the cavity cutoff when the photon number exceeds the critical value determined by the temperature.
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction of software bugs remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric or feature selection and classification. First, SQADEN uses the nonparametric statistical Torgerson-Gower scaling technique for identifying the relevant software metrics by measuring their similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is carried out with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder-Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity and specificity higher by 3%, 3%, 2% and 3%, and time and space lower by 13% and 15%, when compared with the two state-of-the-art methods.
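The dice coefficient that the feature-selection step relies on is a simple set similarity. A minimal sketch follows, with hypothetical module sets (the names m1, m2, ... are invented for illustration):

```python
def dice_coefficient(a, b):
    """Dice similarity between two sets: 2|A & B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical example: modules flagged "defect-prone" by two software metrics
flagged_by_loc = {"m1", "m2", "m3", "m7"}   # lines-of-code metric
flagged_by_cc = {"m2", "m3", "m7", "m9"}    # cyclomatic-complexity metric
sim = dice_coefficient(flagged_by_loc, flagged_by_cc)
print(sim)  # 0.75
```

Metrics whose flagged sets have high mutual Dice similarity are redundant with each other, which is the intuition behind similarity-driven metric selection.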
Chemical oxygen demand (COD) is an important index to measure the degree of water pollution. In this paper, near-infrared technology is used to obtain 148 wastewater spectra to predict the COD value in wastewater. First, the partial least squares regression (PLS) model was used as the basic model. Monte Carlo cross-validation (MCCV) was used to screen out 25 of the 148 samples that did not conform to conventional statistics. Then, interval partial least squares (iPLS) regression modeling was carried out on the remaining 123 samples, and the spectral bands were divided into 40 subintervals. The optimal subintervals are 20 and 26, and the optimal correlation coefficient of the test set (RT) is 0.58. Further, the waveband is divided into five intervals: 17, 19, 20, 22 and 26. When the number of joint intervals under each interval is three, the optimal RT is 0.71. When the number of joint subintervals is four, the optimal RT is 0.79. Finally, a convolutional neural network (CNN) was used for quantitative prediction, and RT reached 0.9. The results show that CNN can automatically screen the features inside the data, and its quantitative prediction effect is better than that of iPLS and the synergy interval partial least squares (SiPLS) model with joint subintervals of three and four, indicating that CNN can be used for quantitative analysis of the degree of water pollution.
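Monte Carlo cross-validation can flag atypical samples by how badly they are predicted when held out. The sketch below illustrates the idea with a plain least-squares model on synthetic data; it is not the paper's spectral pipeline, and the planted outlier is hypothetical.

```python
import numpy as np

def mccv_outlier_scores(X, y, n_iter=500, test_frac=0.3, seed=0):
    """Mean held-out squared error per sample over random MCCV splits.

    Samples with unusually high scores are candidate outliers."""
    rng = np.random.default_rng(seed)
    n = len(y)
    err_sum = np.zeros(n)
    err_cnt = np.zeros(n)
    n_test = max(1, int(test_frac * n))
    Xb = np.column_stack([np.ones(n), X])   # add intercept column
    for _ in range(n_iter):
        idx = rng.permutation(n)
        test, train = idx[:n_test], idx[n_test:]
        coef, *_ = np.linalg.lstsq(Xb[train], y[train], rcond=None)
        resid = y[test] - Xb[test] @ coef
        err_sum[test] += resid ** 2
        err_cnt[test] += 1
    return err_sum / np.maximum(err_cnt, 1)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.05, size=60)
y[7] += 5.0                                 # plant one gross outlier
scores = mccv_outlier_scores(X, y)
print(np.argmax(scores))  # 7
```

Samples whose mean held-out error sits far above the bulk are removed before final modeling, mirroring the 25-of-148 screening step.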
The present study aims to establish a relationship between serum AMH levels and age in a large group of women living in Bulgaria, as well as to establish reference age-specific AMH levels in women that would serve as an initial estimate of ovarian age. A total of 28,016 women on the territory of the Republic of Bulgaria were tested for serum AMH levels, with a median age of 37.0 years (interquartile range 32.0 to 41.0). For women aged 20-29 years, the Bulgarian population has relatively high median levels of AMH, similar to women of Asian origin. For women aged 30-34 years, our results are comparable to those of women living in Western Europe. For women aged 35-39 years, our results are comparable to those of women living in India and Kenya. For women aged 40-44 years, our results were lower than those for women from the Western European and Chinese populations, close to the Indian population, and higher than the Korean and Kenyan populations, respectively. Our results for women of Bulgarian origin are also comparable to those for US Latina women at ages 30, 35 and 40. Based on a statistical model constructed to predict the decline in AMH levels at different ages, we found a non-linear structure of AMH decline for low AMH levels, while for higher levels (above 3.5) the dependence of the decline of AMH on age was confirmed as linear. In conclusion, we evaluated the serum level of AMH in Bulgarian women and established age-specific AMH percentile reference values based on a large representative sample. We have developed a prognostic statistical model that can facilitate the application of AMH in clinical practice and the prediction of reproductive capacity and population health.
Statistical Energy Analysis (SEA) is one of the conventional tools for predicting vehicle high-frequency acoustic responses. This study proposes a new method that can provide customized optimization solutions to meet NVH targets based on the specific needs of different project teams during the initial project stages. This approach innovatively integrates dynamic optimization, Radial Basis Function (RBF), and Fuzzy Design Variables Genetic Algorithm (FDVGA) into the optimization process of SEA, and also takes vehicle sheet metal into account in the optimization of sound packages. In the implementation process, a correlation model is established through Python scripts to link material density with acoustic parameters, weight, and cost. By combining the Optimus and VaOne software, an optimization design workflow is constructed and the optimization design process is successfully executed. Under various constraints related to acoustic performance, weight and cost, a globally optimal design is achieved. This technology has been effectively applied in the field of Battery Electric Vehicles (BEV).
Normality testing is a fundamental hypothesis test in the statistical analysis of key biological indicators of diabetes. If this assumption is violated, it may cause the test results to deviate from the true value, leading to incorrect inferences and conclusions, and ultimately affecting the validity and accuracy of statistical inferences. Considering this, the study designs a unified analysis scheme for different data types based on parametric and non-parametric statistical test methods. The data were grouped according to sample type and divided into discrete data and continuous data. To account for differences among subgroups, the conventional chi-squared test was used for discrete data. The normal distribution is the basis of many statistical methods; if the data do not follow a normal distribution, many statistical methods will fail or produce incorrect results. Therefore, before data analysis and modeling, the data were divided into normal and non-normal groups through normality testing. For normally distributed data, parametric statistical methods were used to judge the differences between groups. For non-normal data, non-parametric tests were employed to improve the accuracy of the analysis. Statistically significant indicators were retained according to the significance index (P-value) of the statistical test or the corresponding statistics. These indicators were then combined with relevant medical background to further explore the etiology leading to the occurrence or transformation of diabetes status.
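The decision rule described above (normality test first, then a parametric or nonparametric comparison) can be sketched with scipy.stats; the glucose values below are invented for illustration:

```python
import numpy as np
from scipy import stats

def compare_groups(g1, g2, alpha=0.05):
    """Pick a parametric or nonparametric two-sample test via normality testing."""
    normal = (stats.shapiro(g1).pvalue > alpha and
              stats.shapiro(g2).pvalue > alpha)
    if normal:
        # both groups look normal: Student's t-test
        name, res = "t-test", stats.ttest_ind(g1, g2)
    else:
        # otherwise fall back to the rank-based Mann-Whitney U test
        name, res = "mann-whitney", stats.mannwhitneyu(g1, g2, alternative="two-sided")
    return name, res.pvalue

rng = np.random.default_rng(0)
fasting_glucose_a = rng.normal(5.4, 0.4, 40)   # hypothetical mmol/L values
fasting_glucose_b = rng.normal(6.1, 0.4, 40)
test_used, p = compare_groups(fasting_glucose_a, fasting_glucose_b)
print(test_used, p)
```

For discrete indicators the analogous branch would be `stats.chi2_contingency` on the cross-tabulated counts, as the scheme above prescribes.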
In this work, four empirical models of statistical thickness, namely the models of Harkins and Jura, Halsey, Carbon Black and Jaroniec, were compared in order to determine the textural properties (external surface and surface of micropores) of a clay concrete without molasses and of clay concretes stabilized with 8%, 12% and 16% molasses. The results obtained show that Halsey's model can be used to obtain the external surfaces. However, it does not allow the surface of the micropores to be obtained, and is not suitable for the case of simple clay concrete (without molasses) or for clay concretes stabilized with molasses. The Carbon Black, Jaroniec, and Harkins and Jura models can be used for clay concrete and stabilized clay concrete. However, the Carbon Black model is the most relevant for clay concrete, and the Harkins and Jura model for molasses-stabilized clay concrete. These last two models augur well for future research.
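For reference, two of the named statistical-thickness models have widely used closed forms for nitrogen adsorption (thickness t in angstroms as a function of relative pressure p/p0). The sketch below assumes these standard textbook forms rather than the paper's fitted versions:

```python
import numpy as np

def t_harkins_jura(p_rel):
    """Harkins-Jura statistical thickness (angstroms), standard form."""
    return np.sqrt(13.99 / (0.034 - np.log10(p_rel)))

def t_halsey(p_rel):
    """Halsey statistical thickness (angstroms), nitrogen at 77 K."""
    return 3.54 * (-5.0 / np.log(p_rel)) ** (1.0 / 3.0)

p = np.linspace(0.1, 0.8, 8)   # relative pressures p/p0
print(t_harkins_jura(p))
print(t_halsey(p))
```

In a t-plot analysis, adsorbed volume is plotted against t(p/p0); the slope of the linear region gives the external surface area and the intercept reflects micropore filling, which is how the models above feed the comparison in the paper.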
Statistical literacy is crucial for cultivating well-rounded thinkers. The integration of evidence-based strategies in teaching and learning is pivotal for enhancing students' statistical literacy. This research specifically focuses on the utilization of Share and Model Concepts and Nurturing Metacognition as evidence-based strategies aimed at improving the statistical literacy of learners. The study employed a quasi-experimental design, specifically the nonequivalent control group, wherein students answered pre-test and post-test instruments and researcher-made questionnaires. The study included 50 first-year Bachelor in Secondary Education majors in Mathematics and Science for the academic year 2023-2024. The results of the study revealed a significant difference in the scores of student respondents, indicating that the use of evidence-based strategies helped students enhance their statistical literacy. This signifies a noteworthy increase in their performance, ranging from very low to very high proficiency, in understanding statistical concepts, insights into the application of statistical concepts, numeracy, graph skills, interpretation capabilities, and visualization and communication skills. Furthermore, the study showed a significant difference in the post-test performance of the two groups in understanding statistical concepts and in visualization and communication skills. However, no significant difference was found in the post-test scores of the two groups concerning insights into the application of statistical concepts, numeracy and graph skills, and interpretation capabilities. Additionally, students acknowledged that the implementation of evidence-based strategies significantly contributed to the improvement of their statistical literacy.
Electrical impedance tomography (EIT) aims to reconstruct the conductivity distribution using the boundary measured voltage potential. Traditional regularization-based methods suffer from error propagation due to the iteration process. The statistical inverse problem method instead uses statistical inference to estimate the unknown parameters. In this article, we develop a nonlinear weighted anisotropic total variation (NWATV) prior density function based on the recently proposed NWATV regularization method. We calculate the corresponding posterior density function, i.e., the solution of the EIT inverse problem in the statistical sense, via a modified Markov chain Monte Carlo (MCMC) sampling. We perform numerical experiments to validate the proposed approach.
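The posterior-sampling step can be illustrated with a generic random-walk Metropolis sampler. This is a 1D toy, not the authors' modified MCMC or their NWATV prior; the Laplace prior below merely stands in for an edge-preserving prior, and all constants are hypothetical.

```python
import numpy as np

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1D (log) posterior density."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy posterior: Gaussian likelihood (data mean 2.0, variance 0.25)
# times a Laplace prior, echoing the likelihood-prior structure of
# statistical inverse problems
log_post = lambda s: -0.5 * (s - 2.0) ** 2 / 0.25 - abs(s)
samples = metropolis(log_post, x0=0.0)
print(samples[1000:].mean())   # posterior mean, near 1.75
```

In the EIT setting the state is a whole conductivity image rather than a scalar, but the accept/reject mechanics and the burn-in discard are the same.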
In the strategic context of rural revitalization, optimizing the quality of agricultural statistical services is a crucial element for advancing agricultural modernization and sustainable rural economic development. This paper focuses on the significance of enhancing agricultural statistical service quality against the backdrop of rural revitalization. It addresses current issues such as inadequate implementation of agricultural statistical survey systems, an imperfect data quality control system, and a shortage of statistical service personnel. Proposals are made to improve the statistical survey system, enhance the data quality control framework, and strengthen personnel training. These pathways offer references for elevating the quality of agricultural statistical services and implementing the rural revitalization strategy in the new era.
Inflammatory bowel diseases (IBD) are complex multifactorial disorders that include Crohn's disease (CD) and ulcerative colitis (UC). Considering that IBD is a genetic and multifactorial disease, we screened for the distribution dynamism of IBD pathogenic genetic variants (single nucleotide polymorphisms; SNPs) and risk factors in four (4) pediatric IBD patients, by integrating both clinical exome sequencing and computational statistical approaches, aiming to categorize IBD patients into CD and UC phenotypes. To this end, we first aligned the genomic read sequences of these IBD patients to the hg19 human genome using the Bowtie 2 package. Next, we performed genetic variant calling analysis in terms of single nucleotide polymorphisms (SNPs) for genes covered by at least 20 genomic read sequences. Finally, we checked the biological and genomic functions of genes exhibiting statistically significant genetic variants (SNPs) by introducing the Fitcon genomic parameter. Findings showed the Fitcon parameter as normalizing IBD patient population variability, as well as inducing a relatively good clustering of IBD patients into CD and UC phenotypes. Genomic analysis revealed a random distribution of risk factors as well as pathogenic SNP genetic variants in the four IBD patients' genomes, implicated in: i) metabolic disorders, ii) autoimmune deficiencies, iii) Crohn's disease pathways. Integration of genomic and computational statistical analysis supported a relative genetic variability regarding the IBD patient population by processing IBD pathogenic SNP genetic variants as opposed to IBD risk factor variants. Interestingly, the findings clearly allowed categorizing IBD patients into CD and UC phenotypes by applying the Fitcon parameter in selecting IBD pathogenic genetic variants. Taken as a whole, the study suggests the efficiency of integrating clinical exome sequencing and computational statistical tools as a sound approach for discriminating IBD phenotypes and improving the inflammatory bowel disease (IBD) molecular diagnostic process.
With the continuous development of the economy and societal progress, the economic census, as an important aspect of national statistical work, is directly influenced by the quality of grassroots infrastructure. This paper thoroughly discusses the importance of strengthening the statistical foundation to improve the efficiency of economic census work, analyzes the existing issues in current infrastructure and census processes, and proposes corresponding solutions. By enhancing the professional training of grassroots statisticians, updating data collection technologies, and optimizing workflows, the aim is to significantly improve the accuracy and efficiency of the economic census, providing strong support for the healthy development of the national economy and informed decision-making.
In basketball, each player's skill level is the key to a team's success or failure, and the skill level is affected by many personal and environmental factors. Physics-informed AI statistics has therefore become extremely important. In this article, a complex non-linear process is considered by taking into account the average points per game of each player, playing time, shooting percentage, and other factors. The physics-informed statistical approach is to construct a multiple linear regression model with physics-informed neural networks. Based on the official data provided by the American basketball league, and combined with specific methods of analysis in the R program, the regression model affecting the player's average points per game is verified, and the key factors affecting the player's average points per game are finally elucidated. The paper provides a novel window for coaches to make meaningful in-game adjustments to team members.
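A multiple linear regression of points per game on a few explanatory factors can be sketched with ordinary least squares. The predictors, coefficients, and data below are all hypothetical (the paper itself uses official league data and R):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
minutes = rng.uniform(10, 38, n)           # hypothetical minutes per game
fg_pct = rng.uniform(0.35, 0.60, n)        # field-goal percentage
usage = rng.uniform(0.10, 0.35, n)         # usage rate

# Hypothetical generating model for points per game
ppg = 0.5 * minutes + 20.0 * fg_pct + 30.0 * usage + rng.normal(0, 1.0, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), minutes, fg_pct, usage])
coef, *_ = np.linalg.lstsq(X, ppg, rcond=None)
print(coef)  # intercept, then estimates near 0.5, 20.0, 30.0
```

Inspecting the fitted coefficients (and their significance, in a full analysis) is what identifies the key factors driving scoring.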
A multi-objective linear programming problem is made from a fuzzy linear programming problem. This is because the fuzzy programming method is used during the solution. The multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, Chandra Sen's method and the statistical averaging method are both used for making a single objective function from the multi-objective function. Two multi-objective programming problems are solved to verify the result. One is a numerical example and the other is a real-life example. The problems are then solved by the ordinary simplex method and by the fuzzy programming method. It can be seen that the fuzzy programming method gives better optimal values than the ordinary simplex method.
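One way to scalarize the objectives, in the spirit of the averaging methods named above (the exact scaling used in the paper is not specified here, so this normalization is an assumption), is to average the objectives after dividing each by its individual optimum, then solve a single LP:

```python
import numpy as np
from scipy.optimize import linprog

# Two hypothetical objectives to maximize: z1 = x + 2y, z2 = 3x + y,
# subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
c1 = np.array([1.0, 2.0])
c2 = np.array([3.0, 1.0])
A_ub = [[1, 1], [1, 0]]
b_ub = [4, 3]
bounds = [(0, None), (0, None)]

# Step 1: optimize each objective alone (linprog minimizes, so negate)
z1_best = -linprog(-c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun
z2_best = -linprog(-c2, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

# Step 2: average the objectives scaled by their optima, solve once
c_avg = (c1 / z1_best + c2 / z2_best) / 2
res = linprog(-c_avg, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(z1_best, z2_best, res.x)
```

The scaled average keeps either objective from dominating purely because of its units, which is the usual motivation for this family of scalarizations.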
Background: Survival from birth to slaughter is an important economic trait in commercial pig production. Increasing survival can improve both economic efficiency and animal welfare. The aim of this study is to explore the impact of genotyping strategies and statistical models on the accuracy of genomic prediction for survival in pigs during the total growing period from birth to slaughter. Results: We simulated pig populations with different direct and maternal heritabilities and used a linear mixed model, a logit model, and a probit model to predict genomic breeding values of pig survival based on individual survival records with binary outcomes (0, 1). The results show that when only live animals have genotype data, unbiased genomic predictions can be achieved by using variances estimated from a pedigree-based model. Models using genomic information achieved up to 59.2% higher accuracy of estimated breeding values compared to the pedigree-based model, depending on the genotyping scenario. The scenario of genotyping all individuals, both dead and alive, obtained the highest accuracy. When an equal number of individuals (80%) were genotyped, a random sample of genotyped individuals achieved higher accuracy than genotyping only live individuals. The linear model, logit model and probit model achieved similar accuracy. Conclusions: Our conclusion is that genomic prediction of pig survival is feasible when only live pigs have genotypes, but genomic information from dead individuals can increase the accuracy of genomic prediction by 2.06% to 6.04%.
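Logit and probit models for a binary survival outcome differ only in the link function. A hedged maximum-likelihood sketch on simulated data follows (not the paper's simulated pig populations; the single covariate is a hypothetical genomic score):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_binary(X, y, link="logit"):
    """Maximum-likelihood fit of a binary-response model, logit or probit link."""
    Xb = np.column_stack([np.ones(len(y)), X])
    cdf = norm.cdf if link == "probit" else (lambda z: 1.0 / (1.0 + np.exp(-z)))
    def nll(beta):
        p = np.clip(cdf(Xb @ beta), 1e-9, 1 - 1e-9)
        return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
    return minimize(nll, np.zeros(Xb.shape[1]), method="BFGS").x

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))                  # hypothetical genomic score
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x[:, 0])))
y = (rng.uniform(size=500) < p_true).astype(float)

beta_logit = fit_binary(x, y, "logit")
beta_probit = fit_binary(x, y, "probit")
print(beta_logit, beta_probit)  # logit slope near 1.2; probit slope smaller
```

The probit coefficients come out smaller in magnitude than the logit ones because the normal CDF is steeper than the logistic on the same scale; the fitted probabilities, and hence the ranking of animals, are nearly identical, consistent with the similar accuracies reported above.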
Convolutional neural networks (CNNs) have been widely studied and found to obtain favorable results in statistical downscaling to derive high-resolution climate variables from large-scale coarse general circulation models (GCMs). However, there is a lack of research exploring predictor selection for CNN modeling. This paper presents an effective and efficient greedy elimination algorithm to address this problem. The algorithm has three main steps: predictor importance attribution, predictor removal, and CNN retraining, which are performed sequentially and iteratively. The importance of individual predictors is measured by a gradient-based importance metric computed by a CNN backpropagation technique, which was initially proposed for CNN interpretation. The algorithm is tested on the CNN-based statistical downscaling of monthly precipitation with 20 candidate predictors and compared with a correlation analysis-based approach. Linear models are implemented as benchmarks. The experiments illustrate that the predictor selection solution can reduce the number of input predictors by more than half, improve the accuracy of both linear and CNN models, and outperform the correlation analysis method. Although the RMSE (root-mean-square error) is reduced by only 0.8%, only 9 out of 20 predictors are used to build the CNN, and the FLOPs (floating point operations) decrease by 20.4%. The results imply that the algorithm can find subset predictors that correlate more to the monthly precipitation of the target area and seasons in a nonlinear way. It is worth mentioning that the algorithm is compatible with other CNN models with stacked variables as input and has the potential for nonlinear correlation predictor selection.
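The three-step loop (attribute importance, remove a predictor, retrain) can be sketched with a linear stand-in for the CNN, where the gradient of the output with respect to an input is simply its coefficient. The data and the choice of informative predictors below are invented:

```python
import numpy as np

def greedy_elimination(X, y, n_keep):
    """Iteratively drop the least important predictor and refit.

    Importance is the gradient of the model output w.r.t. each input
    (for a linear model, its coefficient) scaled by the input spread."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        Xs = X[:, keep]
        Xb = np.column_stack([np.ones(len(y)), Xs])
        coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)   # "retrain"
        importance = np.abs(coef[1:]) * Xs.std(axis=0)  # attribute
        keep.pop(int(np.argmin(importance)))            # remove weakest
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + rng.normal(0, 0.3, 400)  # only 2 matter
print(greedy_elimination(X, y, 2))  # [1, 4]
```

With a real CNN, the coefficient is replaced by a backpropagated saliency averaged over samples, but the elimination loop is the same.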
Funding (rock failure EP study): supported by the National Key R&D Program of China (2022YFC3004705), the National Natural Science Foundation of China (Nos. 52074280, 52227901 and 52204249), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX24_2913), and the Graduate Innovation Program of China University of Mining and Technology (No. 2024WLKXJ139).
Funding (MANET routing study): supported by Northern Border University, Arar, KSA, through Project Number "NBU-FFR-2024-2248-02".
Abstract: This paper contributes a sophisticated statistical method for assessing the performance of salient Mobile Ad Hoc Network (MANET) routing protocols: Destination Sequenced Distance Vector (DSDV), Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zone Routing Protocol (ZRP). The evaluation is carried out using a complete set of statistical tests, including Kruskal-Wallis, Mann-Whitney, and Friedman. The paper articulates a systematic evaluation of how the performance of these protocols varies with the number of nodes and the mobility patterns. The study is premised upon the Quality of Service (QoS) metrics of throughput, packet delivery ratio, and end-to-end delay to gain an adequate understanding of the operational efficiency of each protocol under different network scenarios. The findings revealed significant differences in the performance of the routing protocols; as a result, decisions on the selection and optimization of routing protocols can be made effectively according to different network requirements. This paper advances the general understanding of MANET routing dynamics and contributes significantly to the strategic deployment of robust and efficient network infrastructures.
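The Mann-Whitney test named above reduces to a simple counting statistic; a minimal pure-Python sketch follows (the throughput samples are hypothetical, not data from the paper):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    u_a counts pairs (x, y) with x from a exceeding y from b
    (ties count one half). Small u_a means a tends to be smaller than b.
    """
    u_a = 0.0
    for x in a:
        for y in b:
            if x > y:
                u_a += 1.0
            elif x == y:
                u_a += 0.5
    u_b = len(a) * len(b) - u_a
    return u_a, u_b

# Hypothetical throughput samples (kbps) for two routing protocols
aodv = [92, 95, 90, 97]
dsdv = [80, 85, 78, 83]
print(mann_whitney_u(dsdv, aodv))  # (0.0, 16.0): every DSDV value below every AODV value
```

In practice one would convert U to a p-value (e.g. via `scipy.stats.mannwhitneyu`) rather than compare the raw statistic.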
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R194), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: At present, Human Activity Recognition (HAR) has been of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent in the area of health informatics, gathered using HAR, augments decision-making quality and significance. Although many research works have been conducted on smart healthcare monitoring, a number of pitfalls remain, such as time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for smart healthcare monitoring. First, the statistical partial regression feature extraction model is used for data preprocessing along with the extraction of dimensionality-reduced features. The input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate smart healthcare monitoring in less time, partial least squares helps extract the dimensionality-reduced features. With these resulting features, SVIAL is then applied with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, and false-positive-rate accuracy over several instances. The quantitatively analyzed results indicate the better performance of the proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 10174024 and 10474025).
Abstract: Within the framework of quantum statistical mechanics, we propose an exact analytical solution to the problem of Bose-Einstein condensation (BEC) of harmonically trapped two-dimensional (2D) ideal photons. We utilize this analytical solution to investigate the statistical properties of ideal photons in a 2D dye-filled spherical cap cavity. The results of numerical calculation of the analytical solution agree completely with previous experimental results on the BEC of harmonically trapped 2D ideal photons. The analytical expressions of the critical temperature and the condensate fraction are derived in the thermodynamic limit. It is found that the 2D critical photon number is larger than the one-dimensional (1D) critical photon number by two orders of magnitude. The spectral radiance of a 2D spherical cap cavity has a sharp peak at the cavity cutoff frequency when the photon number exceeds the critical value determined by the temperature.
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects, but timely and accurate prediction remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson-Gower scaling technique to identify the relevant software metrics by measuring similarity with the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is performed with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder-Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity and specificity improved by 3%, 3%, 2% and 3%, and minimum time and space reduced by 13% and 15%, when compared with the two state-of-the-art methods.
Abstract: Chemical oxygen demand (COD) is an important index to measure the degree of water pollution. In this paper, near-infrared technology is used to obtain 148 wastewater spectra to predict the COD value in wastewater. First, the partial least squares regression (PLS) model was used as the basic model. Monte Carlo cross-validation (MCCV) was used to remove 25 of the 148 samples that did not conform to conventional statistics. Then, interval partial least squares (iPLS) regression modeling was carried out on the remaining 123 samples, with the spectral bands divided into 40 subintervals. The optimal subintervals are 20 and 26, and the optimal correlation coefficient of the test set (RT) is 0.58. Further, the waveband is divided into five intervals: 17, 19, 20, 22 and 26. When the number of joint intervals under each interval is three, the optimal RT is 0.71; when it is four, the optimal RT is 0.79. Finally, a convolutional neural network (CNN) was used for quantitative prediction, giving an RT of 0.9. The results show that CNN can automatically screen the features inside the data, and its quantitative prediction is better than that of iPLS and the synergy interval partial least squares (SiPLS) model with joint subintervals of three and four, indicating that CNN can be used for quantitative analysis of the degree of water pollution.
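The interval-selection idea behind iPLS can be sketched with ordinary least squares standing in for PLS: split the wavelength axis into subintervals, fit each one separately, and keep the interval with the lowest error. Everything below (sample counts, band locations, noise level) is a synthetic assumption, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 40 wavelengths, only band 10..19 carries the COD signal
n_samples, n_wl = 60, 40
X = rng.normal(size=(n_samples, n_wl))
true_coef = np.zeros(n_wl)
true_coef[10:20] = 1.0
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)

def interval_rmse(X, y, lo, hi):
    """Fit OLS on wavelengths [lo, hi) only and return the training RMSE."""
    Xi = X[:, lo:hi]
    coef, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

# Split the wavelength axis into 4 subintervals and keep the best-scoring one
intervals = [(0, 10), (10, 20), (20, 30), (30, 40)]
scores = {iv: interval_rmse(X, y, *iv) for iv in intervals}
best = min(scores, key=scores.get)
print(best)  # (10, 20) -- the informative band
```

Real iPLS replaces the per-interval OLS fit with a cross-validated PLS model, but the selection loop has the same shape.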
Abstract: The present study aims to establish a relationship between serum AMH levels and age in a large group of women living in Bulgaria, as well as to establish age-specific reference AMH levels that would serve as an initial estimate of ovarian age. A total of 28,016 women on the territory of the Republic of Bulgaria were tested for serum AMH levels, with a median age of 37.0 years (interquartile range 32.0 to 41.0). For women aged 20 - 29 years, the Bulgarian population has relatively high median levels of AMH, similar to women of Asian origin. For women aged 30 - 34 years, our results are comparable to those of women living in Western Europe. For women aged 35 - 39 years, our results are comparable to those of women living in India and Kenya. For women aged 40 - 44 years, our results were lower than those for women from the Western European and Chinese populations, close to the Indian population, and higher than the Korean and Kenyan populations. Our results for women of Bulgarian origin are also comparable to US Latina women at ages 30, 35 and 40. Based on a statistical model constructed to predict the decline in AMH levels at different ages, we found a non-linear structure of AMH decline for low AMH levels, while for higher levels (above 3.5) the dependence of the decline of AMH on age was confirmed as linear. In conclusion, we evaluated the serum level of AMH in Bulgarian women and established age-specific AMH percentile reference values based on a large representative sample. We have developed a prognostic statistical model that can facilitate the application of AMH in clinical practice and the prediction of reproductive capacity and population health.
Abstract: Statistical Energy Analysis (SEA) is one of the conventional tools for predicting vehicle high-frequency acoustic responses. This study proposes a new method that can provide customized optimization solutions to meet NVH targets based on the specific needs of different project teams during the initial project stages. The approach innovatively integrates dynamic optimization, Radial Basis Function (RBF) modeling, and a Fuzzy Design Variables Genetic Algorithm (FDVGA) into the SEA optimization process, and also takes vehicle sheet metal into account in the optimization of sound packages. In the implementation, a correlation model is established through Python scripts to link material density with acoustic parameters, weight, and cost. By combining the Optimus and VaOne software, an optimization design workflow is constructed and successfully executed. Under various constraints on acoustic performance, weight and cost, a globally optimal design is achieved. This technology has been effectively applied in the field of Battery Electric Vehicles (BEVs).
Funding: National Natural Science Foundation of China (No. 12271261) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. SJCX230368).
Abstract: Normality testing is a fundamental hypothesis test in the statistical analysis of key biological indicators of diabetes. If the normality assumption is violated, test results may deviate from the true values, leading to incorrect inferences and conclusions and ultimately affecting the validity and accuracy of statistical inference. Considering this, the study designs a unified analysis scheme for different data types based on parametric and non-parametric test methods. The data were grouped according to sample type and divided into discrete and continuous data. To account for differences among subgroups, the conventional chi-squared test was used for discrete data. The normal distribution is the basis of many statistical methods; if the data do not follow a normal distribution, many statistical methods will fail or produce incorrect results. Therefore, before analysis and modeling, the data were divided into normal and non-normal groups through normality testing. For normally distributed data, parametric statistical methods were used to judge the differences between groups; for non-normal data, non-parametric tests were employed to improve the accuracy of the analysis. Statistically significant indicators were retained according to the P-value of the statistical test or the corresponding statistics. These indicators were then combined with relevant medical background to further explore the etiology leading to the occurrence or transformation of diabetes status.
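The abstract does not name its normality test; as one hedged illustration of the branching scheme it describes, the sketch below uses the Jarque-Bera statistic (skewness and kurtosis based) to route data to a parametric or non-parametric comparison. The threshold is the chi-squared cutoff at df = 2, alpha = 0.05; the data are toy values:

```python
def jarque_bera(data):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4).

    S is sample skewness, K sample kurtosis; JB near 0 is consistent
    with normality, large JB suggests departure from it.
    """
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

def choose_test(data, threshold=5.99):
    """Route to a parametric or non-parametric group comparison."""
    if jarque_bera(data) < threshold:
        return "parametric (t-test/ANOVA)"
    return "non-parametric (Mann-Whitney/Kruskal-Wallis)"

print(round(jarque_bera([1, 2, 3, 4, 5]), 4))  # 0.3521 -- symmetric, small JB
```

In practice Shapiro-Wilk (`scipy.stats.shapiro`) is the more common choice for small samples; the routing logic is the same either way.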
Abstract: In this work, four empirical models of statistical thickness, namely the models of Harkins and Jura, Halsey, Carbon Black and Jaroniec, were compared in order to determine the textural properties (external surface and micropore surface) of a clay concrete without molasses and of clay concretes stabilized with 8%, 12% and 16% molasses. The results obtained show that Halsey's model can be used to obtain the external surfaces. However, it does not allow the micropore surface to be obtained and is not suitable for plain clay concrete (without molasses) or for clay concretes stabilized with molasses. The Carbon Black, Jaroniec, and Harkins and Jura models can be used for both clay concrete and stabilized clay concrete. However, the Carbon Black model is the most relevant for clay concrete, and the Harkins and Jura model for molasses-stabilized clay concrete. These last two models augur well for future research.
Abstract: Statistical literacy is crucial for cultivating well-rounded thinkers. The integration of evidence-based strategies in teaching and learning is pivotal for enhancing students' statistical literacy. This research focuses on the use of Share and Model Concepts and Nurturing Metacognition as evidence-based strategies for improving learners' statistical literacy. The study employed a quasi-experimental design, specifically the nonequivalent control group design, wherein students answered pre-test and post-test instruments and researcher-made questionnaires. The study included 50 first-year Bachelor of Secondary Education majors in Mathematics and Science for the academic year 2023-2024. The results revealed a significant difference in the scores of student respondents, indicating that the evidence-based strategies helped students enhance their statistical literacy. This signifies a noteworthy increase in performance, ranging from very low to very high proficiency, in understanding statistical concepts, insights into the application of statistical concepts, numeracy, graph skills, interpretation capabilities, and visualization and communication skills. Furthermore, the study showed a significant difference in the post-test scores of the two groups in understanding statistical concepts and in visualization and communication skills. However, no significant difference was found in the post-test scores of the two groups concerning insights into the application of statistical concepts, numeracy and graph skills, and interpretation capabilities. Additionally, students acknowledged that the implementation of evidence-based strategies significantly contributed to the improvement of their statistical literacy.
Abstract: Electrical impedance tomography (EIT) aims to reconstruct the conductivity distribution from boundary measurements of the voltage potential. Traditional regularization-based methods suffer from error propagation due to the iteration process. The statistical inverse problem method instead uses statistical inference to estimate the unknown parameters. In this article, we develop a nonlinear weighted anisotropic total variation (NWATV) prior density function based on the recently proposed NWATV regularization method. We calculate the corresponding posterior density function, i.e., the solution of the EIT inverse problem in the statistical sense, via a modified Markov chain Monte Carlo (MCMC) sampling. Numerical experiments validate the proposed approach.
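The MCMC step can be illustrated with a plain random-walk Metropolis sampler; this is a generic 1D sketch, not the paper's modified sampler, and the toy target below stands in for the NWATV-prior posterior:

```python
import math
import random

def metropolis(log_density, x0, steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1D unnormalized log-density."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, pi(proposal) / pi(x))
        if math.log(rng.random() + 1e-300) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Toy "posterior": standard normal; in EIT this would be the NWATV prior
# multiplied by the measurement likelihood, over the conductivity image.
log_post = lambda x: -0.5 * x * x
draws = metropolis(log_post, x0=3.0, steps=20000)
mean = sum(draws[2000:]) / len(draws[2000:])  # discard burn-in
print(round(mean, 2))  # should sit near the posterior mean of 0
```

The point estimate (posterior mean) is then read off the post-burn-in samples, which is what replaces the iterative regularized solve.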
Abstract: In the strategic context of rural revitalization, optimizing the quality of agricultural statistical services is a crucial element for advancing agricultural modernization and sustainable rural economic development. This paper focuses on the significance of enhancing agricultural statistical service quality against the backdrop of rural revitalization. It addresses current issues such as inadequate implementation of agricultural statistical survey systems, an imperfect data quality control system, and a shortage of statistical service personnel. Proposals are made to improve the statistical survey system, enhance the data quality control framework, and strengthen personnel training. These pathways offer references for elevating the quality of agricultural statistical services and implementing the rural revitalization strategy in the new era.
Abstract: Inflammatory bowel diseases (IBD) are complex multifactorial disorders that include Crohn's disease (CD) and ulcerative colitis (UC). Considering that IBD is a genetic and multifactorial disease, we screened the distribution dynamics of IBD pathogenic genetic variants (single nucleotide polymorphisms, SNPs) and risk factors in four (4) pediatric IBD patients by integrating clinical exome sequencing and computational statistical approaches, aiming to categorize the patients into CD and UC phenotypes. To this end, we first aligned the genomic read sequences of these IBD patients to the hg19 human genome using the Bowtie 2 package. Next, we performed genetic variant calling in terms of single nucleotide polymorphisms (SNPs) for genes covered by at least 20 genomic read sequences. Finally, we checked the biological and genomic functions of genes exhibiting statistically significant genetic variants (SNPs) by introducing the Fitcon genomic parameter. Findings showed that the Fitcon parameter normalizes IBD patient-population variability and induces a relatively good clustering of IBD patients into CD and UC phenotypes. Genomic analysis revealed a random distribution of risk factors as well as pathogenic SNP genetic variants across the four IBD patients' genomes, predicted to be involved in: i) metabolic disorders; ii) autoimmune deficiencies; iii) Crohn's disease pathways. Integration of genomic and computational statistical analysis supported a relative genetic variability in the IBD patient population when processing IBD pathogenic SNP genetic variants as opposed to IBD risk factor variants. Interestingly, the findings clearly allowed categorizing IBD patients into CD and UC phenotypes by applying the Fitcon parameter to the selection of IBD pathogenic genetic variants. Taken as a whole, the study suggests the efficiency of integrating clinical exome sequencing and computational statistical tools as a sound approach for discriminating IBD phenotypes and improving the IBD molecular diagnostic process.
Abstract: With the continuous development of the economy and societal progress, the economic census, as an important aspect of national statistical work, is directly influenced by the quality of grassroots infrastructure. This paper discusses the importance of strengthening the statistical foundation to improve the efficiency of economic census work, analyzes the existing issues in current infrastructure and census processes, and proposes corresponding solutions. By enhancing the professional training of grassroots statisticians, updating data collection technologies, and optimizing workflows, the aim is to significantly improve the accuracy and efficiency of the economic census, providing strong support for the healthy development of the national economy and informed decision-making.
Abstract: In basketball, each player's skill level is key to a team's success or failure, and that skill level is affected by many personal and environmental factors. Physics-informed AI statistics has therefore become extremely important. In this article, a complex non-linear process is considered by taking into account each player's average points per game, playing time, shooting percentage, and other factors. The physics-informed statistical approach is to construct a multiple linear regression model with physics-informed neural networks. Based on official data provided by the American basketball league and combined with specific methods of R program analysis, the regression model for a player's average points per game is verified, and the key factors affecting a player's average points per game are finally elucidated. The paper provides a novel window for coaches to make meaningful in-game adjustments to team members.
Abstract: A multi-objective linear programming problem is constructed from a fuzzy linear programming problem, because the fuzzy programming method is used during the solution. The multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, both Chandra Sen's method and the statistical averaging method are used to build a single objective function from the multi-objective function. Two multi-objective programming problems are solved to verify the result: one a numerical example and the other a real-life example. The problems are then solved by the ordinary simplex method and the fuzzy programming method. It can be seen that the fuzzy programming method gives better optimal values than the ordinary simplex method.
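The scalarization step common to the methods named above can be sketched without an LP solver: since an LP optimum lies at a vertex of the feasible region, collapsing the objectives into one weighted score and comparing vertices shows the mechanics. The vertex labels and objective values below are hypothetical:

```python
def weighted_sum(objectives, weights):
    """Collapse several objective values into one scalar. Statistical
    averaging uses equal weights; Chandra Sen's method instead rescales
    each objective by its individual optimum before summing."""
    return sum(w * f for w, f in zip(weights, objectives))

# Hypothetical vertices of a small feasible region, with two objectives
# (say profit and quality) to be maximized jointly
vertices = {
    "A": (10.0, 2.0),
    "B": (7.0, 6.0),
    "C": (3.0, 9.0),
}

weights = (0.5, 0.5)  # statistical averaging: equal weights
best = max(vertices, key=lambda v: weighted_sum(vertices[v], weights))
print(best)  # "B": score 6.5 beats A's 6.0 and C's 6.0
```

With a full LP one would pass the combined objective to a solver such as `scipy.optimize.linprog`; the weighting logic is unchanged.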
Funding: funded by the "Genetic improvement of pig survival" project of the Danish Pig Levy Foundation (Aarhus, Denmark) and the China Scholarship Council (CSC) for providing a scholarship to the first author.
Abstract: Background: Survival from birth to slaughter is an important economic trait in commercial pig production. Increasing survival can improve both economic efficiency and animal welfare. The aim of this study is to explore the impact of genotyping strategies and statistical models on the accuracy of genomic prediction for survival in pigs during the total growing period from birth to slaughter. Results: We simulated pig populations with different direct and maternal heritabilities and used a linear mixed model, a logit model, and a probit model to predict genomic breeding values of pig survival based on individual survival records with binary outcomes (0, 1). The results show that, when only live animals have genotype data, unbiased genomic predictions can be achieved using variances estimated from a pedigree-based model. Models using genomic information achieved up to 59.2% higher accuracy of estimated breeding values compared to the pedigree-based model, depending on the genotyping scenario. The scenario of genotyping all individuals, both dead and alive, obtained the highest accuracy. When an equal number of individuals (80%) were genotyped, a random sample of genotyped individuals achieved higher accuracy than genotyping only live individuals. The linear, logit, and probit models achieved similar accuracy. Conclusions: Genomic prediction of pig survival is feasible when only live pigs are genotyped, but genomic information from dead individuals can increase the accuracy of genomic prediction by 2.06% to 6.04%.
Funding: supported by the following grants: National Basic R&D Program of China (2018YFA0606203), Strategic Priority Research Program of the Chinese Academy of Sciences (XDA23090102 and XDA20060501), Guangdong Major Project of Basic and Applied Basic Research (2020B0301030004), Special Fund of the China Meteorological Administration for Innovation and Development (CXFZ2021J026), and Special Fund for Forecasters of the China Meteorological Administration (CMAYBY2020094).
Abstract: Convolutional neural networks (CNNs) have been widely studied and found to obtain favorable results in statistical downscaling to derive high-resolution climate variables from coarse large-scale general circulation models (GCMs). However, there is a lack of research exploring predictor selection for CNN modeling. This paper presents an effective and efficient greedy elimination algorithm to address this problem. The algorithm has three main steps: predictor importance attribution, predictor removal, and CNN retraining, performed sequentially and iteratively. The importance of individual predictors is measured by a gradient-based importance metric computed with a CNN backpropagation technique initially proposed for CNN interpretation. The algorithm is tested on CNN-based statistical downscaling of monthly precipitation with 20 candidate predictors and compared with a correlation-analysis-based approach; linear models are implemented as benchmarks. The experiments illustrate that the predictor selection solution can reduce the number of input predictors by more than half, improve the accuracy of both the linear and CNN models, and outperform the correlation analysis method. Although the RMSE (root-mean-square error) is reduced by only 0.8%, only 9 of the 20 predictors are used to build the CNN, and the FLOPs (floating point operations) decrease by 20.4%. The results imply that the algorithm can find subset predictors that correlate more strongly, in a nonlinear way, with the monthly precipitation of the target area and seasons. It is worth mentioning that the algorithm is compatible with other CNN models with stacked variables as input and has the potential for nonlinear predictor selection.
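The attribute-remove-retrain loop can be sketched with a linear model standing in for the CNN and the absolute coefficient standing in for the gradient-based importance metric; the data, predictor count, and elimination target below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# 5 candidate predictors; only the first two actually drive the target
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

def greedy_eliminate(X, y, keep):
    """Repeatedly drop the predictor with the smallest |OLS coefficient|,
    refitting after each removal -- a linear stand-in for the paper's
    importance-attribution / removal / retraining cycle."""
    active = list(range(X.shape[1]))
    while len(active) > keep:
        coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        weakest = int(np.argmin(np.abs(coef)))
        del active[weakest]  # remove one predictor, then retrain on the next pass
    return active

print(greedy_eliminate(X, y, keep=2))  # [0, 1] -- the informative predictors survive
```

In the paper the refit is a CNN retraining and the importance comes from backpropagated gradients, which is what lets the selection capture nonlinear predictor-target relationships.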