In this article, we study Kähler metrics on a certain line bundle over some compact Kähler manifolds to find complete Kähler metrics with positive holomorphic sectional (or bisectional) curvature. In doing so, we apply a cohomogeneity-one strategy to a famous conjecture of Yau.
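For context, the curvature notions in question are the standard ones (general definitions, not specific to this article): for a Kähler metric with curvature tensor R, the holomorphic sectional curvature in the direction of a nonzero (1,0)-vector X is

$$H(X) = \frac{R(X,\bar{X},X,\bar{X})}{|X|^{4}},$$

and the holomorphic bisectional curvature of two such directions is $B(X,Y) = R(X,\bar{X},Y,\bar{Y})/(|X|^{2}|Y|^{2})$; positivity means these quantities are positive for all admissible directions.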
In this paper, we study a class of Finsler metrics defined by a vector field on a gradient Ricci soliton. We obtain a necessary and sufficient condition for these Finsler metrics on a compact gradient Ricci soliton to be of isotropic S-curvature by establishing a new integral inequality. We then determine the Ricci curvature of navigation Finsler metrics of isotropic S-curvature on a gradient Ricci soliton, generalizing a result previously known only when the soliton is of Einstein type. As an application, we obtain the Ricci curvature of all navigation Finsler metrics of isotropic S-curvature on the Gaussian shrinking soliton. Funding: Supported by the National Natural Science Foundation of China (11771020, 12171005).
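For reference, the standard definitions assumed above (not specific to this paper): a gradient Ricci soliton is a Riemannian manifold $(M,g)$ with a potential function $f$ satisfying

$$\operatorname{Ric} + \nabla^{2} f = \lambda g, \qquad \lambda \in \mathbb{R},$$

called shrinking when $\lambda > 0$ (the Gaussian shrinking soliton being flat $\mathbb{R}^{n}$ with $f(x) = |x|^{2}/4$), and a Finsler metric $F$ on an $n$-manifold has isotropic S-curvature when $S = (n+1)\,c(x)\,F$ for some scalar function $c$ on $M$.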
In a very recent article of mine, I corrected the traditional derivation of the Schwarzschild metric, thereby arriving at a correct Schwarzschild metric different from the traditional one. In this article, starting from this correct Schwarzschild metric, I also propose corrections to the traditional Reissner-Nordström, Kerr, and Kerr-Newman metrics, on the grounds that these metrics should reduce to the correct Schwarzschild metric in the appropriate borderline case. In this way, we see that, like the correct Schwarzschild metric, the corrected Reissner-Nordström, Kerr, and Kerr-Newman metrics do not present any event horizon (and therefore no black hole), unlike their traditional counterparts.
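For reference, the traditional Schwarzschild metric that the article sets out to correct is the standard

$$ds^{2} = -\left(1 - \frac{2GM}{c^{2}r}\right)c^{2}\,dt^{2} + \left(1 - \frac{2GM}{c^{2}r}\right)^{-1}dr^{2} + r^{2}\,d\Omega^{2},$$

whose event horizon at $r = 2GM/c^{2}$ is precisely what the proposed correction removes; the corrected form itself is not reproduced in this abstract.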
In this paper, we prove that for some completions of certain fiber bundles there is a Maxwell-Einstein metric conformally related to any given Kähler class.
Meteorological droughts occur when there is a deficiency in rainfall, i.e., when rainfall availability falls below established normal values. The greater challenge, then, is to obtain suitable methods for assessing drought occurrence, its onset or initiation, and its termination. This paper therefore evaluates the performance of the Standardised Precipitation Index (SPI) and the Standardised Precipitation Anomaly Index (SPAI) in characterising drought in Northern Nigeria, for purposes of comparison and eventual adoption of a probable candidate index for the development of an Early Warning System. The findings indicate that although the annual timescale may be long, it can be employed to obtain information on the temporal evolution of drought, especially its regional behaviour. A monthly timescale, however, can be more appropriate when the emphasis is on evaluating the effects of drought on water supply, agriculture, and groundwater abstractions. The SPAI can be employed for periodic rainfall time series, though it accentuates drought signatures and may not necessarily dampen the high fluctuations arising from high climatic variability, given the stochastic nature and state transitions of drought phenomena. Moreover, the temporal evolutions of SPI and SPAI were not coherent at different temporal accumulations, differing in their fluctuations. Nevertheless, at some timescales, for instance the 6-month accumulation, both the spatial and temporal distributions of drought characteristics were broadly consistent between the two indices. In view of the observed shortcomings of both indices, especially the SPI, the Standardised Nonstationary Precipitation Index (SnsPI) should be investigated, and other indices that account for the implications of global warming by incorporating potential evapotranspiration may prove more suitable for drought studies in Northern Nigeria.
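Since the comparison hinges on how the SPI is computed, a minimal sketch of the usual SPI recipe may help: accumulate rainfall over the chosen timescale, fit a gamma distribution, and map cumulative probabilities to standard-normal quantiles. This is a simplified illustration under stated assumptions (zero-rainfall handling and month-by-month fitting are omitted), not the authors' implementation:

```python
import numpy as np
from scipy import stats

def spi(monthly_rain, window=6):
    """Standardised Precipitation Index at a given accumulation timescale."""
    # accumulate rainfall over the chosen timescale (e.g. 6 months)
    agg = np.convolve(monthly_rain, np.ones(window), mode="valid")
    # fit a gamma distribution to the accumulated series
    shape, loc, scale = stats.gamma.fit(agg[agg > 0], floc=0)
    cdf = stats.gamma.cdf(agg, shape, loc=loc, scale=scale)
    # transform cumulative probabilities to standard-normal quantiles
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

# Example: 30 years of synthetic monthly rainfall; SPI < -1 flags drought months.
rain = np.random.default_rng(0).gamma(2.0, 40.0, size=360)
print((spi(rain, window=6) < -1).sum(), "drought months")
```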
Component-based software engineering is concerned with the development of software that can satisfy customer prerequisites through reuse or independent development. Coupling and cohesion measurements are primarily used to analyse software design quality, increase reliability, and reduce system software complexity. The complexity measurement of cohesion and coupling components analyses the relationships between component modules. This paper proposes a component selection framework based on the Hexa-oval optimization algorithm for selecting suitable components from a repository. It measures the interface density of coupling and cohesion modules in a modular software system. The cohesion measurement takes two parameters for analysing the resulting complexity, distinguishing low cohesion from high cohesion, while the coupling measure distinguishes a component's inside parameters from its outside parameters. In the final step, the measured coupling and cohesion values are used to compute an average over the component parameters. The paper measures the complexity of direct and indirect interactions among components, and the proposed algorithm selects the optimal component for the repository. The best results are observed for high cohesion and low coupling, as expected in component-based software engineering. Funding: We deeply acknowledge Taif University for supporting this research through Taif University Researchers Supporting Project number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Purpose: This study examines the effects of using publication-based metrics for the initial screening in the application process for a project leader. The key questions are whether formal policy affects the allocation of funds to researchers with a better publication record and how the previous academic performance of principal investigators relates to future project results. Design/methodology/approach: We compared two competitions, before and after the policy raised the publication threshold for principal investigators. We analyzed 9,167 papers published by 332 winners in physics and the social sciences and humanities (SSH), and 11,253 publications resulting from the funded projects. Findings: We found that among physicists, even in the first period, grants tended to be allocated to prolific authors publishing in high-quality journals. In contrast, the SSH grantees had been less prolific in publishing internationally in both periods; however, in the second period, the selection of grant recipients yielded better results in terms of awarding grants to more productive authors, in both the quantity and the quality of publications. There was no evidence that this better selection of grant recipients resulted in better publication records during grant realization. Originality: This study contributes to the discussion of formal policies that rely on metrics for the evaluation of grant proposals. The Russian case shows that such a policy may have a profound effect on changing the supply side of applicants, especially in disciplines that are less suitable for metric-based evaluations. In spite of the criticism levelled at metrics, they might be a useful additional instrument in academic systems where professional expertise is corrupted and prevents the allocation of funds to prolific researchers. Funding: This work is supported by the Russian Science Foundation (Grant No. 21-78-10102).
Evaluating complex information systems necessitates deep contextual knowledge of technology, user needs, and quality. Quality evaluation challenges grow with a system's complexity, especially when multiple services supported by varied technological modules are offered. Existing standards for software quality, such as the ISO 25000 series, provide a broad framework for evaluation. Broadness eases initial implementation, albeit often lacking the specificity needed to cater to individual system modules. This paper maps 48 data metrics and 175 software metrics onto specific system modules while aligning them with ISO standard quality traits. Using the ISO 25000 series as a foundation, especially ISO 25010 and ISO 25012, this research seeks to augment the applicability of these standards to multi-faceted systems, exemplified by five distinct software modules prevalent in modern information ecosystems.
In a competitive digital age where data volumes increase with time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on the extracted knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remains a major challenge for classification algorithms due to its high computational cost and storage requirements. The 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), which is publicly available and serves as the data source for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among those available in DM and ML. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model. The high score of HMM (m, r) = 0.47 identifies the overall best model, one that satisfies almost all of the user's requirements, unlike classical metrics that use a single criterion to select the most appropriate model. On the other hand, the ANNs were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class balance of the dataset (unbalanced data) have a significant impact on a model's efficiency, especially on its computational cost, and can hamper the interpretability of the model's parameters. The efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) developed with experts of the knowledge domain, as understanding the business domain has a significant impact.
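The abstract does not spell out the HMM (m, r) formula, so the following is only a hypothetical sketch of one way to combine heterogeneous evaluation metrics into a single score per model: each metric is min-max normalised across models, cost-type metrics (e.g. runtime) are inverted, and the results are averaged. The function name and weighting scheme are assumptions, not the paper's definition:

```python
import numpy as np

def composite_scores(metric_table, cost_metrics=()):
    """Hypothetical composite score; not the paper's actual HMM (m, r).

    metric_table maps a metric name to its values across models, e.g.
    {"accuracy": [0.91, 0.88, 0.86], "runtime_s": [3.5, 12.0, 240.0]}.
    """
    n_models = len(next(iter(metric_table.values())))
    total = np.zeros(n_models)
    for name, values in metric_table.items():
        v = np.asarray(values, dtype=float)
        v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # normalise to [0, 1]
        if name in cost_metrics:
            v = 1.0 - v  # lower cost is better
        total += v
    return total / len(metric_table)

# Example: model 0 (say, the decision tree) wins on the combined criterion.
print(composite_scores(
    {"accuracy": [0.91, 0.88, 0.86],
     "f1": [0.90, 0.86, 0.85],
     "runtime_s": [3.5, 12.0, 240.0]},
    cost_metrics={"runtime_s"},
))
```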
A measure of the “goodness” or efficiency of a test suite is used to determine its proficiency. The adequacy of the test suite is determined through mutation analysis, in which several Finite State Machine (FSM) mutants are produced by injecting errors against hypotheses. These mutants serve as test subjects for the test suite (TS). The effectiveness of the test suite is proportional to the number of eliminated mutants; the most effective test suite is the one that removes the largest number of mutants in the optimal time. Determining the fault detection ratio of a system is difficult, because its potential flaws are hard to identify precisely. In mutation testing, the Fault Detection Ratio (FDR) metric is currently used to express the adequacy of a test suite. However, this metric has some issues. If two test suites have the same defect detection rate, the smaller of the two is preferable; the test case (TC) is affected by the same issue, since of two test cases with identical performance, the smaller is assumed to perform better. Another difficulty involves time: comparing numerous tools that all claim a perfect mutant capture time is problematic. Our study developed three metrics to address these issues: FDR/|TS|, FDR/|TC|, and FDR/|Time|. In this context, the most widely used test generation tools were examined and evaluated using the developed metrics. Thanks to these metrics, the research contributes to eliminating problems in performance measurement by integrating the missing parameters into the system.
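A minimal sketch of how the three proposed ratios might be computed follows; the abstract does not state exactly how |TS| and |TC| are measured (e.g., number of test cases versus test-case length), so they are treated here simply as assumed size counts:

```python
def fdr_metrics(killed_mutants, total_mutants, suite_size, case_size, seconds):
    """Efficiency ratios built on the classical fault detection ratio (FDR)."""
    fdr = killed_mutants / total_mutants  # fraction of mutants eliminated
    return {
        "FDR": fdr,
        "FDR/|TS|": fdr / suite_size,   # same FDR, smaller suite scores higher
        "FDR/|TC|": fdr / case_size,    # same FDR, smaller test case scores higher
        "FDR/|Time|": fdr / seconds,    # same FDR, faster capture scores higher
    }

# Example: 90 of 100 mutants killed by a 12-test suite in 4.5 seconds.
print(fdr_metrics(90, 100, suite_size=12, case_size=30, seconds=4.5))
```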
Letting F be a homogeneous (α_1, α_2) metric on the reductive homogeneous manifold G/H, we first characterize the natural reductiveness of F as a local f-product of naturally reductive Riemannian metrics. Second, we prove the equivalence among several properties of F concerning its mean Berwald curvature and S-curvature. Finally, we find an explicit flag curvature formula for G/H when F is naturally reductive. Funding: Supported by the National Natural Science Foundation of China (12131012, 12001007, 11821101), the Beijing Natural Science Foundation (1222003, Z180004), and the Natural Science Foundation of Anhui Province (1908085QA03).
Modified theories of gravity include spin dependence in General Relativity to account for additional sources of gravity, instead of the dark matter/energy approach. The spin-spin interaction is already included in the effective nuclear force potential, and theoretical considerations and experimental evidence hint at the hypothesis that gravity originates from such an interaction, under an averaging process over spin directions. This invites continuing the line of theory initiated by Einstein and Cartan, based on tetrads and spin effects modeled by connections with torsion. As a first step in this direction, the article considers a new modified Coulomb/Newton law accounting for the spin-spin interaction. The physical potential is geometrized through specific affine connections and specific semi-Riemannian metrics canonically associated to it, acting on a manifold or at the level of its tangent bundle. Freely falling particles in these “toy Universes” are determined, showing interesting behavior and unexpected patterns.
Assessment of rock mass quality significantly impacts the design and construction of underground and open-pit mines from the standpoints of stability and economy. This study develops the novel Gromov-Hausdorff distance for rock quality (GHDQR) methodology for rock mass quality rating based on multi-criteria grey metric space. The quality of surrounding rock is typically presented by classes (metric spaces) with specified properties and adequate interval-grey numbers. Measuring the distance between the characteristics of a surrounding rock sample and the existing classes represents the core of this study. The Gromov-Hausdorff distance is an especially useful discriminant function, i.e., a classifier, for calculating these distances and assessing the quality of the surrounding rock. The efficiency of the developed methodology is analyzed using the Mean Absolute Percentage Error (MAPE) technique. Seven existing methods, namely the Gaussian cloud method, the discriminant method, the mutation series method, artificial neural networks (ANN), support vector machines (SVM), the grey wolf optimizer with support vector classification (GWO-SVC), and the rock mass rating (RMR) method, are used for comparison with the proposed GHDQR method. The share of the highly accurate category, 85.71%, clearly indicates agreement with the actual values obtained by the compared methods. The comparison results show that the model enables an objective, efficient, and reliable assessment of rock mass quality.
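For reference, the validation statistic named above is presumably the standard mean absolute percentage error over n samples with actual values A_t and estimates F_t:

$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|.$$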
Using the Raychaudhuri equation, we associate quantum probability amplitudes (propagators) to equatorial principal ingoing and outgoing null geodesic congruences in the Kerr metric. The expansion scalars diverge at the ring singularity; however, the propagators remain finite, which is an indication that at the quantum level singularities might disappear or, at least, become softened.
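For context, the equation used here, in its standard form for a null geodesic congruence with affine parameter λ, tangent k^a, expansion θ, shear σ_{ab}, and twist ω_{ab} (the general form, not a result of this paper), is

$$\frac{d\theta}{d\lambda} = -\frac{1}{2}\theta^{2} - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}k^{a}k^{b}.$$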
Cross entropy is a measure in machine learning and deep learning that assesses the difference between predicted and actual probability distributions. In this study, we propose cross entropy as a performance evaluation metric for image classifier models and apply it to the CT image classification of lung cancer. A convolutional neural network is employed as the deep neural network (DNN) image classifier, with the residual network (ResNet) 50 chosen as the DNN architecture. The image data comprise a lung CT image set. Two classification models are built from datasets with varying amounts of data, and lung cancer is categorized into four classes using 10-fold cross-validation. Furthermore, we employ t-distributed stochastic neighbor embedding to visually explain the data distribution after classification. Experimental results demonstrate that cross entropy is a highly useful metric for evaluating the reliability of image classifier models. Note that for a more comprehensive evaluation of model performance, combining it with other evaluation metrics is considered essential.
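For reference, the metric in question is presumably the standard categorical cross entropy between the true distribution p and the predicted distribution q over C classes, averaged over N evaluation images:

$$H(p, q) = -\frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\,\log q_{n,c},$$

which reduces to the mean negative log-probability assigned to the correct class when p is one-hot; lower values indicate a more reliable classifier.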
Background: Failure to rescue (FTR) has been an effective quality metric in congenital heart surgery. Conversely, morbidity and mortality depend greatly on non-modifiable individual factors and have a weak correlation with better-quality performance. We aim to measure the complications, mortality, and risk factors in pediatric patients undergoing congenital heart surgery in a high-complexity institution located in a middle-income country and compare them with those of other institutions that have conducted a similar study. Methods: A retrospective observational study was conducted in a high-complexity service provider institution in Cali, Colombia. All pediatric patients undergoing any congenital heart surgery between 2019 and 2022 were included. The main outcomes evaluated in the study were the complication, mortality, and failure to rescue rates. Univariate and multivariate logistic regression analyses were performed with mortality as the outcome variable. Results: We evaluated 308 congenital heart surgeries. Regarding the outcomes, 201 (65%) complications occurred, 23 (7.5%) patients died, and the FTR of the entire cohort was 11.4%. The presence of a postoperative complication (OR 14.88, CI 3.06–268.37, p=0.009), age (OR 0.79, CI 0.57–0.96, p=0.068), and urgent/emergent surgery (OR 8.14, CI 2.97–28.66, p<0.001) were the most significant variables in predicting mortality. Conclusions: Failure to rescue is an effective and comparable quality measure in healthcare institutions and is the major contributor to postoperative mortality in congenital heart surgeries. Despite our higher mortality and complication rates, we obtained a failure to rescue rate comparable to those of health institutions in high-income countries. Ethics: Approved by the Institutional Ethics Committee (approval number 628-2022, Act No. I22-112 of November 02, 2022), following national and international recommendations for human research.
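The reported rates are mutually consistent under the usual FTR definition (deaths among patients who developed a postoperative complication, divided by the number of patients with complications), assuming all 23 deaths occurred after a complication:

$$\mathrm{FTR} = \frac{\text{deaths after a complication}}{\text{patients with} \geq 1 \text{ complication}} = \frac{23}{201} \approx 11.4\%.$$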
We investigate the quantum metric and the topological Euler number in a cyclically modulated Su-Schrieffer-Heeger (SSH) model with long-range hopping terms. By computing the quantum geometry tensor, we derive exact expressions for the quantum metric and the Berry curvature of the energy band electrons, and we obtain the phase diagram of the model marked by the first Chern number. Furthermore, we obtain the topological Euler number of the energy band based on the Gauss-Bonnet theorem for the topological characterization of the closed manifold of Bloch states over the first Brillouin zone. However, some regions where the Berry curvature is identically zero in the first Brillouin zone result in a degenerate quantum metric, which leads to ill-defined, non-integer topological Euler numbers. Nevertheless, the non-integer “Euler number” provides valuable insights and an upper bound for the absolute values of the Chern numbers. Funding: Project supported by the Beijing Natural Science Foundation (Grant No. 1232026), the Qinxin Talents Program of BISTU (Grant No. QXTCP C201711), the R&D Program of Beijing Municipal Education Commission (Grant No. KM202011232017), the National Natural Science Foundation of China (Grant No. 12304190), and the Research Fund of BISTU (Grant No. 2022XJJ32).
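For context, the standard relations assumed above (general definitions, not the paper's model-specific results): for Bloch states |u(k)⟩, the quantum geometric tensor splits into the quantum metric g and the Berry curvature F,

$$Q_{\mu\nu} = \langle \partial_{\mu} u | \left(1 - |u\rangle\langle u|\right) | \partial_{\nu} u \rangle, \qquad g_{\mu\nu} = \operatorname{Re} Q_{\mu\nu}, \qquad F_{\mu\nu} = -2\operatorname{Im} Q_{\mu\nu},$$

and the Euler number follows from the Gauss-Bonnet theorem applied to the Bloch-state manifold,

$$\chi = \frac{1}{2\pi} \int_{\mathrm{BZ}} K \sqrt{\det g}\; d^{2}k,$$

where K is the Gaussian curvature of g; the degeneracy mentioned above is det g = 0, which renders this integral ill-defined.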