Adaptive fractional polynomial modeling of general correlated outcomes is formulated to address nonlinearity in means, variances/dispersions, and correlations. Means and variances/dispersions are modeled using generalized linear models in fixed effects/coefficients. Correlations are modeled using random effects/coefficients. Nonlinearity is addressed using power transforms of primary (untransformed) predictors. Parameter estimation is based on extended linear mixed modeling generalizing both generalized estimating equations and linear mixed modeling. Models are evaluated using likelihood cross-validation (LCV) scores and are generated adaptively using a heuristic search controlled by LCV scores. Cases covered include linear, Poisson, logistic, exponential, and discrete regression of correlated continuous, count/rate, dichotomous, positive continuous, and discrete numeric outcomes treated as normally, Poisson, Bernoulli, exponentially, and discrete numerically distributed, respectively. Example analyses are also generated for these five cases to compare adaptive random effects/coefficients modeling of correlated outcomes to previously developed adaptive modeling based on directly specified covariance structures. Adaptive random effects/coefficients modeling substantially outperforms direct covariance modeling in the linear, exponential, and discrete regression example analyses. It generates equivalent results in the logistic regression example analyses and it is substantially outperformed in the Poisson regression case. Random effects/coefficients modeling of correlated outcomes can provide substantial improvements in model selection compared to directly specified covariance modeling. However, directly specified covariance modeling can generate competitive or substantially better results in some cases while usually requiring less computation time.
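The LCV-guided search described above can be illustrated with a toy computation. The sketch below is hypothetical (not the authors' implementation) and simplifies to independent, normally distributed outcomes rather than correlated ones; it scores candidate fractional-polynomial mean models by an average held-out log-likelihood, the kind of quantity a heuristic search would compare.

```python
# Minimal sketch (not the authors' code): scoring candidate fractional-polynomial
# mean models with a k-fold likelihood cross-validation (LCV) score, assuming
# independent normally distributed outcomes for simplicity.
import numpy as np

def lcv_score(x, y, powers, k=5):
    """Average held-out log-likelihood per observation for a normal-errors
    linear model in the given power transforms of x."""
    n = len(y)
    folds = np.arange(n) % k
    total = 0.0
    for fold in range(k):
        train, test = folds != fold, folds == fold
        X_tr = np.column_stack([x[train] ** p for p in powers] + [np.ones(train.sum())])
        X_te = np.column_stack([x[test] ** p for p in powers] + [np.ones(test.sum())])
        beta, *_ = np.linalg.lstsq(X_tr, y[train], rcond=None)
        resid_tr = y[train] - X_tr @ beta
        sigma2 = np.mean(resid_tr ** 2)          # dispersion estimated on the training fold
        resid_te = y[test] - X_te @ beta
        total += np.sum(-0.5 * (np.log(2 * np.pi * sigma2) + resid_te ** 2 / sigma2))
    return total / n                             # larger is better

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 5.0, 200)
y = 2.0 * np.sqrt(x) + rng.normal(0, 0.3, 200)   # true mean is a power transform of x
for powers in [(1.0,), (0.5,), (0.5, 2.0)]:      # candidate fractional-polynomial models
    print(powers, round(lcv_score(x, y, powers), 3))
```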
Objective: To analyze the impact of an integrated extended care model on improving the quality of life of elderly patients with Type 2 Diabetes Mellitus (T2DM). Methods: A total of 176 patients admitted to the hospital from March 2015 to February 2018 were selected and randomly assigned to an observation group and a control group, with 88 patients each. The control group implemented conventional nursing interventions, and the observation group carried out an integrated extended-care model. The levels of glycemic control, quality of life, and daily medication adherence were compared between the two groups. Results: The observation group showed significant improvement in glycemic control, and their fasting blood glucose, 2-hour postprandial blood glucose, and glycated hemoglobin levels were significantly lower than those in the control group (P < 0.05). The quality of life of the patients in the observation group was higher than that of the control group (P < 0.05). The observation group had a higher compliance score (95.48 ± 7.45) than the control group (81.31 ± 8.72) (t = 8.909, P < 0.05). Conclusion: The integrated extended care model allows patients to receive comprehensive and individualized nursing services after discharge, which improves the effect of drug therapy and the quality of life of patients.
BACKGROUND Stroke has become one of the most serious life-threatening diseases due to its high morbidity, disability, recurrence and mortality rates. AIM To explore the intervention effect of the multi-disciplinary treatment (MDT) extended nursing model on negative emotions and quality of life of young post-stroke patients. METHODS A total of 60 young stroke patients who were hospitalized in the neurology department of our hospital from January 2020 to December 2021 were selected and randomly divided into a control group and an experimental group, with 30 patients in each group. The control group used the conventional care model and the experimental group used the MDT extended nursing model. After the in-hospital and 3-mo post-discharge interventions, the differences in negative emotions and quality of life scores between the two groups were evaluated and analyzed at admission, at discharge and after discharge, respectively. RESULTS There were no statistically significant differences in negative emotion scores between the two groups at admission, while within each group the negative emotion scores differed significantly between admission and discharge, between discharge and post-discharge, and between admission and post-discharge. In addition, the negative emotion scores differed significantly between the two groups at discharge and after discharge. There was no statistically significant difference in quality of life scores between the two groups at admission, and the differences in quality of life scores between admission and discharge, between discharge and post-discharge, and between admission and post-discharge were statistically significant for each group of patients. CONCLUSION The MDT extended nursing model can improve patients' negative emotions and their quality of life. Therefore, it can be applied in future clinical practice and is worthy of promotion.
Exploring the role of entanglement in quantum nonequilibrium dynamics is important to understand the mechanism of thermalization in an isolated system. We study the relaxation dynamics in a one-dimensional extended Bose–Hubbard model after a global interaction quench by considering several observables: the local boson numbers, the nonlocal entanglement entropy, and the momentum distribution functions. We calculate the thermalization fidelity for different quench parameters and different sizes of subsystems, and the results show that the degree of thermalization is affected by the distance from the integrable point and the size of the subsystem. We employ the Pearson coefficient as the measurement of the correlation between the entanglement entropy and thermalization fidelity, and a strong correlation is demonstrated for the quenched system.
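As a reminder of the statistic involved, here is a minimal sketch of the Pearson correlation computation on assumed (made-up) values of the two diagnostics; it is not the paper's data or code.

```python
# Toy illustration (assumed data, not the paper's): quantifying how strongly two
# diagnostics track each other with the Pearson correlation coefficient.
import numpy as np

def pearson(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    a -= a.mean(); b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Hypothetical values of entanglement entropy and thermalization fidelity
# recorded at several quench parameters.
entropy  = [0.42, 0.77, 1.05, 1.31, 1.48, 1.60]
fidelity = [0.55, 0.71, 0.83, 0.90, 0.95, 0.97]
print(round(pearson(entropy, fidelity), 3))   # close to 1 indicates strong correlation
```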
We present a new interpretation of the Higgs field as a composite particle made up of a positive and a negative mass Planck particle. According to the Winterberg hypothesis, space, i.e., the vacuum, consists of both positive and negative physical massive particles, which he called planckions, interacting through strong superfluid forces. In our composite model for the Higgs boson, there is an intrinsic length scale associated with the vacuum, different from the one introduced by Winterberg, where, when the vacuum is in a perfectly balanced state, the number density of positive Planck particles equals the number density of negative Planck particles. Due to the mass compensating effect, the vacuum thus appears massless, chargeless, without pressure, energy density, or entropy. However, a situation can arise where there is an effective mass density imbalance due to the two species of Planck particle not matching in terms of populations within their respective excited energy states. This does not require the physical addition or removal of either positive or negative Planck particles within a given region of space, as originally thought. Ordinary matter, dark matter, and dark energy can thus be given a new interpretation as residual vacuum energies within the context of a greater vacuum, where the populations of the positive and negative energy states exactly balance. In the present epoch, the dark energy number density imbalance per cubic meter is estimated for cosmic distance scales in excess of 100 Mpc. Compared to a strictly balanced vacuum, where we estimate that the positive and the negative Planck number densities are each of the order of 7.85E54 particles per cubic meter, this imbalance is a very small perturbation. This slight imbalance, we argue, would dramatically alleviate, if not altogether eliminate, the long-standing cosmological constant problem.
We work within a Winterberg framework where space, i.e., the vacuum, consists of a two-component superfluid/super-solid made up of a vast assembly (sea) of positive and negative mass Planck particles, called planckions. These material particles interact indirectly, and have very strong restoring forces keeping them a finite distance apart from each other within their respective species. Because of their mass compensating effect, the vacuum appears massless, charge-less, without pressure, net energy density or entropy. In addition, we consider two varying-G models, where G is Newton's constant and G^(-1) increases with an increase in cosmological time. We argue that there are at least two competing models for the quantum vacuum within such a framework. The first follows a strict extension of Winterberg's model. This leads to nonsensible results if G increases going back in cosmological time, as the length scale inherent in such a model will not scale properly. The second model introduces a different length scale, which does scale properly, but keeps the mass of the Planck particle at ± the Planck mass. Moreover, we establish a connection between ordinary matter, dark matter, and dark energy, where all three mass densities within the Friedmann equation must be interpreted as residual vacuum energies, which only surface once aggregate matter has formed, at relatively low CMB temperatures. The symmetry of the vacuum will be shown to be broken, because of the different scaling laws, beginning with the formation of elementary particles. Much like waves on an ocean, where positive and negative planckion mass densities effectively cancel each other out and form a zero vacuum energy density/zero vacuum pressure surface, these positive mass densities are very small perturbations (anomalies) about the mean. This greatly alleviates, i.e., minimizes, the cosmological constant problem, a long-standing problem associated with the vacuum.
Based on the basic trade gravity model and Xinjiang's practical situation, new explanatory variables (GDP, GDPpc and SCO) are introduced to build an extended trade gravity model fitting Xinjiang's bilateral trade. The empirical analysis of this model suggests that these three variables affect Xinjiang's bilateral trade positively, whereas geographic distance is found to be a significant factor influencing Xinjiang's bilateral trade negatively. Using the extended trade gravity model, this article then quantitatively analyzes the trade situation between Xinjiang and its main trade partners in 2004. The results indicate that Xinjiang cooperates successfully with most of its trade partners in terms of present economic scale and development level. Xinjiang has successfully established trade partnerships with Central Asia, Central Europe and Eastern Europe, Western Europe, East Asia and South Asia, whereas foreign trade development with West Asia has been much slower. Finally, some suggestions on developing Xinjiang's foreign trade are put forward.
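A common way to estimate such a specification is ordinary least squares on the log-linear form of the gravity equation. The sketch below uses synthetic data and hypothetical variable names; it is one plausible reading of the extended specification, not the authors' estimation.

```python
# Illustrative sketch (synthetic data, hypothetical variable names): estimating an
# extended trade gravity model in log-linear form,
#   ln(trade_ij) = b0 + b1*ln(GDP_j) + b2*ln(GDPpc_j) + b3*SCO_j + b4*ln(dist_ij) + e,
# by ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                    # hypothetical partner economies
gdp   = rng.lognormal(mean=6, sigma=1, size=n)
gdppc = rng.lognormal(mean=2, sigma=0.5, size=n)
sco   = rng.integers(0, 2, size=n)        # 1 if the partner is an SCO member
dist  = rng.uniform(500, 8000, size=n)
ln_trade = (1.0 + 0.8 * np.log(gdp) + 0.5 * np.log(gdppc) + 0.6 * sco
            - 0.9 * np.log(dist) + rng.normal(0, 0.2, size=n))

X = np.column_stack([np.ones(n), np.log(gdp), np.log(gdppc), sco, np.log(dist)])
beta, *_ = np.linalg.lstsq(X, ln_trade, rcond=None)
for name, b in zip(["const", "ln GDP", "ln GDPpc", "SCO", "ln distance"], beta):
    print(f"{name:12s} {b:+.3f}")         # the distance coefficient should come out negative
```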
With the increasing popularity and complexity of Web applications and the emergence of their new characteristics, the testing and maintenance of large, complex Web applications are becoming more complex and difficult. Web applications generally contain large numbers of pages and are used by enormous numbers of users. Statistical testing is an effective way of ensuring their quality. Web usage can be accurately described by a Markov chain, which has been proved to be an ideal model for software statistical testing. The results of unit testing can be utilized in the later stages, which is an important strategy for bottom-to-top integration testing; another improvement of the extended Markov chain model (EMM) is the error type vector, which is treated as part of the page node. This paper also proposes an algorithm for generating test cases of usage paths. Finally, optional usage reliability evaluation methods and an incremental usability regression testing model for testing and evaluation are presented.
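To make the test-case generation idea concrete, the following sketch draws usage paths by random walks over a small Markov usage chain with hypothetical page names and transition probabilities; it illustrates the general approach, not the paper's EMM algorithm.

```python
# Sketch of the idea only (hypothetical page names and probabilities): generating
# usage-path test cases by random walks over a Markov chain estimated from Web logs.
import random

# Transition probabilities between pages; "EXIT" terminates a session.
usage_chain = {
    "home":    [("search", 0.6), ("login", 0.3), ("EXIT", 0.1)],
    "login":   [("home", 0.5), ("account", 0.4), ("EXIT", 0.1)],
    "search":  [("results", 0.9), ("EXIT", 0.1)],
    "results": [("item", 0.7), ("search", 0.2), ("EXIT", 0.1)],
    "item":    [("results", 0.5), ("home", 0.2), ("EXIT", 0.3)],
    "account": [("home", 0.6), ("EXIT", 0.4)],
}

def generate_path(chain, rng, start="home", max_len=20):
    """One usage-path test case: follow transitions until EXIT or max_len pages."""
    path, page = [start], start
    while page != "EXIT" and len(path) < max_len:
        pages, probs = zip(*chain[page])
        page = rng.choices(pages, probs)[0]
        if page != "EXIT":
            path.append(page)
    return path

rng = random.Random(0)                      # seeded for reproducible test suites
for _ in range(3):
    print(" -> ".join(generate_path(usage_chain, rng)))
```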
Accurate identification of influential nodes facilitates the control of rumor propagation and interrupts the spread of computer viruses. Many classical approaches have been proposed by researchers regarding different aspects. To explore the impact of location information in depth, this paper proposes an improved global structure model to characterize the influence of nodes. The method considers both the node's self-information and the role of the location information of neighboring nodes. First, the degree centrality of each node is calculated; the degree value of each node is used to represent self-influence, and the degree values of the neighbor-layer nodes are divided by a power of the path length, a path attenuation used to represent global influence. Finally, an extended improved global structure model that considers the nearest-neighbor information after combining self-influence and global influence is proposed to identify influential nodes. In this paper, the propagation process of a real network is obtained by simulation with the SIR model, and the effectiveness of the proposed method is verified from the two aspects of discrimination and accuracy. The experimental results show that the proposed method is more accurate in identifying influential nodes than other comparison methods across multiple networks.
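A minimal sketch of the influence measure described above follows, assuming a plain adjacency-list graph; the attenuation exponent and hop limit are illustrative choices, and the exact normalization of the paper's formula is not reproduced here.

```python
# Minimal sketch of the idea described above (not the paper's exact formula): a
# node's influence is its own degree plus the degrees of nodes l hops away,
# attenuated by a power of the path length l.
from collections import deque

def influence(adj, node, alpha=2.0, max_hops=3):
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    score = float(degree[node])                     # self-influence
    dist = {node: 0}
    queue = deque([node])
    while queue:                                    # BFS gives shortest path lengths
        v = queue.popleft()
        if dist[v] >= max_hops:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
                score += degree[w] / (dist[w] ** alpha)   # path attenuation
    return score

# Small toy graph given as an adjacency list.
adj = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 5], 4: [1], 5: [3, 6], 6: [5]}
print({v: round(influence(adj, v), 2) for v in adj})
```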
The current highly competitive environment has driven industries to operate with increasingly restricted profit margins. Thus, it is imperative to optimize production processes. Faced with this scenario, multivariable predictive control of processes has been presented as a powerful alternative to achieve these goals. Moreover, the rationale for implementing advanced control and the subsequent analysis of its post-match performance also focus on the benefits that this tool brings to the plant. It is therefore essential to establish a methodology for analysis, based on clear and measurable criteria. Currently, different methodologies are available in the market to assist with such analysis. These tools can have a quantitative or qualitative focus. The aim of this study is to evaluate three of the main performance assessment technologies currently in use: the Minimum Variance Control (Harris) Index; Statistical Process Control (Cp and Cpk); and the Qin and Yu Index. These indexes were studied for an alumina plant controlled by three MPC (model predictive control) algorithms (GPC (generalized predictive control), RMPCT (robust multivariable predictive control technology) and ESSMPC (extended state space model predictive controller)), with different results.
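For the Statistical Process Control indexes mentioned above, the following sketch computes Cp and Cpk for a single controlled variable from assumed data and hypothetical specification limits.

```python
# Sketch of the SPC indices mentioned above, computed on assumed data for one
# controlled variable (the specification limits are hypothetical).
import numpy as np

def cp_cpk(x, lsl, usl):
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)      # capability accounting for centering
    return cp, cpk

rng = np.random.default_rng(2)
controlled_variable = rng.normal(loc=50.2, scale=0.8, size=500)   # e.g. a controlled temperature
cp, cpk = cp_cpk(controlled_variable, lsl=47.0, usl=53.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cpk < Cp signals an off-center process
```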
Phosphorus is one of the most important nutrients required to support various kinds of biodegradation processes. As this particular nutrient is not included in the activated sludge model no. 1 (ASM1), this study extended this model in order to determine the fate of phosphorus during the biodegradation processes. When some of the kinetic parameters are modified using observed data from the restoration project of the Xuxi River in Wuxi City, China, from August 25 to 31, 2009, the extended model shows excellent results. In order to obtain optimum values of the nitrogen and phosphorus coefficients, the mass fraction method was used to ensure that the final results were reasonable and practically relevant. The temporal distribution of the data calculated with the extended ASM1 approximates that of the observed data.
The purpose of this article is to investigate approaches for modeling individual patient count/rate data over time accounting for temporal correlation and non-constant dispersions while requiring reasonable amounts of time to search over alternative models for those data. This research addresses formulations for two approaches for extending generalized estimating equations (GEE) modeling. These approaches use a likelihood-like function based on the multivariate normal density. The first approach augments standard GEE equations to include equations for estimation of dispersion parameters. The second approach is based on estimating equations determined by partial derivatives of the likelihood-like function with respect to all model parameters and so extends linear mixed modeling. Three correlation structures are considered including independent, exchangeable, and spatial autoregressive of order 1 correlations. The likelihood-like function is used to formulate a likelihood-like cross-validation (LCV) score for use in evaluating models. Example analyses are presented using these two modeling approaches applied to three data sets of counts/rates over time for individual cancer patients including pain flares per day, as needed pain medications taken per day, and around the clock pain medications taken per day per dose. Means and dispersions are modeled as possibly nonlinear functions of time using adaptive regression modeling methods to search through alternative models compared using LCV scores. The results of these analyses demonstrate that extended linear mixed modeling is preferable for modeling individual patient count/rate data over time, because in example analyses, it either generates better LCV scores or more parsimonious models and requires substantially less time.
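A minimal sketch of the kind of likelihood-like computation described above follows; it evaluates a multivariate normal log-density for one patient's outcomes under a spatial AR(1) correlation structure, with made-up means and dispersions, and is not the authors' code.

```python
# Hedged sketch (not the authors' code): a multivariate normal log-density for one
# patient's outcomes with fitted means, a dispersion per time point, and spatial
# AR(1) correlation in time.
import numpy as np

def loglik_like(y, mu, disp, times, rho):
    """Multivariate normal log density with cov = D^(1/2) R D^(1/2),
    R[i, j] = rho**|t_i - t_j| (spatial autoregressive of order 1)."""
    y, mu, disp, times = map(np.asarray, (y, mu, disp, times))
    R = rho ** np.abs(times[:, None] - times[None, :])
    D = np.diag(np.sqrt(disp))
    cov = D @ R @ D
    resid = y - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = resid @ np.linalg.solve(cov, resid)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + quad)

# Toy daily counts for one patient with assumed fitted means and dispersions.
times = np.array([0, 1, 2, 4, 5])               # days (unequal spacing is allowed)
y     = np.array([2.0, 3.0, 1.0, 4.0, 3.0])
mu    = np.array([2.2, 2.6, 2.0, 3.1, 3.3])
disp  = np.array([2.2, 2.6, 2.0, 3.1, 3.3])     # e.g. Poisson-like dispersion equal to the mean
print(round(loglik_like(y, mu, disp, times, rho=0.4), 3))
```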
Purpose: To formulate and demonstrate methods for regression modeling of probabilities and dispersions for individual-patient longitudinal outcomes taking on discrete numeric values. Methods: Three alternatives for modeling of outcome probabilities are considered. Multinomial probabilities are based on different intercepts and slopes for probabilities of different outcome values. Ordinal probabilities are based on different intercepts and the same slope for probabilities of different outcome values. Censored Poisson probabilities are based on the same intercept and slope for probabilities of different outcome values. Parameters are estimated with extended linear mixed modeling maximizing a likelihood-like function based on the multivariate normal density that accounts for within-patient correlation. Formulas are provided for gradient vectors and Hessian matrices for estimating model parameters. The likelihood-like function is also used to compute cross-validation scores for alternative models and to control an adaptive modeling process for identifying possibly nonlinear functional relationships in predictors for probabilities and dispersions. Example analyses are provided of daily pain ratings for a cancer patient over a period of 97 days. Results: The censored Poisson approach is preferable for modeling these data, and presumably other data sets of this kind, because it generates a competitive model with fewer parameters in less time than the other two approaches. The generated probabilities for this model are distinctly nonlinear in time while the dispersions are distinctly nonconstant over time, demonstrating the need for adaptive modeling of such data. The analyses also address the dependence of these daily pain ratings on time and the daily numbers of pain flares. Probabilities and dispersions change differently over time for different numbers of pain flares. Conclusions: Adaptive modeling of daily pain ratings for individual cancer patients is an effective way to identify nonlinear relationships in time as well as in other predictors such as the number of pain flares.
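The censored Poisson idea can be sketched directly: a single rate parameter generates probabilities for all outcome values, with the top category absorbing the censored tail. The rating scale and rate below are assumed for illustration only.

```python
# Illustrative sketch (assumed rating scale and parameter value): the "censored
# Poisson" idea described above, where one rate parameter generates probabilities
# for all outcome values and the top category absorbs the censored tail.
import math

def censored_poisson_probs(lam, max_value):
    """P(Y = y) for y = 0..max_value-1 from a Poisson(lam), with
    P(Y = max_value) = P(Poisson >= max_value)."""
    probs = [math.exp(-lam) * lam**y / math.factorial(y) for y in range(max_value)]
    probs.append(1.0 - sum(probs))          # censored top category
    return probs

# Daily pain ratings on a 0-10 scale with a hypothetical fitted rate of 3.2.
probs = censored_poisson_probs(lam=3.2, max_value=10)
print([round(p, 3) for p in probs], "sum =", round(sum(probs), 3))
```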
In this paper, the extended modelling method with serial sands is used in experimental research on the erosion patterns at the beach discharge outlet of a Hua-Neng power plant. The theoretical basis for the extended modelling method with serial sands is systematically presented, and the method has been successfully employed in sediment experiments for coastal works. According to the Froude law, the model is designed to be a normal one with a movable bed, with geometric scale λ_L = λ_H = 15, and three scales of sediment grain size are chosen: λ_d1 = 0.207, λ_d2 = 0.393, and λ_d3 = 0.656. The median particle diameter of the sea beach prototype sand is d_50p = 0.059 mm, and the discharged water flow of the power plant is 21.7 m³/s. Three types of natural sea sand have been chosen as the serial modelling sands to extend the simulation of the prototype, thus replacing the conventional test in which artificial lightweight sands are used. As a result, this method not only reduces the cost significantly, but is also an advanced technique that is easy to use. After a series of tests, satisfactory results have been obtained.
We study systematically an extended Bose-Hubbard model on the triangular lattice by means of a mean-field method based on the Gutzwiller ansatz. Pair hopping terms are explicitly included and a three-body constraint is applied. The zero-temperature phase diagram and a variety of quantum phase transitions are investigated in great detail. In particular, we show the existence and the stability of the pair supersolid phase.
In this study, the performance of the extended shallow water model (ESWM) in evaluating the flow regime of turbidity currents entering the Dez Reservoir was investigated. The continuity equations for fluid and particles and the Navier-Stokes equations govern the entire flow of turbidity currents. The shallow water equations governing the flow of the depositing phase of turbidity currents are derived from these equations. A case study was conducted on the flow regime of turbidity currents entering the Dez Reservoir in Iran from January 2002 to July 2003. The reservoir faces a serious sedimentation problem: its dead storage will be full within the coming 10 years, and the inflowing water in the hydropower conduit system is now becoming turbid. Based on the values of the dimensionless friction number (Nf ≤ 1) and dimensionless entrainment number (NE ≤ 1) of turbidity currents, and the coefficient of determination between the observed and predicted deposit depths (R² = 0.86) for the flow regime of negligible friction and negligible entrainment (NFNE), the flow regime of turbidity currents coming into the Dez Reservoir is considered to be NFNE. The results suggest that the ESWM is an appropriate approach for evaluating the flow regime of turbidity currents in dam reservoirs where the characteristics of turbidity currents, such as the deposit depth, must be evaluated.
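The quantities used to classify the flow regime can be illustrated with a small sketch: the dimensionless friction and entrainment numbers are checked against 1, and R² is computed between observed and predicted deposit depths; all numbers below are made up.

```python
# Small sketch (made-up numbers) of the regime classification and goodness of fit
# used above: Nf and NE checked against 1, and R^2 between observed and predicted
# deposit depths.
import numpy as np

def r_squared(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

Nf, NE = 0.4, 0.7                                   # hypothetical dimensionless numbers
observed  = [0.21, 0.35, 0.48, 0.60, 0.72]          # deposit depths (m), assumed
predicted = [0.25, 0.33, 0.45, 0.63, 0.69]
regime = "NFNE" if Nf <= 1 and NE <= 1 else "other"
print(f"regime = {regime}, R^2 = {r_squared(observed, predicted):.2f}")
```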
By using the bosonization and renormalization group methods, we have studied the low-energy physical properties of the one-dimensional extended Hubbard model. The formation of charge and spin gaps is investigated at the half-filled electron band. An analytical expression for the charge gap in terms of the Coulomb repulsive interaction strength U and the nearest-neighbour interaction parameter V is obtained.
In the framework of nonperturbative quantum field theory, the critical phenomena of the one-dimensional extended Hubbard model (EHM) at half-filling are discussed from weak to intermediate interactions. After the EHM is mapped into two decoupled sine-Gordon models, the ground state phase diagram of the system is derived in an explicit way. It is confirmed that coexisting phases appear in different interaction regimes which cannot be found by conventional theoretical methods. The diagram shows that there are seven different phase regions in the ground state, which does not seem to be the same as in previous discussions, especially the boundary between the phase separation and condensed phase regions. The phase transition properties of the model between the various phase regions are studied in detail.
We propose a model for gravity based on the gravitational polarization of space. With this model, we can relate the density parameters within the Friedmann model and show that dark matter is bound mass formed from massive dipoles set up within the vacuum surrounding ordinary matter. Aggregate matter induces a gravitational field within the surrounding space, which reinforces the original field. Dark energy, on the other hand, is the energy density associated with gravitational fields both for ordinary matter and for bound, or induced, dipole matter. At high CBR temperatures, the cosmic susceptibility induced by ordinary matter vanishes, as it is a smeared or average value for the cosmos as a whole. Even though gravitational dipoles do exist, no large-scale alignment or ordering is possible. Our model assumes that space, i.e., the vacuum, is filled with a vast assembly (sea) of positive and negative mass particles having Planck mass, called planckions, which is based on extensive work by Winterberg. These original particles form a very stiff two-component superfluid, where positive and negative mass species neutralize one another already at the submicroscopic level, leading to zero net mass, zero net gravitational pressure, and zero net entropy for the undisturbed medium. It is theorized that the gravitational dipoles form from such material positive and negative particles and, moreover, that this causes an intrinsic polarization of the vacuum for the universe as a whole. We calculate the smeared or average susceptibility of the cosmos in the present epoch, with an overall resulting polarization of 2.396 kg/m². Moreover, due to all the ordinary mass in the universe, made up of quarks and leptons, we calculate a net gravitational field of magnitude 3.771E-10 m/s². This smeared or average value permeates all of space and can be deduced by any observer, irrespective of location within the universe. This net gravitational field is forced upon us by Gauss's law, and although technically a surface gravitational field, it is argued that this surface, smeared value holds point for point in the observable universe. A complete theory of gravitational polarization is presented. In contrast to electrostatics, gravistatics leads to anti-screening of the original source field, increasing the original source value to a larger total value, where the difference is the induced or polarized field. In the present epoch, this leads to a bound mass expressed in terms of M_F, the sum of all ordinary source matter in the universe, and the relative permittivity. A new radius and new mass for the observable universe are dictated by the density parameters in Friedmann's equation and Gauss's law. These lead to the very precise values R_0 = 3.217E27 meters and M_F = 5.847E55 kg, respectively, somewhat larger than current, less accurate estimates.
We propose a generalization of Einstein's special theory of relativity (STR). In our model, we use the (1 + 4)-dimensional space G, which is the extension of the (1 + 3)-dimensional Minkowski space M. As a fifth additional coordinate, the interval S is used. This value is constant under the usual Lorentz transformations in M, but it changes when the transformations in the extended space G are used. We call this model the Extended space model (ESM). From a physical point of view, our expansion means that processes in which the rest mass of particles changes are now acceptable. In the ESM, gravity and electromagnetism are combined in one field. In the ESM, a photon can have a nonzero mass, and this mass can be either positive or negative. It is also possible, in the frame of the ESM, to establish a connection between the mass of a particle and its size.