Abstract: Let X = (Ω, F, F_t, X(t), θ(t), P_x) be a jump Markov process with q-pair q(x)–q(x, A). In this paper, the equilibrium principle is established, and equilibrium functions, energy, capacity and related problems are investigated in terms of the q-pair q(x)–q(x, A).
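For orientation, the display below recalls the usual convention for a (totally stable) q-pair and its formal generator; this is the standard textbook setting and may differ in detail from the paper's own hypotheses.

```latex
% Standard q-pair convention (a sketch; the paper's exact assumptions may differ):
% q(x) >= 0 is measurable, q(x, .) is a kernel with q(x, {x}) = 0 and q(x, E) <= q(x).
% The associated formal generator acts as
\Omega f(x) \;=\; \int_E \bigl(f(y) - f(x)\bigr)\, q(x, \mathrm{d}y) \;-\; \bigl(q(x) - q(x, E)\bigr) f(x),
% which reduces to the pure jump form when the pair is conservative, i.e. q(x, E) = q(x).
```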
Abstract: The authors investigate the hitting probability, polarity and the relationship between polarity and Hausdorff dimension for self-similar Markov processes with state space (0, ∞) and increasing paths.
Funding: Supported by the National Natural Science Foundation of China and the State Education Commission Ph.D. Station Foundation.
Abstract: By using Lamperti's bijection between self-similar Markov processes and Lévy processes, we prove finiteness of moments and asymptotic behavior of passage times for increasing self-similar Markov processes valued in (0, ∞). We also investigate the behavior of the process when it crosses a level. A limit theorem concerning the distribution of the process immediately before it crosses some level is proved. Some useful examples are given.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 11171262 and 11171263).
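As a pointer to the tool named in this abstract, here is the Lamperti representation in one common form; conventions for the self-similarity index vary between references, so this is only an orientation, not the paper's exact setup.

```latex
% Lamperti representation (one common convention): X is an alpha-self-similar Markov
% process on (0, infinity) started at x > 0, and xi is a Levy process
% (a subordinator in the increasing case).
X_t \;=\; x \, \exp\!\bigl(\xi_{\tau(t x^{-\alpha})}\bigr),
\qquad
\tau(t) \;=\; \inf\Bigl\{\, s \ge 0 : \int_0^s e^{\alpha \xi_u}\, \mathrm{d}u \ge t \,\Bigr\}.
```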
Abstract: This article concerns a class of Ornstein-Uhlenbeck type Markov processes whose level sets are studied. By constructing a new class of processes, we obtain an inequality on the Hausdorff dimensions of the level sets of these Ornstein-Uhlenbeck type Markov processes. Based on this result, we verify that any two independent Ornstein-Uhlenbeck type Markov processes driven by α-stable processes collide with probability one.
Abstract: A variational formula for the asymptotic variance of general Markov processes is obtained. As an application, we get an upper bound on the mean exit time of reversible Markov processes, and some comparison theorems between reversible and non-reversible diffusion processes.
Funding: Supported by NSFC (Grant No. 11901096), NSF-Fujian (Grant No. 2020J05036), the Program for Probability and Statistics: Theory and Application (Grant No. IRTL1704), the Program for Innovative Research Team in Science and Technology in Fujian Province University (IRTSTFJ), the National Key R&D Program of China (Grant Nos. 2020YFA0712900 and 2020YFA0712901), and the National Natural Science Foundation of China (Grant No. 11771047).
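For context, the classical reversible-case identity that such variational formulas refine can be written as follows; this is the textbook statement, not the paper's more general result for non-reversible processes.

```latex
% Classical reversible case (a sketch): ergodic reversible Markov process with
% stationary measure pi, Dirichlet form E, and a centered observable f with pi(f) = 0.
\sigma^2(f) \;=\; \lim_{t\to\infty} \frac{1}{t}\,\operatorname{Var}_\pi\!\Bigl(\int_0^t f(X_s)\,\mathrm{d}s\Bigr)
\;=\; 2\,\sup_{g}\,\bigl\{\, 2\langle f, g\rangle_\pi - \mathcal{E}(g, g) \,\bigr\}.
```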
Abstract: We prove that non-recursive base conversion can always be implemented by using a deterministic Markov process. Our paper discusses the pros and cons of recursive and non-recursive methods in general, and we include a comparison between non-recursion and a deterministic Markov process, proving that the Markov process is twice as efficient.
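To make the non-recursive idea concrete, here is a plain iterative base-conversion routine in Python; the running quotient can be read as the state of a deterministic process and each division step as a transition. This is a generic sketch, not the authors' construction or their efficiency comparison.

```python
def to_base(n: int, base: int) -> str:
    """Non-recursive (iterative) conversion of a non-negative integer to `base`."""
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        # deterministic "transition": state n moves to n // base and emits a digit
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(2024, 16))  # "7E8"
```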
Abstract: For an ergodic continuous-time Markov process with a particular state in its state space, the authors provide necessary and sufficient conditions for exponential and strong ergodicity in terms of the moments of the first hitting time on that state. An application to the queue length process of an M/G/1 queue with multiple vacations is given.
Funding: Supported by the National Natural Science Foundation of China (No. 10671212) and the Research Fund for the Doctoral Program of Higher Education (No. 20050533036).
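As a rough guide to the type of criteria involved, with τ_a denoting the first hitting time of the distinguished state a, conditions of the following shape are classical; they are stated here informally, and the paper gives the precise moment conditions.

```latex
% Informal orientation only; see the paper for the exact statements.
\text{strong ergodicity} \;\iff\; \sup_{x} \mathbb{E}_x[\tau_a] < \infty,
\qquad
\text{exponential ergodicity} \;\iff\; \mathbb{E}_x\!\left[e^{\lambda \tau_a}\right] < \infty \ \text{for some } \lambda > 0.
```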
Abstract: By adopting a nice auxiliary transform of Markov operators, we derive new bounds for the first eigenvalue of the generator corresponding to symmetric Markov processes. Our results not only extend the related topic in the literature, but also are efficiently used to study the first eigenvalue of birth-death processes with killing and that of elliptic operators with killing on the half line. In particular, we obtain two approximation procedures for the first eigenvalue of birth-death processes with killing, and present qualitatively sharp upper and lower bounds for the first eigenvalue of elliptic operators with killing on the half line.
Funding: Supported by the Foundation of Fujian's Ministry of Education (Grant Nos. JA10058 and JA11051) and the National Natural Science Foundation of China (Grant No. 11126350).
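For a quick numerical feel for the quantity being bounded, the sketch below computes the first eigenvalue of a truncated birth-death generator with killing by brute-force diagonalization; the rates and the truncation level are illustrative assumptions, and the paper's approximation procedures are analytic rather than this crude scheme.

```python
import numpy as np

def first_eigenvalue_bd(b, a, c):
    """Smallest eigenvalue of -Q for a birth-death generator with killing rates c,
    truncated to states 0..n-1 (the truncation acts like a Dirichlet boundary)."""
    n = len(b)
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = b[i]                                    # birth rate
        if i > 0:
            Q[i, i - 1] = a[i]                                    # death rate
        Q[i, i] = -(b[i] + (a[i] if i > 0 else 0.0) + c[i])       # outflow plus killing
    return float(np.min(np.linalg.eigvals(-Q).real))

# illustrative constant rates with a small killing term
n = 200
print(first_eigenvalue_bd(np.ones(n), np.ones(n), 0.05 * np.ones(n)))
```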
Abstract: This paper presents a small perturbation Cramér method for obtaining the large deviation principle of a family of measures (β, ε > 0) on a topological vector space. As an application, we obtain moderate deviation estimates for uniformly ergodic Markov processes.
Abstract: We investigate the approximating capability of Markov modulated Poisson processes (MMPP) for modeling multifractal Internet traffic. The choice of MMPP is motivated by its ability to capture the variability and correlation at moderate time scales while remaining analytically tractable. Important statistics of traffic burstiness are described, and a customized moment-based procedure for fitting an MMPP to traffic traces is presented. Our methodology is to examine whether the fitted MMPP can predict the performance of a queue to which MMPP sample paths and measured traffic traces are fed for comparison, in addition to a goodness-of-fit test of the MMPP. Numerical results and simulations show that the fitted MMPP can approximate multifractal traffic quite well, i.e., it accurately predicts the queueing performance.
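To make the traffic model concrete, here is a small simulator for an MMPP, i.e., a Poisson process whose rate is modulated by a continuous-time Markov chain; the two-state generator and rates in the example are arbitrary placeholders, and this is not the paper's moment-based fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mmpp(Q, lam, T):
    """Arrival times on [0, T] of a Markov modulated Poisson process.
    Q: generator of the modulating chain, lam[i]: Poisson rate while in state i."""
    t, state, arrivals = 0.0, 0, []
    k = len(lam)
    while t < T:
        hold = rng.exponential(1.0 / -Q[state, state])       # sojourn in current state
        window = min(hold, T - t)
        n = rng.poisson(lam[state] * window)                  # arrivals during this sojourn
        arrivals.extend(t + rng.uniform(0.0, window, n))      # uniformly placed within it
        t += hold
        # jump according to the off-diagonal rates of the current row
        others = [j for j in range(k) if j != state]
        probs = np.array([Q[state, j] for j in others]) / -Q[state, state]
        state = others[rng.choice(len(others), p=probs)]
    return np.sort(np.array(arrivals))

Q = np.array([[-0.1, 0.1], [0.5, -0.5]])   # placeholder 2-state modulating chain
print(len(simulate_mmpp(Q, np.array([10.0, 1.0]), 100.0)))
```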
Abstract: In this paper, we study the quasi-stationarity and quasi-ergodicity of general Markov processes. We show, among other things, that if X is a standard Markov process admitting a dual with respect to a finite measure m, and if X admits a strictly positive continuous transition density p(t, x, y) (with respect to m) which is bounded in (x, y) for every t > 0, then X has a unique quasi-stationary distribution and a unique quasi-ergodic distribution. We also present several classes of Markov processes satisfying the above conditions.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11171010) and the Beijing Natural Science Foundation (Grant No. 1112001).
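For readers unfamiliar with the two notions, the usual definitions (with τ denoting the lifetime or killing time of X) read roughly as follows; the paper works with these or closely related formulations.

```latex
% Quasi-stationary distribution \nu and quasi-ergodic distribution \tilde m (usual definitions):
\mathbb{P}_\nu\bigl(X_t \in A \mid t < \tau\bigr) = \nu(A) \quad \text{for all } t > 0,
\qquad
\lim_{t \to \infty} \mathbb{E}_x\!\Bigl[\tfrac{1}{t}\int_0^t \mathbf{1}_A(X_s)\,\mathrm{d}s \,\Bigm|\, t < \tau\Bigr] = \tilde m(A).
```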
Abstract: Let X be an m-symmetric Markov process and M a multiplicative functional of X such that the M-subprocess of X is also m-symmetric. The author characterizes the Dirichlet form associated with the subprocess in terms of that associated with X and the bivariate Revuz measure of M.
Abstract: The h-transforms of positivity preserving semigroups and their associated Markov processes are investigated in this paper. In particular, it is shown that any quasi-regular positivity preserving coercive form is h-associated with a pair of special standard processes which are in weak duality.
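As a reminder of the construction behind the terminology, the h-transform (Doob transform) of a positivity preserving semigroup (T_t) by a strictly positive function h is commonly defined as below; normalizing factors such as e^{αt} are sometimes added, so this is only the basic form, not the paper's precise framework.

```latex
% Basic h-transform of a positivity preserving semigroup (T_t):
T^h_t f \;:=\; \frac{1}{h}\, T_t(h f),
\qquad\text{so that } T^h_t \mathbf{1} = \frac{T_t h}{h} \le 1 \ \text{ whenever } T_t h \le h .
```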
Abstract: In this paper, we provide a new theoretical framework of pyramid Markov processes to solve some open and fundamental problems of blockchain selfish mining in a rigorous mathematical setting. We first describe a more general model of blockchain selfish mining with both a two-block leading competitive criterion and a new economic incentive mechanism. Then we establish a pyramid Markov process and show that it is irreducible and positive recurrent, and that its stationary probability vector is matrix-geometric with an explicitly representable rate matrix. We also use the stationary probability vector to study the influence of orphan blocks on the waste of computing resources. Next, we set up a pyramid Markov reward process to investigate the long-run average mining profits of the honest and dishonest mining pools, respectively. As a by-product, we build one-dimensional Markov reward processes and provide some new and interesting interpretations of the Markov chain and the revenue analysis reported in the seminal work by Eyal and Sirer (2014). The pyramid Markov (reward) processes open up a new avenue in the study of blockchain selfish mining, and we hope that the methodology and results developed in this paper shed light on this topic so that a series of promising research directions can be developed.
Funding: This work is supported by the National Key R&D Program of China under Grant No. 2020AAA0103801. Quanlin Li is supported by the National Natural Science Foundation of China under Grant Nos. 71671158 and 71932002 and by the Beijing Social Science Foundation Research Base Project under Grant No. 19JDGLA004. Xiaole Wu is supported by the National Natural Science Foundation of China under Grant No. 72025102.
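The phrase "matrix-geometric stationary vector with a rate matrix" refers to the Neuts-style structure in which consecutive level probabilities satisfy π_{k+1} = π_k R. As a generic illustration only (the pyramid process in the paper has its own explicitly representable rate matrix), the sketch below computes R for a plain discrete-time quasi-birth-death chain by successive substitution.

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=200000):
    """Minimal nonnegative solution R of R = A0 + R A1 + R^2 A2 for a positive
    recurrent discrete-time QBD (A0: up one level, A1: same level, A2: down one level)."""
    R = np.zeros_like(A0, dtype=float)
    for _ in range(max_iter):
        R_next = A0 + R @ A1 + R @ R @ A2
        if np.max(np.abs(R_next - R)) < tol:
            break
        R = R_next
    return R_next
```

Given the boundary probabilities, the level-k stationary probabilities then follow the matrix-geometric recursion π_{k+1} = π_k R.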
Abstract: Using the forward-backward martingale decomposition and martingale limit theorems, we establish the functional law of the iterated logarithm for an additive functional (A_t) of a reversible Markov process, under the minimal condition that σ²(A) = lim_{t→∞} E[A_t²]/t exists in R. We also extend the previous remarkable functional central limit theorem of Kipnis and Varadhan.
Funding: Supported by the National Natural Science Foundation of China and the Foundation of Y.D. Fok.
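For orientation, the scalar consequence of such a functional law of the iterated logarithm is typically of the following form, stated informally; the paper proves the full functional version under its minimal condition.

```latex
% Informal scalar form of the LIL for the additive functional (A_t):
\limsup_{t \to \infty} \frac{A_t}{\sqrt{2\, t \log\log t}} \;=\; \sigma(A)
\qquad \text{a.s., where } \sigma^2(A) = \lim_{t\to\infty} \mathbb{E}\,[A_t^2]/t .
```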
Abstract: Some analytic and probabilistic properties of the weak Poincaré inequality are obtained. In particular, for strong Feller Markov processes the existence of this inequality is equivalent to each of the following: (i) the Liouville property (or the irreducibility); (ii) the existence of successful couplings (or shift-couplings); (iii) the convergence of the Markov process in total variation norm; (iv) the triviality of the tail (or the invariant) σ-field; (v) the convergence of the density. Estimates of the convergence rate of Markov processes in total variation norm are obtained using the weak Poincaré inequality.
Funding: This work was partially supported by the National Natural Science Foundation of China for Distinguished Young Scholars (Grant No. 10025105), the National Natural Science Foundation of China (Grant No. 10121101), the Core Teachers Project and Teaching and R
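The inequality in question is usually written in the following form, with a decreasing rate function α: (0, ∞) → (0, ∞); this is the standard formulation common in the literature and is given here only as background.

```latex
% Weak Poincare inequality for a symmetric form (E, D(E)) with invariant probability mu:
\mu(f^2) - \mu(f)^2 \;\le\; \alpha(r)\,\mathcal{E}(f, f) + r\,\|f\|_\infty^2,
\qquad \text{for all } r > 0,\ f \in \mathcal{D}(\mathcal{E}).
```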
Abstract: Optimal policies in Markov decision problems may be quite sensitive with regard to transition probabilities. In practice, some transition probabilities may be uncertain. The goals of the present study are to find the robust range for a certain optimal policy and to obtain value intervals of exact transition probabilities. Our research yields powerful contributions for Markov decision processes (MDPs) with uncertain transition probabilities. We first propose a method for estimating unknown transition probabilities based on maximum likelihood. Since the estimation may be far from accurate, and the highest expected total reward of the MDP may be sensitive to these transition probabilities, we analyze the robustness of an optimal policy and propose an approach for robust analysis. After giving the definition of a robust optimal policy with uncertain transition probabilities represented as sets of numbers, we formulate a model to obtain the optimal policy. Finally, we define the value intervals of the exact transition probabilities and construct models to determine the lower and upper bounds. Numerical examples are given to show the practicability of our methods.
Funding: Supported by the National Natural Science Foundation of China (71571019).
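The maximum-likelihood step mentioned in the abstract is, in its simplest tabular form, a normalized transition count. The sketch below shows that step only; the function name and the uniform fallback for unvisited state-action pairs are illustrative choices of our own, and the paper's robust analysis goes well beyond this.

```python
import numpy as np

def mle_transition_probs(transitions, n_states, n_actions):
    """MLE of p(s' | s, a) = N(s, a, s') / N(s, a) from observed (s, a, s') triples.
    State-action pairs never observed fall back to a uniform row (illustrative choice)."""
    counts = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    return np.where(totals > 0, counts / np.maximum(totals, 1.0), 1.0 / n_states)

data = [(0, 0, 1), (0, 0, 1), (0, 0, 0), (1, 1, 0)]
print(mle_transition_probs(data, n_states=2, n_actions=2)[0, 0])  # [0.333..., 0.666...]
```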
Abstract: This paper studies the limit average variance criterion for continuous-time Markov decision processes in Polish spaces. Based on two approaches, this paper proves not only the existence of solutions to the variance minimization optimality equation and the existence of a variance minimal policy that is canonical, but also the existence of solutions to the two variance minimization optimality inequalities and the existence of a variance minimal policy which may not be canonical. An example is given to illustrate all of our conditions.
Funding: Supported by the National Natural Science Foundation of China (10801056) and the Natural Science Foundation of Ningbo (2010A610094).
Abstract: In this paper, we obtain the transition probability of the jump chain of a semi-Markov process, the distribution of the sojourn time, and the one-dimensional distribution of the semi-Markov process. Furthermore, the semi-Markov process X(t, ω) is constructed from the semi-Markov matrix, and it is proved that the two definitions of a semi-Markov process are equivalent.
Funding: Supported by the National Natural Science Foundation of China (No. 60574002).
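In the usual notation for a semi-Markov process with kernel Q, the two quantities computed in the paper take the following standard forms, given here only as background; the paper derives them from its own matrix formulation.

```latex
% Semi-Markov kernel and the derived quantities (standard notation):
Q_{ij}(t) = \mathbb{P}\bigl(X_{n+1} = j,\ T_{n+1} - T_n \le t \mid X_n = i\bigr),
\qquad
p_{ij} = \lim_{t \to \infty} Q_{ij}(t),
\qquad
H_i(t) = \sum_{j} Q_{ij}(t).
```

Here p_{ij} gives the jump-chain transition probabilities and H_i the sojourn-time distribution in state i.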