Journal Articles
11 articles found
1. BSDEs in Games, Coupled with the Value Functions: Associated Nonlocal Bellman-Isaacs Equations
Authors: 郝涛, 李娟. Acta Mathematica Scientia, SCIE CSCD, 2017, Issue 5, pp. 1497-1518 (22 pages)
We establish a new type of backward stochastic differential equations (BSDEs) connected with stochastic differential games (SDGs), namely, BSDEs strongly coupled with the lower and the upper value functions of SDGs, where the lower and the upper value functions are defined through this BSDE. The existence and uniqueness theorem and a comparison theorem are proved for such equations with the help of an iteration method. We also show that the lower and the upper value functions satisfy the dynamic programming principle. Moreover, we study the associated Hamilton-Jacobi-Bellman-Isaacs (HJB-Isaacs) equations, which are nonlocal and strongly coupled with the lower and the upper value functions. Using a new method, we characterize the pair (W, U) consisting of the lower and the upper value functions as the unique viscosity solution of our nonlocal HJB-Isaacs equation. Furthermore, the game has a value under the Isaacs condition.
Keywords: McKean-Vlasov SDE; BSDE coupled with the lower and the upper value functions; dynamic programming principle; mean-field BSDE; viscosity solution; coupled nonlocal HJB-Isaacs equation; Isaacs condition
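For orientation, the Isaacs condition mentioned in the abstract is, in the standard (uncoupled) setting, the requirement that the lower and upper Hamiltonians coincide; in generic notation (an illustrative sketch, not the paper's exact coupled Hamiltonians):

```latex
\sup_{u \in U} \inf_{v \in V} H(t, x, p, A, u, v)
\;=\;
\inf_{v \in V} \sup_{u \in U} H(t, x, p, A, u, v)
```

When this holds, the lower value function W and the upper value function U solve the same HJB-Isaacs equation, so uniqueness of the viscosity solution forces W = U and the game has a value.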
2. Optimal Investment Strategy in Safe-region on Consumption and Portfolio Problem
Authors: Ruicheng Yang, Ailing Zuo. Chinese Business Review, 2004, Issue 8, pp. 45-49 (5 pages)
This paper investigates an optimal investment strategy for a consumption and portfolio problem in which the investor must withdraw funds continuously at a given rate. By analyzing the evolution of wealth, we give the definition of a safe region for investment. Moreover, in order to reach the target wealth as quickly as possible, using the Bellman dynamic programming principle, we obtain the optimal investment strategy and the corresponding necessary expected time. Finally, we give some numerical computations for a set of different parameters.
Keywords: portfolio; optimal strategy; geometric Brownian motion; Bellman dynamic programming principle
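The wealth dynamics described above (risky asset following a geometric Brownian motion, funds withdrawn continuously at a fixed rate) can be sketched with a simple Euler-Maruyama simulation. All parameter names and values below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def simulate_wealth(w0, mu, sigma, c, T, n_steps, seed=0):
    """Euler-Maruyama path of dW_t = (mu*W_t - c) dt + sigma*W_t dB_t.

    w0: initial wealth, mu: drift, sigma: volatility,
    c: continuous withdrawal rate, T: horizon, n_steps: number of time steps.
    Returns the simulated wealth path as a list.
    """
    rng = random.Random(seed)
    dt = T / n_steps
    w, path = w0, [w0]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))
        w = w + (mu * w - c) * dt + sigma * w * dB
        path.append(w)
    return path

# Sanity check: with sigma = 0 the dynamics reduce to the ODE dW = (mu*W - c) dt,
# whose exact solution is W_t = (w0 - c/mu) * exp(mu*t) + c/mu.
path = simulate_wealth(w0=100.0, mu=0.05, sigma=0.0, c=2.0, T=1.0, n_steps=100_000)
closed_form = (100.0 - 2.0 / 0.05) * math.exp(0.05 * 1.0) + 2.0 / 0.05
print(path[-1], closed_form)  # the two values agree closely
```

With sigma > 0 the same function produces random paths; hitting zero wealth before the target level is the event the paper's safe region is designed to exclude.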
3. Optimal Logistics for Multiple Jeeps (cited: 2)
Authors: 陈文磊, 丁义明, 范文涛. Acta Mathematica Scientia, SCIE CSCD, 2010, Issue 5, pp. 1429-1439 (11 pages)
We consider variations of the classical jeep problem: the optimal logistics for a caravan of jeeps which travel together in the desert. The main purpose is to arrange the travels for the one-way trip and the round trip of a caravan of jeeps so that the chief jeep visits the farthest destination. Based on the dynamic programming principle, the maximum distances for the caravan when only part of the jeeps should return and when all drivers should return are obtained. Some related results, such as the efficiency of the abandoned jeeps and the advantages of more jeeps in the caravan, are also presented.
Keywords: jeep problem; logistics; dynamic programming principle
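For reference, the classical single-jeep results that this paper generalizes to caravans (a standard textbook fact, not the caravan variant studied in the paper): with n tanks of fuel, each allowing unit distance, the maximal one-way crossing distance is 1 + 1/3 + ... + 1/(2n-1), and the maximal round-trip exploration distance is 1/2 + 1/4 + ... + 1/(2n). A direct computation with exact rationals:

```python
from fractions import Fraction

def one_way_distance(n):
    """Max distance a single jeep can reach with n tanks of fuel (no return)."""
    return sum(Fraction(1, 2 * k - 1) for k in range(1, n + 1))

def round_trip_distance(n):
    """Max distance reachable if the jeep must return to its starting base."""
    return sum(Fraction(1, 2 * k) for k in range(1, n + 1))

print(one_way_distance(1))     # 1: a single full tank drives unit distance
print(one_way_distance(2))     # 4/3: cache 1/3 at distance 1/3, then refuel in passing
print(round_trip_distance(2))  # 3/4
```

Both sums grow like a harmonic series, so any distance is reachable given enough fuel, but the fuel cost grows exponentially in the distance.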
4. Optimal bounded control for maximizing reliability of Duhem hysteretic systems
Authors: Ming XU, Xiaoling JIN, Yong WANG, Zhilong HUANG. Applied Mathematics and Mechanics (English Edition), SCIE EI CSCD, 2015, Issue 10, pp. 1337-1346 (10 pages)
The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed to energy-dependent damping and stiffness by the energy dissipation balance technique, and the controlled system is transformed to an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, appropriate for evaluating system responses. Dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle. The optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and mean time of first-passage failure of the optimally controlled Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.
Keywords: optimal bounded control; reliability; Duhem hysteretic system; stochastic dynamical programming principle
5. Stochastic optimal control of cable vibration in plane by using axial support motion
Authors: Ming Zhao, Wei-Qiu Zhu. Acta Mechanica Sinica, SCIE EI CAS CSCD, 2011, Issue 4, pp. 578-586 (9 pages)
A stochastic optimal control strategy for a slightly sagged cable using support motion in the cable axial direction is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to the equations for the first two modes of cable vibration by using the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained by solving the dynamical programming equation. A bilinear controller based on the direct method of Lyapunov is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy in terms of control effectiveness and efficiency.
Keywords: stay cable; active control; stochastic optimal control; dynamical programming principle
6. Stochastic Maximum Principle for Forward-Backward Regime Switching Jump Diffusion Systems and Applications to Finance (cited: 1)
Authors: Siyu LV, Zhen WU. Chinese Annals of Mathematics, Series B, SCIE CSCD, 2018, Issue 5, pp. 773-790 (18 pages)
The authors prove a sufficient stochastic maximum principle for the optimal control of a forward-backward Markov regime switching jump diffusion system and show its connection to the dynamic programming principle. The result is applied to a cash flow valuation problem with a terminal wealth constraint in a financial market, and an explicit optimal strategy is obtained in this example.
Keywords: stochastic maximum principle; dynamic programming principle; forward-backward stochastic differential equation; regime switching; jump diffusion
7. Filtration Consistent Nonlinear Expectations and Evaluations of Contingent Claims (cited: 19)
Authors: Shige Peng. Acta Mathematicae Applicatae Sinica, SCIE CSCD, 2004, Issue 2, pp. 191-214 (24 pages)
We will study the following problem. Let X_t, t ∈ [0,T], be an R^d-valued process defined on a time interval [0,T], and let Y be a random value depending on the trajectory of X. Assume that, at each fixed time t ≤ T, the information available to an agent (an individual, a firm, or even a market) is the trajectory of X before t. Thus at time T, the random value Y(ω) will become known to this agent. The question is: how will this agent evaluate Y at time t? We introduce an evaluation operator ε_t[Y] to define the value of Y given by this agent at time t. This operator ε_t[·] assigns to an (X_s)_{0≤s≤T}-dependent random variable Y an (X_s)_{0≤s≤t}-dependent random variable ε_t[Y]. We mainly treat the situation in which the process X is a solution of an SDE (see equation (3.1)) with drift coefficient b and diffusion coefficient σ containing an unknown parameter θ = θ_t. We then consider the so-called super evaluation, when the agent is a seller of the asset Y. We prove that such a super evaluation is a filtration consistent nonlinear expectation. In some typical situations, we prove that a filtration consistent nonlinear evaluation dominated by this super evaluation is a g-evaluation. We also consider the corresponding nonlinear Markovian situation.
Keywords: option pricing; measure of risk; backward stochastic differential equation; nonlinear potential theory; nonlinear Markov property; dynamic programming principle
8. Controlled Mean-Field Backward Stochastic Differential Equations with Jumps Involving the Value Function (cited: 2)
Authors: LI Juan, MIN Hui. Journal of Systems Science & Complexity, SCIE EI CSCD, 2016, Issue 5, pp. 1238-1268 (31 pages)
This paper discusses mean-field backward stochastic differential equations (mean-field BSDEs) with jumps and a new type of controlled mean-field BSDEs with jumps, namely, mean-field BSDEs with jumps strongly coupled with the value function of the associated control problem. The authors first prove the existence and the uniqueness as well as a comparison theorem for the above two types of BSDEs, using an approximation method. Then, with the help of the notion of stochastic backward semigroups introduced by Peng in 1997, the authors obtain the dynamic programming principle (DPP) for the value functions. Furthermore, the authors prove that the value function is a viscosity solution of the associated nonlocal Hamilton-Jacobi-Bellman (HJB) integro-partial differential equation, which is unique in an adequate space of continuous functions introduced by Barles, et al. in 1997.
Keywords: dynamic programming principle (DPP); Hamilton-Jacobi-Bellman (HJB) equation; mean-field backward stochastic differential equation (mean-field BSDE) with jumps; Poisson random measure; value function
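For orientation, an HJB integro-partial differential equation of the type referred to above has, schematically (a generic sketch in simplified notation, not the paper's exact mean-field form):

```latex
\begin{cases}
\partial_t V(t,x) + \inf_{u \in U} \big\{ \mathcal{L}^{u} V(t,x) + f(t,x,u) \big\} = 0,
  & (t,x) \in [0,T) \times \mathbb{R}^n, \\
V(T,x) = \Phi(x), & x \in \mathbb{R}^n,
\end{cases}
```

where \mathcal{L}^{u} is the generator of the controlled state process, nonlocal here because of the jumps. In the paper's mean-field setting the coefficients additionally depend on the law of the state, and the value function itself enters the equation, which is what makes it strongly coupled.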
9. A BSDE Approach to Stochastic Differential Games Involving Impulse Controls and HJBI Equation (cited: 1)
Authors: ZHANG Liangquan. Journal of Systems Science & Complexity, SCIE EI CSCD, 2022, Issue 3, pp. 766-801 (36 pages)
This paper focuses on zero-sum stochastic differential games in the framework of forward-backward stochastic differential equations on a finite time horizon, with both players adopting impulse controls. By means of BSDE methods, in particular the notion of Peng's stochastic backward semigroups, the authors prove a dynamic programming principle for both the upper and the lower value functions of the game. The upper and the lower value functions are then shown to be the unique viscosity solutions of the Hamilton-Jacobi-Bellman-Isaacs equations with a double obstacle. As a consequence, the uniqueness implies that the upper and lower value functions coincide and the game admits a value.
Keywords: dynamic programming principle (DPP); forward-backward stochastic differential equations (FBSDEs); Hamilton-Jacobi-Bellman-Isaacs (HJBI); impulse control; stochastic differential games; value function; viscosity solution
10. Stochastic Differential Games with Reflection and Related Obstacle Problems for Isaacs Equations
Authors: Rainer BUCKDAHN. Acta Mathematicae Applicatae Sinica, SCIE CSCD, 2011, Issue 4, pp. 647-678 (32 pages)
In this paper we first investigate zero-sum two-player stochastic differential games with reflection, with the help of the theory of reflected backward stochastic differential equations (RBSDEs). We establish the dynamic programming principle for the upper and the lower value functions of this kind of stochastic differential game with reflection in a straightforward way. The upper and the lower value functions are then proved to be the unique viscosity solutions of the associated upper and lower Hamilton-Jacobi-Bellman-Isaacs equations with obstacles, respectively. The method differs significantly from those used for control problems with reflection, with new techniques developed that are of interest in their own right. Further, we also prove a new estimate for RBSDEs that is sharper than that in the paper of El Karoui, Kapoudjian, Pardoux, Peng and Quenez (1997), which turns out to be very useful because it allows us to estimate the L^p-distance of the solutions of two different RBSDEs by the p-th power of the distance of the initial values of the driving forward equations. We also show that the unique viscosity solution of the approximating Isaacs equation constructed by the penalization method converges to the viscosity solution of the Isaacs equation with obstacle.
Keywords: stochastic differential games; value function; reflected backward stochastic differential equations; dynamic programming principle; Isaacs equations with obstacles; viscosity solution
11. Stochastic Optimal Control for First-Passage Failure of Nonlinear Oscillators with Multi-Degrees-of-Freedom
Authors: 高阳艳, 吴勇军. Journal of Shanghai Jiaotong University (Science), EI, 2013, Issue 5, pp. 577-582 (6 pages)
To enhance the reliability of stochastically excited structures, it is significant to study the problem of stochastic optimal control for minimizing first-passage failure. Combining the stochastic averaging method with the dynamical programming principle, we study the optimal control for minimizing first-passage failure of multi-degrees-of-freedom (MDoF) nonlinear oscillators under Gaussian white noise excitations. The equations of motion of the controlled system are reduced to time-homogeneous diffusion processes by stochastic averaging. The optimal control law is determined by the dynamical programming equations and the control constraint. The backward Kolmogorov (BK) equation and the Pontryagin equation are established to obtain the conditional reliability function and the mean first-passage time (MFPT) of the optimally controlled system, respectively. An example shows that the proposed control strategy can increase the reliability and MFPT of the original system, and the mathematical treatment is also facilitated.
Keywords: stochastic averaging method; dynamical programming principle; backward Kolmogorov (BK) equation; Pontryagin equation