Controlled measure-valued martingales: a viscosity solution approach
Sara Svaluto-Ferro
2024-01-01
Abstract
We consider a class of stochastic control problems where the state process is a probability measure-valued process satisfying an additional martingale condition on its dynamics, called measure-valued martingales (MVMs). We establish the 'classical' results of stochastic control for these problems: specifically, we prove that the value function for the problem can be characterised as the unique solution to the Hamilton-Jacobi-Bellman equation in the sense of viscosity solutions. In order to prove this result, we exploit structural properties of the MVM processes. Our results also include an appropriate version of Itô's formula for controlled MVMs. We also show how problems of this type arise in a number of applications, including model-independent derivatives pricing, the optimal Skorokhod embedding problem, and two-player games with asymmetric information.
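For orientation, the martingale condition referenced in the abstract is usually formalised as follows; this is a sketch of the standard MVM definition from the literature, with the state space E and filtration (F_t) assumed rather than specified in this record:

    ξ_t(f) := ∫_E f(x) ξ_t(dx)  is an (F_t)-martingale for every bounded measurable f : E → R,

where (ξ_t)_{t≥0} is a process taking values in the space of probability measures on E. In words, the measure-valued process is a martingale when integrated against any bounded test function.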