Infinite horizon average cost dynamic programming subject to ambiguity on conditional distribution
Paper in proceedings, 2016

This paper addresses the optimality of stochastic control strategies based on the infinite horizon average cost criterion, subject to total variation distance ambiguity on the conditional distribution of the controlled process. This stochastic optimal control problem is formulated using minimax theory, in which the minimization is over the control strategies and the maximization is over the conditional distributions. Under the assumption that, for every stationary Markov control law, the maximizing conditional distribution of the controlled process is irreducible, we derive a new dynamic programming recursion which minimizes the future ambiguity, and we propose a new policy iteration algorithm. The new dynamic programming recursion includes, in addition to the standard terms, the oscillator semi-norm of the cost-to-go. The maximizing conditional distribution is found via a water-filling algorithm. The implications of our results are demonstrated through an example.
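The water-filling step mentioned in the abstract can be sketched as follows for a finite state space: the supremum of the expected cost-to-go over a total variation ball of radius R around the nominal conditional distribution is attained by transferring up to R/2 probability mass from the lowest-cost states to the highest-cost state, which raises the expectation by (R/2) times the oscillation (max f minus min f) of the cost-to-go when enough mass is available. The function below is an illustrative sketch under these assumptions, not the authors' implementation; the interface and the single-argmax tie-breaking are choices made here for clarity.

```python
import numpy as np

def maximizing_distribution(mu, f, R):
    """Illustrative water-filling sketch of
        max_{nu : ||nu - mu||_TV <= R}  sum_x nu[x] * f[x]
    over a finite state space, where mu is the nominal pmf and f the
    cost-to-go vector. Moves up to R/2 mass from the cheapest states
    to the most expensive state, keeping nu a valid pmf.
    """
    mu = np.asarray(mu, dtype=float)
    f = np.asarray(f, dtype=float)
    nu = mu.copy()
    alpha = R / 2.0                      # total mass to transfer

    # Add mass to the highest-cost state, capped so nu[i_max] <= 1.
    i_max = int(np.argmax(f))
    add = min(alpha, 1.0 - nu[i_max])
    nu[i_max] += add

    # Remove the same amount from the cheapest states first
    # (water-filling: drain each low-cost state before the next).
    remove = add
    for i in np.argsort(f):
        if i == i_max or remove <= 0.0:
            continue
        taken = min(nu[i], remove)
        nu[i] -= taken
        remove -= taken
    return nu
```

For example, with mu = [0.25, 0.25, 0.25, 0.25], f = [1, 2, 3, 4] and R = 0.4, the sketch returns [0.05, 0.25, 0.25, 0.45], raising the expected cost from 2.5 to 3.1, an increase of (R/2)(4 - 1) = 0.6, i.e. half the radius times the oscillation of f.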

Keywords

Optimal control

Process control

Heuristic algorithms

Aerospace electronics

Markov processes

Dynamic programming

Authors

I. Tzortzis

University of Cyprus

C.D. Charalambous

University of Cyprus

Themistoklis Charalambous

Chalmers, Signals and Systems, Communication Systems, Information Theory and Antennas

Proceedings of the IEEE Conference on Decision and Control

0743-1546 (ISSN)

Vol. 2016-February, pp. 7171-7176

Subject Categories (SSIF 2011)

Computer and Information Science

DOI

10.1109/CDC.2015.7403350

ISBN

978-1-4799-7886-1
