SDDP with objective function state variables: optimizing river-chain releases over a time horizon

Presented by: Tony Downward (University of Auckland)
Wednesday 10th April 2019, 15:00 to 16:00
INI Seminar Room 2
In this talk I will present an extension to stochastic dual dynamic programming that permits objective-function coefficients to be state variables. We show that the expected cost-to-go function can be approximated by saddle-cut lower bounds, and prove almost-sure convergence to the optimal policy in a finite number of iterations. We apply this algorithm to a hydro river-chain optimization problem with uncertain inflows and elastic prices. We model the price state as an auto-regressive process with mean reversion, and show how the agent adapts its use of water given the price state and reservoir levels.
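The talk does not specify the exact form of the price process, but a mean-reverting auto-regressive process of the kind described is commonly written as p_{t+1} = p̄ + φ(p_t − p̄) + ε_t. The sketch below illustrates this assumed AR(1) form; the parameter names (p_bar for the long-run mean, phi for the mean-reversion coefficient, sigma for the noise scale) are illustrative choices, not taken from the talk.

```python
import random

def simulate_price_path(p0, p_bar, phi, sigma, horizon, rng):
    """Simulate an assumed mean-reverting AR(1) price path:
    p_{t+1} = p_bar + phi * (p_t - p_bar) + eps_t, eps_t ~ N(0, sigma^2).
    With 0 <= phi < 1, prices drift back toward the long-run mean p_bar."""
    path = [p0]
    for _ in range(horizon):
        eps = rng.gauss(0.0, sigma)
        path.append(p_bar + phi * (path[-1] - p_bar) + eps)
    return path

# Example: start above the mean; with sigma = 0 the path decays
# geometrically toward p_bar, showing the mean-reversion mechanism.
rng = random.Random(42)
prices = simulate_price_path(p0=120.0, p_bar=80.0, phi=0.9,
                             sigma=0.0, horizon=5, rng=rng)
```

In an SDDP setting, each realized price p_t would enter the stage problem as a state variable multiplying the release decision in the objective, which is what motivates the saddle-cut approximation of the cost-to-go function.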
