Dynamic programming and stochastic control

Master

In Maynard (USA)

Price on request

Description

  • Type: Master
  • Location: Maynard (USA)
  • Start date: Different dates available

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.
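The core technique the description refers to, finite-horizon dynamic programming, can be sketched as backward induction on a toy Markov decision problem. The two-state example below is an illustrative assumption, not taken from the course materials:

```python
# A minimal sketch of finite-horizon stochastic dynamic programming
# (backward induction) on a made-up two-state, two-action MDP.
# states: 0, 1; actions: 0, 1
# P[a][s][t] = Pr(next state = t | state = s, action = a)
P = [
    [[0.9, 0.1], [0.4, 0.6]],   # transition probabilities under action 0
    [[0.2, 0.8], [0.7, 0.3]],   # transition probabilities under action 1
]
g = [
    [1.0, 2.0],   # stage cost of action 0 in states 0, 1
    [3.0, 0.5],   # stage cost of action 1 in states 0, 1
]
N = 5             # horizon length
J = [0.0, 0.0]    # terminal cost J_N(s) = 0

# Bellman recursion: J_k(s) = min_a [ g(s, a) + E[ J_{k+1}(next state) ] ]
for k in range(N - 1, -1, -1):
    J = [
        min(g[a][s] + sum(P[a][s][t] * J[t] for t in range(2))
            for a in range(2))
        for s in range(2)
    ]

print(J)  # optimal expected N-stage cost from each starting state
```

The same recursion structure applies to any finite state and action space; only the transition model and cost table change.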

Facilities

Location: Maynard (USA), 02139

Start date: Different dates available. Enrolment now open.


Subjects

  • Programming
  • Systems
  • Project
  • Decision Making

Course programme

Lectures: 2 sessions / week, 1.5 hours / session


Recitations: 1 session / week, 1 hour / session




Even as we cover exact Dynamic Programming, we will place increased emphasis on approximations, including references to large-scale problem instances, simple approximation methods, and forward pointers to the approximate Dynamic Programming formalism. However, the more mathematically formal parts of approximate Dynamic Programming, which require a good understanding of the exact Dynamic Programming material, will be the focal point of the last part of the course.
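The exact infinite-horizon machinery that approximate Dynamic Programming builds on can be illustrated with value iteration: repeatedly applying the Bellman operator until it reaches its fixed point, then reading off a greedy stationary policy. The toy problem and discount factor below are illustrative assumptions:

```python
# A minimal sketch of value iteration for an infinite-horizon
# discounted MDP (exact Dynamic Programming on a made-up example).
P = [
    [[0.9, 0.1], [0.4, 0.6]],   # transitions under action 0
    [[0.2, 0.8], [0.7, 0.3]],   # transitions under action 1
]
g = [[1.0, 2.0], [3.0, 0.5]]    # g[a][s] = stage cost
alpha = 0.9                     # discount factor (assumed)

J = [0.0, 0.0]
for _ in range(1000):
    # apply the Bellman operator: (TJ)(s) = min_a [ g(s,a) + alpha * E[J(next)] ]
    J_new = [
        min(g[a][s] + alpha * sum(P[a][s][t] * J[t] for t in range(2))
            for a in range(2))
        for s in range(2)
    ]
    if max(abs(J_new[s] - J[s]) for s in range(2)) < 1e-10:
        break                   # fixed point reached (up to tolerance)
    J = J_new

# recover a greedy stationary policy from the converged costs
policy = [
    min(range(2),
        key=lambda a: g[a][s] + alpha * sum(P[a][s][t] * J[t] for t in range(2)))
    for s in range(2)
]
print(J, policy)
```

Approximate Dynamic Programming replaces the exact table `J` with a parametric approximation when the state space is too large to enumerate; the Bellman-operator structure stays the same.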


The course will roughly follow this schedule:


Solid knowledge of undergraduate probability, at the level of 6.041 Probabilistic Systems Analysis and Applied Probability, is required, especially conditional distributions and expectations, and Markov chains. Mathematical maturity and the ability to write down precise and rigorous arguments are also important. A class in analysis (e.g., 18.100C Real Analysis) will be helpful, although this prerequisite will not be strictly enforced.
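The two prerequisite ideas named above, Markov chains and conditional expectation, amount to a few lines of arithmetic on a small example. The chain and reward values below are made up for illustration:

```python
# Illustrative sketch of the prerequisite concepts: a two-state
# Markov chain, its long-run distribution, and a conditional expectation.
P = [[0.7, 0.3],
     [0.4, 0.6]]        # P[i][j] = Pr(next state = j | current state = i)
mu = [1.0, 0.0]         # start in state 0 with probability 1

# push the distribution forward: mu_{k+1}[j] = sum_i mu_k[i] * P[i][j]
for _ in range(50):
    mu = [sum(mu[i] * P[i][j] for i in range(2)) for j in range(2)]

# mu is now close to the stationary distribution pi satisfying pi = pi P
print(mu)

# conditional expectation of a reward r(X_{k+1}) given X_k = 0
r = [10.0, -2.0]
e = sum(P[0][j] * r[j] for j in range(2))
print(e)   # 0.7*10 + 0.3*(-2) = 6.4
```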


Bertsekas, Dimitri P. Dynamic Programming and Optimal Control, Volume I. 3rd ed. Athena Scientific, 2005. ISBN: 9781886529267.


Bertsekas, Dimitri P. Dynamic Programming and Optimal Control, Volume II: Approximate Dynamic Programming. 4th ed. Athena Scientific, 2012. ISBN: 9781886529441.


The two volumes can also be purchased as a set. ISBN: 9781886529083.


Errata (PDF)


Videos from a 6-lecture, 12-hour short course at Tsinghua University, Beijing, China, 2014. Available on the Tsinghua course site and on YouTube. Based on the course textbook.


A term paper or project will be required, of one of the following types:


There will be short project presentations during exam week. A fairly complete version of your paper needs to be handed in before the presentation. More detailed instructions, together with pointers to the literature and possible topics can be found in the Project Description.


There will be one midterm and nine problem sets.




This is one of over 2,200 courses on OCW.


MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum.


No enrollment or registration. Freely browse and use OCW materials at your own pace. There's no signup, and no start or end dates.


Knowledge is your reward. Use OCW to guide your own life-long learning, or to teach others. We don't offer credit or certification for using OCW.


Made for sharing. Download files for later. Send to friends and colleagues. Modify, remix, and reuse (just remember to cite OCW as the source).


Learn more at Get Started with MIT OpenCourseWare

