
This lecture note introduces the Markov Decision Process (MDP) in the context of State Space Modelling. It briefly introduces Breadth-First Search, Depth-First Search, and A* Search. The MDP is then explained in the context of sequential decision problems, followed by the Value Iteration and Policy Iteration algorithms. Two case studies demonstrate applications of MDPs: developing a Cognitive Model as part of an Inclusive User Model, and a robot navigation problem.
Lecture Notes: