I am wondering if somebody can tell me about the practical differences between using Markov Decision Processes and Bayesian Networks for reasoning about probabilistic processes?
Bayesian Network vs Markov Decision Process
2.3k Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There are 2 best solutions below
Read chapter 7, "Making Decisions", of David Barber's book *Bayesian Reasoning and Machine Learning*. http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/090310.pdf
From what little I know about BNs, they are essentially representations of Markov chains (MCs). That is, you know the current state $x$, and the distribution of the successor state $x'$ is given by $P(x' \mid x)$. In some cases $P$ may additionally depend on the current time, or on the values of some hidden states. Either way, the bottom line is that the dynamics are given to you, and you can only ask "what is the probability of this or that?"
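To make the "dynamics are given" point concrete, here is a minimal sketch of a Markov chain with a hypothetical 3-state transition matrix (the numbers are made up for illustration). All you can do is propagate a distribution forward and read off probabilities:

```python
import numpy as np

# Hypothetical 3-state Markov chain: row i holds P(x' | x = i).
P = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
])

def step_distribution(dist, P):
    """Propagate a state distribution one step: dist' = dist @ P."""
    return dist @ P

# Start in state 0 with certainty; the fixed dynamics alone determine
# "the probability of this or that" after any number of steps.
dist = np.array([1.0, 0.0, 0.0])
for _ in range(5):
    dist = step_distribution(dist, P)
print(dist)  # distribution over the 3 states after 5 steps
```

There is no choice to make anywhere in this loop: the model only answers probability queries.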
In an MDP the setting is similar, with one crucial difference: the transition distribution $P(x' \mid x, u)$ depends on an input $u$ that is provided externally. In that case we are talking about optimization: how to choose $u$ optimally so as to maximize (or minimize) the probability of this or that.
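As a sketch of the contrast, here is a hypothetical 2-state, 2-action MDP (again with made-up numbers) where we greedily choose the action $u$ that maximizes the one-step probability of landing in a desired state:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: T[u][x] is the row P(x' | x, u).
T = {
    0: np.array([[0.8, 0.2],
                 [0.6, 0.4]]),
    1: np.array([[0.3, 0.7],
                 [0.1, 0.9]]),
}
goal = 1  # we want to maximize the probability of reaching state 1

def best_action(x):
    """Greedy one-step choice: pick u maximizing P(goal | x, u)."""
    return max(T, key=lambda u: T[u][x, goal])

print(best_action(0))  # action 1: P(goal) = 0.7 vs 0.2 for action 0
```

A full MDP solution would optimize over a horizon (e.g. by value iteration), but even this one-step version shows the key difference: $u$ is ours to choose.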
Note that once you have fixed the way you choose $u$ (i.e., a policy), you obtain fixed dynamics, which can possibly be represented as a BN.
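This collapse is easy to see numerically. A sketch, reusing the same hypothetical 2-state, 2-action transition tables as above: fixing a deterministic policy $\pi$ turns the MDP into a plain Markov chain whose row $x$ is $P(x' \mid x, \pi(x))$.

```python
import numpy as np

# Same hypothetical MDP transition tables: T[u][x] is P(x' | x, u).
T = {
    0: np.array([[0.8, 0.2],
                 [0.6, 0.4]]),
    1: np.array([[0.3, 0.7],
                 [0.1, 0.9]]),
}

# An arbitrary deterministic policy pi: state -> action, for illustration.
pi = {0: 1, 1: 0}

# Induced Markov chain: row x of P_pi is P(x' | x, pi(x)).
P_pi = np.vstack([T[pi[x]][x] for x in (0, 1)])
print(P_pi)  # a plain transition matrix: the MDP has collapsed to a chain
```

Once `P_pi` is in hand, the model answers only probability queries again, exactly as in the Markov-chain case.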