I am reading Mathematical Statistics - A Decision Theoretic Approach by Ferguson (Academic Press 1967).
A game in the book is defined as a triple $(\Theta, \mathcal A, L)$, where $L$ is a function from $\Theta\times \mathcal A\to \mathbb R$. The set $\Theta$ is to be thought of as possible "states of nature" (spooky language), $\mathcal A$ is to be thought of as the set of possible actions one can take, and $L$ is the loss one incurs when an action is taken given a state of nature.
I understand this formally. But then the author, on p. 7, defines a statistical decision problem as:
A statistical decision problem is a game $(\Theta, \mathcal A, L)$ coupled with an experiment involving a random observable $X$ whose distribution $P_\theta$ depends on the state $\theta\in \Theta$ chosen by nature.
It is not clear to me what the author intends to convey. By an "experiment" I suppose the author means a probability space, and by a random observable I suppose he means a random variable.
But what does it mean to say that the distribution of $X$ depends on the state $\theta$? Does he mean that we actually have a whole family of random variables (or distributions) parameterized by $\Theta$?
Please feel free to add any intuition and elaboration of these concepts. I am a complete beginner here.
Also, if you can suggest a more recent reference for reading this material then that is more than welcome.
Here is how I think of it. To make it practical, I will use the standard example of a binary decision that is both forced (you can't decide to wait and see what else you can learn) and factual (you are deciding what is true, rather than deciding on some action based on what you think is true):
"State of nature"="Truth variable" - This is a random variable, the value of which you can never really know. For example, H, with two states:
H1 = there is a running car behind the door
H0 = there is not a running car behind the door
"Actions"="Decision variable" - This is a random variable representing your decision. For example, D, again with two states:
D1 = I decide there is a running car behind the door
D0 = I decide there is not a running car behind the door
"Experiment"="Observation" - Some data you can obtain (arising from the "experiment", if you choose to think of it that way) that might help inform your decision. The data is cast as a particular exemplar of a random variable, with density conditioned upon the truth variable. For example,
x (the random variable) = the noise level measured in front of the door, with
p(x|H1) = density of x if the car is running behind the door
p(x|H0) = density of x if no car is running behind the door
X (the exemplar) = the actual measurement of the noise level in front of the door
"Loss"="Objective function" - This describes what constitutes a better result and what constitutes a worse result. This choice is arbitrary, but for all problems, one possible objective is a random consequence variable C, again with two states:
C1 = good decision (here if D=H)
C0 = bad decision (here if D ≠ H)
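To tie the pieces together, here is a minimal simulation of the door/car example. The Gaussian noise densities, their means, and the decision threshold are my own illustrative assumptions (nothing in the setup above pins them down); the point is only to show the roles of H (truth), x (observation with a density conditioned on H), D (decision), and C (consequence):

```python
import random

def draw_noise(h, rng):
    """One exemplar of the noise level x, with density conditioned on the truth H.
    Assumed densities: N(5, 1) if the car is running, N(1, 1) if not."""
    return rng.gauss(5.0, 1.0) if h == 1 else rng.gauss(1.0, 1.0)

def decide(x, threshold=3.0):
    """A simple decision rule: declare 'car running' (D=1) if the noise is high."""
    return 1 if x > threshold else 0

rng = random.Random(0)
trials = 10_000
good = 0
for _ in range(trials):
    h = rng.randint(0, 1)      # nature picks the state H
    x = draw_noise(h, rng)     # we observe the noise level in front of the door
    d = decide(x)              # we decide based on the observation
    good += (d == h)           # consequence C: good decision iff D = H

print(f"fraction of good decisions: {good / trials:.3f}")
```

With these assumed densities the two states are well separated, so the threshold rule decides correctly most of the time; shrinking the gap between the means makes the decision problem harder and the loss correspondingly larger.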
Hope this helps.