Setup
I'm trying to find condition(s) that characterize the solution to a statistical decision problem. The environment is as follows.
- $\Omega$ is a finite set of states of the world.
- A decision maker has prior $f_0$ about the distribution of states.
- $g(\cdot|\omega)$ is a conditional distribution of signals when the realized state is $\omega$.
The decision problem has two stages:
- Stage 1: the decision maker decides how many observations to draw from $g(\cdot|\omega)$; each observation incurs a constant cost of $c$.
- Stage 2: the decision maker makes a decision to maximize an objective function $\phi(a,\omega)$, where $\phi$ is strictly concave in $a$ for each $\omega$.
Let $n\in\mathbb Z_+$ be the number of observations, and denote the posterior (obtained using Bayes' rule) by $f_n$. Given the concavity assumption, there is a unique decision $a^*(n)$ that is optimal in stage 2. Then the decision problem can be formulated as $$\max_{n\in\mathbb Z_+} \sum_{\omega\in\Omega}f_n(\omega)\phi(a^*(n),\omega)-cn.\tag{1}$$
Question
As stated, the problem is pretty general. I was wondering if there are conditions (analogous to the first- and second-order conditions for optimization problems in calculus courses) that characterize the solution to $(1)$. If not, what minimal extra structure does the problem need in order to have one?
This is a mixed-integer convex problem. The general way to solve problems of this type is to compute the optimal value $val(n)$ for each $n$, and then take the best $n$. However, I think your problem is not fully specified and is actually more complicated than stated (see below).
The definition of $f_n(\omega)$ is unclear to me. It should actually depend on the observations, call them $\{S_1, \ldots, S_n\}$, and assume they are discrete for simplicity. In that case the problem is more complicated, since the choice of $a$ does not depend just on $n$; it also depends on the observations $\{S_1, \ldots, S_n\}$.
So you would have:
\begin{align} &f_n(x, s_1, \ldots, s_n) \\ &=Pr[\omega=x|S_1=s_1, \ldots, S_n=s_n] \\ &= \frac{Pr[S_1=s_1, \ldots, S_n=s_n|\omega=x]Pr[\omega = x]}{\sum_{y\in\Omega} Pr[S_1=s_1, \ldots, S_n=s_n|\omega=y]Pr[\omega=y]} \end{align}
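As a sanity check, the posterior above can be computed numerically. Here is a minimal sketch in Python (the two-state, binary-signal numbers are made-up assumptions for illustration), assuming the signals are i.i.d. draws from $g(\cdot|\omega)$ so the joint likelihood factors into a product:

```python
import numpy as np

# Toy instance (all numbers are illustrative assumptions):
# two states, binary signals; g[x, s] = Pr[S = s | omega = x].
prior = np.array([0.5, 0.5])   # the prior f_0
g = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def posterior(signals, prior, g):
    """f_n(x, s_1, ..., s_n) via Bayes' rule, assuming i.i.d. signals:
    Pr[S_1 = s_1, ..., S_n = s_n | omega = x] is a product of g terms."""
    like = np.prod(g[:, signals], axis=1)  # one likelihood per state x
    unnorm = like * prior                  # numerator of Bayes' rule
    return unnorm / unnorm.sum()           # normalize over y in Omega

f = posterior(np.array([0, 0, 1]), prior, g)  # posterior after seeing 0, 0, 1
```

Since signal $0$ is more likely under state $0$, observing $(0, 0, 1)$ shifts the posterior toward state $0$.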
So if you choose a given $n \in \{0, 1, 2, \ldots\}$, you observe $S_1, \ldots, S_n$ and then choose $a\in\mathcal{A}$ to maximize $\sum_{x \in \Omega} f_n(x, S_1, \ldots, S_n)\phi(a, x)$. In that case, the expected utility given you choose $n$ is:
$$ val(n) = E\left[\max_{a \in \mathcal{A}}\left[\sum_{x\in\Omega}f_n(x, S_1, \ldots, S_n)\phi(a,x)\right]\right] - nc $$
where the expectation is with respect to the random $S_1, \ldots, S_n$ observations given you choose to look at $n$ of them (and so the choice of the optimal $a$ depends on the realizations of $S_1, \ldots, S_n$).
If the $\phi(a,x)$ function satisfies $0 \leq \phi(a,x) \leq \phi_{max}$ for all $(a,x) \in \mathcal{A} \times \Omega$, then you can restrict attention to values of $n$ that satisfy $\phi_{max} - nc \geq 0$: since $\phi \geq 0$, choosing $n=0$ guarantees $val(0) \geq 0$, while $val(n) \leq \phi_{max} - nc$, so any larger $n$ is dominated. So you only need to consider non-negative integers $n \leq \phi_{max}/c$. Compute $val(n)$ for $n \in \{0, 1, \ldots, \lfloor \phi_{max}/c\rfloor\}$, then pick the maximizing $n^*$.
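Putting the pieces together, here is a brute-force Monte Carlo sketch of this enumeration. Everything numeric below is an illustrative assumption, not part of the original problem: I take $\phi(a,x) = 1 - (a - \theta_x)^2$ with $\theta_x \in \{0,1\}$, so $\phi \in [0,1]$ and the inner maximizer is the posterior mean of $\theta$, and I pick arbitrary values for $c$ and the signal structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state instance (all numbers are assumptions):
theta = np.array([0.0, 1.0])   # payoff-relevant parameter per state
prior = np.array([0.5, 0.5])   # prior f_0 over Omega = {0, 1}
g = np.array([[0.8, 0.2],      # g[x, s] = Pr[S = s | omega = x]
              [0.3, 0.7]])
c = 0.02                       # cost per observation
phi_max = 1.0                  # phi(a, x) = 1 - (a - theta[x])**2 lies in [0, 1]

def val(n, n_sims=1000):
    """Monte Carlo estimate of val(n): simulate omega ~ prior and n signals,
    form the posterior, play the optimal a, and average payoff minus n*c."""
    total = 0.0
    for _ in range(n_sims):
        w = rng.choice(len(prior), p=prior)         # draw the true state
        s = rng.choice(g.shape[1], size=n, p=g[w])  # n i.i.d. signals
        f = np.prod(g[:, s], axis=1) * prior        # unnormalized posterior
        f /= f.sum()
        a = f @ theta                               # optimal a: posterior mean
        total += 1.0 - (a - theta[w]) ** 2          # realized phi(a, omega)
    return total / n_sims - n * c

# Only n with phi_max - n*c >= 0 can beat doing nothing, so enumerate those.
n_grid = range(int(phi_max / c) + 1)
values = [val(n) for n in n_grid]
n_star = int(np.argmax(values))
```

Note that $val(0)$ comes out exact here: with no signals, the posterior mean is $a = 0.5$, giving payoff $1 - 0.25 = 0.75$ in either state; the remaining entries carry Monte Carlo noise, which is fine for picking $n^*$ as long as `n_sims` is large relative to the gaps between the $val(n)$.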