Let $Y\sim\text{Bernoulli}(\mu_0)$ and assume this coincides with the prior of an agent (i.e. their prior is "correct").
Let signals $\boldsymbol{X}:=(X_1,X_2)$ have a known joint conditional probability distribution $P(\boldsymbol{x}|y):=\text{Prob}\{\boldsymbol{X}=\boldsymbol{x}|Y=y\}$. The agent chooses to observe the realization(s) of neither, one, or both signal(s). Formally, the agent chooses a set $$\mathcal{D}\in \{\emptyset,\{1\},\{2\},\{1,2\}\}\equiv 2^{\{1,2\}};$$ given $\mathcal{D}$, they observe the realization(s) of signals in $\{X_i:i\in\mathcal{D}\}$. Given their observations, they update their beliefs according to Bayes' rule. Let $\mu_1$ denote the probability of event "$Y=1$" assigned by said posterior belief. Assume that the agent chooses an action $a\in\{0,1\}$ according to $\alpha(\mu_1)$. (I.e. $\alpha:[0,1]\to\{0,1\}$ maps posterior beliefs to actions. E.g. $\alpha$ minimizes the expected loss given $\mu_1$ and some loss function.)
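To make the setup concrete, here is a toy instantiation with hypothetical numbers (not part of the question): two binary signals assumed conditionally independent given $Y$, with $P(X_i=y\mid Y=y)=q_i$, and the Bayes update for a chosen $\mathcal{D}$.

```python
# Toy instantiation (hypothetical numbers): Y ~ Bernoulli(mu0), and two
# conditionally independent binary signals with accuracies q_i, i.e.
# P(X_i = y | Y = y) = q_i.
mu0 = 0.3
q = {1: 0.8, 2: 0.7}

def likelihood(x, y, D):
    """P({X_i = x_i : i in D} | Y = y) under conditional independence."""
    p = 1.0
    for i in D:
        p *= q[i] if x[i] == y else 1.0 - q[i]
    return p

def posterior(x, D):
    """Bayes' rule: mu1 = P(Y = 1 | observed realizations of signals in D)."""
    num = likelihood(x, 1, D) * mu0
    den = num + likelihood(x, 0, D) * (1.0 - mu0)
    return num / den

# Example: choose D = {1, 2} and observe X_1 = 1, X_2 = 1.
mu1 = posterior({1: 1, 2: 1}, D=(1, 2))  # 0.8 for these numbers
```

With $\mathcal{D}=\emptyset$ the posterior is just the prior $\mu_0$; with independent informative signals, each additional observation pulls $\mu_1$ further from $\mu_0$.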
Questions:
$\qquad$ 1. What is the probability of a false positive given observation choice $\mathcal{D}$? That is, $$\text{Prob}\{\alpha(\mu_1)=1|Y=0, \mathcal{D}\}=\,?$$
$\qquad$ 2. What is the probability of a false positive given the observed data $\{x_i:i\in\mathcal{D}\}$? That is,
$$\text{Prob}\{\alpha(\mu_1)=1|Y=0, \text{observed } ``X_i = x_i\!" \ \forall i \in\mathcal{D} \}=\,?$$
I am not sure one can make things very explicit. Is this an exercise, or a modelling question for work or self-study?
Anyway, I will try to write down my thoughts on the first point.
Let $x$ be the vector of observed variables and $y$ the "latent" variable; $x$ is a single variable or a vector depending on how many signals we (or 'the agent') decide to observe.
Let us call $R$ the rejection region: after observing $x$ we declare $y=1$ if $p(y=1|x)\in R$, where $R$ is determined by the function $\alpha$ as $R=\alpha^{-1}(1)$.
Now the probability of having a false positive is $p(R|y=0)$. This can be written as:
$$p(R|y=0)=\sum_{x\,:\,p(y=1|x)\in R}p(x|y=0)$$
where, more explicitly, $p(y=1|x)=\frac{p(x|y=1)p(y=1)}{p(x)}=\frac{p(x|y=1)p(y=1)}{\sum_{y'=0,1}p(y')p(x|y')}$.
So given the function $\alpha$, the prior $p(y)$, and the likelihood $p(x|y)$, we can recover the probability of a type I error (false positive). Note that the summation over $x$ becomes an integral if the variable is not discrete.
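As a sanity check, the sum above can be evaluated exactly in a toy binary-signal model (hypothetical numbers: conditionally independent signals with accuracies $q_i$, and $\alpha$ a threshold rule at $1/2$ — none of this is specified in the question):

```python
import itertools

# Hypothetical toy model: Y ~ Bernoulli(mu0), conditionally independent
# binary signals with P(X_i = y | Y = y) = q_i.
mu0 = 0.3
q = {1: 0.8, 2: 0.7}

def likelihood(x, y, D):
    """P({X_i = x_i : i in D} | Y = y) under conditional independence."""
    p = 1.0
    for i in D:
        p *= q[i] if x[i] == y else 1.0 - q[i]
    return p

def posterior(x, D):
    """Bayes' rule: P(Y = 1 | observed signals in D)."""
    num = likelihood(x, 1, D) * mu0
    den = num + likelihood(x, 0, D) * (1.0 - mu0)
    return num / den

def alpha(mu1, threshold=0.5):
    """Assumed action rule: declare y = 1 iff the posterior clears a threshold."""
    return 1 if mu1 >= threshold else 0

def false_positive_prob(D):
    """p(R | y=0): sum p(x | y=0) over all x whose posterior lands in R."""
    total = 0.0
    for values in itertools.product([0, 1], repeat=len(D)):
        x = dict(zip(D, values))
        if alpha(posterior(x, D)) == 1:
            total += likelihood(x, 0, D)
    return total

fp_both = false_positive_prob((1, 2))  # 0.06 for these numbers
fp_one = false_positive_prob((1,))     # 0.2  for these numbers
```

With these numbers, observing both signals yields a rejection region containing only $x=(1,1)$, so the false-positive probability is $p(x=(1,1)|y=0)=0.2\cdot 0.3=0.06$, smaller than with a single signal; observing more data here shrinks the type I error.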