I have some doubts about calculating $E[1_A|1_B]$. I will use the facts that $E[1_A]=P(A)$ and $E[X|H]=\frac{E[1_HX]}{P(H)}$.
My work so far:
I will do the proof for two cases: $x \in B$ and $x \notin B$.
$E[1_A|1_B(x)=1]$ (this means that $x \in B$):
$$E[1_A|1_B(x)=1]=\frac{E[1_{\{1_{B}=1\}}\cdot 1_A(x)]}{P(1_B(x)=1)}$$
Now let's consider the term
$1_{\{1_{B}(x)=1\}}(x)= 1$ when $x \in \{1_B(x)=1\}$ and $0$ otherwise. But $x \in \{1_B(x)=1\} \Leftrightarrow x \in B$, which is exactly the case we are considering. It means that $1_{\{1_{B}(x)=1\}}(x)=1$.
Back to the previous equation:
$$E[1_A|1_B(x)=1]=\frac{E[1_{\{1_{B}=1\}}\cdot 1_A(x)]}{P(1_B(x)=1)}=\frac{E[1_A(x)]}{P(1_B(x)=1)}=\frac{P(A)}{P(1_B(x)=1)}$$
Analogously:
$$E[1_A|1_B(x)=0]=\frac{P(A)}{P(1_B(x)=0)}$$
Is my justification correct?
There is no need to introduce the outcome $x$.
Let $\mathcal A$ and $\mathcal B$ be events of the probability space, and $\mathbf 1_{\mathcal A}$ and $\mathbf 1_{\mathcal B}$ their indicator random variables.
As such, the event that $\mathbf 1_{\mathcal B}=1$ is the event $\mathcal B$, and the event that $\mathbf 1_{\mathcal B}=0$ is the complement event $\mathcal B^{\small\complement}$. So for instance:
$$\mathsf E(\mathbf 1_{\mathcal A}\mid\mathbf 1_{\mathcal B}=1)=\mathsf E(\mathbf 1_{\mathcal A}\mid \mathcal B)$$
You know that when conditioning a random variable (say $Z$) on an event such as $\mathcal B$, the rule is as you stated: $$\mathsf E(Z\mid\mathcal B)=\dfrac{\mathsf E(Z\,\mathbf 1_{\mathcal B})}{\mathsf P(\mathcal B)}$$
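This rule can be checked exactly on a small finite space. Below is a minimal sketch, assuming a fair six-sided die (an illustrative choice, not from the post), with $Z$ the face value and $\mathcal B$ the event "even face":

```python
from fractions import Fraction

# Finite probability space: a fair six-sided die (illustrative assumption).
omega = [1, 2, 3, 4, 5, 6]
p = {w: Fraction(1, 6) for w in omega}

Z = lambda w: w                        # Z = face value
B = {2, 4, 6}                          # event B = "even face"
ind_B = lambda w: 1 if w in B else 0   # indicator of B

P_B = sum(p[w] for w in B)                              # P(B) = 1/2
E_Z_indB = sum(Z(w) * ind_B(w) * p[w] for w in omega)   # E(Z 1_B) = (2+4+6)/6 = 2
cond = E_Z_indB / P_B                                   # E(Z | B) = 4

# Direct computation with the conditional distribution agrees:
direct = sum(Z(w) * p[w] / P_B for w in B)
assert cond == direct == 4
```

Both routes give $\mathsf E(Z\mid\mathcal B)=4$, the average of the even faces, as expected.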
Now, when conditioning on a Bernoulli-distributed random variable, say $Y$, we use the rule that: $$\mathsf E(Z\mid Y)=\mathsf E(Z\mid Y{=}1)\,\mathbf 1_{Y=1}+\mathsf E(Z\mid Y{=}0)\,\mathbf 1_{Y=0}$$
Indicator random variables are, of course, Bernoulli distributed.
Putting these together:
$$\begin{align}\mathsf E(\mathbf 1_{\mathcal A}\mid\mathbf 1_{\mathcal B}) &=\mathsf E(\mathbf 1_{\mathcal A}\mid \mathbf 1_{\mathcal B}{=}1)\,\mathbf 1_{\mathbf 1_{\mathcal B}=1}+\mathsf E(\mathbf 1_{\mathcal A}\mid \mathbf 1_{\mathcal B}{=}0)\,\mathbf 1_{\mathbf 1_{\mathcal B}=0}\\[1ex]&=\mathsf E(\mathbf 1_{\mathcal A}\mid \mathcal B)\,\mathbf 1_{\mathcal B}+\mathsf E(\mathbf 1_{\mathcal A}\mid \mathcal B^{\small\complement})\,\mathbf 1_{\mathcal B^{\small\complement}}\\[1ex]&=\dfrac{\mathsf E(\mathbf 1_{\mathcal A}\mathbf 1_{\mathcal B})}{\mathsf P(\mathcal B)}\mathbf 1_{\mathcal B}+\dfrac{\mathsf E(\mathbf 1_{\mathcal A}\mathbf 1_{\mathcal B^{\small\complement}})}{\mathsf P(\mathcal B^{\small\complement})}\mathbf 1_{\mathcal B^{\small\complement}}\\[1ex]&=\dfrac{\mathsf P(\mathcal A\cap\mathcal B)}{\mathsf P(\mathcal B)}\mathbf 1_{\mathcal B}+\dfrac{\mathsf P(\mathcal A\cap\mathcal B^{\small\complement})}{\mathsf P(\mathcal B^{\small\complement})}\mathbf 1_{\mathcal B^{\small\complement}}\\[1ex]&=\mathsf P(\mathcal A\mid\mathcal B)\,\mathbf 1_{\mathcal B}+\mathsf P(\mathcal A\mid\mathcal B^{\small\complement})\,\mathbf 1_{\mathcal B^{\small\complement}} \end{align}$$
And so...
More or less, yes. Your notation was just a little confusing.
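The final identity, $\mathsf E(\mathbf 1_{\mathcal A}\mid\mathbf 1_{\mathcal B})=\mathsf P(\mathcal A\mid\mathcal B)\,\mathbf 1_{\mathcal B}+\mathsf P(\mathcal A\mid\mathcal B^{\small\complement})\,\mathbf 1_{\mathcal B^{\small\complement}}$, can also be verified exactly on a small finite space. A minimal sketch, again assuming a fair die (an illustrative choice, not from the post), with $\mathcal A$ = "face $\le 3$" and $\mathcal B$ = "even face":

```python
from fractions import Fraction

# Fair die (illustrative assumption): A = "face <= 3", B = "even face".
omega = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)
A = {1, 2, 3}
B = {2, 4, 6}
Bc = set(omega) - B

P = lambda E: sum(p for w in E)
P_A_given_B = P(A & B) / P(B)     # 1/3  (only face 2 is even and <= 3)
P_A_given_Bc = P(A & Bc) / P(Bc)  # 2/3  (faces 1 and 3)

# E(1_A | 1_B) as a random variable: P(A|B) on B, P(A|B^c) off B.
cond_exp = lambda w: P_A_given_B if w in B else P_A_given_Bc

# Tower property: averaging the conditional expectation recovers P(A) = 1/2.
assert sum(cond_exp(w) * p for w in omega) == P(A)
```

Note that the conditional expectation here is $\tfrac13$ on $\mathcal B$ and $\tfrac23$ on $\mathcal B^{\small\complement}$, not the constant $P(\mathcal A)/P(\mathcal B)$: in general the numerator keeps the indicator, giving $\mathsf P(\mathcal A\cap\mathcal B)$ rather than $\mathsf P(\mathcal A)$.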