Consider the probability space $(\Omega, \mathcal{A},\mathbb{P})$, where $\Omega = (-1,1)$, $\mathcal{A} = \mathcal{B}((-1,1))$ and $\mathbb{P}$ is the uniform distribution on $(-1,1)$. For an integrable random variable $X$ compute the conditional expectation $\mathbb{E}(X \mid \mathcal{F})$, where $\mathcal{F} = \{A \in \mathcal{A} : A = -A \}$.
Remark: As usual, $\mathcal{B}(\cdot)$ denotes the Borel $\sigma$-algebra.
I know that $\mathbb{E}(X \mid \mathcal{F})$ is uniquely defined (up to a null set) by the conditions: it is $\mathcal{F}$-measurable, $\mathbb{E}(X \mid \mathcal{F}) \in L^1(\mathbb{P})$, and $\int_A \mathbb{E}(X \mid \mathcal{F}) \, d\mathbb{P} = \int_A X \, d\mathbb{P}$ for all $A \in \mathcal{F}$.
So, since $\mathcal{F}$ consists of the sets that are symmetric around $0$, I supposed (intuitively) that $\mathbb{E}(X \mid \mathcal{F}) = 0$.
However, I don't know how to argue formally (I am not good at measure theory) that in our case we have
$$\int_A X \, d\mathbb{P} = 0 \quad (\forall A \in \mathcal{F}).$$
Could you please give me a hint?
Edit: Thanks to LucaMac's comment I realise that my idea was flawed, but what about $\mathbb{E}(X \mid \mathcal{F}) = X(0)$?
This answer assumes that $X$ is an arbitrary integrable random variable defined on the probability space, so $X$ need not be the identity on $(-1,1)$. I can't infer from the question whether this is what's intended in the exercise.
Another option would be to assume that $X$ is the canonical random variable, i.e. $X(\omega) = \omega$, in which case the answer is indeed $0$.
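In the canonical case this can be sanity-checked numerically: for any symmetric set $A$, the integral of $X(\omega) = \omega$ over $A$ vanishes by symmetry, so the constant $0$ satisfies the defining integral condition. A minimal Monte Carlo sketch (the particular set $A$, seed and sample size are arbitrary choices):

```python
import random

random.seed(0)
n = 200_000

# Draw omega uniformly from (-1, 1).
samples = [random.uniform(-1.0, 1.0) for _ in range(n)]

# A symmetric set A = (-0.5, 0.5) union (0.8, 0.9) union (-0.9, -0.8).
def in_A(w):
    return abs(w) < 0.5 or 0.8 < abs(w) < 0.9

# Monte Carlo estimate of  integral_A X dP  for the canonical X(w) = w.
est = sum(w for w in samples if in_A(w)) / n
print(est)  # should be close to 0
```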
Let's introduce some intuition. $X$ is the observable result of an underlying random outcome $\omega$ that lies in $(-1,1)$.
Now you're considering the $\sigma$-algebra $\mathcal F$, which consists of the symmetric sets. A $\sigma$-algebra represents the collection of questions one is allowed to ask about the random process. Here you are allowed to ask questions of the form "is $\omega$ between $-0.5$ and $0.5$?" and "is $\omega$ equal to $0.3$ or $-0.3$?". On the other hand, "is $\omega$ equal to $0.1$?" is not a valid question. In other words, you are allowed to know the value of $|\omega|$ but not of $\omega$ itself.
Now you want to compute the conditional expectation $E[X \mid \mathcal F]$. The conditioning means that you assume you know which sets of $\mathcal F$ your $\omega$ lies in, but you average over the remaining knowledge. For this $\mathcal F$, you know what $|\omega|$ is, but you average over the possible values of the sign (only two values here).
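This sign-averaging can be sanity-checked numerically for a concrete non-symmetric $X$ (the choices $X(\omega) = \omega + \omega^2$, the set $A = (-0.5, 0.5)$, seed and sample size are all arbitrary): averaging $X$ over the sign of $\omega$ produces a random variable with the same integral over any symmetric set.

```python
import random

random.seed(1)
n = 200_000

def X(w):    # an arbitrary non-symmetric test function
    return w + w * w

def Y(w):    # candidate conditional expectation: average over the sign
    return 0.5 * (X(w) + X(-w))

def in_A(w):  # a symmetric test set A = (-0.5, 0.5)
    return abs(w) < 0.5

samples = [random.uniform(-1.0, 1.0) for _ in range(n)]

int_X = sum(X(w) for w in samples if in_A(w)) / n  # estimate of integral_A X dP
int_Y = sum(Y(w) for w in samples if in_A(w)) / n  # estimate of integral_A Y dP
print(int_X, int_Y)  # both should be near 1/24
```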
So you can try to show that the random variable $E[X \mid \mathcal F](\omega) = \dfrac 12 \bigl(X(|\omega|) + X(-|\omega|)\bigr)$ satisfies the two defining properties of conditional expectation ($\mathcal F$-measurability and the integral condition).
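A sketch of the verification, writing $Y(\omega) := \frac12\bigl(X(\omega) + X(-\omega)\bigr)$ (which agrees with the formula above): first, $Y(-\omega) = Y(\omega)$, so every preimage $\{Y \in B\}$ is a symmetric Borel set and hence lies in $\mathcal F$; $Y$ is therefore $\mathcal F$-measurable, and integrable since $|Y| \le \frac12\bigl(|X(\omega)| + |X(-\omega)|\bigr)$. Second, for $A \in \mathcal F$ use $A = -A$ and the invariance of the uniform $\mathbb P$ under $\omega \mapsto -\omega$:
$$\int_A X(-\omega) \, d\mathbb{P}(\omega) = \int_{-A} X(\omega) \, d\mathbb{P}(\omega) = \int_A X \, d\mathbb{P},$$
hence
$$\int_A Y \, d\mathbb{P} = \tfrac12 \int_A X \, d\mathbb{P} + \tfrac12 \int_A X \, d\mathbb{P} = \int_A X \, d\mathbb{P}.$$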