I am currently trying to understand conditional expectations in the measure-theoretic sense, i.e. given a prob. space $(\Omega,\mathcal{A},P)$ and $X\in L^1(P)$, then $$E(X|A)=\frac{E(X\cdot1_A)}{P(A)}$$ for $A\in\mathcal{A}$ with $P(A)>0$, and, for a sub-$\sigma$-algebra $\mathcal{F}\subseteq\mathcal{A}$, the conditional expectation $E(X|\mathcal{F})$ is the $\mathcal{F}$-measurable random variable satisfying $$\int_A E(X|\mathcal{F})\,dP=\int_A X\, dP$$ for all $A\in\mathcal{F}$.
More specifically,
how do I find
(i) $E(X|X^2)$ if $X$ is a.s. non-negative
(ii) $E(X|X^2)$ if $X$ is symmetric (i.e. $X$ and $-X$ have the same distribution)?
I am a bit confused about how to use all these versions of the conditional expectation, since $X^2$ is also a random variable $\Omega\rightarrow\mathbb{R}$ and not a set in the sigma-algebra.
$\newcommand{\1}{\mathbf{1}}\newcommand{\E}{\mathbb{E}}\newcommand{\P}{\mathbb{P}}$From the comments I gather that you already have the first part. One needs to be careful with measurability, though: $|X|=\sqrt{X^2}$ is a measurable function of $X^2$, hence $\sigma(X^2)$-measurable, and therefore $\E[|X|\mid\sigma(X^2)]=|X|$ a.s. We also know $X=|X|$ a.s. by assumption. So it follows (after some straightforward reasoning) that $\E[X\mid \sigma(X^2)]=X$ a.s.
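As a quick numerical sanity check of part (i) (not part of the proof — the discrete distribution and sample size below are arbitrary choices), one can sample a non-negative $X$, group the samples by the value of $X^2$, and verify that the mean on each event $\{X^2=v\}$ is $\sqrt{v}$, i.e. $X$ itself:

```python
import numpy as np

# Part (i): if X >= 0 a.s., then E[X | sigma(X^2)] = X a.s.
# Sketch: sample a discrete non-negative X, group samples by the value
# of X^2, and compare each group's mean with sqrt(value).
rng = np.random.default_rng(0)
values = np.array([0.5, 1.0, 2.0])   # support of X (all non-negative)
x = rng.choice(values, size=100_000, p=[0.2, 0.5, 0.3])

for v in np.unique(x**2):
    group_mean = x[x**2 == v].mean()
    # E[X | X^2 = v] should be sqrt(v), since X = sqrt(X^2) when X >= 0
    assert abs(group_mean - np.sqrt(v)) < 1e-9
```

Because $X\geq 0$, each value of $X^2$ determines $X$ uniquely, which is exactly why the conditional expectation collapses to $X$ itself.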
Second part. I like your idea of writing $X=|X|\left( \1_{X> 0}-\1_{X<0}\right)$. I use that; there may be a shorter way of finishing with that idea, but I'll go on with mine, which is a little longer. As you said in the comments, since $|X|$ is $\sigma(X^2)$-measurable we can pull it out: $$\E[X\mid\sigma(X^2)]=\E[|X|\left( \1_{X>0}-\1_{X<0}\right)\mid \sigma(X^2)]=|X|\,\E[ \1_{X> 0}-\1_{X<0}\mid\sigma(X^2)]$$ And now we use linearity: $$|X|\,\E[ \1_{X> 0}-\1_{X<0}\mid\sigma(X^2)]=|X|\left(\E[\1_{X> 0}\mid \sigma(X^2)]-\E[\1_{X<0}\mid\sigma(X^2)]\right)$$ We will focus on $\E[\1_{X>0}\mid\sigma(X^2)]$. Just for clarity define \begin{align} C:=\{ (X^2)^{-1}(B)\subseteq \Omega \mid B\in \mathcal B \} \end{align} where $\mathcal B$ is the Borel $\sigma$-algebra. By definition $\sigma(X^2)=\sigma(C)$. Since $C\subseteq \sigma(C)=\sigma(X^2)$ we have: \begin{align} \int_A\E[\1_{X>0}\mid \sigma(X^2)]\,d\P=\int_A \1_{X>0}\,d\P \ \ \ \ \ \text{ for all } A\in C. \end{align} For $A\in C$ there is $B\in\mathcal B$ such that $(X^2)^{-1}(B)=A$, so: \begin{align} \int_A \1_{X>0}\,d\P&=\P(A\cap \{X>0\})=\P(X^2\in B \ ,\ X>0) \end{align} Before going further, define for an arbitrary set $U\subseteq\mathbb R$ the set of square roots of its positive elements: \begin{align} \sqrt{U^+}:=\{ u \in \mathbb R \mid \exists_{v\in U}: v>0 \wedge \sqrt{v}=u\} \end{align} Now we use the symmetry of $X$, i.e. that $X$ and $-X$ have the same distribution: \begin{align} \int_A \1_{X>0}\,d\P &= \P(X\in \sqrt{B^+} \ , \ X>0)\\ &=\P(-X\in \sqrt{B^+}\ ,\ -X>0)\\ &=\P(X^2\in B \ , \ X<0)\\ &=\int_A \1_{X<0}\,d\P \end{align} This holds for all $A\in C$.
Notice that $\mu(A):=\int_A \1_{X>0}\,d\P$ and $\nu(A):=\int_A \1_{X<0}\,d\P$ are both finite measures, they agree on the $\pi$-system $C$, and $\Omega=(X^2)^{-1}(\mathbb R)\in C$ with $\mu(\Omega)=\P(X>0)=\P(X<0)=\nu(\Omega)$ by symmetry. It follows from the uniqueness theorem for measures that $\mu(A)=\nu(A)$ for all $A\in \sigma(C)=\sigma(X^2)$. This then implies: \begin{align} \int_A\E[\1_{X>0}\mid \sigma(X^2)]\,d\P=\int_A \1_{X>0}\,d\P=\int_A \1_{X<0}\,d\P = \int_A\E[\1_{X<0}\mid \sigma(X^2)]\,d\P \end{align} for all $A\in\sigma(X^2)$. By the (a.s.) uniqueness of the conditional expectation, $\E[\1_{X>0}\mid \sigma(X^2)]=\E[\1_{X<0}\mid \sigma(X^2)]$ a.s., so the two terms in the decomposition above cancel and we can finally conclude that: $$ \E[X\mid \sigma(X^2)]=0 \ \ \ \text{ a.s. }$$
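As a Monte Carlo sanity check of this conclusion (a rough sketch, not part of the proof — the standard normal choice of symmetric $X$, the binning of $X^2$, and the tolerance are all arbitrary), one can verify that the average of $X$ over samples whose $X^2$ lands in a common bin is close to $0$:

```python
import numpy as np

# Symmetric X: standard normal, so X and -X have the same distribution.
rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)

# Approximate E[X | sigma(X^2)] by averaging X over samples whose X^2
# falls into the same bin; every binned mean should be close to zero,
# even though E[|X| | sigma(X^2)] = |X| is far from zero.
edges = np.linspace(0.0, 2.0, 11)          # bins for X^2 (last bin: X^2 >= 2)
bins = np.digitize(x**2, edges)
cond_means = [x[bins == b].mean() for b in np.unique(bins)]

assert max(abs(m) for m in cond_means) < 0.05
```

The point of binning by $X^2$ rather than by $X$ is that each bin mixes the events $\{X>0\}$ and $\{X<0\}$ with equal probability, which is exactly the cancellation the proof establishes.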