I am looking at a real-valued random variable $A$ defined as
\begin{equation} A = \mu_A x + n_A \end{equation}
where $n_A\sim\mathcal{N}(0,\sigma_A^2)$, $\mu_A = \frac{\sigma_A^2}{2}$, and $x\in\{-1,+1\}$ with equal probability, i.e. $P(X=-1)=P(X=+1)=1/2$.
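For concreteness, this is how I simulate draws of $A$ (a small sketch; the function name and sample size are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_A(sigma_A, n=100_000):
    """Draw n samples of A = mu_A*x + n_A with mu_A = sigma_A**2/2."""
    x = rng.choice([-1.0, 1.0], size=n)        # equiprobable symbol x
    n_A = rng.normal(0.0, sigma_A, size=n)     # Gaussian noise, variance sigma_A^2
    return (sigma_A**2 / 2) * x + n_A, x

a, x = sample_A(2.0)
# Conditional mean of A given x = +1 should be close to mu_A = 2.0**2/2 = 2.0
print(a[x == 1].mean())
print(a[x == -1].mean())
```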
I define the mutual information $I_A$ (conditioning on $x=+1$; by symmetry, $x=-1$ gives the same value) as
\begin{equation} I_A = 1-{\int_{-\infty}^{+\infty} \frac{e^{-(\xi-\sigma_A^2/2)^2/2\sigma_A^2}}{\sqrt{2\pi}\sigma_A}\log_2 [1+e^{-\xi}]\,d\xi} \tag{*} \end{equation}
where $0\leq I_A \leq 1$. For brevity, I define
\begin{equation} J(\sigma) := I_A(\sigma_A=\sigma) \tag{**} \end{equation}
$J(\cdot)$ is monotonically increasing in $\sigma$ (here $\sigma = 2/\sigma_n$, with $\sigma_n$ the channel noise standard deviation) and is therefore invertible:
\begin{equation} \sigma_A = J^{-1}(I_A) \tag{***} \end{equation}
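The inversion in (***) can at least be carried out numerically. Here is a sketch (function names are mine): $J(\sigma) = 1 - \mathbb{E}\left[\log_2(1+e^{-\xi})\right]$ with $\xi\sim\mathcal{N}(\sigma^2/2,\sigma^2)$, evaluated by Gauss-Hermite quadrature, then inverted by bisection using the monotonicity of $J$:

```python
import numpy as np

# Gauss-Hermite nodes/weights for integrals against exp(-t^2)
_T, _W = np.polynomial.hermite.hermgauss(80)

def J(sigma):
    """J(sigma) = 1 - E[log2(1 + e^{-xi})], xi ~ N(sigma^2/2, sigma^2)."""
    if sigma == 0.0:
        return 0.0
    xi = sigma**2 / 2 + np.sqrt(2.0) * sigma * _T   # change of variables
    g = np.logaddexp(0.0, -xi) / np.log(2.0)        # stable log2(1 + e^{-xi})
    return 1.0 - np.dot(_W, g) / np.sqrt(np.pi)

def J_inv(I, lo=1e-9, hi=100.0, iters=200):
    """Invert the monotone map J by bisection: find sigma with J(sigma) = I."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(J(1.0))           # a value strictly between 0 and 1
print(J_inv(J(1.0)))    # recovers sigma, approximately 1.0
```

The EXIT-chart literature also gives closed-form curve-fit approximations of $J$ and $J^{-1}$, but as far as I can tell the exact inverse is only available numerically, as above.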
My question is: what is the closed-form expression for $\sigma_A = J^{-1}(I_A)$? I know the Q-function can be used to compute the tail probability of a Gaussian density, but here the Gaussian is multiplied by the $\log_2[1+e^{-\xi}]$ factor.
This might sound simple, but my calculus skills are weak, so I am shaky about attempting it myself. I would appreciate any help.
Thank you.
$\underline{\text{BACKGROUND}}$
The conditional probability density function of $A$ given $X=x$ is
\begin{equation} p_A(\xi\mid X=x) = \frac{e^{-(\xi-(\sigma_A^2/2)x)^2/2\sigma_A^2}}{\sqrt{2\pi}\sigma_A}\tag{1} \end{equation}
The mutual information $I_A=I(X;A)$ between $X$ and $A$ is defined as
\begin{equation} I_A = \sum_{x\in\{-1,+1\}}\hspace{1mm}{\int_{-\infty}^{+\infty} p(x,\xi)\log_2 \left(\frac{p(x,\xi)}{p(x)p(\xi)}\right)d\xi} \tag{2} \end{equation}
Applying the law of total probability and using (1), we get
\begin{equation} I_A = \frac{1}{2}\sum_{x=-1,+1}{\int_{-\infty}^{+\infty} p_A(\xi\mid X=x)\log_2 \left(\frac{2\,p_A(\xi\mid X=x)}{p_A(\xi\mid X=-1)+p_A(\xi\mid X=+1)}\right)d\xi} \tag{3} \end{equation}
and
\begin{equation} 0\leq I_A \leq 1 \tag{4} \end{equation}
Substituting (1) into (3) and using the symmetry between $x=+1$ and $x=-1$ (note that $p_A(\xi\mid X=-1)/p_A(\xi\mid X=+1)=e^{-\xi}$), I get
\begin{equation} I_A = 1-{\int_{-\infty}^{+\infty} \frac{e^{-(\xi-\sigma_A^2/2)^2/2\sigma_A^2}}{\sqrt{2\pi}\sigma_A}\log_2 [1+e^{-\xi}]\,d\xi} \tag{5} \end{equation}
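As a sanity check on the step from (3) to (5), both forms can be evaluated numerically with Gauss-Hermite quadrature (a sketch; the function names are mine) and they agree to quadrature precision:

```python
import numpy as np

T, W = np.polynomial.hermite.hermgauss(80)

def mi_eq3(sigma):
    """Mutual information from (3): average over x in {-1, +1}."""
    total = 0.0
    for x in (-1.0, 1.0):
        xi = x * sigma**2 / 2 + np.sqrt(2.0) * sigma * T
        # log2( 2 p(xi|x) / (p(xi|-1)+p(xi|+1)) ) = 1 - log2(1 + e^{-x*xi})
        g = 1.0 - np.logaddexp(0.0, -x * xi) / np.log(2.0)
        total += 0.5 * np.dot(W, g) / np.sqrt(np.pi)
    return total

def mi_eq5(sigma):
    """Mutual information from (5): single integral, conditioned on x = +1."""
    xi = sigma**2 / 2 + np.sqrt(2.0) * sigma * T
    g = np.logaddexp(0.0, -xi) / np.log(2.0)   # stable log2(1 + e^{-xi})
    return 1.0 - np.dot(W, g) / np.sqrt(np.pi)

for s in (0.5, 1.0, 3.0):
    print(mi_eq3(s), mi_eq5(s))   # the two columns agree
```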