We consider a two-layer neural network $y: \mathbb{R}^D \to \mathbb{R}^K$ of the form $$ y(x,\Theta,\sigma)_k=\sum_{j=1}^M \omega_{kj}^{(2)}\sigma\left(\sum_{i=1}^D \omega_{ji}^{(1)}x_i+\omega_{j0}^{(1)}\right)+\omega_{k0}^{(2)}, $$ where $k \in [K]$, with parameters $\Theta=(\omega^{(1)},\omega^{(2)})$ and logistic activation $\sigma(x)=\frac{1}{1+e^{-x}}$. We want to construct a new neural network with parameters $\Theta'=(\tilde{\omega}^{(1)},\tilde{\omega}^{(2)})$ and activation function $\tanh(x)=\frac{e^{2x}-1}{e^{2x}+1}$, such that both networks produce the same output given the same input.
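To make the formula concrete, here is a minimal NumPy sketch of the forward pass (the names `forward`, `sigma`, `W1`, `b1`, `W2`, `b2` are my own; `W1`/`b1` hold $\omega^{(1)}_{ji}$ and the biases $\omega^{(1)}_{j0}$, and likewise `W2`/`b2` for the second layer):

```python
import numpy as np

def sigma(z):
    """Logistic function sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Two-layer network y(x, Theta, sigma).

    x  : (D,)   input
    W1 : (M, D) first-layer weights  omega^{(1)}_{ji}
    b1 : (M,)   first-layer biases   omega^{(1)}_{j0}
    W2 : (K, M) second-layer weights omega^{(2)}_{kj}
    b2 : (K,)   second-layer biases  omega^{(2)}_{k0}
    """
    hidden = sigma(W1 @ x + b1)  # inner sum over i, then the activation
    return W2 @ hidden + b2      # outer sum over j, plus output bias
```

This is just a restatement of the definition in code, not part of the problem statement.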
To get a feel for how to tackle this problem, I first looked at a neural network with no hidden layer. Suppose the input $x=(x_1,x_2)$ is two-dimensional. The $k^{\text{th}}$ entry of the output is given by $\sigma(\omega_{k,1}x_1+\omega_{k,2}x_2)$. Maybe it is possible to make this equal to $\tanh(\tilde{\omega}_{k,1}x_1+\tilde{\omega}_{k,2}x_2)$ for some $\tilde{\omega}_{k,1},\tilde{\omega}_{k,2}$. There is a relationship between the two activation functions, $2\sigma(2x)-1=\tanh(x)$, and from it we get $\sigma(\omega_{k,1}x_1+\omega_{k,2}x_2)=\frac{\tanh\left(\frac{\omega_{k,1}x_1+\omega_{k,2}x_2}{2}\right)+1}{2}$.
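As a quick sanity check, the identity $2\sigma(2x)-1=\tanh(x)$ (equivalently $\sigma(z)=\frac{\tanh(z/2)+1}{2}$) can be verified numerically; this small script is only a check of the algebra, not part of the construction:

```python
import numpy as np

def sigma(z):
    """Logistic function sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5.0, 5.0, 101)

# Identity: tanh(z) = 2*sigma(2z) - 1 ...
assert np.allclose(np.tanh(z), 2.0 * sigma(2.0 * z) - 1.0)

# ... equivalently sigma(z) = (tanh(z/2) + 1) / 2.
assert np.allclose(sigma(z), (np.tanh(z / 2.0) + 1.0) / 2.0)
```

Both forms hold pointwise, which is what makes it plausible to absorb the extra factor $\frac{1}{2}$ and the shift $+\frac{1}{2}$ into the weights and biases of the surrounding layers.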