I know the sigmoidal function in one variable is given by $\tfrac{1}{1+\exp(-x)}$. Can anybody help me with what the same looks like in two variables?
Sigmoidal function in two variables
64 Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 best solutions below.
Suppose a layer in a neural net contains $n$ neurons, whose values are determined by the $m$ neurons in a previous layer. We need an activation function $\sigma(x)$, which in your case is $\frac{1}{1+\exp(-x)}$. The $i$th neuron in the $n$-neuron layer has value $y_i:=\sigma(\sum_{j=1}^mW_{ij}x_j+b_i)$, where $x_j$ is the value of the $j$th neuron in the $m$-neuron layer. All you need are the parameters $W_{ij}$, comprising a weight matrix, and the parameters $b_i$, comprising a bias vector. If we say any function $f:\Bbb R\to\Bbb R$ extends to act on vectors componentwise, viz. $(f(v))_i=f(v_i)$, we have the more succinct $y=\sigma(Wx+b)$. I'll leave it to you to decide whether "in two variables" means $m=2$, $n=2$, or both.
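As a minimal sketch of the formula above, here is $y=\sigma(Wx+b)$ in NumPy for the $m=2$, $n=2$ case; the particular weights, bias, and input below are made-up illustrative values, not anything from the question.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic sigmoid, 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical parameters for a layer with m = 2 inputs and n = 2 neurons.
W = np.array([[1.0, -2.0],
              [0.5,  3.0]])   # weight matrix, shape (n, m)
b = np.array([0.0, -1.0])     # bias vector, shape (n,)
x = np.array([0.2,  0.4])     # values of the previous layer's neurons

# y_i = sigma(sum_j W_ij x_j + b_i), computed for all i at once.
y = sigmoid(W @ x + b)
print(y)
```

Because `sigmoid` is written with NumPy operations, it automatically applies componentwise to the vector `W @ x + b`, which is exactly the vector-extension convention the answer describes.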
I would suggest reading *A visual proof that neural nets can compute any function*; it gives a visual explanation of the universal approximation theorem in one and two dimensions before generalizing to $n$ dimensions.
In my case, the best of my understanding comes from reading it. You can also try plotting $\frac{1}{1+e^{-x-y}}$ in Wolfram Alpha.
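A quick way to see what $\frac{1}{1+e^{-x-y}}$ does without a plotting tool is to evaluate it at a few points; the key feature is that it depends only on the sum $x+y$, so it is constant along every line $x+y=c$. A small sketch:

```python
import math

def sigmoid2(x, y):
    """Two-variable sigmoid sigma(x, y) = 1 / (1 + exp(-x - y))."""
    return 1.0 / (1.0 + math.exp(-x - y))

# The surface is a 1-D sigmoid "ramp" swept along the direction x + y = const:
print(sigmoid2(0.0, 0.0))    # x + y = 0, so this is 0.5
print(sigmoid2(2.0, -2.0))   # x + y = 0 again, so also 0.5
print(sigmoid2(3.0, 3.0))    # large positive sum, close to 1
print(sigmoid2(-3.0, -3.0))  # large negative sum, close to 0
```

This is the $m=2$, $n=1$ case of the other answer's $y=\sigma(Wx+b)$ with weights $(1,1)$ and bias $0$; changing the weights tilts the ramp in a different direction.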