Proof that a single neuron with a linear threshold unit as activation function can't solve the XOR problem


Suppose that a neural unit with activation function $\operatorname{sign}(net) = 1$ if $net \geq 0$, and $0$ otherwise, could solve the XOR problem.

Then we would have weights $w_1,w_2$ and a bias $b$ such that

$1\times w_1 + 1 \times w_2 + b < 0 $ (since the unit should output $0$ on input $(1,1)$)

$1\times w_1 + 0 \times w_2 + b \geq 0 $ (since the unit should output $1$ on input $(1,0)$)

$0\times w_1 + 1 \times w_2 + b \geq 0 $ (since the unit should output $1$ on input $(0,1)$)

$0\times w_1 + 0 \times w_2 + b < 0 $ (since the unit should output $0$ on input $(0,0)$)
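As a numerical sanity check (not a proof), one can ask a linear-programming solver whether the four inequalities above are jointly satisfiable. Since the system is homogeneous in $(w_1, w_2, b)$, any strict solution could be scaled, so replacing the strict inequalities $< 0$ with $\leq -1$ loses no generality. This sketch assumes SciPy is available:

```python
from scipy.optimize import linprog

# Feasibility check for the system (all in the form A_ub @ x <= b_ub,
# with x = [w1, w2, b]):
#   w1 + w2 + b <= -1   (strict  w1 + w2 + b < 0, scaled)
#  -w1      - b <=  0   (i.e. w1 + b >= 0)
#       -w2 - b <=  0   (i.e. w2 + b >= 0)
#             b <= -1   (strict  b < 0, scaled)
A_ub = [[1, 1, 1],
        [-1, 0, -1],
        [0, -1, -1],
        [0, 0, 1]]
b_ub = [-1, 0, 0, -1]

res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)
print(res.status)  # 2 means the problem is infeasible: no such weights exist
```

The solver reports infeasibility, consistent with the contradiction derived below by hand.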

Does this lead to a contradiction showing that such a perceptron can't exist? For instance:

if $b < 0$ and $w_1 + w_2 + b < 0$, then adding these gives

$w_1 + w_2 + 2b < 0$,

and if $w_1 + b \geq 0$ and $w_2 + b \geq 0$, then adding these gives

$w_1 + w_2 + 2b \geq 0$, which is a contradiction.
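The contradiction can also be illustrated empirically with a brute-force search: no combination of weights and bias on a grid reproduces XOR. This is only a sketch over an assumed finite grid, not a substitute for the algebraic argument:

```python
import itertools

def sign(net):
    # Threshold unit as defined above: 1 if net >= 0, else 0.
    return 1 if net >= 0 else 0

def solves_xor(w1, w2, b):
    # Check the unit's output against XOR on all four inputs.
    return all(sign(x1 * w1 + x2 * w2 + b) == (x1 ^ x2)
               for x1, x2 in itertools.product([0, 1], repeat=2))

# Grid of candidate weights/biases in [-5, 5] with step 0.1 (an
# arbitrary illustrative range, not exhaustive over the reals).
grid = [i / 10 for i in range(-50, 51)]
found = any(solves_xor(w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
print(found)  # False: no grid point solves XOR
```

The search agrees with the proof: the four inequalities can never hold simultaneously, so `found` is always `False` regardless of how fine or wide the grid is made.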