Manually setting up integral weights of a 2-2-2 neural network.

  • inputs and outputs are both binary.
  • hidden layer and output layer have biases.
  • function to be modelled is f(i1, i2) = 2 - i1 + i2
  • transfer function is sigmoid at the hidden and output layers, and the final outputs are obtained by hard-limiting at 0.5 (i.e., final_output_i = [output_i >= 0.5]).
  • weights should be integral and in the range [-20, 20].

What I have tried so far:

  • Noticed that output_1 is XOR(i1, i2) and output_2 is IMPLIES(i1, i2) (the low and high bits of f, respectively).
  • Attempted back-propagation for 1000 epochs, but did not reach 100% training accuracy.
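
The two observations above can be checked exhaustively. The sketch below assumes output_1 is the low-order bit and output_2 the high-order bit of the 2-bit value f(i1, i2):

```python
# Verify that the two bits of f(i1, i2) = 2 - i1 + i2 are
# XOR(i1, i2) and IMPLIES(i1, i2). Assumes output_1 is the low
# bit and output_2 the high bit of the 2-bit result.
for i1 in (0, 1):
    for i2 in (0, 1):
        f = 2 - i1 + i2
        out1, out2 = f & 1, (f >> 1) & 1
        assert out1 == (i1 ^ i2)          # XOR
        assert out2 == ((1 - i1) | i2)    # IMPLIES: (not i1) or i2
print("both identities hold")
```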

I cannot find a better approach than guessing some of the weights and trying to make the others conform to them. Is there a more systematic way to solve this problem?

Expanding my comments into an answer that doesn't quite fit your criteria (and correcting what turned out to be shoddy intuition in the comments): I'm using ReLU neurons rather than sigmoid, but I hope it is still useful!

I think the network in the image below does the required task. Hopefully it is clear what I mean: each of the two hidden neurons is active for exactly one datapoint, either the 10 input or the 01 input. The first output is then active unless the input is 10, i.e. unless the first hidden neuron is active, and the other output neuron is activated by either hidden neuron.
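
One concrete set of integral weights matching that description (the exact values in the image are unknown, so these are my own choices satisfying it; the first output is the IMPLIES bit, the second the XOR bit):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)

W_h = np.array([[1, -1],    # hidden 1: active only for input (1, 0)
                [-1, 1]])   # hidden 2: active only for input (0, 1)
b_h = np.array([0, 0])
W_o = np.array([[-1, 0],    # output 1: on unless hidden 1 fires
                [1, 1]])    # output 2: on if either hidden fires
b_o = np.array([1, 0])

for i1 in (0, 1):
    for i2 in (0, 1):
        x = np.array([i1, i2])
        h = relu(W_h @ x + b_h)
        o = relu(W_o @ h + b_o)
        final = (o >= 0.5).astype(int)
        print(i1, i2, "->", final)
```

Running this prints the IMPLIES and XOR bits for all four inputs, matching 2 - i1 + i2 in binary.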

Does that help? I find sigmoids hard to think about, but I suspect you could design a network using sigmoids with similar ideas.

[Image: diagram of the ReLU network described above]
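
Following the sigmoid suggestion above, here is a sketch of one way to carry the same idea over: scale the weights up so each sigmoid saturates and behaves like a step function. The weights below are my own choices, not from the question or the image, but they are integral and within [-20, 20]:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_h = np.array([[20, -20],   # hidden 1 ~ [input == (1, 0)]
                [-20, 20]])  # hidden 2 ~ [input == (0, 1)]
b_h = np.array([-10, -10])
W_o = np.array([[-20, 0],    # output 1: off only when hidden 1 fires
                [20, 20]])   # output 2: on when either hidden fires
b_o = np.array([10, -10])

for i1 in (0, 1):
    for i2 in (0, 1):
        x = np.array([i1, i2])
        h = sigmoid(W_h @ x + b_h)
        o = sigmoid(W_o @ h + b_o)
        final = (o >= 0.5).astype(int)
        print(i1, i2, "->", final)
```

Each pre-activation lands at roughly ±10, so every sigmoid output is within about 5e-5 of 0 or 1, and the hard limit at 0.5 recovers the exact IMPLIES and XOR bits.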