Neural network unable to solve RGB to HSL conversion


I'm noticing something strange. I followed an example of coding a fully connected neural network with 3 layers; it uses backpropagation, and it works great. For example, using sweep optimization (i.e. sweeping through starting training variables), this program can solve the Iris flower data set with 99.16% accuracy (over the combined training and validation data). I think that's an extreme score, and I can even breed multiple neural nets with such a score, so those networks are working great (standard neural nets like 3:4:3).
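The original code is C#, but the setup above can be sketched in Python with scikit-learn as a stand-in; this is a minimal sketch, not the author's program, and the hidden-layer size and iteration count are assumptions taken from the question:

```python
# Hypothetical sketch (not the author's C# code): a small fully connected
# network with one hidden layer, trained by backpropagation on the Iris
# flower data set. Layer size and epoch count are assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# 4 inputs -> 4 hidden nodes -> 3 classes
net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)
print(net.score(X_test, y_test))  # accuracy on held-out samples
```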

But I wanted to put them through other tests as well. So I thought: let's try calculating the RGB to HSL color space conversion. It turns out the networks score 100% if I try:

RGB to H
or RGB to S
or RGB to L

But one network doing it all, i.e. RGB to HSL, seems impossible. Is there something fundamentally different that makes that problem so much harder? I would have thought that the weights would balance out and eventually separate the channels, binding the right output to the right hidden nodes. But somehow that doesn't happen, even if I triple the number of hidden nodes.
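For concreteness, the target function can be generated with Python's standard `colorsys` module; this is only an illustrative sketch of how the RGB-to-HSL training pairs could be built (the question's actual data generation is not shown), and note that `colorsys` returns components in (H, L, S) order, reordered to HSL below:

```python
# Illustrative sketch: build random RGB -> HSL training pairs using the
# standard-library colorsys module. All components are in [0, 1].
import colorsys
import random

random.seed(0)

def make_pair():
    r, g, b = random.random(), random.random(), random.random()
    h, l, s = colorsys.rgb_to_hls(r, g, b)  # colorsys uses (h, l, s) order
    return (r, g, b), (h, s, l)             # inputs and the three targets

data = [make_pair() for _ in range(1000)]
print(data[0])
```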

The code I use is based on this, in case you're interested: https://gist.github.com/atifaziz/9462430 (although it doesn't have the sweep optimizer, probably reaching 85% accuracy).

I'm new to neural networks, and I'd like to know if there is some specific reason why a 3-layer neural net seems hopeless at solving RGB to HSL, where it takes random RGB values and outputs HSL values, while having only one output for H or S or L results in 100% accuracy. Is RGB to HSL somehow a different category of problem? It doesn't seem to matter how many hidden nodes I use (but I only have 1 hidden layer; I don't yet understand the math behind coding deep neural networks).
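The failing case described above, one single-hidden-layer network predicting all three HSL outputs at once, can be reproduced roughly as follows; this is a hedged stand-in using scikit-learn's `MLPRegressor`, not the author's implementation, and the hidden size, sample count, and iteration count are guesses:

```python
# Hypothetical reproduction: one network with a single hidden layer mapping
# RGB to all three HSL outputs simultaneously. Hyperparameters are assumed.
import colorsys
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rgb = rng.random((2000, 3))
hls = np.array([colorsys.rgb_to_hls(*row) for row in rgb])
hsl = hls[:, [0, 2, 1]]  # reorder colorsys's (h, l, s) to (h, s, l)

net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0)
net.fit(rgb[:1500], hsl[:1500])
print(net.score(rgb[1500:], hsl[1500:]))  # R^2 on held-out samples
```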

So, in short: is there some mathematical reason, relating to the backpropagation method, why a 3-layer network (input-hidden-output) cannot solve it? I am curious to know why the network can't resolve it.

The sweep training that I use takes hundreds of starting variations, and each of them gets trained for 5000 epochs, which is more than enough for the Iris flower set (that's solved within 100 epochs and optimal within 1000 epochs).
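The sweep idea as described, training many copies of the same network from different random starting weights and keeping the best, can be sketched like this; the toy XOR task, the seed count, and all names here are illustrative assumptions, not the question's actual setup:

```python
# Minimal sketch of a "sweep" over starting variations: retrain the same
# architecture from many random initializations and keep the best scorer.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # toy XOR task as a stand-in problem

best_score, best_net = -1.0, None
for seed in range(20):  # the question uses hundreds; 20 here for brevity
    net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000,
                        random_state=seed)
    net.fit(X, y)
    s = net.score(X, y)
    if s > best_score:
        best_score, best_net = s, net
print(best_score)
```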

(Put differently: I want to know if there is some fundamental reason why it doesn't get solved.)