I have been reading papers about implicit neural representations (INRs), also called coordinate-based MLPs. These papers state that "this kind of MLP has the capability of learning how to map a coordinate to its value." I don't really understand this sentence. In other words, how is this MLP different from a conventional MLP in terms of learning?
In a classic MLP, as I understand it, each feature of the input vector is multiplied by weights (plus a bias) and passed through the hidden layers; we then compute the error with a loss function and backpropagate to readjust the weights in a way that reduces the error. This happens for all the data points in the training set.
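To make sure I have the classic case right, here is a minimal sketch of what I mean: a 1-input, 2-hidden-unit MLP with the gradients written out by hand (all names and sizes are just my own toy choices, not from any paper):

```python
import math, random

random.seed(0)

# Tiny 1-input -> 2-hidden (tanh) -> 1-output MLP.
w1 = [random.uniform(-1, 1) for _ in range(2)]   # input -> hidden weights
b1 = [0.0, 0.0]                                  # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(2)]   # hidden -> output weights
b2 = 0.0                                         # output bias
lr = 0.1

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(2)]
    y = sum(w2[i] * h[i] for i in range(2)) + b2
    return h, y

def step(x, target):
    """One gradient-descent step on a single (x, target) pair."""
    global b2
    h, y = forward(x)
    err = y - target                         # d(loss)/dy for loss = 0.5*(y-t)^2
    for i in range(2):
        dh = err * w2[i] * (1 - h[i] ** 2)   # chain rule through tanh
        w2[i] -= lr * err * h[i]
        w1[i] -= lr * dh * x
        b1[i] -= lr * dh
    b2 -= lr * err
    return 0.5 * err ** 2

# Repeated steps on one training pair drive the loss down.
losses = [step(0.5, 1.0) for _ in range(200)]
print(losses[0], losses[-1])
```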
Okay, so what exactly happens with a coordinate-based MLP? How can it learn from just sparse measurements? Could someone give a minimal data example?
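Here is my own attempt at a minimal example of what I think these papers describe, so someone can tell me if I've got it wrong. The "data" is just sparse (coordinate, value) pairs from a 1D signal I made up, f(x) = sin(2πx), and the MLP is trained to map each coordinate to its value, so the trained weights themselves become the representation of the signal:

```python
import math, random

random.seed(0)

# A made-up 1D "signal" to represent, sampled at only 8 sparse coordinates.
signal = lambda x: math.sin(2 * math.pi * x)
coords = [i / 7 for i in range(8)]
data = [(x, signal(x)) for x in coords]      # the ENTIRE training set

H = 16                                       # hidden width (arbitrary choice)
w1 = [random.gauss(0, 3) for _ in range(H)]
b1 = [random.gauss(0, 1) for _ in range(H)]
w2 = [random.gauss(0, 0.3) for _ in range(H)]
b2 = 0.0
lr = 0.05

def predict(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def train_loss():
    return sum((predict(x)[1] - t) ** 2 for x, t in data)

loss_before = train_loss()

# Plain SGD with hand-written gradients over the sparse samples.
for epoch in range(2000):
    for x, t in data:
        h, y = predict(x)
        err = y - t
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err

loss_after = train_loss()

# After training, the network can be queried at ANY coordinate,
# including ones that were never in the training set, e.g. x = 0.5:
print(predict(0.5)[1], signal(0.5))
```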
And why does it sometimes work with a prior embedding of the input and sometimes without one?
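By "prior embedding" I mean something like the Fourier-feature / positional encoding used in NeRF-style papers, where the raw coordinate is mapped to sines and cosines of increasing frequency before it enters the MLP. My sketch of it, in case I'm misreading the term (function name and band count are my own):

```python
import math

def positional_encoding(x, num_bands=4):
    """Map a scalar coordinate x to [sin(2^k * pi * x), cos(2^k * pi * x)]
    for k = 0 .. num_bands-1, giving a 2*num_bands feature vector."""
    feats = []
    for k in range(num_bands):
        freq = (2 ** k) * math.pi
        feats.append(math.sin(freq * x))
        feats.append(math.cos(freq * x))
    return feats

# The MLP would receive this vector instead of the raw scalar x.
print(positional_encoding(0.5, num_bands=2))
```

My understanding is that this embedding is what lets the MLP fit high-frequency detail, which is why I'm confused about when it is needed and when it isn't.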