I'm learning about recurrent neural networks (RNNs) and I'm a little confused as to why we can't train FFNNs to work with sequencing data. Say, for example, we wanted to train a neural network to predict the next longitude and latitude of a boat travelling through the ocean, given its current position. With an RNN (based on my understanding) we'll train our network by providing labelled data where the previous position of the boat is known, and see how well the predicted result compares with the actual next position of the boat. Can't the same thing be done with an FFNN? We provide our network with the training data, that way it'll learn the general movement of the boat and be able to predict the next position given the current position.
If you don't like this example, another one that's confusing me is training a neural network to predict the next Fibonacci number in the sequence. We can give our FFNN a training set containing all the numbers in the sequence up to some number n and see if it can learn the general pattern.
Can someone help clear up this confusion, please? Thank you.
FFNNs only allow information to flow from input to output; there are no feedback loops. A classic use case is the classification of items based on their features.
An RNN lifts this restriction of an FFNN: it has an internal state, which lets it process sequences of variable length. A classic use case for RNNs is speech recognition, where earlier data relate to later data.
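The difference can be sketched in a few lines. The weights below (`w`, `w_x`, `w_h`) are hand-picked illustrative values, not trained parameters: the point is only that the FFNN step is stateless, while the RNN step carries a hidden state forward.

```python
import math

def ffnn_step(x, w):
    # Stateless: the output depends only on the current input x.
    return math.tanh(w * x)

def rnn_step(x, h_prev, w_x, w_h):
    # Stateful: the output also depends on the hidden state h_prev,
    # which summarizes everything seen earlier in the sequence.
    return math.tanh(w_x * x + w_h * h_prev)

# Feed the same input three times: the FFNN produces the identical
# output each time, while the RNN's output drifts with its history.
h = 0.0
for x in [1.0, 1.0, 1.0]:
    ff = ffnn_step(x, w=0.5)
    h = rnn_step(x, h, w_x=0.5, w_h=0.8)
    print(ff, h)
```

This is why, given only the boat's current position, an FFNN must always predict the same next position, whereas an RNN can condition on the trajectory so far.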
Since the prediction ("prediction" is the better term than "calculation" in the context of machine learning) of a Fibonacci sequence member requires knowing its two predecessors, an FFNN fed only the current number is not a suitable technique here. RNNs have a kind of memory which can carry exactly this information in the Fibonacci case. A very good explanation can be found in the article The Story of Long Short-term Memory (RNN).
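To make the "memory" point concrete, here is a toy linear recurrence with hand-picked (not learned) weights whose two-component hidden state holds the last two Fibonacci numbers; a trained RNN would have to discover a representation like this on its own.

```python
def fib_rnn_step(h):
    # Hidden state h = (F_n, F_(n-1)). Applying the fixed weight
    # matrix [[1, 1], [1, 0]] to h yields the next pair of values.
    return (h[0] + h[1], h[0])

h = (1, 0)  # initial state (F_1, F_0)
sequence = []
for _ in range(8):
    sequence.append(h[0])
    h = fib_rnn_step(h)
print(sequence)  # [1, 1, 2, 3, 5, 8, 13, 21]
```

A stateless network mapping one number to the next cannot do this: seeing "1" alone, it cannot know whether the next member is 1 or 2.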