The question is about a new result by Cabessa & Siegelmann. http://binds.cs.umass.edu/papers/2014_cabessa.pdf : "In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power — as the static analog neural networks — irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating".
But it is my understanding that any neural net with rational weights can be simulated by an algorithm to any degree of precision we want (or perhaps this article shows this is wrong). I also always believed, as Martin Davis argued, that hypercomputation is trivial in the sense that "... if non computable inputs are permitted then non computable outputs are attainable." However, this article seems to show this is not so.
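To illustrate what I mean by "simulated by an algorithm": with rational weights the simulation is in fact exact, not merely approximate, since rational arithmetic never loses precision. Here is a minimal sketch (my own toy example, not the paper's construction) of one update step of a recurrent net with the saturated-linear activation used in the Siegelmann–Sontag model, computed in exact rationals:

```python
# Exact simulation of a rational-weight recurrent net: because every
# quantity is a Fraction, no rounding ever occurs, so an ordinary
# algorithm reproduces the network's state perfectly.
from fractions import Fraction

def sigma(x):
    # Saturated-linear activation: clip x into [0, 1].
    return max(Fraction(0), min(Fraction(1), x))

def step(weights, bias, state):
    # One synchronous update x' = sigma(W x + b), all in exact rationals.
    return [sigma(sum(w * s for w, s in zip(row, state)) + b)
            for row, b in zip(weights, bias)]

# A small hypothetical 2-neuron net with rational weights.
W = [[Fraction(1, 2), Fraction(1, 3)],
     [Fraction(-1), Fraction(2)]]
b = [Fraction(1, 4), Fraction(0)]
x = [Fraction(1), Fraction(0)]

for _ in range(3):
    x = step(W, b, x)
print(x)  # exact rational state after 3 steps, no rounding anywhere
```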
My question is: Is this serious work? What am I missing? The article is heavy on jargon and mathematical logic, and is very difficult for me to follow (my background is physics).
Someone with more knowledge than I have can weigh in here, but from my cursory reading the core concept is that one can encode an arbitrary oracle in the weights. In the real case this is done by using a real number to encode an infinite string of bits, whereas in the plastic rational case it's done by using the time-varying weights to encode that same infinite string of bits. In either case, though, the expressive power is all in the weights, and using any computable set of weights will reduce the power of the NNs back into the realm of computability.
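To make the "encode an infinite string of bits in one number" idea concrete, here is a sketch of the kind of base-4 "Cantor" encoding Siegelmann-style constructions use (this is the general idea, not the paper's exact construction). With a truncated rational prefix, as below, the decoding is exact and perfectly computable; the super-Turing power only appears when the weight is a genuinely non-computable real carrying infinitely many bits:

```python
# Pack a bit string into a single number and read the bits back out.
# Each bit b becomes the base-4 digit 2b+1 (i.e. 1 or 3); the gaps
# between the digits 1 and 3 are what make extraction by a network's
# saturated-linear units robust.
from fractions import Fraction

def encode(bits):
    # Hypothetical encoder: sum of digits (2b+1) in base 4.
    return sum(Fraction(2 * b + 1, 4 ** (i + 1)) for i, b in enumerate(bits))

def decode(q, n):
    # Recover the first n bits by repeatedly reading off the leading
    # base-4 digit and shifting it out.
    out = []
    for _ in range(n):
        d = int(q * 4)          # leading base-4 digit (1 or 3)
        out.append((d - 1) // 2)
        q = q * 4 - d           # shift the digit out
    return out

bits = [1, 0, 1, 1, 0]
w = encode(bits)
print(decode(w, 5))  # recovers the original bit string exactly
```

The point of the example is exactly the one above: all the work is done by the number `w`. If `w` is computable, so is everything the network does with it; if `w` encodes an oracle, the "super-Turing" power was smuggled in through the weight.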