Today, I came across the Fourier series. The lecturer told us that, apart from solving PDEs (as we are using them), they constitute the foundation of signal processing. Hence, I went online and had a look at the Wikipedia page and I saw a quote saying that signal processing can be done with 17th century mathematics. Are there unexplored fields of mathematical signal processing?
What I want to know is, apart from the engineering or technological problems, is there some advancement in the mathematical methods for signal processing? I want to do two dissertations in my final year, which is possible at my university, and so I wanted to know if there would be anything new to explore or write about in my dissertation regarding signal processing.
For example, of course the technology of signal processing keeps advancing, but is the mathematics behind it staying the same? Or is the advancement in signal processing just a matter of engineering improvement?
There is something called compressed sensing which is quite heavily studied nowadays. The gist of it is: how many samples do we need in order to reconstruct a signal? For the most part, a result known as the Nyquist–Shannon sampling theorem has been used to answer this, as it gives a sufficient condition (sampling at more than twice the highest frequency) for perfectly reconstructing a bandlimited signal from its samples, and many have applied this theorem in practice without thinking too much about it.
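To see the classical theorem in action, here is a minimal sketch (all the parameters are illustrative choices of mine, not from any particular source): sample a pure sine well above twice its frequency, then rebuild it off the sample grid with the Whittaker–Shannon (sinc) interpolation formula.

```python
import numpy as np

f = 1.0                 # signal frequency (Hz)
fs = 4.0                # sampling rate, > 2*f, so the theorem applies
T = 1.0 / fs
N = 200                 # number of samples (a finite truncation of the sum)

n = np.arange(N)
samples = np.sin(2 * np.pi * f * n * T)      # x[n] = x(nT)

def reconstruct(t, samples, T):
    """Truncated Whittaker-Shannon formula: x(t) ~ sum_n x[n] sinc((t - nT)/T)."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

# Evaluate at a point that is NOT on the sample grid, away from the edges;
# the reconstruction should closely match the true signal there.
t0 = 25.3 * T
approx = reconstruct(t0, samples, T)
exact = np.sin(2 * np.pi * f * t0)
```

Note that the formula is an infinite sum in theory; truncating it to `N` samples introduces a small error that shrinks as you move away from the edges of the sampled window.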
However, in recent years, people have realized that one can do better, and capture just as much information about a signal with fewer samples, which of course is really nice. The main task is then figuring out when and how one can do so. There is a lot of math involved in this, as well as mathematicians researching the topic (including, for example, Terry Tao), so you can try looking into it.
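As a toy illustration of the idea (a sketch under my own assumptions, nothing beyond NumPy): recover a k-sparse length-n signal from m << n random linear measurements. The classic compressed-sensing results are stated for l1 minimization; here I use orthogonal matching pursuit (OMP), a simple greedy stand-in, and all the sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 60, 3        # signal length, number of measurements, sparsity

x = np.zeros(n)             # build a k-sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                  # only m measurements, m << n

def omp(A, y, k):
    """Greedily pick the k columns of A that best explain y."""
    residual = y.copy()
    chosen = []
    for _ in range(k):
        # select the column most correlated with the current residual
        chosen.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit of y on the chosen columns
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(A, y, k)   # with high probability this recovers x exactly
```

The punchline is that 60 random measurements suffice here, even though the signal lives in a 128-dimensional space, because we exploit its sparsity; the mathematics of when such recovery is guaranteed is exactly what the field studies.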