The Nyquist limit for interpolation by trig functions states that one must give at least two data points per wavelength, because data just above and just below this "folding frequency" cannot be distinguished. I am currently playing with Hermite interpolation, providing the first few derivatives as well as the function values, and find that the Nyquist limit no longer applies. This cannot be a new observation, but I don't find anything about it online. Maybe I have not got the appropriate vocabulary?
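For concreteness, here is a minimal numerical sketch of the phenomenon the question describes (the specific frequencies are illustrative choices, not from the question): two cosines whose frequencies straddle the folding frequency produce identical samples, but their derivative samples differ, so Hermite data distinguishes them.

```python
import numpy as np

# At spacing T, frequencies f1 and 1/T - f1 are an aliased pair.
f1, f2, T = 0.2, 0.8, 1.0
n = np.arange(10) * T

# Function samples are identical...
s1 = np.cos(2*np.pi*f1*n)
s2 = np.cos(2*np.pi*f2*n)
print(np.max(np.abs(s1 - s2)))   # ~0: indistinguishable from values alone

# ...but first-derivative samples at the same points are not.
d1 = -2*np.pi*f1*np.sin(2*np.pi*f1*n)
d2 = -2*np.pi*f2*np.sin(2*np.pi*f2*n)
print(np.max(np.abs(d1 - d2)))   # clearly nonzero
```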
Frequency bound for Hermite interpolation
Asked 2026-03-29 by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail), 116 views
Since you're looking for a Shannon-like theorem involving derivatives, have a look at the following Stack Exchange post, which refers to the following paper by Papoulis. The theorem there is more general: you can take $m$ linear functionals (subject to some constraints on them), and if you sample each at $1/m$ times the Nyquist rate you still get perfect reconstruction. In the paper the author specifically works out examples involving derivatives. Note that the reconstruction function is no longer necessarily a $\operatorname{sinc}$ function. In essence, if you provide $m$ times as much non-redundant information per sample location, you can expect the required rate to drop by a factor of $m$.
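Papoulis' result with $m=2$ (values plus first derivatives, each sampled at half the Nyquist rate) can be checked numerically. The classic closed form in this case uses a $\operatorname{sinc}^2$ kernel: $f(t)=\sum_n \bigl[f(nT)+(t-nT)f'(nT)\bigr]\operatorname{sinc}^2\!\bigl((t-nT)/T\bigr)$ for $f$ bandlimited to $1/T$ Hz. A minimal sketch, where the test signal, spacing, and truncation length are my own choices:

```python
import numpy as np

# Test signal bandlimited to B = 0.45 Hz; Nyquist spacing would be ~1.11 s.
B = 0.45
f = lambda t: np.sinc(2*B*t)            # np.sinc(x) = sin(pi x)/(pi x)

def fprime(t):
    # d/dt sinc(a t) = a*(cos(pi a t) - sinc(a t))/(a t), with value 0 at t = 0
    a = 2*B
    x = a*t
    d = (np.cos(np.pi*x) - np.sinc(x)) / np.where(x == 0, 1.0, x)
    return a*np.where(x == 0, 0.0, d)

T = 2.0                                  # spacing well below the Nyquist spacing
N = 2000                                 # truncation of the infinite sum
tn = np.arange(-N, N + 1) * T
fn, dn = f(tn), fprime(tn)

def reconstruct(t):
    t = np.atleast_1d(t)[:, None]
    k = np.sinc((t - tn)/T)**2           # sinc^2 kernel, bandlimited to 1/T Hz
    return np.sum((fn + (t - tn)*dn)*k, axis=1)

ts = np.linspace(-3, 3, 61)
err = np.max(np.abs(reconstruct(ts) - f(ts)))
print(err)                               # small (limited only by truncation)
```

Doubling the information per sample location has halved the required sampling rate, exactly as the $1/m$ scaling predicts.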
Note, however, that the $\operatorname{sinc}$ kernel isn't even the best kernel for formulating a sampling theorem for trigonometric polynomials in practice. Instead, the Dirichlet kernel $$D_n(x) = \frac{\sin\left(\left(n+\frac{1}{2}\right)\pi x\right)}{\sin\left(\frac{\pi x}{2}\right)}$$ yields an analogous sampling theorem for trigonometric polynomials of order $n$ in which only finitely many terms need to be summed (unlike the usual case, which requires infinitely many).
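This finite-sum interpolation is easy to verify numerically. The sketch below uses the equivalent unscaled form $D_n(t)=\sin\bigl((n+\tfrac12)t\bigr)/\sin(t/2)$ (set $t=\pi x$ in the formula above); the order $n$, the random trigonometric polynomial, and the seed are illustrative choices. With $2n+1$ equispaced samples on $[0,2\pi)$, the reconstruction $p(x)=\frac{1}{2n+1}\sum_j p(x_j)\,D_n(x-x_j)$ is exact:

```python
import numpy as np

def dirichlet(t, n):
    """D_n(t) = sin((n + 1/2) t) / sin(t/2), with the limit D_n(0) = 2n + 1."""
    t = np.asarray(t, dtype=float)
    den = np.sin(t/2)
    small = np.abs(den) < 1e-12
    return np.where(small, 2*n + 1.0,
                    np.sin((n + 0.5)*t) / np.where(small, 1.0, den))

n = 5
rng = np.random.default_rng(0)
a = rng.standard_normal(n + 1)        # cosine coefficients a_0..a_n
b = rng.standard_normal(n)            # sine coefficients b_1..b_n

def p(x):
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, a[0])
    for k in range(1, n + 1):
        out += a[k]*np.cos(k*x) + b[k-1]*np.sin(k*x)
    return out

m = 2*n + 1                           # 2n+1 samples determine p exactly
xj = 2*np.pi*np.arange(m)/m
pj = p(xj)

def interp(x):
    x = np.atleast_1d(x)[:, None]
    return np.sum(pj*dirichlet(x - xj, n), axis=1)/m

xs = np.linspace(0, 2*np.pi, 40, endpoint=False)
print(np.max(np.abs(interp(xs) - p(xs))))   # exact up to roundoff
```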
As far as Hermite interpolation goes, there is the following paper by Shin, which expands a function in terms of generalized Hermite interpolating polynomials built from samples of the function and its derivatives; the author also formulates a sampling theorem for this at the end. The result is similar to the one in Papoulis' paper: the required sampling frequency $\sigma'$ scales inversely with the amount of data provided, $N$.