Inverse distance weighting is a weighted average that uses inverse distances as the weights.
We want to approximate a function $z(x)$ from a set of sample points $(x_i, z_i)$. For any $x$ we compute the approximation as:
$$z(x) = {{\sum z_i w_i} \over \sum w_i}$$
$$w_i = \operatorname{distance}(x, x_i)^{-1}$$
The problem is that when $x$ coincides with one of the sample points, the distance is zero and $w_i = 0^{-1}$ blows up to $\infty$. In that case the sample value itself should be used as the approximation, or, in numerical computations, some minimal radius $\delta$ can be used to detect points at effectively zero distance.
It also seems (I guess?) numerically unstable: for small distances the inverse weight becomes huge, and dividing or multiplying very small and very large numbers loses precision.
So, is there a more stable implementation of this idea, that points at smaller distances should get larger weights, which would handle coincident and very close points more reliably?
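The naive scheme above, including the special case for (near-)coincident points, can be sketched as follows; `idw_naive` is a hypothetical name, and the `eps` tolerance plays the role of the $\delta$ radius:

```python
import numpy as np

def idw_naive(x, xs, zs, eps=1e-12):
    """Plain inverse distance weighting of samples (xs, zs) at x.

    Sketch only: if x is within eps of a sample point, that sample's
    value is returned directly, avoiding the 1/0 weight.
    """
    d = np.abs(np.asarray(xs, dtype=float) - x)   # distances to samples
    hit = d < eps
    if hit.any():
        # x coincides with a sample point: use its value directly
        return float(np.asarray(zs, dtype=float)[hit][0])
    w = 1.0 / d                                   # inverse-distance weights
    return float(np.sum(w * np.asarray(zs, dtype=float)) / np.sum(w))
```

For instance, halfway between two samples the result is their plain average, and at a sample point the sample value is returned exactly.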
A weighted average is always equivalent to one with normalized weights: $$u_i=\frac{w_i}{\sum_j w_j}.$$
Then the weighted average is:
$$\sum u_iz_i.$$
The advantage of the $u_i$ is that they are in the range $[0,1],$ and they add up to $1.$
When $d(x,x_i)$ is small, you may prefer to compute $$u_i=\frac1{1+\sum_{j\neq i}\frac{w_j}{w_i}}=\frac1{1+\sum_{j\neq i}\frac{d(x,x_i)}{d(x,x_j)}}$$
It’s slightly more complicated when several $x_i$ are equal, or very nearly equal, but you can avoid the floating-point errors if you compute the $u_i$ directly.
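A sketch of this normalized form, under the assumption that exact matches are handled separately; `idw_stable` is a hypothetical name. Each $u_i$ is built from distance ratios $d(x,x_i)/d(x,x_j)$, so the huge raw weights $1/d_i$ are never formed:

```python
import numpy as np

def idw_stable(x, xs, zs):
    """IDW via normalized weights u_i computed from distance ratios.

    u_i = 1 / (1 + sum_{j != i} d_i / d_j), so u_i lies in [0, 1]
    and the u_i sum to 1 without dividing by a tiny total.
    """
    d = np.abs(np.asarray(xs, dtype=float) - x)   # distances to samples
    zs = np.asarray(zs, dtype=float)
    hit = d == 0.0
    if hit.any():
        # Exact matches dominate (their u_i -> 1); average them
        return float(zs[hit].mean())
    u = np.empty_like(d)
    for i in range(len(d)):
        ratios = d[i] / np.delete(d, i)           # d_i / d_j for j != i
        u[i] = 1.0 / (1.0 + ratios.sum())
    return float(np.sum(u * zs))
```

When $x$ is extremely close to a sample, the ratios $d_i/d_j$ for that sample go to $0$ (so $u_i \to 1$) while for the others one ratio goes to $\infty$ (so $u_j \to 0$), and the result degrades gracefully to the nearest sample's value.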