Linear approximation using $2D$ Linear Interpolation


This is probably a simple question, but even after several hours of research I could not find anything relevant.

I have a function $f(x,y) = xy$, with $x$ and $y$ belonging to bounded intervals. I need to find a linear approximation of the form $$f(x,y) \approx k_1 x + k_2 y + k_{\text{off}}.$$ I was thinking about using $2D$ linear interpolation (not bilinear interpolation, since that is not linear), but I could not find anything.

How can I compute the parameters $k_1$, $k_2$, and $k_{\text{off}}$?

Thank you!


There are 2 answers below.

BEST ANSWER

Proceeding with the least-squares approach:

Let $M = [a,b] \times [c,d]$ be the domain of $f$. We wish to minimize $S(k_1, k_2, k_0) = \int\limits_{M} (f(x,y)-k_1 x - k_2 y - k_0)^2 \,dx\,dy$ (the square of the $L^2$ norm of the approximation error) over the parameter space $\Bbb R^3$. Denote $\langle g,h\rangle = \int\limits_M g(x,y) h(x,y) \,dx\,dy$. Then at the minimum we have

$\begin{cases}0=\frac12\,\partial S/\partial k_0 = k_0 \langle 1,1\rangle + k_1 \langle 1,x\rangle + k_2 \langle 1,y\rangle - \langle 1,f\rangle,\\ 0=\frac12\,\partial S/\partial k_1 = k_0 \langle x,1\rangle + k_1 \langle x,x\rangle + k_2 \langle x,y\rangle - \langle x,f\rangle,\\ 0=\frac12\,\partial S/\partial k_2 = k_0 \langle y,1\rangle + k_1 \langle y,x\rangle + k_2 \langle y,y\rangle - \langle y,f\rangle.\end{cases}$
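This $3\times 3$ system can also be set up and solved numerically. A minimal sketch in NumPy, using the fact that inner products of monomials factorize over a rectangle (the domain $[0,1]^2$ is just an example, not from the question):

```python
import numpy as np

# Example rectangle [a,b] x [c,d]; these bounds are an illustrative assumption.
a, b, c, d = 0.0, 1.0, 0.0, 1.0

def Ix(p):
    # integral of x^p over [a, b]
    return (b ** (p + 1) - a ** (p + 1)) / (p + 1)

def Iy(q):
    # integral of y^q over [c, d]
    return (d ** (q + 1) - c ** (q + 1)) / (q + 1)

def inner(g, h):
    # <x^p1 y^q1, x^p2 y^q2>: the double integral factorizes over the rectangle
    (p1, q1), (p2, q2) = g, h
    return Ix(p1 + p2) * Iy(q1 + q2)

basis = [(0, 0), (1, 0), (0, 1)]   # the functions 1, x, y as exponent pairs
f = (1, 1)                         # f(x, y) = x * y

A = np.array([[inner(g, h) for h in basis] for g in basis])
rhs = np.array([inner(g, f) for g in basis])
k0, k1, k2 = np.linalg.solve(A, rhs)
# On [0,1]^2 this gives k0 ≈ -0.25 and k1 = k2 ≈ 0.5.
```

For a general $f$ one would replace the exact monomial integrals with numerical quadrature; the structure of the normal equations stays the same.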

Were the basis functions orthogonal, we could use formulae like the one from your comment (which reads $k_1 = \langle f,x\rangle/\langle x,x\rangle$). But with respect to the $L^2$ inner product, our basis functions $1, x, y$ are not orthogonal. We can either solve the system above for the $k_i$ directly or Gram-Schmidt orthogonalize the basis.

The second option is preferable. Calculating $\langle 1,1\rangle = (b-a)(d-c)$ and $\langle 1,x\rangle = (b^2-a^2)(d-c)/2$, we find that $1$ and $x-(b+a)/2$ form an orthogonal basis of the linear span of $1$ and $x$. Orthogonalizing $y$ with respect to this new basis then gives $y - (d+c)/2$ (the fact that $\langle y, x-(b+a)/2\rangle = 0$ is a coincidence arising from the relation of the basis to the rectangular domain of integration; in general it will not hold). Now just rewrite the system in the new basis (with new coefficients $m_i$):

$\begin{cases}0=\frac12\,\partial S/\partial m_0 = m_0 \langle 1,1\rangle - \langle 1,f\rangle,\\ 0=\frac12\,\partial S/\partial m_1 = m_1 \langle x-(b+a)/2,x-(b+a)/2\rangle - \langle x-(b+a)/2,f\rangle,\\ 0=\frac12\,\partial S/\partial m_2 = m_2 \langle y - (d+c)/2,y - (d+c)/2\rangle - \langle y - (d+c)/2,f\rangle\end{cases}$

and calculate the coefficients.
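In the orthogonal basis the coefficients decouple, so each $m_i$ is a single ratio of inner products. A sketch for $f(x,y)=xy$, with the integrals worked out analytically (the rectangle bounds are illustrative assumptions):

```python
# Example rectangle [a,b] x [c,d]; the bounds are illustrative assumptions.
a, b, c, d = 0.0, 2.0, 1.0, 3.0
xbar, ybar = (a + b) / 2, (c + d) / 2   # centers appearing in the orthogonal basis

# Each m_i = <phi_i, f> / <phi_i, phi_i> with f(x, y) = x * y; over the rectangle
# the double integrals factorize and the common factors cancel as indicated.
m0 = xbar * ybar                        # <1, f> / <1, 1>
m1 = ((d ** 2 - c ** 2) / 2) / (d - c)  # <x - xbar, f> / <x - xbar, x - xbar> = ybar
m2 = ((b ** 2 - a ** 2) / 2) / (b - a)  # <y - ybar, f> / <y - ybar, y - ybar> = xbar

# Expand m0 + m1*(x - xbar) + m2*(y - ybar) back into k1*x + k2*y + k_off:
k1, k2 = m1, m2
k_off = m0 - m1 * xbar - m2 * ybar      # simplifies to -xbar * ybar
```

For this particular $f$ the result simplifies to $k_1 = \bar y$, $k_2 = \bar x$, $k_{\text{off}} = -\bar x\bar y$ with $\bar x = (a+b)/2$, $\bar y = (c+d)/2$, i.e. the tangent plane at the center of the rectangle.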


Why the $L^2$ norm rather than another one? Because with $L^2$ the method is standard: you only have to compute some integrals to orthogonalize the basis (this can be done numerically to high precision if needed). For other norms, such as $L^1$ and $L^\infty$, the minimization problems are, as far as I know, considerably more difficult even in 1D.

ANSWER

This is a simple question indeed. You have

$ f(x,y) = x y $

The first-degree Taylor expansion of this function at $(x_0, y_0)$ is given by

$ f(x,y) \approx f(x_0, y_0) + \nabla f (x_0, y_0) \cdot (x - x_0 , y - y_0 ) $

We now have

$ \nabla f (x, y) = \begin{bmatrix} y \\ x \end{bmatrix} $

Therefore, the linear approximation of $f$ near $(x_0, y_0)$ is

$ f(x, y) \approx x_0 y_0 + y_0 (x - x_0) + x_0 (y - y_0) = y_0 x + x_0 y - x_0 y_0 $

For example, if $(x_0, y_0) = (1, 2)$ and $(x, y) = (1.3, 1.9)$, then the exact value of the function is $f(1.3, 1.9) = 2.47$, and the approximate value is

$ 2 (1.3) + (1)(1.9) - 2 = 2.5 $
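The worked example above can be checked in a few lines (the helper name `linearize_xy` is an illustrative choice, not from the answer):

```python
def linearize_xy(x0, y0):
    # Tangent-plane coefficients of f(x, y) = x * y at (x0, y0):
    # f(x, y) ≈ k1*x + k2*y + k_off
    return y0, x0, -x0 * y0

k1, k2, k_off = linearize_xy(1.0, 2.0)
approx = k1 * 1.3 + k2 * 1.9 + k_off   # ≈ 2.5
exact = 1.3 * 1.9                      # = 2.47
```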