How to fit an affine transformation in which some coefficients are zero?


I've also asked this question on StackOverflow (https://stackoverflow.com/questions/57683140/how-to-fit-an-affine-transformation-which-consists-of-scaling-and-translation-on), but thought I might also ask it here. I am working on an app in which I'd like to create overlays of images on maps. In order to overlay them properly, I need the coordinates of the corners of the (rectangular) image.

To this end, I determine a bunch of waypoints $\left(x_i, y_i\right)$ in the image (these are pixel coordinates) and the corresponding latitude and longitude, $\left(lng_i, lat_i\right)$. I expected the following to hold:

$$ \begin{align} lng(x) &= cx + a\\ lat(y) &= -cy + b \end{align} $$

For the $m$ waypoints, this leads to the following equations:

$$ \begin{bmatrix} lng_1 & lat_1 & 1 \\ \vdots & \vdots & \vdots \\ lng_m & lat_m & 1 \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots \\ x_m & y_m & 1 \end{bmatrix} \begin{bmatrix} c & 0 & 0 \\ 0 & -c & 0 \\ a & b & 1 \end{bmatrix} $$

My question: how do I solve this system of equations in the least-squares sense for $a$, $b$, and $c$?

(So far, I've applied numpy.linalg.lstsq to get a general approximation for the rightmost matrix, but this doesn't satisfy the constraints on the coefficients given above).

2 Answers

Accepted answer:

I suppose you mean to minimise the sum of squared differences
$$ \sum_{i=1}^m \left[\left(\operatorname{lng}_i-cx_i-a\right)^2 + \left(\operatorname{lat}_i+cy_i-b\right)^2\right] = v^TAv-2u^Tv+K, $$
where $K=\sum_{i=1}^m \left(\operatorname{lng}_i^2+\operatorname{lat}_i^2\right)$ and
$$ A=\pmatrix{m&0&\sum_{i=1}^mx_i\\ 0&m&-\sum_{i=1}^my_i\\ \sum_{i=1}^mx_i&-\sum_{i=1}^my_i&\sum_{i=1}^m\left(x_i^2+y_i^2\right)}, \quad u=\pmatrix{\sum_{i=1}^m \operatorname{lng}_i\\ \sum_{i=1}^m \operatorname{lat}_i\\ \sum_{i=1}^m \left(\operatorname{lng}_ix_i-\operatorname{lat}_iy_i\right)}, \quad v=\pmatrix{a\\ b\\ c}. $$

This is a standard least-squares problem. A global minimiser is given by
$$ v=A^+u, $$
where $A^+$ denotes the Moore-Penrose pseudoinverse of $A$. In practice we don't calculate $A^+$ explicitly and multiply it by $u$ to get $v$; rather, we obtain $v$ by solving $Av=u$, e.g. with v = numpy.linalg.lstsq(A, u, rcond=None)[0].
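A minimal sketch of this answer in Python (the function name and the argument names `x`, `y`, `lng`, `lat` are my own; only numpy is assumed):

```python
import numpy as np

def fit_constrained_affine(x, y, lng, lat):
    """Fit lng = c*x + a and lat = -c*y + b in the least-squares sense.

    Builds the 3x3 matrix A and vector u from the derivation above
    and solves A v = u for v = (a, b, c).
    """
    x, y, lng, lat = map(np.asarray, (x, y, lng, lat))
    m = len(x)
    A = np.array([
        [m, 0.0, x.sum()],
        [0.0, m, -y.sum()],
        [x.sum(), -y.sum(), (x**2 + y**2).sum()],
    ])
    u = np.array([lng.sum(), lat.sum(), (lng * x - lat * y).sum()])
    # lstsq returns (solution, residuals, rank, singular values);
    # we only need the solution vector.
    a, b, c = np.linalg.lstsq(A, u, rcond=None)[0]
    return a, b, c
```

On exact synthetic data this recovers the original coefficients to floating-point precision.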

Second answer:

I ended up implementing the approach which appears to be described in these lecture notes, https://web.cs.ucdavis.edu/~yjlee/teaching/ecs189g-spring2015/lee_lecture10_fitting.pdf. Firstly, I realized I had to change the model slightly to

$$ \begin{align} \text{lng}(x) &= cx + a \\ \text{lat}(y) &= dy + b \end{align} $$

to account for the fact that the scaling factor from degrees to distance is different in the latitudinal and longitudinal direction (unless you're at the equator).

Suppose we have two waypoints, then this relation can be expressed as

$$ \begin{bmatrix} \text{lng}_1 \\ \text{lat}_1 \\ \text{lng}_2 \\ \text{lat}_2 \end{bmatrix} = \begin{bmatrix} x_1 & 0 & 1 & 0 \\ 0 & y_1 & 0 & 1 \\ x_2 & 0 & 1 & 0 \\ 0 & y_2 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} c \\ d \\ a \\ b \end{bmatrix} $$

(For more waypoints, simply continue to vertically stack the input and output data.) This equation is of the form $A\mathbf{x} = \mathbf{b}$, where we want to compute $\mathbf{x}$ so as to minimize the Euclidean 2-norm $\left\lVert A\mathbf{x} - \mathbf{b} \right\rVert^2$, which is exactly the form handled by numpy.linalg.lstsq.

I find this a bit simpler because you can construct the matrix from the data directly, without having to sum or square it. (It probably uses more memory, but that shouldn't be an issue for a limited number of waypoints.)
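The stacked construction above can be sketched as follows (function and variable names are my own; only numpy is assumed):

```python
import numpy as np

def fit_two_scale_affine(x, y, lng, lat):
    """Fit lng = c*x + a and lat = d*y + b via a stacked design matrix.

    Each waypoint contributes two rows:
      lng_i row: [x_i, 0, 1, 0]
      lat_i row: [0, y_i, 0, 1]
    against the unknown vector (c, d, a, b).
    """
    x, y, lng, lat = map(np.asarray, (x, y, lng, lat))
    m = len(x)
    A = np.zeros((2 * m, 4))
    A[0::2, 0] = x    # lng rows: scale c
    A[0::2, 2] = 1.0  # lng rows: offset a
    A[1::2, 1] = y    # lat rows: scale d
    A[1::2, 3] = 1.0  # lat rows: offset b
    b_vec = np.empty(2 * m)
    b_vec[0::2] = lng
    b_vec[1::2] = lat
    c, d, a, b = np.linalg.lstsq(A, b_vec, rcond=None)[0]
    return c, d, a, b
```

With at least two waypoints at distinct $x$ and distinct $y$, the design matrix has full column rank and the fit is unique.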