Stability issue with least squares solver


I am using a least squares solver to auto-calibrate delta printers in firmware. The system measures height errors at 16 points, computes the height derivative with respect to the calibration parameters, and uses linear least squares to adjust the calibration parameters so as to minimise the sum of the squares of the height errors.

I have just added two more calibration parameters to the original six, and the problem is that these extra two jump around when I repeat the calibration operation. Here is a typical calculation, with values displayed to 3 decimal places:

Height errors:
  0.008 0.141 -0.113 -0.126 -0.017 0.119 -0.195 -0.138 0.006 0.151 0.133 0.025 -0.084 -0.019 -0.032 0.007

Derivative matrix:
  0.042   0.035   0.923   0.167   0.069  -0.058   0.000  -0.933
 -0.071   0.260   0.812   0.167  -0.085  -0.402  -0.467  -0.808
 -0.096   0.541   0.555   0.168  -0.001  -0.628  -0.808  -0.467
 -0.071   0.799   0.271   0.173   0.083  -0.490  -0.933   0.000
  0.043   0.913   0.044   0.178  -0.071   0.005  -0.808   0.467
  0.270   0.801  -0.071   0.175  -0.419   0.499  -0.467   0.808
  0.551   0.542  -0.093   0.197  -0.619   0.608   0.000   0.899
  0.790   0.270  -0.060   0.216  -0.462   0.386   0.437   0.757
  0.887   0.053   0.060   0.223  -0.005   0.077   0.749   0.432
  0.786  -0.065   0.279   0.213   0.450  -0.065   0.871   0.000
  0.547  -0.097   0.550   0.192   0.604   0.001   0.773  -0.446
  0.268  -0.075   0.807   0.169   0.411   0.088   0.463  -0.802
  0.176   0.168   0.656   0.457   0.116  -0.111   0.000  -0.467
  0.177   0.644   0.179   0.463  -0.117   0.002  -0.404   0.233
  0.610   0.190   0.200   0.485  -0.002   0.106   0.349   0.201
  0.336   0.327   0.338   0.563   0.000   0.000   0.000  -0.000

Normal matrix (augmented with the right-hand-side column):
  3.346   0.935   0.965   1.464  -0.031   0.982   2.396   1.401   0.007
  0.935   3.449   0.922   1.364  -0.989   0.004  -2.618   1.506   0.243
  0.965   0.922   3.562   1.378   0.973  -0.969  -0.025  -3.032  -0.116
  1.464   1.364   1.378   1.397  -0.018   0.044   0.054   0.063   0.062
 -0.031  -0.989   0.973  -0.018   1.559  -0.780   1.106  -1.956  -0.265
  0.982   0.004  -0.969   0.044  -0.780   1.609   1.163   1.952   0.038
  2.396  -2.618  -0.025   0.054   1.106   1.163   5.219  -0.086  -0.288
  1.401   1.506  -3.032   0.063  -1.956   1.952  -0.086   5.336   0.310

Solved matrix (each solution component is the last-column entry divided by the diagonal entry in its row):
  3.346   0.000   0.000   0.000   0.000   0.000   0.000   0.000  -1.129
  0.000  -3.288   0.000   0.000   0.000   0.000   0.000   0.000   1.662
  0.000   0.000  -3.679   0.000   0.000   0.000   0.000   0.000  -2.934
  0.000   0.000   0.000   0.288   0.000   0.000   0.000   0.000   0.022
  0.000   0.000   0.000   0.000   0.716   0.000   0.000   0.000  -0.032
  0.000   0.000   0.000   0.000   0.000   0.559   0.000   0.000  -0.139
  0.000   0.000   0.000   0.000   0.000   0.000   0.007   0.000  -0.001
  0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.007   0.006

Solution:
 -0.337  -0.505   0.797   0.078  -0.045  -0.249  -0.072   0.815

You can see that the absolute values of the diagonal elements in the last two rows of the solved matrix are small (0.007) compared with the others (0.288 to 3.346). These last two rows correspond to the new calibration parameters, the ones whose values jump around. Does this mean that the problem is ill-conditioned, and therefore sensitive both to small errors in the input measurements and to rounding errors in the calculation? I suspect that the effect of the two new parameters can be approximately compensated by adjusting the other six, which would explain the ill-conditioning.
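That suspicion can be illustrated with a toy NumPy example (synthetic numbers, not the printer's actual geometry): appending columns that are nearly linear combinations of the existing ones makes the least-squares problem ill-conditioned, even though the matrix remains technically full rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six reasonably independent columns at 16 sample points,
# standing in for the original six calibration derivatives.
A6 = rng.standard_normal((16, 6))

# Two extra columns that are almost linear combinations of the first
# six (tiny perturbation), mimicking new parameters whose effect can
# be nearly compensated by adjusting the original ones.
extra = A6 @ rng.standard_normal((6, 2)) + 1e-3 * rng.standard_normal((16, 2))
A8 = np.hstack([A6, extra])

print("cond(A6) =", np.linalg.cond(A6))
print("cond(A8) =", np.linalg.cond(A8))  # orders of magnitude larger
```

The near-dependent columns push the smallest singular value toward zero, so the components of the solution along those directions become very sensitive to measurement noise, which is exactly the "jumping around" symptom.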

I currently use Gauss–Jordan elimination to solve the normal equations, but from what I have read it might be better to use QR decomposition (which I am not familiar with). I changed from single- to double-precision matrix storage and arithmetic, but that does not seem to have improved things.

Thanks in advance - David

1 Answer

Problem

This is the linear system: $$ \begin{align} % \mathbf{A} x &= b \\ % \left[ \begin{array}{rrrrrrrr} 0.042 & 0.035 & 0.923 & 0.167 & 0.069 & -0.058 & 0. & -0.933 \\ -0.071 & 0.26 & 0.812 & 0.167 & -0.085 & -0.402 & -0.467 & -0.808 \\ -0.096 & 0.541 & 0.555 & 0.168 & -0.001 & -0.628 & -0.808 & -0.467 \\ -0.071 & 0.799 & 0.271 & 0.173 & 0.083 & -0.49 & -0.933 & 0. \\ 0.043 & 0.913 & 0.044 & 0.178 & -0.071 & 0.005 & -0.808 & 0.467 \\ 0.27 & 0.801 & -0.071 & 0.175 & -0.419 & 0.499 & -0.467 & 0.808 \\ 0.551 & 0.542 & -0.093 & 0.197 & -0.619 & 0.608 & 0. & 0.899 \\ 0.79 & 0.27 & -0.06 & 0.216 & -0.462 & 0.386 & 0.437 & 0.757 \\ 0.887 & 0.053 & 0.06 & 0.223 & -0.005 & 0.077 & 0.749 & 0.432 \\ 0.786 & -0.065 & 0.279 & 0.213 & 0.45 & -0.065 & 0.871 & 0. \\ 0.547 & -0.097 & 0.55 & 0.192 & 0.604 & 0.001 & 0.773 & -0.446 \\ 0.268 & -0.075 & 0.807 & 0.169 & 0.411 & 0.088 & 0.463 & -0.802 \\ 0.176 & 0.168 & 0.656 & 0.457 & 0.116 & -0.111 & 0. & -0.467 \\ 0.177 & 0.644 & 0.179 & 0.463 & -0.117 & 0.002 & -0.404 & 0.233 \\ 0.61 & 0.19 & 0.2 & 0.485 & -0.002 & 0.106 & 0.349 & 0.201 \\ 0.336 & 0.327 & 0.338 & 0.563 & 0. & 0. & 0. & 0. \\ \end{array} \right] % \left[ \begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \\ x_{5} \\ x_{6} \\ x_{7} \\ x_{8} \end{array} \right] % &= % \left[ \begin{array}{r} 0.008 \\ 0.141 \\ -0.113 \\ -0.126 \\ -0.017 \\ 0.119 \\ -0.195 \\ -0.138 \\ 0.006 \\ 0.151 \\ 0.133 \\ 0.025 \\ -0.084 \\ -0.019 \\ -0.032 \\ 0.007 \end{array} \right] % \end{align} $$

Results

Mathematica solution: $$ x_{LS} = \left[ \begin{array}{r} 0.349316 \\ 0.505902 \\ -0.810259 \\ -0.0772344 \\ 0.044439 \\ 0.253701 \\ 0.0658139 \\ -0.827594 \end{array} \right] $$ Singular values are $$ \sigma = \left\{3.06221, 2.85034, 2.54735, 0.943822, 0.59983, 0.484301, 0.0539269, 0.0465302 \right\} $$ The matrix rank $\rho=8$, matching the number of columns $n=8$. The condition number is $$ \kappa_{2} = \frac{\sigma_{1}}{\sigma_{8}} \approx 65.8 $$ Total error = $\lVert \mathbf{A}x - b \rVert_{2}^{2} = 0.107151$.
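These results can be cross-checked with a short NumPy script (a sketch; the matrix and right-hand side are copied from the question at the displayed 3-decimal precision):

```python
import numpy as np

# Derivative matrix and height errors from the question (3 dp).
A = np.array([
    [ 0.042,  0.035,  0.923,  0.167,  0.069, -0.058,  0.000, -0.933],
    [-0.071,  0.260,  0.812,  0.167, -0.085, -0.402, -0.467, -0.808],
    [-0.096,  0.541,  0.555,  0.168, -0.001, -0.628, -0.808, -0.467],
    [-0.071,  0.799,  0.271,  0.173,  0.083, -0.490, -0.933,  0.000],
    [ 0.043,  0.913,  0.044,  0.178, -0.071,  0.005, -0.808,  0.467],
    [ 0.270,  0.801, -0.071,  0.175, -0.419,  0.499, -0.467,  0.808],
    [ 0.551,  0.542, -0.093,  0.197, -0.619,  0.608,  0.000,  0.899],
    [ 0.790,  0.270, -0.060,  0.216, -0.462,  0.386,  0.437,  0.757],
    [ 0.887,  0.053,  0.060,  0.223, -0.005,  0.077,  0.749,  0.432],
    [ 0.786, -0.065,  0.279,  0.213,  0.450, -0.065,  0.871,  0.000],
    [ 0.547, -0.097,  0.550,  0.192,  0.604,  0.001,  0.773, -0.446],
    [ 0.268, -0.075,  0.807,  0.169,  0.411,  0.088,  0.463, -0.802],
    [ 0.176,  0.168,  0.656,  0.457,  0.116, -0.111,  0.000, -0.467],
    [ 0.177,  0.644,  0.179,  0.463, -0.117,  0.002, -0.404,  0.233],
    [ 0.610,  0.190,  0.200,  0.485, -0.002,  0.106,  0.349,  0.201],
    [ 0.336,  0.327,  0.338,  0.563,  0.000,  0.000,  0.000,  0.000],
])
b = np.array([0.008, 0.141, -0.113, -0.126, -0.017, 0.119, -0.195, -0.138,
              0.006, 0.151, 0.133, 0.025, -0.084, -0.019, -0.032, 0.007])

x, res, rank, sigma = np.linalg.lstsq(A, b, rcond=None)
print("rank  =", rank)                      # 8: full column rank
print("kappa =", sigma[0] / sigma[-1])      # ~65.8
# Forming the normal matrix squares the condition number:
print("kappa(A^T A) =", (sigma[0] / sigma[-1]) ** 2)  # ~4330
```

A condition number of about 65.8 is tolerable for the QR/SVD route, but the Gauss–Jordan approach in the question operates on the normal matrix, whose condition number is roughly 65.8² ≈ 4330, which is where the extra sensitivity of the last two parameters comes from.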