Minimizing $\| x A - B\|_F^2$


I have $2$ known grayscale images ($256 \times 256$ matrices) $A$ and $B$ and want to find the unknown variable $x$ so that

$$\text{minimize} \quad \| xA-B\|_F^2$$

The goal is to find the best multiplier $x$ that is able to scale the brightness of $A$ to $B$.

I don't know how I should approach a problem like this or how to solve it. Is it an optimization problem? If so, what kind, and how can I solve it? I'd also appreciate recommendations for books or references in this area.

Thanks.

3 Answers

Accepted answer

Let $f : \mathbb R \to \mathbb R_0^+$ be defined as follows

$$f (x) := \| x \mathrm A - \mathrm B \|_{\text{F}}^2 = x^2 \| \mathrm A \|_{\text{F}}^2 - 2 x \langle \mathrm A , \mathrm B \rangle + \| \mathrm B \|_{\text{F}}^2$$

Since $f$ is a convex quadratic in $x$ (the coefficient of $x^2$ is $\| \mathrm A \|_{\text{F}}^2 \geq 0$), taking the first derivative and finding where it vanishes gives the global minimizer, i.e. the least-squares solution

$$\hat{x} := \frac{\langle \mathrm A , \mathrm B \rangle}{\| \mathrm A \|_{\text{F}}^2} = \frac{\langle \mathrm A , \mathrm B \rangle}{\langle \mathrm A , \mathrm A \rangle}$$
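As a quick sanity check, the closed form can be verified numerically. A minimal numpy sketch, using random matrices as stand-ins for the actual grayscale images:

```python
import numpy as np

# Random stand-ins for the two 256x256 grayscale images
# (hypothetical data, not the asker's actual images).
rng = np.random.default_rng(0)
A = rng.random((256, 256))
B = 0.7 * A + 0.05 * rng.random((256, 256))

def f(x):
    """The objective ||x*A - B||_F^2."""
    return np.sum((x * A - B) ** 2)

# Closed-form minimizer: Frobenius inner product <A,B> over <A,A>.
x_hat = np.sum(A * B) / np.sum(A * A)
```

Perturbing `x_hat` in either direction should only increase the objective, which confirms it sits at the minimum.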

Second answer

Denote $(A_i)$ and $(B_i)$ the components of both images.

Minimizing

$$f_2(x)=\sum(xA_i-B_i)^2$$ is quite simple (it is the Frobenius objective written out componentwise) and leads to the solution

$$\hat{x}= \frac{\sum A_iB_i}{\sum A_i^2}$$

Trying to minimize

$$f_1(x)=\sum \vert xA_i-B_i \vert$$ is harder, as this map is not differentiable at the points $x=B_i/A_i$.
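For what it's worth, when no $A_i$ is zero we can write $\sum \vert xA_i-B_i \vert = \sum \vert A_i\vert \cdot \vert x - B_i/A_i\vert$, so the $L_1$ minimizer is a weighted median of the ratios $B_i/A_i$ with weights $\vert A_i\vert$. A numpy sketch (the function name `l1_scale` is mine):

```python
import numpy as np

def l1_scale(A, B):
    """Minimize f1(x) = sum_i |x*A_i - B_i| via a weighted median.

    For A_i != 0, |x*A_i - B_i| = |A_i| * |x - B_i/A_i|, so a minimizer
    is a weighted median of the ratios B_i/A_i with weights |A_i|.
    """
    a = np.ravel(A).astype(float)
    b = np.ravel(B).astype(float)
    mask = a != 0                    # pixels with A_i = 0 only add a constant
    w = np.abs(a[mask])
    r = b[mask] / a[mask]
    order = np.argsort(r)
    r, w = r[order], w[order]
    cdf = np.cumsum(w)
    # first ratio at which the cumulative weight reaches half the total
    return r[np.searchsorted(cdf, 0.5 * cdf[-1])]
```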

Third answer

To find the best fit for $B=xA$, you might formulate one of three seemingly equivalent cost functions: $$\eqalign{ &(1)\;\quad &\min_x \;\|xA-B\|_F^2 \\ &(2)\;\quad &\min_x \;\|A-\tfrac{1}{x}B\|_F^2 \\ &(3)\;\quad &\min_x \;\|{\sqrt x}A-\tfrac{1}{{\sqrt x}}B\|_F^2 \\ }$$ each of which can be solved using the method that Rodrigo outlines for problem $(1)$: $$\eqalign{ &x_1 = \frac{\langle A,B\rangle}{\langle A,A\rangle},\quad &x_2 = \frac{\langle B,B\rangle}{\langle A,B\rangle},\quad &x_3 = {\rm sign}\Big(\langle A,B\rangle\Big){\sqrt\frac{\langle B,B\rangle}{\langle A,A\rangle}} }$$ The advantage of the third formula is that it still provides a usable value when the matrices are nearly perpendicular, i.e. $\langle A,B\rangle\approx 0$ (where $x_2$ blows up). When the matrices are parallel, all three formulas are identical. In any other situation, it's not obvious which formula is "better".
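A numpy sketch of the three estimates (the function name is mine). Note that, up to sign, $x_3$ is the geometric mean of $x_1$ and $x_2$, since $x_1 x_2 = \langle B,B\rangle/\langle A,A\rangle$:

```python
import numpy as np

def scale_estimates(A, B):
    """The three scale estimates, via Frobenius inner products."""
    ab = np.sum(A * B)                   # <A,B>
    aa = np.sum(A * A)                   # <A,A>
    bb = np.sum(B * B)                   # <B,B>
    x1 = ab / aa                         # argmin ||x*A - B||_F^2
    x2 = bb / ab                         # argmin ||A - B/x||_F^2
    x3 = np.sign(ab) * np.sqrt(bb / aa)  # argmin ||sqrt(x)*A - B/sqrt(x)||_F^2
    return x1, x2, x3
```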

You could also multiply each side of the fit equation by a third matrix $M$, and solve for $x$. $$\langle B,M\rangle=x\langle A,M\rangle \quad \implies x=\frac{\langle B,M\rangle}{\langle A,M\rangle}$$ Note that setting $M\!=\!A$ yields $x_1$, while $M\!=\!B$ yields $x_2.\,$

Another sensible choice might be a constant matrix whose elements all equal the mean grayscale value; since the constant cancels, this gives $x=\frac{\sum B_i}{\sum A_i}$, which is equivalent to swapping the Frobenius norm for the Manhattan norm (for nonnegative images, $x=\|B\|_1/\|A\|_1$).

Or set $M=\tfrac{1}{2}(A+B)$, which yields the mediant of $(x_1,x_2)$: $$x=\frac{\langle A,B\rangle+\langle B,B\rangle}{\langle A,A\rangle+\langle A,B\rangle}$$
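The whole family of $M$-based estimates fits in one line of numpy; a sketch (function name mine):

```python
import numpy as np

def scale_via_M(A, B, M):
    """Take the Frobenius inner product of both sides of B = x*A
    with a reference matrix M and solve for the scalar x."""
    return np.sum(B * M) / np.sum(A * M)
```

Choosing `M = A`, `M = B`, a constant matrix, or `M = 0.5 * (A + B)` reproduces, respectively, $x_1$, $x_2$, the ratio of means, and the mediant.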