How do I solve the Euler–Lagrange equation for image de-blurring?


This is one of the two Euler–Lagrange equations for de-blurring that I need to solve: $$ u_r(-x,-y)\star\big(u_r(x,y)\star k-u_0\big) - \lambda_1\nabla \cdot \bigg(\frac{\nabla k}{|\nabla k|}\bigg) = 0 $$

Here $u_r$ is the reference image, which is known, $u_0$ is the original blurred image, and $k$ is the unknown blurring kernel. $\lambda_1$ is a constant that is also known, or rather empirically determined. I need to solve this equation for $k$.

I tried solving it in the Fourier domain, but the results were disappointing: the output image did not look much different from the input image, apart from pixel-level differences of 2 or 3 gray-scale levels.
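For reference, my Fourier-domain attempt was essentially the following. Dropping the TV term, correlation with $u_r(-x,-y)$ becomes multiplication by $\overline{U_r}$ in Fourier space, which gives a closed-form estimate for $K$; the `eps` damping is my own addition to avoid dividing by near-zero spectral values, not part of the model:

```python
import numpy as np

def estimate_kernel_fourier(u_r, u_0, eps=1e-3):
    """Closed-form kernel estimate, ignoring the TV regularizer.

    Dropping the TV term, the Euler-Lagrange equation in Fourier space is
        conj(U_r) * (U_r * K - U_0) = 0   =>   K = conj(U_r) * U_0 / |U_r|^2.
    eps is a small damping constant (my addition) so the division is
    well-defined where |U_r| is tiny.
    """
    U_r = np.fft.fft2(u_r)
    U_0 = np.fft.fft2(u_0)
    K = np.conj(U_r) * U_0 / (np.abs(U_r) ** 2 + eps)
    # the result uses the same circular-convolution alignment as the inputs
    return np.real(np.fft.ifft2(K))
```

With a large `eps` this behaves like Wiener-style damping, which may explain why my deblurred output barely differed from the input.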

All the papers I found say they solved the equation numerically, using lagged diffusivity to linearize it and then the conjugate gradient or a fixed-point method. But I can't get my head around this, because the kernel $k$ being convolved with the image $u_r$ is the unknown. How do I implement that in code? I don't see how, when the unknown $k$ sits inside a convolution.
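My best guess at what the papers mean, sketched in NumPy (the defaults `lam`, `beta`, `outer`, `inner`, the delta initialization, and the periodic-boundary discretization are all my assumptions, not taken from any paper): the map $k \mapsto u_r \star k$ is linear in $k$, so even though $k$ is unknown, the operator can be *applied* to any trial $k$, and that is all conjugate gradient needs. Lagged diffusivity freezes $1/|\nabla k|$ at the previous iterate so the TV term becomes linear too. Is this the right idea?

```python
import numpy as np

def conv2(a, b):
    # circular 2-D convolution via FFT (periodic boundaries assumed)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr2(a, b):
    # correlation with a(-x,-y): multiplication by conj(A) in Fourier
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

def grad(k):
    # forward differences with periodic wrap-around
    return np.roll(k, -1, axis=1) - k, np.roll(k, -1, axis=0) - k

def div(px, py):
    # negative adjoint of grad, so <grad k, p> = <k, -div p>
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def solve_kernel(u_r, u_0, lam=1e-3, outer=5, inner=50, beta=1e-3):
    """Lagged-diffusivity fixed point with matrix-free conjugate gradient.

    Each outer pass freezes w = 1/|grad k| at the previous iterate, so the
    Euler-Lagrange equation becomes linear in k:
        corr2(u_r, conv2(u_r, k)) - lam * div(w * grad k) = corr2(u_r, u_0)
    The left-hand side is a symmetric positive-definite operator A that is
    only ever *applied*, never formed as a matrix -- all that CG requires.
    beta regularizes 1/|grad k| away from division by zero.
    """
    k = np.zeros_like(u_r)
    k[0, 0] = 1.0                                   # start from a delta kernel
    b = corr2(u_r, u_0)
    for _ in range(outer):
        kx, ky = grad(k)
        w = 1.0 / np.sqrt(kx**2 + ky**2 + beta**2)  # lagged diffusivity weights

        def A(v):
            vx, vy = grad(v)
            return corr2(u_r, conv2(u_r, v)) - lam * div(w * vx, w * vy)

        # plain conjugate gradient on A k = b
        r = b - A(k)
        p = r.copy()
        rs = np.sum(r * r)
        if np.sqrt(rs) < 1e-10:
            break
        for _ in range(inner):
            Ap = A(p)
            alpha = rs / np.sum(p * Ap)
            k = k + alpha * p
            r = r - alpha * Ap
            rs_new = np.sum(r * r)
            if np.sqrt(rs_new) < 1e-10:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
    return k
```

If I understand correctly, this is why the unknown $k$ inside a convolution is not a problem: CG never inverts the convolution, it only evaluates `A(v)` on candidate kernels and updates them.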