I am new to image processing and optimization. Let y be a noisy image described by the relationship y = x + n, where x is a noise-free image and n is the noise. The goal is to recover x from y.
min_x ||y - x||_2^2 + lambda * ||x||_2^2
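For reference, this objective has a closed-form minimizer (my own derivation, not from the original post), obtained by setting the gradient to zero:

```latex
\nabla f(x) = 2(x - y) + 2\lambda x = 0
\quad\Longrightarrow\quad
x^{*} = \frac{y}{1 + \lambda}
```

Note that with this regularizer the minimizer is just a uniform rescaling of the noisy image (with lambda = 5, every pixel is divided by 6), which is useful as a sanity check for the gradient iteration below.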
The optimization problem should be solved with the gradient descent method. Here is the code that I wrote, but I cannot get the correct answer. Any help would be greatly appreciated.
import numpy as np
from skimage import img_as_float
from skimage.metrics import peak_signal_noise_ratio
# define a small test image
img = img_as_float(np.random.random((2, 2)))
# define Gaussian noise
noise = np.random.normal(0, 1, size=(2, 2)) * 0.2
# add noise to the image
img_noise = img + noise
p = 0.0001 #gradient step
N = 600 #number of iterations
Lambda = 5
Cur_u = img_noise
weight_history = []
print('img=',img)
for k in range(N):
    Prev_u = Cur_u
    Cur_u = Prev_u - p * (2 * (Prev_u - img_noise) + 2 * Lambda * Prev_u)
    weight_history.append(Cur_u)
print(weight_history)
psnr_noisy = peak_signal_noise_ratio(img, img_noise)
psnr_denoised = peak_signal_noise_ratio(img, weight_history[-1])
print('PSNR noisy =', psnr_noisy, ', PSNR denoised =', psnr_denoised)
I want the denoised image to become closer to the original image, but I do not see any denoising: the result looks the same as the noisy input, and the PSNR confirms that nothing changes. I have tried changing the learning rate and the number of iterations, but I still do not get a correct result.
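One likely culprit is convergence: with p = 0.0001 the update shrinks the error by a factor |1 - 2p(1 + lambda)| = 0.9988 per step, so 600 iterations only cover about half the distance to the minimizer. Here is a minimal sketch (the step size and iteration count are my choices, not from the original code) showing that the same gradient iteration, run long enough with a stable step p < 1/(1 + lambda), does converge to the closed-form solution y / (1 + lambda):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random((2, 2))   # stand-in for the noisy image img_noise
lam = 5.0                # regularization weight (Lambda in the question)
p = 0.05                 # step size; must satisfy p < 1/(1 + lam) for stability
x = y.copy()             # initialize with the noisy image, as in the question

for _ in range(2000):
    # gradient of ||y - x||^2 + lam*||x||^2 is 2*(x - y) + 2*lam*x
    x = x - p * (2 * (x - y) + 2 * lam * x)

closed_form = y / (1 + lam)
print(np.max(np.abs(x - closed_form)))  # essentially zero: iteration converged
```

This also shows why no visual denoising is possible with this particular regularizer: the limit is a uniform rescaling of every pixel of y, so the noise pattern is shrunk but never smoothed away.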