The question, briefly:
I am interested in using "Total Variation Denoising" to recover a 2-dimensional signal (in particular, an image). In the existing literature, many authors use the $L_1$ and $L_2$ norms in their denoising algorithms. For instance, on page 6 of this paper, the variational method is given as:
$$ \min_{u} \lambda F(u) + \frac{1}{2} \int_{\Omega} |u(x) - g(x)|^2 \,dx $$ where $F(u) = \frac{1}{2}\int_{\Omega}\|\nabla u\|^{2}\,dx$. Here, $g:\Omega \to \Bbb R$ is the given signal, and the function $u:\Omega \to \Bbb R$ attained through minimization is meant to be the "denoised" version of this signal.
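To make the two terms concrete for myself, here is a minimal numpy sketch (my own discretization on a tiny 1-D signal, not taken from the paper) that evaluates the smoothness term $F(u)$ and the data-fidelity term separately:

```python
import numpy as np

# My own toy discretization of the functional (not from the paper):
# g is the given noisy signal, u is a candidate denoised signal.
g = np.array([0.0, 0.1, 0.9, 1.0])   # noisy step signal
u = np.array([0.0, 0.0, 1.0, 1.0])   # candidate denoised signal
lam = 0.5                            # regularization weight lambda

grad_u = np.diff(u)                  # forward-difference gradient of u
F = 0.5 * np.sum(grad_u ** 2)        # smoothness term  (1/2) sum |grad u|^2
data = 0.5 * np.sum((u - g) ** 2)    # fidelity term    (1/2) sum |u - g|^2

objective = lam * F + data
print(F, data, objective)            # 0.5 0.01 0.26
```

So the objective trades off how much $u$ wiggles ($F$) against how far $u$ is from the data $g$ (the fidelity term), weighted by $\lambda$.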
My questions are as follows:
What do I get from the $L_2$ norm of the gradient ($F$ in this example)? What do I get back? Do I get back a gradient? (I think so.) But how has it been manipulated? What is this sum of squared gradients?
Some papers use the $L_1$ norm instead of the $L_2$ norm. Why choose one over the other? For instance, in this paper, the authors simply state that the correct norm for this is $L_1$, not $L_2$. I have heard that the $L_2$ norm "deletes sharp edges" while the $L_1$ norm preserves them, but I don't really understand how.
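To see the "edge deletion" numerically, here is a small toy comparison I made (my own example, not from either paper): a sharp step and a blurred ramp with the same total rise, under the $L_1$ and squared $L_2$ gradient penalties:

```python
import numpy as np

# My own toy example: a sharp edge vs. a smeared-out edge with the
# same total rise, compared under the two gradient penalties.
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # sharp edge
ramp = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # blurred edge

for name, u in [("step", step), ("ramp", ramp)]:
    d = np.diff(u)                 # discrete gradient
    l1 = np.sum(np.abs(d))         # L1 / TV penalty
    l2 = np.sum(d ** 2)            # squared L2 penalty
    print(name, l1, l2)

# The L1 penalty is 1.0 for both signals (it only sees the total rise),
# but the squared L2 penalty is 1.0 for the step and only 0.2 for the
# ramp: an L2 regularizer can lower its cost by smearing the edge out,
# while the L1 penalty is indifferent and leaves the sharp edge alone.
```

If this toy example is right, it would explain the behaviour I keep reading about: squaring punishes one large jump much more than many small ones.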
I have used the search function, but either did not find an answer or found one and did not understand it.
I have to use the $L_1$ and $L_2$ norms on a 2-dimensional signal (an image). My topic is total variation denoising. The total variation uses the $L_1$ norm. I have looked up what the TV is, and as far as I can tell the TV essentially already is an $L_1$ norm, because it takes the extreme points of a function and sums up the differences between them. So it tells me the oscillation of the function, that is, how much the function moves. (This is what I have found so far.)
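Assuming the standard discrete definition (sum of absolute differences between neighbours), here is a quick sketch of what the TV measures as "oscillation":

```python
import numpy as np

# Discrete total variation of a 1-D signal, assuming the standard
# definition: the sum of absolute differences between neighbours.
def tv(u):
    return np.sum(np.abs(np.diff(u)))

flat     = np.array([0.0, 0.0, 0.0, 0.0])   # no movement at all
monotone = np.array([0.0, 1.0, 2.0, 3.0])   # moves once, steadily
wiggly   = np.array([0.0, 3.0, 0.0, 3.0])   # oscillates up and down

print(tv(flat), tv(monotone), tv(wiggly))   # 0.0 3.0 9.0
```

A flat signal has TV zero, a monotone signal's TV is just its total rise, and an oscillating signal (like noise) has a much larger TV. That matches the intuition that minimizing TV suppresses noise.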
Now I should understand what the norm of a gradient is. I know the formulas and have looked them up! What I really want to understand is what exactly I am calculating with it.
So I have two gradients, in $x$ and $y$, and now I take the $L_2$ norm of them. What do I get? What is it? Why do I do it?
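Here is what I currently think is happening, as a small numpy sketch (my own interpretation, not from the paper): at each pixel the gradient is a vector $(\partial u/\partial x, \partial u/\partial y)$, the $L_2$ norm of that vector is the gradient magnitude at that pixel, and the regularizer then sums the squared magnitudes over the whole image:

```python
import numpy as np

# My interpretation: the L2 norm is taken per pixel over the vector
# (du/dx, du/dy), giving the gradient magnitude; the regularizer then
# sums the squared magnitudes over the image.
u = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])      # a vertical edge

gy, gx = np.gradient(u)              # per-pixel gradients in y and x
mag = np.sqrt(gx ** 2 + gy ** 2)     # pointwise L2 norm of the gradient
F = 0.5 * np.sum(mag ** 2)           # (1/2) sum |grad u|^2 over pixels
print(F)                             # 1.875
```

So the result is not a new gradient but a single number that is large where the image changes a lot, i.e. a measure of total (squared) change.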
The gradient tells me how the image changes at the point where I have the gradient. When I now take an $L_1$ norm over several points/gradients, I get a quantity that tells me how much the image changes (positive and negative changes just get summed up), so I know how big the changes are at each point.
I have this paper.
On page 7 there are two examples in Figure 2: if I only go over the points and their colour values, I get no denoising, because there is no connection from my data to the picture. So it is the right thing to use the gradient, because it makes a connection to the picture and the information in it.
When I look at the example I can clearly see two things: first, the colours alone are not enough (no connection); second, the gradient with the $L_2$ norm makes everything smooth.
Now I have to understand why this happens with the $L_2$ norm, and for that I have to understand what the $L_2$ norm of a gradient is. What happens there? What do I calculate?
Best regards, Christoph
Edit: I was asked to make my question clearer. I know the $L_1$ and $L_2$ norms for points in a space. What I want to know is: what are the $L_1$ and $L_2$ norms for gradients? What comes out? My thought on $L_1$ is that I sum up all the gradients and get a new gradient which points in the main direction of all the gradients. (I imagine this is important for image denoising, because the noise will be the only gradient going in the wrong direction, and the sum gives me a good gradient that still points in the right direction.) For $L_2$ I have no idea what happens there. What do I do to the gradients? How do I sum them up into a new one?
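To check my own mental model (this is just my guess, not taken from any paper), here is the difference between summing the gradient vectors themselves, which is what I imagined, and summing their norms, which is what an $L_1$-type penalty does:

```python
import numpy as np

# My guess vs. the L1 penalty: do we sum the gradient VECTORS (giving
# one "main direction"), or sum their LENGTHS? Two opposite gradients
# cancel in a vector sum but both count in a sum of norms.
grads = np.array([[ 1.0, 0.0],
                  [-1.0, 0.0]])      # two per-pixel gradient vectors

vector_sum = grads.sum(axis=0)       # my "main direction" idea
norm_sum = np.abs(grads).sum()       # L1-type sum of component magnitudes

print(vector_sum)    # [0. 0.]  -> the two gradients cancel completely
print(norm_sum)      # 2.0      -> the oscillation is still counted
```

If this is right, the norm in the regularizer does not produce a new gradient at all; it produces one nonnegative number per pixel (how strong the change is), and those numbers are then summed or integrated.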
I should also add examples: in Fig. 1 of this paper we can see the noisy signal. This signal should become a better one, and for this the authors use the $L_1$ norm. The gradients they use are of the noisy signal. I expected that they would sum up the gradients of the signal to create a main direction and use it for denoising. They just state that the correct norm for this is $L_1$, not $L_2$. I have now seen many examples in papers, such as page 7, Figure 2 of this paper: $L_2$ deletes sharp edges, $L_1$ preserves them. To understand why this happens, I am trying to find out what the $L_2$ norm does to my gradients, because losing sharp edges must have something to do with how this is calculated.
On page 6 of the paper, the variational method is given as:
$$\min_{u} \lambda F(u) + \frac{1}{2} \int_{\Omega} |u(x) - g(x)|^2 \,dx$$
The point now is that they use the $L_2$ norm of the gradient for the regularization term, stating it under Figure 2 as $F(u)=\frac{1}{2}\int_{\Omega}\|\nabla u\|^{2}\,dx$. I would like to understand: what do I get out of this? What do I get back? Do I get back a gradient? (I think so.) But how has it been manipulated? What is this sum of squared gradients?