I'm working on the following image processing problem.
Let $I$ be an $m\times n$ real-valued matrix (an image), let $\overset{\rightharpoonup}{I}$ be the vector whose components are the entries $I_{i,j}$, and let $I_0$ be the original noisy image. Define $$E(\overset{\rightharpoonup}{I}) = \sum_{i=0}^m \sum_{j=0}^n \sqrt{(I_{i+1,j} - I_{i,j})^2 + (I_{i,j+1} - I_{i,j})^2 + \epsilon^2} + \lambda \sum_{i=0}^m \sum_{j=0}^n (I_{i,j} - (I_0)_{i,j})^2 $$ Find $\partial E / \partial I_{i,j}$ and find the gradient descent flow $(I_t)_{i,j}$.
I'm having trouble parsing this. I'm not looking for the answer, but could someone help me understand what the question is asking? This is what I'm thinking so far:
This energy looks like a discretization of the following functional:
$$ \iint_R |\nabla I|\, \mathrm{d}x\,\mathrm{d}y + \lambda \iint_R (I - I_0)^2\, \mathrm{d}x\,\mathrm{d}y$$
But I'm thrown off by the notation $\partial E / \partial I_{i,j}$. Should I literally take the partial derivative of the discrete sum with respect to $I_{i,j}$, treating each pixel value as an independent variable (keeping in mind that $I_{i,j}$ appears in several summands)? I really appreciate your input.
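One way I thought of sanity-checking that reading: treat $E$ as an ordinary function of the $mn$ pixel values and compute $\partial E / \partial I_{i,j}$ numerically by perturbing one pixel at a time. Here is a minimal NumPy sketch (my own construction, not from the problem; it assumes forward differences with the out-of-range differences at the last row/column set to zero, since the stated index ranges run past the array):

```python
import numpy as np

def energy(I, I0, lam=0.1, eps=0.1):
    # Smoothed-TV term with forward differences; boundary differences are zero.
    dx = np.zeros_like(I)
    dx[:-1, :] = I[1:, :] - I[:-1, :]          # I_{i+1,j} - I_{i,j}
    dy = np.zeros_like(I)
    dy[:, :-1] = I[:, 1:] - I[:, :-1]          # I_{i,j+1} - I_{i,j}
    tv = np.sum(np.sqrt(dx**2 + dy**2 + eps**2))
    fidelity = lam * np.sum((I - I0)**2)       # lambda * sum (I - I0)^2
    return tv + fidelity

def numeric_grad(I, I0, h=1e-6, **kw):
    # dE/dI_{i,j} by central differences, one pixel at a time.
    g = np.zeros_like(I)
    for idx in np.ndindex(I.shape):
        Ip, Im = I.copy(), I.copy()
        Ip[idx] += h
        Im[idx] -= h
        g[idx] = (energy(Ip, I0, **kw) - energy(Im, I0, **kw)) / (2 * h)
    return g

# Explicit gradient descent: I^{k+1} = I^k - tau * dE/dI,
# i.e. a time-discretized version of the flow (I_t)_{i,j} = -dE/dI_{i,j}.
rng = np.random.default_rng(0)
I0 = rng.random((5, 5))
I = I0.copy()
tau = 0.01
for _ in range(30):
    I = I - tau * numeric_grad(I, I0)
```

If the analytic formula I derive matches `numeric_grad` on random images, I'll know I parsed the sums correctly.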