This is a digital image processing problem. How can I solve the linear system
$$(\beta+\mathrm{div}^T\mathrm{div})g=\mathrm{div}^Th+b$$
where $g \in \mathbb{R}^n \times \mathbb{R}^n$ is unknown, $h, b \in \mathbb{R}^n$ and $\beta \in \mathbb{R}$ are known, and the operator $\mathrm{div} = -\nabla^T$. I want to obtain $g$ from this equation.
How should I understand the divergence of the image gradient (like $g \in \mathbb{R}^n \times \mathbb{R}^n$)? Is it similar to classical calculus?
$g$ is a vector field, a function $\mathbb{R}^n \to \mathbb{R}^n$. Thus, at each point $x \in \mathbb{R}^n$, $g(x)$ is an $n$-vector. The divergence is then easily computed simply by applying the given operator, and it is the direct analogue of the divergence from classical calculus.
However, since we're talking about digital image processing, we are dealing with a discrete vector field. That is, $x \in \mathbb{Z}^n$. We need to use a discrete approximation to the derivative to compute the divergence. A finite-difference approximation works well when there is no noise. For noisy data, it is best to use a regularized derivative such as the Gaussian derivative filter (convolving with the derivative of a Gaussian).
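For a 2D image, one common discretization pairs forward differences for the gradient with backward differences for the divergence, so that $\mathrm{div} = -\nabla^T$ holds exactly in the discrete sense. Here is a sketch in NumPy; the boundary handling shown is one conventional choice (replicate/Neumann-style), not the only one, and the function names are mine:

```python
import numpy as np

def gradient(u):
    # Forward differences: maps a scalar image to a 2-component vector field.
    # Last row/column of each component is zero (Neumann-style boundary).
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def divergence(px, py):
    # Backward differences, chosen so that div is exactly the negative
    # adjoint of the forward-difference gradient above: <grad u, p> = -<u, div p>.
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:-1, :] = px[1:-1, :] - px[:-2, :]
    d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] += -py[:, -2]
    return d
```

You can check the adjoint identity numerically: for any image `u` and field `(px, py)`, `np.sum(gx*px + gy*py)` equals `-np.sum(u * divergence(px, py))` up to rounding.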
To obtain $g$, one would use an iterative solver. Note that $\beta I + \mathrm{div}^T\mathrm{div}$ is symmetric and, for $\beta > 0$, positive definite, so the conjugate gradient method is a natural choice; plain gradient descent also works, just more slowly.
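Since $\mathrm{div} = -\nabla^T$, the operator applied to $g$ is $\beta g - \nabla(\mathrm{div}\, g)$ and the right-hand side is $-\nabla h + b$, which can be evaluated matrix-free. Below is a minimal conjugate-gradient sketch using a standard forward/backward-difference discretization; it assumes $b$ is, like $g$, a two-component field (otherwise the sum on the right-hand side is dimensionally inconsistent), and the names `solve_g`, `grad`, `div` are mine, not from the question:

```python
import numpy as np

def grad(u):
    # Forward differences (Neumann-style boundary).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Backward differences, the negative adjoint of grad: div = -grad^T.
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] += -py[:, -2]
    return d

def solve_g(h, bx, by, beta, n_iter=500, tol=1e-10):
    # Conjugate gradient for (beta*I + div^T div) g = div^T h + b.
    # Since div^T = -grad, the operator is A(g) = beta*g - grad(div g),
    # symmetric positive definite for beta > 0.
    def A(px, py):
        ax, ay = grad(div(px, py))
        return beta * px - ax, beta * py - ay
    hx, hy = grad(h)                      # div^T h = -grad h
    rx, ry = bx - hx, by - hy             # initial residual (g = 0)
    gx = np.zeros_like(h); gy = np.zeros_like(h)
    px, py = rx.copy(), ry.copy()
    rs = np.sum(rx * rx + ry * ry)
    for _ in range(n_iter):
        apx, apy = A(px, py)
        alpha = rs / np.sum(px * apx + py * apy)
        gx += alpha * px; gy += alpha * py
        rx -= alpha * apx; ry -= alpha * apy
        rs_new = np.sum(rx * rx + ry * ry)
        if rs_new < tol:
            break
        px = rx + (rs_new / rs) * px
        py = ry + (rs_new / rs) * py
        rs = rs_new
    return gx, gy
```

With this discretization the operator is applied in $O(n)$ work per iteration, so no matrix is ever formed; `scipy.sparse.linalg.cg` with a `LinearOperator` would be an equivalent off-the-shelf route.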