For a vector field $\mathbf F:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ with components $F_x$ and $F_y$, the divergence is given by: $$\operatorname{div}\mathbf F = \frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}$$
But divergence can also be seen as a measure of the change in density of a fluid flowing according to the given vector field.
Now, if we consider a two-dimensional vector field $\overrightarrow F(x,y) = \begin{bmatrix} \frac{-x}{x^2+y^2}\\ \frac{-y}{x^2+y^2}\end{bmatrix}$, and visualize it by plotting vectors on a regular grid, we get something like this:
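For reference, the grid of vectors can be generated with NumPy (a minimal sketch; the grid range and spacing are arbitrary choices, and matplotlib's `plt.quiver(X, Y, U, V)` would render the arrows):

```python
import numpy as np

# Sample F(x, y) = (-x, -y) / (x^2 + y^2) on a regular grid,
# masking the singular point at the origin.
xs = np.linspace(-2, 2, 9)
X, Y = np.meshgrid(xs, xs)
R2 = X**2 + Y**2
R2[R2 == 0] = np.nan          # F is undefined at the origin
U, V = -X / R2, -Y / R2

# Every sampled vector points toward the origin: its dot
# product with the position vector (X, Y) is negative.
dots = U * X + V * Y
print(np.nanmax(dots) < 0)    # True
```

This confirms what the plot shows: at every sampled point the arrow points back toward the origin.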
From the plot above, the vectors are clearly converging toward the origin. But when I calculate the divergence of the vector field $\overrightarrow F$, I get:
$$\operatorname{div}\mathbf F = \frac{\partial }{\partial x}\biggl(\frac{-x}{x^2+y^2}\biggr)+\frac{\partial }{\partial y}\biggl(\frac{-y}{x^2+y^2}\biggr)=\frac{x^2-y^2}{(x^2+y^2)^2}+\frac{y^2-x^2}{(x^2+y^2)^2}=0$$
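The hand calculation can be double-checked symbolically (a sketch using SymPy, assuming it is available):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
Fx = -x / (x**2 + y**2)
Fy = -y / (x**2 + y**2)

# div F = dFx/dx + dFy/dy, simplified
div = sp.simplify(sp.diff(Fx, x) + sp.diff(Fy, y))
print(div)  # 0
```

So the symbolic computation agrees: the divergence vanishes wherever the field is defined.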
So the divergence is zero at every point of the plane (excluding the origin, where $\overrightarrow F$ is undefined), not just at the origin. But if I construct an infinitesimally small circle at the origin (or anywhere on the grid, for that matter), every vector crossing the circle points inward, so intuitively there should be some convergence. So even the flow-curves approach (like the one used here) doesn't seem to work.
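The small-circle intuition can be made quantitative by numerically approximating the outward flux $\oint \mathbf F\cdot\mathbf n\,ds$ through a circle of radius $r$ around the origin (a sketch, assuming NumPy; the radii and sample count are arbitrary):

```python
import numpy as np

def flux(r, n=100_000):
    """Approximate the outward flux of F through the circle of
    radius r centered at the origin, sampled at n points."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    px, py = r * np.cos(theta), r * np.sin(theta)
    fx = -px / (px**2 + py**2)
    fy = -py / (px**2 + py**2)
    ds = 2 * np.pi * r / n                     # arc-length element
    # Outward unit normal on the circle is (cos(theta), sin(theta)).
    return np.sum(fx * np.cos(theta) + fy * np.sin(theta)) * ds

print(flux(0.01), flux(1.0))  # both approximately -2*pi
```

The flux comes out to about $-2\pi$ regardless of the radius, i.e. there is a fixed net inward flow through every circle around the origin, even though the pointwise divergence is zero away from it.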
So why is it that we can "see" the field converging, yet the calculation gives zero divergence? Why is there an apparent contradiction between the two methods? Am I missing something very obvious here?
Any help will be very much appreciated.
