I am struggling to understand some steps of the uniqueness proof of classical solutions of the incompressible Euler equations $$ \partial_t v(x,t)+\text{div }(v(x,t)\otimes v(x,t))+\nabla p(x,t)=0,\qquad \text{div }v(x,t)=0. $$ Here, $v(x,t)\otimes v(x,t)$ is the matrix with the entries $v_iv_j$, and its divergence is taken row-wise.
Assume that $v$ and $u$ are two classical solutions with pressure fields $p$ and $q$, respectively, and with the same initial data $v_0$. Then $v\equiv u$.
Here is the proof I have found in this document on page 3 (which is page 13 of the pdf document):
(Here, $\nabla_{sym}v=\frac{1}{2}(\nabla v+\nabla^T v)$ is the symmetric gradient and $d^-(v)$ is the negative part of its smallest eigenvalue).
I have mainly three questions.
(1) Where does the second equality come from?
Using the equations, we get $$ 2\int_{\mathbb{R}^d}(v-u)\cdot\partial_t(v-u)\, dx=-2\int_{\mathbb{R}^d}\underbrace{(v-u)\cdot(\text{div }(v(x,t)\otimes v(x,t))-\text{div }(u(x,t)\otimes u(x,t)))}_{=:A}\, dx -2\int_{\mathbb{R}^d}(v-u)\cdot\nabla(p-q)\, dx $$
But I do not see that $$ A=(v-u)\cdot(v\cdot\nabla v-u\cdot\nabla u). $$
(2) Why does the second integral in the second row disappear, i.e. why do we have $$ 2\int_{\mathbb{R}^d}(v-u)\cdot\nabla(p-q)\, dx=0? $$
(3) Why is $$ v\cdot\nabla v-u\cdot\nabla u=\nabla_{sym}v(v-u) $$ and what is the idea/theory behind this symmetric gradient and this $d^-(v)$ (i.e. why do we need the symmetric gradient and $d^-(v)$, where does this come from, and what does the norm $\lVert d^{-}(v)\rVert_{L_x^{\infty}}$ mean)?
The Grönwall step afterwards is clear to me again.
It would be great if you could help me a bit!

(1) Let's compute the divergence of $v\otimes v$. Using the product rule, its $i$th component becomes $$\sum_j \frac{\partial}{\partial x_j}(v_iv_j) = \sum_j v_j \frac{\partial v_i}{\partial x_j} + \sum_j v_i \frac{\partial v_j}{\partial x_j} $$ The second term on the right is $v_i \operatorname{div}v = 0$ since $v$ is divergence-free. The first term is $v\cdot \nabla v_i$. This is the $i$th component, so the whole $\operatorname{div}(v\otimes v)$ can be written as $v\cdot \nabla v$.
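Not part of the proof, but here is a quick numerical sanity check of the identity $\operatorname{div}(v\otimes v)=(v\cdot\nabla)v$, using a divergence-free field of my own choosing, $v(x,y)=(-y,x)$, and central finite differences:

```python
# Sanity check (my own illustration, not from the linked document):
# for the divergence-free field v(x, y) = (-y, x), compare the row-wise
# divergence of v ⊗ v with the convective term (v·∇)v at a sample point.

def v(x, y):
    return (-y, x)  # div v = ∂x(-y) + ∂y(x) = 0

def central_diff(f, x, y, axis, h=1e-5):
    # central finite difference of f in the given coordinate direction
    if axis == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def div_tensor(x, y):
    # i-th component: sum_j ∂/∂x_j (v_i v_j)
    return [sum(central_diff(lambda a, b, i=i, j=j: v(a, b)[i] * v(a, b)[j],
                             x, y, axis=j)
                for j in range(2))
            for i in range(2)]

def convective(x, y):
    # i-th component: sum_j v_j ∂v_i/∂x_j
    vx = v(x, y)
    return [sum(vx[j] * central_diff(lambda a, b, i=i: v(a, b)[i], x, y, axis=j)
                for j in range(2))
            for i in range(2)]

x0, y0 = 0.7, -1.3
lhs = div_tensor(x0, y0)
rhs = convective(x0, y0)
print(lhs, rhs)  # both ≈ [-0.7, 1.3], i.e. (-x0, -y0)
```

For this field the analytic answer is $(-x,-y)$, so both sides agree, as the identity predicts.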
(2) Integrate by parts, moving $\nabla$ onto the other factor, where it becomes a divergence; the divergence of $v-u$ is zero: $$\int_{\mathbb{R}^d}(v-u)\cdot\nabla(p-q)\, dx = - \int_{\mathbb{R}^d} \operatorname{div}(v-u) \ (p-q)\, dx =0$$ This is used so often that it is worth remembering: the integral of a divergence-free field against a gradient field is zero (under suitable decay or boundary conditions, so that no boundary terms appear).
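Again purely as an illustration (my own, not from the document), one can check this orthogonality numerically on the periodic box $[0,2\pi)^2$, where no boundary terms arise; I pick a divergence-free field $w=(\partial_y\psi,-\partial_x\psi)$ with $\psi=\sin x\sin y$ and an arbitrary periodic pressure $p=\cos(x+2y)$:

```python
# Sanity check (my illustration): on the torus, the integral of a
# divergence-free field against a gradient field vanishes.
import math

N = 64
h = 2 * math.pi / N

def w(x, y):
    # w = (∂ψ/∂y, -∂ψ/∂x) for ψ = sin(x) sin(y), hence div w = 0
    return (math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y))

def grad_p(x, y):
    # gradient of p(x, y) = cos(x + 2y)
    s = math.sin(x + 2 * y)
    return (-s, -2 * s)

# rectangle rule over the periodic grid (spectrally accurate here)
total = 0.0
for i in range(N):
    for j in range(N):
        x, y = i * h, j * h
        wx, wy = w(x, y)
        px, py = grad_p(x, y)
        total += (wx * px + wy * py) * h * h

print(total)  # ≈ 0
```

The sum vanishes to rounding error, matching the integration-by-parts argument.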
(3) By the fact from (2), we have $\int u\cdot\nabla\big(|v-u|^2\big)\,dx = 0$, since $\nabla(|v-u|^2)$ is a gradient field and $u$ is divergence-free. Expanding the gradient with the chain rule, $\nabla(|v-u|^2)=2(\nabla v-\nabla u)^T(v-u)$, so $$\int (v-u)\cdot\big((u\cdot\nabla)(v-u)\big)\,dx = 0.$$ Now split $$v\cdot\nabla v-u\cdot\nabla u = \big((v-u)\cdot\nabla\big)v + (u\cdot\nabla)(v-u);$$ since the second summand integrates to zero against $v-u$, we get $$ \int (v-u)\cdot (v\cdot \nabla v - u \cdot \nabla u)\,dx = \int (v-u)\cdot \big(((v-u)\cdot\nabla) v\big)\,dx. $$ So the claimed identity holds under the integral, not pointwise. The latter integrand is $(v-u)\cdot \nabla v\, (v-u)$, which is the same as $(v-u)\cdot \nabla_{sym} v\, (v-u)$ because the antisymmetric part of the gradient contributes nothing: every matrix can be written as symmetric + antisymmetric, and $w\cdot Aw=0$ for every vector $w$ and every antisymmetric matrix $A$.
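The fact that the quadratic form only sees the symmetric part is easy to verify numerically; here is a small check with a matrix and vector I made up:

```python
# Check (my illustration): for any matrix M, w·Mw depends only on the
# symmetric part of M, because w·Aw = 0 for antisymmetric A.
M = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 10.0]]
w = [0.3, -1.2, 2.5]

def quad_form(A, w):
    # w·Aw = sum_{i,j} w_i A_ij w_j
    return sum(w[i] * A[i][j] * w[j] for i in range(3) for j in range(3))

M_sym  = [[(M[i][j] + M[j][i]) / 2 for j in range(3)] for i in range(3)]
M_anti = [[(M[i][j] - M[j][i]) / 2 for j in range(3)] for i in range(3)]

print(quad_form(M_anti, w))                   # ≈ 0
print(quad_form(M, w), quad_form(M_sym, w))   # equal
```

The antisymmetric part's contribution cancels pairwise, exactly as in the decomposition argument above.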
Finally, $(v-u)\cdot \nabla_{sym}v\, (v-u)$ is a quadratic form with matrix $\nabla_{sym}v$. A quadratic form with a symmetric matrix $A$ is bounded by its extreme eigenvalues: $\lambda_{\min}(A)|w|^2\le w\cdot Aw\le \lambda_{\max}(A)|w|^2$ for every vector $w$. (Think of diagonalizing the form.) Here we use the smallest eigenvalue, since there is a minus sign in front of the integral: $$ - (v-u)\cdot \nabla_{sym}v\, (v-u) \le -\lambda_{\min}(\nabla_{sym}v )\, |v-u|^2 \le d^-(v)\, |v-u|^2, $$ where $d^-(v)(x)=\max\big(0,-\lambda_{\min}(\nabla_{sym}v(x))\big)$ is the negative part of the smallest eigenvalue. Pulling this factor out of the integral produces $\lVert d^{-}(v)\rVert_{L_x^{\infty}}$, the supremum of $d^-(v)(\cdot,t)$ over $x$: only the points where $\lambda_{\min}$ is negative can make the difference grow, and the estimate uses the worst of those.
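To make the eigenvalue bound concrete, here is a numerical check (again my own illustration) for a symmetric $2\times 2$ matrix, whose eigenvalues are available in closed form:

```python
# Check (my illustration) of the quadratic-form bounds
# λ_min(A)|w|² ≤ w·Aw ≤ λ_max(A)|w|² for a symmetric 2×2 matrix.
import math

a, b, d = 2.0, -1.5, 0.5          # A = [[a, b], [b, d]], symmetric
mean = (a + d) / 2
rad = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
lam_min, lam_max = mean - rad, mean + rad   # closed-form 2×2 eigenvalues

ok = True
for t in range(100):               # sample unit vectors on the circle
    w = (math.cos(0.0628 * t), math.sin(0.0628 * t))
    q = a * w[0] ** 2 + 2 * b * w[0] * w[1] + d * w[1] ** 2  # w·Aw, |w| = 1
    ok = ok and (lam_min - 1e-12 <= q <= lam_max + 1e-12)
print(ok)  # True
```

For this matrix $\lambda_{\min}<0$, so $d^-$ would pick up exactly $-\lambda_{\min}$; for a matrix with $\lambda_{\min}\ge 0$ the negative part is zero and contributes nothing to the Grönwall estimate.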