I am studying the following corollary and trying to prove it to myself.
Corollary: Let $X$ be a reflexive Banach space and let $A$ be a coercive, maximal monotone subset of $X \times X^*$. Then $A$ is surjective, that is, $R(A) =X^*$.
Proof: A proof is outlined in Viorel Barbu's book, Nonlinear Differential Equations of Monotone Types in Banach Spaces. It goes as follows:
"Let $y_0 \in X^*$ be arbitrary but fixed. Without loss of generality, we may assume that $X, X^*$ are strictly convex, so that by Theorem 2.2, for every $\lambda > 0$ the equation $y_0 \in \lambda J(x_\lambda) + Ax_\lambda$ has a (unique) solution $x_\lambda \in D(A)$. Now, we multiply (in the sense of the duality pairing) the equation above by $x_\lambda - x^0$, where $x^0$ is the element arising in the coercivity condition."
I add here that the coercivity condition states: a subset $A$ of $X \times X^*$ is coercive if there exists $x^0 \in X$ such that $$\lim_{n \to \infty} \frac{\left\langle x_n - x^0, y_n\right\rangle}{\|x_n\|} = \infty$$ for every sequence $[x_n, y_n] \in A$ with $\lim_{n \to \infty} \left\|x_n\right\| = \infty$.
Now, multiplying the equation above by $x_\lambda - x^0$ in the sense of the duality pairing yields:
$$\left\langle x_\lambda-x^0, y_0\right\rangle = \left\langle x_\lambda - x^0, Ax_\lambda\right\rangle + \left\langle x_\lambda - x^0, \lambda J\left( x_\lambda\right)\right\rangle$$
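(Expanding my own expression a little further, using the standard property of the duality mapping that $\left\langle x, J(x)\right\rangle = \|x\|^2$ for all $x \in X$, the last term becomes:

$$\begin{aligned}
\left\langle x_\lambda - x^0, \lambda J(x_\lambda)\right\rangle
&= \lambda\left\langle x_\lambda, J(x_\lambda)\right\rangle - \lambda\left\langle x^0, J(x_\lambda)\right\rangle \\
&= \lambda\|x_\lambda\|^2 - \lambda\left\langle x^0, J(x_\lambda)\right\rangle .
\end{aligned}$$

Here I am deliberately writing $x^0$, the element from the coercivity condition, since that is the only candidate I can see for this term; whether this is what the book intends is part of my confusion below.)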
But in Barbu's book, the identity is stated as:
"We have: $\lambda \|x_\lambda\|^2 + \left\langle x_\lambda - x^0, Ax_\lambda \right\rangle = \left\langle x_\lambda - x^0, y_0\right\rangle + \lambda\left\langle \color{red}{x_0}, J(x_\lambda)\right\rangle$"
The outlined proof then proceeds by stating that, due to the coercivity condition, $\{x_\lambda\}$ is bounded in $X$, and thus there exists $\color{red}{x_0} \in X$ such that $x_\lambda \xrightarrow[\lambda \to 0]{w} x_0$. (How does the coercivity condition yield such a statement for $x_\lambda$? I do understand the intuition behind the weak convergence, even if a subsequence is needed in that case, provided that $\{x_\lambda\}$ is indeed bounded.)
The proof then finishes with a rather straightforward claim: letting $\lambda \to 0$, it shows that $Ax_\lambda \xrightarrow[\lambda \to 0]{w} y_0$.
MAIN QUESTION: I cannot understand how that $\color{red}{x_0}$ pops up after multiplying in the duality brackets. I have carried out the multiplication myself and provided the resulting expression above, and I see no logical way for that $\color{red}{x_0}$ to appear there, especially since its existence is only asserted later in the proof. The problem is that this $\color{red}{x_0}$ plays a vital part in the proof: if it were not there, the intuition and the last part of the proof could not be carried out, and the desired result would not follow.
I would be grateful if someone could clear up my mind.
Thanks in advance.