**Edited** after the comment of @Ninad Munshi:
The result $$\int_{x_1}^{x_2} \left[f(x)-f^{-1}(x)\right]dx = 2\int_{x_1}^{x_2} \left[f(x)-x\right] dx, \quad \text{if } f(x)>f^{-1}(x)\ \forall x \in [x_1,x_2], \qquad (*)$$ holds for $(i)$: $f(x)=\sqrt{x}$, $f:[0,1] \to [0,1]$; however, for $(ii)$: $f(x)=x^3+x$, $f:[0,1] \to [0,2]$, and $(iii)$: $f(x)=e^x$, $f:[0,2] \to [1,e^2]$, it does not. Apparently, the reason could be that in these cases $(ii)$ and $(iii)$, the equation $f(x)=f^{-1}(x)$ does not have two roots.
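The discrepancy can be checked numerically. The sketch below (my own helper names; composite Simpson's rule for the integrals and bisection to evaluate $f^{-1}$ pointwise) compares both sides of $(*)$ for case $(i)$, where it holds, and case $(ii)$, where it fails:

```python
from math import sqrt

def simpson(g, a, b, n=20000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def inv(f, y, lo, hi, tol=1e-12):
    # Invert a strictly increasing f by bisection: find x with f(x) = y.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Case (i): f(x) = sqrt(x) on [0, 1]; its inverse is x^2.
lhs1 = simpson(lambda x: sqrt(x) - x**2, 0, 1)
rhs1 = 2 * simpson(lambda x: sqrt(x) - x, 0, 1)
print(lhs1, rhs1)  # both sides ≈ 1/3, so (*) holds

# Case (ii): f(x) = x^3 + x on [0, 1]; inverse evaluated numerically.
def f2(x):
    return x**3 + x

lhs2 = simpson(lambda x: f2(x) - inv(f2, x, 0.0, 1.0), 0, 1)
rhs2 = 2 * simpson(lambda x: f2(x) - x, 0, 1)
print(lhs2, rhs2)  # ≈ 0.355 vs 0.5, so (*) fails
```

In case $(i)$ both sides agree with the exact value $\tfrac13$; in case $(ii)$ the right-hand side is exactly $2\int_0^1 x^3\,dx = \tfrac12$, while the left-hand side is noticeably smaller.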
What could be the most rigorous reason behind $(*)$?
For an answer see the comments below.