I was reading Wolf's lecture notes and stumbled across this theorem [page 83, Theorem 5.13]:
Let $f$ be an operator monotone function on an interval $I=[0,a]$ with $f(0) \ge 0$ and $T: \mathcal{M}_d \to \mathcal{M}_{d'}$ any positive map for which $T(1) \le 1$, where $\mathcal{M}_d$ is the algebra of $d \times d$ matrices. Then for all $A=A^{\dagger}$ with $\text{spec}(A) \subset I$ we have \begin{equation} T(f(A))\le f(T(A)) \end{equation}
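This claim can be sanity-checked numerically. The sketch below is my own construction, not from the notes: it takes $T(X)=V^{\dagger} X V$ for a contraction $V$ (a positive map with $T(1)=V^{\dagger}V \le 1$) and $f(x)=\log(1+x)$ (operator monotone on $[0,\infty)$ with $f(0)=0$); `apply_fun` is a hypothetical helper implementing the functional calculus via eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def apply_fun(f, A):
    # Spectral functional calculus: f(A) for Hermitian A.
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

# Random PSD matrix A normalized so that spec(A) ⊂ [0, 1].
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = M @ M.conj().T
A /= np.linalg.norm(A, 2)  # divide by the spectral norm

# Contraction V, so T(X) = V† X V is positive with T(1) = V†V ≤ 1.
V = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
V /= np.linalg.norm(V, 2)

f = np.log1p  # f(x) = log(1 + x), operator monotone, f(0) = 0

TfA = V.conj().T @ apply_fun(f, A) @ V   # T(f(A))
fTA = apply_fun(f, V.conj().T @ A @ V)   # f(T(A))

# The theorem predicts f(T(A)) - T(f(A)) ≥ 0 as an operator.
gap = np.linalg.eigvalsh(fTA - TfA).min()
print(gap)  # smallest eigenvalue; should be ≥ 0 up to rounding
```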
In the subsequent Corollary 5.2 it is then stated that this theorem can be used to prove that, for any positive $T$ with $T(1)\le 1$,
\begin{equation} T(\log(A)) \le \log(T(A)) \quad \text{for } A > 0 \end{equation}
My problem is that $\log$ of course doesn't satisfy the condition $f(0) \ge 0$ (it isn't even defined at $0$), which is fundamental in proving the above theorem. What am I missing?
Edit: to clarify further, the problem is that the proof of the theorem uses the fact that for every operator concave $f$ (operator monotone $\implies$ operator concave) and every operator $X$ with $\|X\| \le 1$, we have \begin{equation} f(X^{\dagger} A X + W^{\dagger} B W) \ge X^{\dagger} f(A) X + W^{\dagger} f(B) W, \end{equation} where $A,B$ are Hermitian and $X^{\dagger} X + W^{\dagger} W=1$. Taking $B=0$ and using $f(0) \ge 0$ gives $f(X^{\dagger}A X) \ge X^{\dagger} f(A) X$, which is what the theorem above relies on. I don't see how one could circumvent this requirement in order to allow the logarithm.
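For what it's worth, this two-term Jensen inequality can also be checked numerically. The snippet below is my own construction, not from the notes: it draws a random $X$ with $\|X\| \le 1$, sets $W=(1-X^{\dagger}X)^{1/2}$ so that $X^{\dagger}X+W^{\dagger}W=1$, and uses the operator concave $f(x)=\log(1+x)$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def apply_fun(f, A):
    # Spectral functional calculus: f(A) for Hermitian A.
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

def sqrtm_psd(A):
    # Positive square root of a PSD matrix.
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0, None))) @ U.conj().T

def rand_psd(d):
    # Random PSD matrix with spectrum in [0, 1].
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    P = M @ M.conj().T
    return P / np.linalg.norm(P, 2)

# X with ‖X‖ ≤ 1, and W = (1 - X†X)^{1/2}, so X†X + W†W = 1.
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
X /= 2 * np.linalg.norm(X, 2)
W = sqrtm_psd(np.eye(d) - X.conj().T @ X)

A, B = rand_psd(d), rand_psd(d)
f = np.log1p  # operator concave on [0, ∞), f(0) = 0

lhs = apply_fun(f, X.conj().T @ A @ X + W.conj().T @ B @ W)
rhs = X.conj().T @ apply_fun(f, A) @ X + W.conj().T @ apply_fun(f, B) @ W

# The inequality predicts lhs - rhs ≥ 0 as an operator.
gap = np.linalg.eigvalsh(lhs - rhs).min()
print(gap)  # smallest eigenvalue; should be ≥ 0 up to rounding
```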
Apply the initial observation to the function $f(x)=\log(x+1)$, which is operator monotone on $[0,\infty)$ and satisfies $f(0)=0$, to get an inequality for this function. Then use the fact that $\log(x+\epsilon)=\log((x/\epsilon)+1)+\log(\epsilon)$ to show that the inequality holds for $x \mapsto \log(x+\epsilon)$ for every $\epsilon>0$. Finally, take the infimum over $\epsilon>0$, using that $\inf_{\epsilon>0}\log(x+\epsilon)=\log(x)$ pointwise.
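A quick scalar sanity check of the decomposition and of the $\epsilon \to 0$ limit, in plain Python (illustration only; the value of $x$ is arbitrary):

```python
import math

x = 0.37  # arbitrary positive test point

# The identity log(x + ε) = log((x/ε) + 1) + log(ε) holds for all ε > 0.
for eps in (2.0, 0.5, 1e-3, 1e-9):
    lhs = math.log(x + eps)
    rhs = math.log(x / eps + 1) + math.log(eps)
    assert abs(lhs - rhs) < 1e-9

# As ε ↓ 0, log(x + ε) decreases to log(x), so the infimum recovers log.
assert abs(math.log(x + 1e-12) - math.log(x)) < 1e-9
```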