Let $A$ and $B$ be two matrices, and suppose that $\exp(A)=\exp(B)$. Does this imply that $[A, B]=0$?
The Baker–Campbell–Hausdorff formula gives a clear relation between $\exp(A)\exp(-B)$ and $[A, B]$, but the implication is far from obvious. On the other hand, I haven't been able to find a counterexample to the claim. Is this a known result?
While, as was pointed out in one answer, this is not true in the general case, or even in the case where we restrict ourselves to diagonalizable matrices, the argument below (*) leads me to think that it may be true when restricting to normal matrices (that is, to unitarily diagonalizable ones). Is this argument correct? Is this a known result, and/or can it be shown in a more direct way?
(*)
$U$ diagonalizable and non-degenerate.
If $U$ is diagonalizable and non-degenerate, then we can write $$U=\sum_{k=1}^N \lambda_k P_k,$$ where the $\lambda_k$ are the (distinct) eigenvalues of $U$ and the $P_k$ are projectors onto the corresponding eigenvectors. If $U$ is moreover normal, we also have $P_j P_k=\delta_{jk}P_k$. The set of logarithms of $U$ can then be written as $$\log U = \sum_{k=1}^N (\operatorname{Log}\lambda_k+2\pi i\nu_k)P_k$$ with $\nu_k\in\mathbb Z$, where $\operatorname{Log}$ denotes the principal logarithm. From the above, it seems that indeed all logarithms of $U$ must commute with each other (because trivially $[P_j, P_k]=0$).
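The non-degenerate case can be checked numerically. Below is a minimal sketch using NumPy/SciPy; the unitary $V$ and the eigenvalues are arbitrary choices, and the branch indices $\nu_k$ are picked at will:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A normal matrix U = V diag(lam) V^dagger, with V unitary and
# distinct (non-degenerate) eigenvalues lam (arbitrary choices here).
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
lam = np.array([1.0 + 1.0j, -2.0 + 0.0j, 0.5j])
U = V @ np.diag(lam) @ V.conj().T

def logarithm(nu):
    """A logarithm of U with branch index nu[k] on the k-th eigenprojector."""
    d = np.log(lam) + 2j * np.pi * np.array(nu)
    return V @ np.diag(d) @ V.conj().T

log_a = logarithm([0, 0, 0])
log_b = logarithm([1, -2, 3])

# Both exponentiate back to U ...
assert np.allclose(expm(log_a), U) and np.allclose(expm(log_b), U)
# ... and, as argued above, they commute (same eigenprojectors).
assert np.allclose(log_a @ log_b, log_b @ log_a)
```

Since every logarithm is diagonal in the same eigenbasis, the commutation is immediate.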
$U$ diagonalizable and possibly degenerate.
If $U$ is diagonalizable but not necessarily non-degenerate, we can write it as $$U=\sum_k\lambda_k\sum_{j=1}^{d_k}P_{kj},$$ where $d_k$ is the dimension of the $k$-th (possibly degenerate) eigenspace, $\sum_j P_{kj}=P_k$, $\sum_k P_k=\mathbb 1$, and $P_{kj}P_{lm}=\delta_{kl}\delta_{jm}P_{kj}$; note that one can choose any such basis of rank-1 projectors within each degenerate eigenspace. The logarithms of $U$ then read $$ \log U = \sum_{k}\sum_{j=1}^{d_k} \ell_{\nu_{kj}}(\lambda_k)\, P_{kj}, $$ where we used the notation $\ell_\nu(\lambda)\equiv\operatorname{Log}(\lambda)+2\pi i \nu$, $\nu\in\mathbb Z$, to denote the set of logarithms of a given $\lambda$. The set of logarithms of $U$ is thus characterised by the possible choices of the integers $\{\nu_{kj}\}$ and of the projectors $P_{kj}$.
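The freedom in choosing the projectors within a degenerate eigenspace is where this case differs from the previous one. A small numerical sketch (NumPy/SciPy; the two directions defining the projectors are arbitrary choices): for the doubly degenerate eigenvalue $1$ of $U=\mathbb 1$, any rank-1 projector $P$ gives a logarithm $2\pi i P$, and logarithms built from different projectors need not commute.

```python
import numpy as np
from scipy.linalg import expm

def proj(v):
    """Rank-1 projector onto the direction of v."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# P^2 = P implies exp(z P) = 1 + (e^z - 1) P, so exp(2*pi*i*P) = identity.
P = proj([1, 0])
Q = proj([1, 1])  # projector onto a different direction

log_a = 2j * np.pi * P
log_b = 2j * np.pi * Q

identity = np.eye(2)
assert np.allclose(expm(log_a), identity)
assert np.allclose(expm(log_b), identity)

# Both are logarithms of the identity, yet they do not commute.
assert not np.allclose(log_a @ log_b, log_b @ log_a)
```

This illustrates why the degenerate case requires more care: the different choices of $P_{kj}$ enter the logarithm explicitly whenever the corresponding branch indices differ.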
You can cook up lots of matrices with $\exp(A)=I$: just take $A=2\pi A_0$ where $A_0^2=-I$, for instance $A_0=\pmatrix{0&1\\-1&0}$. Indeed, $A_0^2=-I$ gives $\exp(tA_0)=\cos(t)\,I+\sin(t)\,A_0$, so $\exp(2\pi A_0)=I$. Now let $B=2\pi B_0$ with, say, $B_0=\pmatrix{1&1\\-2&-1}$. Then $B_0^2=-I$ as well, so $\exp(B)=I$. But $AB\ne BA$ here.
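The counterexample above is easy to verify numerically; a quick check with NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

# The matrices from the counterexample: A0^2 = B0^2 = -I.
A0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
B0 = np.array([[1.0, 1.0], [-2.0, -1.0]])
assert np.allclose(A0 @ A0, -np.eye(2))
assert np.allclose(B0 @ B0, -np.eye(2))

# Hence exp(2*pi*A0) = exp(2*pi*B0) = I ...
A = 2 * np.pi * A0
B = 2 * np.pi * B0
assert np.allclose(expm(A), np.eye(2))
assert np.allclose(expm(B), np.eye(2))

# ... yet A and B do not commute.
assert not np.allclose(A @ B, B @ A)
```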