Bayes's theorem states $P(A\mid B) = \dfrac{P(B\mid A)\cdot P(A)}{P(B)}$. The intuition behind this is simple: if $B$ is true, then the probability that $A$ is true is the proportion of cases where $A$ is also true among all cases where $B$ is true.
Now, here is another formulation of the rule, just rearranging fractions: $\dfrac{P(A\mid B)}{P(A)} = \dfrac{P(B\mid A)}{P(B)}$. To me, what this says is that "if upon learning $B$ is true, we think $A$ is $x$ times more likely to be true than we previously thought, then upon learning $A$, $B$ is $x$ times more likely to be true than we previously thought." But this sentence does not seem similarly obvious to me. Is there a natural interpretation of $\dfrac{P(A\mid B)}{P(A)} = \dfrac{P(B\mid A)}{P(B)}$?
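To spell out the rearrangement: substituting $P(A\mid B) = P(A\cap B)/P(B)$ and $P(B\mid A) = P(A\cap B)/P(A)$ shows that both ratios equal the same symmetric quantity:

$$\dfrac{P(A\mid B)}{P(A)} = \dfrac{P(A\cap B)}{P(A)\,P(B)} = \dfrac{P(B\mid A)}{P(B)}.$$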
I agree that this is less obvious, but you can go some way towards making intuitive sense of it by noting that it's obvious in three important cases:
If $A$ and $B$ are identical, it's obviously true by symmetry.
If $A$ and $B$ are independent, it's obviously true because both ratios must then be $1$.
If $A$ and $B$ are mutually exclusive, it's obviously true because both ratios must then be $0$.
Given this, it would be surprising if such simple ratios managed to coincide at these three different points but not in general.
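As a sanity check in a case that is none of the three above, here is a quick numerical verification on a hypothetical joint distribution of two binary events (the probabilities below are made up for illustration; any consistent choice works):

```python
# Check that P(A|B)/P(A) == P(B|A)/P(B) for a made-up joint distribution.
p_ab = 0.12   # P(A and B)
p_a = 0.3     # P(A)
p_b = 0.5     # P(B)

p_a_given_b = p_ab / p_b   # P(A|B) = P(A and B) / P(B)
p_b_given_a = p_ab / p_a   # P(B|A) = P(A and B) / P(A)

lhs = p_a_given_b / p_a    # how much learning B boosts A
rhs = p_b_given_a / p_b    # how much learning A boosts B

print(lhs, rhs)  # both equal P(A and B) / (P(A) * P(B)) = 0.8
```

Here learning either event makes the other $0.8$ times as likely, i.e. each is evidence against the other, and by the same factor in both directions.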