Am I wrong? Why?
Consider three time series, Portfolio (P), Benchmark (B) and Excess (E), like so $$P = \{ r_1, r_2, \dots , r_n \}$$ $$B = \{ s_1, s_2, \dots , s_n \}$$ $$E = \{ x_1, x_2, \dots , x_n \}$$ where $E = P - B$, or for the $i$-th observation, $x_i = r_i - s_i$.
Now, Tracking Error is evaluated as $$ \delta(P,B) = \sqrt{\frac{\sum_{i=1}^n (r_i - s_i)^2}{n}} \qquad \qquad (1)$$
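As a quick sketch (my own function name, not from any library), equation $(1)$ is just the root-mean-square of the excess returns:

```python
import numpy as np

def tracking_error(p, b):
    """Tracking error per equation (1): RMS of excess returns r_i - s_i."""
    p, b = np.asarray(p, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.mean((p - b) ** 2))

# e.g. a constant 1% excess return gives a tracking error of exactly 0.01
tracking_error([0.02, 0.01], [0.01, 0.00])
```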
However, instead of $(1)$, I have noticed the practice of taking $ \delta(P-B) $, i.e. the standard deviation of the excess series, which doesn't make sense because
$$ \delta(P-B) = \delta(E) = \sqrt{\frac{\sum_{i=1}^n (x_i - \mu_E)^2}{n}} \qquad \qquad (2)$$
Equations $(1)$ and $(2)$ measure two very different things. The former is the variability of portfolio returns around a given benchmark and gauges active-manager risk; the latter expresses the variability of $E$ around its own mean $\mu_E$, which doesn't hold much meaning for the intended purpose. In fact, whenever $\mu_E \neq 0$ it understates relative portfolio risk, since $$\frac{1}{n}\sum_{i=1}^n (r_i - s_i)^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \mu_E)^2 + \mu_E^2,$$ i.e. $(2)$ strips the mean excess return out of $(1)$ and so always paints a less volatile picture of relative returns.
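The gap between the two definitions is easy to verify numerically. The sketch below (synthetic normal returns of my own choosing, purely illustrative) computes both quantities and checks the identity $(1)^2 = (2)^2 + \mu_E^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(0.08, 0.15, 120)  # hypothetical annualised monthly portfolio returns
b = rng.normal(0.06, 0.14, 120)  # hypothetical benchmark returns
e = p - b                        # excess series E

te_rms = np.sqrt(np.mean(e ** 2))              # equation (1): RMS of excess returns
te_std = np.sqrt(np.mean((e - e.mean()) ** 2)) # equation (2): population std dev of E

# (1)^2 = (2)^2 + mu_E^2, hence (2) <= (1) with equality only when mu_E = 0
assert abs(te_rms ** 2 - (te_std ** 2 + e.mean() ** 2)) < 1e-12
assert te_std <= te_rms
```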
Below is a manual decomposition to illustrate the difference for randomly sampled annualised monthly returns.
