Over $\mathbb{R}$, the square function $(\cdot)^2$ and the squared modulus $|\cdot|^2$ agree for all inputs, so the two are occasionally used interchangeably. For instance, when computing the variance of data $\{x_1,\dots,x_n\}$, $n>1$, with mean $\mu$, I have only ever seen the formula written as $$ \sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\mu)^2; $$ it should be $|x_i-\mu|^2$, so that the variance is real and non-negative even when the data are complex.
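A quick numerical check of this point (using NumPy; the sample values are made up for illustration):

```python
import numpy as np

# Hypothetical complex sample; mu is its mean.
x = np.array([1 + 2j, 3 - 1j, -2 + 0.5j])
mu = x.mean()

# Naive (x_i - mu)^2 yields a complex "variance" in general...
naive = np.sum((x - mu) ** 2) / (len(x) - 1)

# ...while |x_i - mu|^2 yields a real, non-negative variance.
var = np.sum(np.abs(x - mu) ** 2) / (len(x) - 1)

print(naive)  # complex in general
print(var)    # real and non-negative
```

For what it's worth, NumPy's own `np.var` takes the absolute value before squaring for complex input, consistent with the $|\cdot|^2$ convention.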
However, I am interested in instances where $(\cdot)^2$, and not $|\cdot|^2$, is the correct generalization. A clear parallel in linear algebra is the adjugate matrix: given a square matrix $A$ with cofactor matrix $C$, the adjugate is the plain transpose, $\text{adj}(A)=C^T$, not the conjugate transpose $C^*$, even when the entries of $A$ are complex; only the plain transpose satisfies the defining identity $A\,\text{adj}(A)=\det(A)I$. Off the top of my head, I can't think of any scenarios where this distinction is important, but I am curious whether the community can name any.
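A small sketch verifying this for a $2\times 2$ complex matrix (example values are arbitrary; the adjugate is recovered from $\text{adj}(A)=\det(A)A^{-1}$, valid since $A$ is invertible here):

```python
import numpy as np

# An arbitrary invertible complex 2x2 matrix.
A = np.array([[1 + 1j, 2 - 1j],
              [0 + 3j, 4 + 0j]])

# Cofactor matrix of [[a, b], [c, d]] is [[d, -c], [-b, a]].
(a, b), (c, d) = A
C = np.array([[d, -c],
              [-b, a]])

# Adjugate via the identity A @ adj(A) = det(A) I.
adj = np.linalg.det(A) * np.linalg.inv(A)

# The plain transpose of C matches; the conjugate transpose does not.
print(np.allclose(adj, C.T))         # True
print(np.allclose(adj, C.conj().T))  # False
```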
Posting an answer (CW) so the question has one; feel free to add more.