Let $X_1, X_2, \dots, X_n$ be $n$ IID RVs, and let $Z = \max(X_1, X_2, \dots, X_n)$.
Now define $I$ as follows: $I = i$ when $X_i$ is the maximum of $X_1, X_2, \dots, X_n$. Using the law of total expectation, we can write $$ E[Z] = E[E[Z|I]] = E[X_I] = E[X]. $$ This says that $Z$ and any $X_i$ have the same expectation. But it can be shown that the CDF of $Z$ is $F_X^n$. This is contradictory, as it suggests that RVs with CDFs $F$ and $F^n$ have the same expectation for any integer $n$, which is false already in the simple case of $U(0, 1)$ RVs.
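For concreteness, the $U(0,1)$ case can be checked directly: with $F_Z(z) = z^n$ on $[0,1]$, the density of $Z$ is $n z^{n-1}$, so
$$\mathbb{E}[Z] = \int_0^1 z \cdot n z^{n-1}\,dz = \frac{n}{n+1} \ne \frac{1}{2} = \mathbb{E}[X] \quad \text{for } n \ge 2.$$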
What is wrong with the application of law of total expectation above?
The problem isn't the law of total expectation. Rather, it's the claim $\mathbb{E}[X_I] = \mathbb{E}[X]$ that is false, because your index is a function of all $X$s.
In fact, you can skip the law of total expectation altogether and use the definition of $I$ to write $Z = X_I$ and hence $\mathbb{E}[Z] = \mathbb{E}[X_I]$.
EDIT: Here are some more details about the expectation of $X_I$. I'll write $J(I=i)$ for the indicator function, since $I$ is already taken.
$$\mathbb{E}[X_I] = \sum_{i=1}^n \mathbb{E}[X_i J(I=i)]$$
Due to the symmetric setting, $I$ is uniformly distributed. If, in some other setting, $J(I=i)$ and $X_i$ were independent, we could then continue $$\sum_{i=1}^n \mathbb{E}[X_i J(I=i)] = \sum_{i=1}^n \mathbb{E}[X_i]\mathbb{P}[I=i] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[X_i] = \mathbb{E}[X].$$
However, in our problem this is not the case: $I$ is a function of $(X_1, \dots, X_n)$ - it's the index of the maximum - so $X_i$ and $J(I=i)$ are strongly positively correlated instead: the larger $X_i$ is, the more likely it is that $I=i$.
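A quick simulation illustrates both points at once. This is just a sanity-check sketch (the sample size and seed are arbitrary choices): for $U(0,1)$ RVs with $n=3$ it estimates $\mathbb{E}[X_I] = \mathbb{E}[Z] = n/(n+1) = 0.75$ rather than $\mathbb{E}[X] = 0.5$, and shows the positive correlation between $X_1$ and the indicator $J(I=1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 200_000

x = rng.random((trials, n))           # IID U(0,1) samples, one row per trial
i_max = x.argmax(axis=1)              # I: index of the maximum in each row
z = x[np.arange(trials), i_max]       # Z = X_I, i.e. the row maximum

print(z.mean())                       # ≈ n/(n+1) = 0.75, not E[X] = 0.5

# X_1 and the indicator J(I=1) are clearly positively correlated,
# which is exactly why E[X_1 J(I=1)] != E[X_1] P[I=1].
j1 = (i_max == 0).astype(float)
print(np.corrcoef(x[:, 0], j1)[0, 1])
```

The positive correlation printed at the end is what breaks the factorization step in the displayed sum above.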