https://youtu.be/XtNXQJkgkhI?t=1147
In this MIT video on Bayesian statistical inference, starting around the 19:07 mark, the professor claims that
$E[\hat\Theta|X] = \hat\Theta$
because $\hat\Theta$ is a function of $X$.
I feel this is a trivial argument, but somehow I don't quite get it.
Could someone elaborate on that a bit?
Both $\hat\Theta$ and $X$ are uppercase here, i.e., the statement is about random variables, not about concrete values of random variables.
Recall the definition of $E[X \mid Y]$ in the context of this course: it is a random variable by definition.

I think the best way to understand this segment of the lecture is to extend the particular example that was discussed and explicitly calculate the estimator $\hat \Theta$. Rather than using increasing levels of abstraction or formalization, we seek to illuminate by making the example more concrete.

Recall from the previous portion of the lecture that the model is $$\Theta \sim \operatorname{Uniform}(4,10), \\ X \mid \Theta \sim \operatorname{Uniform}(\Theta - 1, \Theta + 1).$$ The joint density has value $1/12$ over its support, which is a parallelogram: $$f_{\Theta, X}(\theta, x) = \frac{1}{12} \mathbb 1(4 \le \theta \le 10) \mathbb 1(\theta - 1 \le x \le \theta + 1).$$ This parallelogram is bounded by the lines $$\theta = x-1, \quad \theta = x+1, \quad \theta = 4, \quad \theta = 10.$$

When $5 \le X \le 9$, the conditional expectation $\operatorname{E}[\Theta \mid X]$ is just the midpoint between $X-1$ and $X+1$; i.e., $\operatorname{E}[\Theta \mid X] = X$. In other words, on this interval the conditional expectation is the line parallel to, and midway between, the aforementioned boundaries $\theta = x-1$ and $\theta = x+1$; i.e., $\theta = x$. However, when $3 \le X < 5$, we have to take the midpoint between $\theta = x+1$ and $\theta = 4$; i.e., $$\operatorname{E}[\Theta \mid X] = \frac{(X+1)+4}{2} = \frac{X+5}{2}.$$ And when $9 < X \le 11$, we similarly have $$\operatorname{E}[\Theta \mid X] = \frac{(X-1)+10}{2} = \frac{X+9}{2}.$$

All together, $$\hat \Theta = \operatorname{E}[\Theta \mid X] = \begin{cases}\frac{X+5}{2}, & 3 \le X < 5 \\ X, & 5 \le X \le 9 \\ \frac{X+9}{2}, & 9 < X \le 11. \end{cases}$$

You will note that this is a continuous but not everywhere differentiable function. More importantly, you will also note that $\operatorname{E}[\Theta \mid X]$ is a random variable that is solely a function of $X$, and it seeks to estimate $\Theta$ through the observed $X$. Hence $\hat \Theta = \operatorname{E}[\Theta \mid X]$ is what the professor calls the least mean squares estimator.
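As a sanity check on the piecewise formula, here is a quick Monte Carlo sketch (not from the lecture; the sample size and bin width are arbitrary choices): simulate $(\Theta, X)$ from the model and compare the empirical mean of $\Theta$, restricted to a narrow bin of $X$ values, against $\hat\Theta$ evaluated at the bin center.

```python
import random

random.seed(0)

# simulate (Theta, X) pairs from the hierarchical model
n = 500_000
samples = []
for _ in range(n):
    theta = random.uniform(4, 10)
    x = random.uniform(theta - 1, theta + 1)
    samples.append((theta, x))

def theta_hat(x):
    """Piecewise least-mean-squares estimator derived above."""
    if x < 5:
        return (x + 5) / 2
    if x <= 9:
        return x
    return (x + 9) / 2

# empirical E[Theta | X near x0] vs. the formula, one x0 per piece
eps = 0.05
for x0 in (4.0, 7.0, 10.0):
    near = [t for t, x in samples if abs(x - x0) < eps]
    print(x0, sum(near) / len(near), theta_hat(x0))
```

The empirical conditional means should agree with $\hat\Theta(x_0)$ up to Monte Carlo error in all three pieces of the formula.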
The essential claim that you have questioned is $\operatorname{E}[\hat \Theta \mid X] = \hat \Theta$. From the example above we can see what the professor means: once $X$ is given, $\hat \Theta$ is no longer random, because it is a function of $X$ alone. Conditioning on $X$ pins down its value, so taking the conditional expectation leaves it unchanged. For instance, if I ask for $\hat \Theta \mid (X = 8)$, you would give me $8$. Taking the conditional expectation of $\hat \Theta$ given $X$ doesn't modify the estimate.
Another way to think of it is to suppose I let $h(X) = X^2$. Then what is $\operatorname{E}[h(X) \mid X = x]$? Well, given $X = x$, the quantity $X^2$ is just the constant $x^2$, so $\operatorname{E}[X^2 \mid X = x] = x^2$. Written as random variables, $\operatorname{E}[X^2 \mid X] = X^2$; more generally, $\operatorname{E}[h(X) \mid X] = h(X)$. Since $\hat \Theta$ is itself a function of $X$, the same reasoning gives $\operatorname{E}[\hat \Theta \mid X] = \hat \Theta$.
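The $h(X) = X^2$ identity can also be seen numerically (a toy sketch, with the distribution of $X$ and the bin width chosen arbitrarily): averaging $h(X)$ over samples where $X$ falls in a narrow bin around $x_0$ recovers essentially $h(x_0)$, because within the bin $h(X)$ is nearly constant.

```python
import random

random.seed(1)
xs = [random.uniform(0, 1) for _ in range(200_000)]

def h(x):
    return x * x  # any fixed function of X behaves the same way

# empirical version of E[h(X) | X = x0]: average h over a narrow bin around x0
x0, eps = 0.6, 0.01
bucket = [h(x) for x in xs if abs(x - x0) < eps]
print(sum(bucket) / len(bucket), h(x0))  # both close to 0.36
```

Conditioning on $X$ collapses $h(X)$ to a single value, which is exactly why the expectation returns $h(X)$ itself.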