Let $Y_1$ and $Y_2$ be independent random variables such that $(Y_1,Y_2)$ is a complete-sufficient statistic for $(\kappa_1,\kappa_2)$ and, in the sub-model where $\kappa_1$ is known, $Y_2$ is complete-sufficient for $\kappa_2$. Furthermore, assume $\mathscr U(\kappa_1,Y_2)$ is an estimator satisfying $\mathsf E\,\mathscr U(\kappa_1,Y_2)=\tau(\kappa_1,\kappa_2)$. It then follows from the Lehmann–Scheffé theorem that $\mathscr U$ is the unique UMVUE of $\tau(\kappa_1,\kappa_2)$ when $\kappa_1$ is known and $\kappa_2$ is unknown.
Now suppose there exists another estimator $\mathscr T(Y_1,Y_2)$ which is also unbiased: $\mathsf E\,\mathscr T(Y_1,Y_2)=\tau(\kappa_1,\kappa_2)$. It again follows from the Lehmann–Scheffé theorem that $\mathscr T$ is the unique UMVUE of $\tau(\kappa_1,\kappa_2)$ when both $\kappa_1$ and $\kappa_2$ are unknown.
Does it follow that $\mathsf{Var}\mathscr U\leq\mathsf{Var}\mathscr T$?
Intuition suggests this must hold, since there should be greater uncertainty in estimating $\tau$ when both $\kappa_1$ and $\kappa_2$ are unknown than when only one of them is. In other words, $(\kappa_1,Y_2)$ carries more information about the parameter vector $(\kappa_1,\kappa_2)$ than $(Y_1,Y_2)$ does. My first instinct (right or wrong) was to apply the law of total variance: $$ \mathsf{Var}\,\mathscr T(Y_1,Y_2)=\mathsf E(\mathsf{Var}(\mathscr T(Y_1,Y_2)\mid Y_1))+\mathsf{Var}(\mathsf E(\mathscr T(Y_1,Y_2)\mid Y_1)). $$ I didn't see an obvious direction to take from there. I also thought about using the Cramér–Rao lower bound, but this seems like the wrong way to go because there is no guarantee that $\mathscr U$ or $\mathscr T$ attains the bound. Given the fundamental nature of this inequality, I wanted to make sure a proof doesn't already exist before reinventing the wheel.
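As a concrete sanity check (my own toy example, not part of the problem statement): take independent $Y_1\sim N(\kappa_1,1)$ and $Y_2\sim N(\kappa_2,1)$ with $\tau(\kappa_1,\kappa_2)=\kappa_1+\kappa_2$, so that $\mathscr U=\kappa_1+Y_2$ and $\mathscr T=Y_1+Y_2$. Then $\mathsf{Var}\,\mathscr U=1\leq 2=\mathsf{Var}\,\mathscr T$, and a quick simulation agrees:

```python
import numpy as np

# Toy instance (my own choice, for illustration only):
# Y1 ~ N(k1, 1), Y2 ~ N(k2, 1) independent, tau(k1, k2) = k1 + k2.
# With k1 known, U = k1 + Y2 is unbiased; with both unknown, T = Y1 + Y2 is.
rng = np.random.default_rng(0)
k1, k2, n = 2.0, -1.0, 200_000
Y1 = rng.normal(k1, 1.0, n)
Y2 = rng.normal(k2, 1.0, n)

U = k1 + Y2   # UMVUE when k1 is known; Var U = 1
T = Y1 + Y2   # UMVUE when both parameters are unknown; Var T = 2

print(U.var(), T.var())  # approximately 1 and 2
```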
By the law of total variance we have $$ \mathsf{Var}\,\mathscr T(Y_1,Y_2)=\mathsf E(\mathsf{Var}(\mathscr T(Y_1,Y_2)\mid Y_2))+\mathsf{Var}(\mathsf E(\mathscr T(Y_1,Y_2)\mid Y_2)). $$ Denote $g(Y_2)=\mathsf E(\mathscr T(Y_1,Y_2)\mid Y_2)$; by the independence of $Y_1$ and $Y_2$, this conditional expectation depends on the data only through $Y_2$. The law of total expectation gives $\mathsf E\,g(Y_2)=\tau(\kappa_1,\kappa_2)$. Since $g$ is an unbiased function of $Y_2$, which is complete-sufficient in the model with $\kappa_1$ known, the Lehmann–Scheffé theorem implies that $g$ is the unique UMVUE of its expected value in that model. By uniqueness of the UMVUE we then conclude $g(Y_2)=\mathscr U(\kappa_1,Y_2)$ almost surely, and hence $$ \mathsf{Var}\,\mathscr T(Y_1,Y_2)=\mathsf E(\mathsf{Var}(\mathscr T(Y_1,Y_2)\mid Y_2))+\mathsf{Var}\,\mathscr U(\kappa_1,Y_2). $$ But $\mathsf E(\mathsf{Var}(\mathscr T(Y_1,Y_2)\mid Y_2))\geq 0$; thus, $\mathsf{Var}\,\mathscr U\leq\mathsf{Var}\,\mathscr T$. $\quad\square$
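To see the decomposition in the proof at work on the same toy normal example (again my own illustration, not from the problem): with independent $Y_i\sim N(\kappa_i,1)$ and $\mathscr T=Y_1+Y_2$, we have $\mathsf E(\mathscr T\mid Y_2)=\kappa_1+Y_2=\mathscr U$ exactly, and the two terms of the law of total variance are each $1$:

```python
import numpy as np

# Toy normal example (my own, for illustration): Y1 ~ N(k1,1), Y2 ~ N(k2,1).
rng = np.random.default_rng(1)
k1, k2, n = 2.0, -1.0, 200_000
Y1 = rng.normal(k1, 1.0, n)
Y2 = rng.normal(k2, 1.0, n)

T = Y1 + Y2
cond_mean = k1 + Y2        # E(T | Y2) = k1 + Y2, which is exactly U
residual = T - cond_mean   # T - E(T | Y2) = Y1 - k1, so Var(T | Y2) = 1

# Law of total variance: Var T = E[Var(T|Y2)] + Var[E(T|Y2)]  (here 2 = 1 + 1)
print(T.var(), residual.var() + cond_mean.var())
```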