Let $Y \sim P_Y$ with variance $P^{\alpha_1}$, where $P>1$. Assume $n \sim P_n$ with variance $P^{\alpha_2}$ for any $\alpha_2 \le \alpha_1$. Let $\mathcal{Y}$ be the set over which the random variables $Y$ and $n$ take values. Suppose that for some function $Q_{\alpha_1,\alpha_2}: \mathcal{Y} \to \hat{\mathcal{Y}}$ the following conditions are met:
1. $\lim \limits_{P \to \infty}Pr[Q_{\alpha_1,\alpha_2}(Y+n)=Q_{\alpha_1,\alpha_2}(Y)]=1$
2. $\mathbb{E}(|Y-Q_{\alpha_1,\alpha_2}(Y)|^2) \le P^{\alpha_2}$
What can we say about the cardinality $|\hat{\mathcal{Y}}|$?
My intuition says it should be finite.
I see that $Q_{\alpha_1,\alpha_2}$ can be a piecewise constant function over $\mathcal{Y}$ with finite range, with the elements of the range chosen from $\mathcal{Y}$ such that constraints 1 and 2 are met; something like a quantizer. I am interested in knowing whether finite-range functions are the only class of functions that can satisfy those properties.
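To make the quantizer idea concrete, here is a minimal numerical sketch. The Gaussian $Y$ and the nearest-multiple quantizer are my illustrative choices, not part of the problem statement; the point is only that a coarse enough uniform quantizer satisfies constraint 2 automatically.

```python
import numpy as np

# Sketch of the quantizer idea (Gaussian Y and a nearest-multiple quantizer
# are illustrative choices, not part of the problem statement).
# A uniform quantizer with step d moves each point by at most d/2, so its
# mean-squared error is at most d^2/4; choosing d = 2 * P**(alpha2 / 2)
# therefore guarantees E|Y - Q(Y)|^2 <= P**alpha2 (constraint 2) for *any*
# distribution of Y.  Note its range is countably infinite, not finite.
def uniform_quantizer(y, step):
    """Map y to the nearest integer multiple of `step` (piecewise constant)."""
    return step * np.round(y / step)

P, alpha1, alpha2 = 100.0, 1.0, 0.5
rng = np.random.default_rng(0)
Y = rng.normal(0.0, np.sqrt(P**alpha1), size=100_000)

step = 2 * P**(alpha2 / 2)
mse = np.mean((Y - uniform_quantizer(Y, step))**2)
print(mse <= P**alpha2)  # True: guaranteed by the step-size choice
```

Note that this particular quantizer has a countably infinite range, which is exactly why I am asking whether the range must be finite.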
Any help would be appreciated.
This is too long for a comment and I don't have the reputation to comment, so I will post my thoughts here; hopefully they will be of some use in solving the problem.
The shortened version: the answer will depend in large part on whether $Y$ and $n$ are discrete or continuous random variables (or mixed, i.e. have non-zero singular and continuous parts in their Lebesgue decompositions), which you don't specify. This is simply because, for any random variable $X$ whose distribution is absolutely continuous with respect to Lebesgue measure, we always have $Pr[X=x]=0$ for every $x$. You mention densities in the comments, so perhaps this is what you have in mind, in which case you seem to be correct that $\hat{\mathcal{Y}}$ being finite, plus some other weaker conditions, would be sufficient for the desired result: $Q_{\alpha_1,\alpha_2}$ would transform a continuous distribution into a discrete one (i.e. one with point masses), for which the equality inside the limit in (1) can actually hold with non-zero probability.
If you don't assume that $Y$ and $n$ are non-negatively correlated, and if there is no regularity condition on the distributions of $Y$ and $n$ ensuring that $Pr[Y=x]$ and $Pr[n=z]$ decrease for every $x$ and $z$ as the spread of the distributions increases (i.e. as the variances blow up, i.e. as $P \to \infty$), then it should be possible to cook up counterexample distributions for which the range of $Q_{\alpha_1,\alpha_2}$ is infinite (i.e. $\hat{\mathcal{Y}}$ is infinite): all of the probability mass relevant to satisfying (1) can remain concentrated in one region even as the variance and overall spread of the distribution blow up. (Also, (1) is harder to satisfy in general than (2), which is why I am focusing on it.)
In other words, the problems of 1. negative correlation, 2. mass not being spread "evenly" as the spread of the distributions of $Y$ and $n$ increase as their variance increases, and 3. mass escaping to infinity as $P \to \infty$, among others, all seem like they could produce counterintuitive results.
Longer version:
We have that $Var(Y+n)= Var(Y)+Var(n)+2cov(Y,n) = P^{\alpha_1} + P^{\alpha_2} + 2cov(Y,n)$
See https://en.wikipedia.org/wiki/Variance#Sum_of_correlated_variables
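A quick empirical sanity check of this identity; the correlated Gaussian pair below is only an illustrative choice of distribution.

```python
import numpy as np

# Empirical check of Var(Y+n) = Var(Y) + Var(n) + 2*Cov(Y,n); the correlated
# Gaussian pair is only an illustrative choice of distribution.
rng = np.random.default_rng(1)
cov = np.array([[4.0, 1.5],
                [1.5, 2.0]])  # Var(Y)=4, Var(n)=2, Cov(Y,n)=1.5
Y, n = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T

lhs = np.var(Y + n)
rhs = np.var(Y) + np.var(n) + 2 * np.cov(Y, n, ddof=0)[0, 1]
print(abs(lhs - rhs) < 1e-8)  # True: the identity is exact for sample moments
```

(The `ddof=0` matters: with NumPy's default `ddof=1` in `np.cov`, the identity would only hold up to a factor of $N/(N-1)$ on the covariance term.)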
Let's rewrite the expression $Pr[Q_{\alpha_1,\alpha_2}(Y+n)=Q_{\alpha_1,\alpha_2}(Y)]$:
$$Pr[Q_{\alpha_1,\alpha_2}(Y+n)=Q_{\alpha_1,\alpha_2}(Y)] = Pr[Q_{\alpha_1,\alpha_2}(Y+n)-Q_{\alpha_1,\alpha_2}(Y)=0] \\ =E[1_{\{Q_{\alpha_1,\alpha_2}(Y+n)-Q_{\alpha_1,\alpha_2}(Y)=0\}}]$$
According to Chebyshev's inequality, we have:
$$Pr[|Y - EY|>x] \le \frac{P^{\alpha_1}}{x^2} $$
$$Pr[|n-En|>x] \le \frac{P^{\alpha_2}}{x^2} $$
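These bounds are easy to check empirically; the Gaussian $Y$ with variance $P^{\alpha_1}$ below is just an illustrative choice of distribution.

```python
import numpy as np

# Empirical check of the Chebyshev bound Pr[|Y - EY| > x] <= P**alpha1 / x**2;
# the Gaussian Y with variance P**alpha1 is an illustrative choice.
rng = np.random.default_rng(2)
P, alpha1 = 50.0, 1.0
Y = rng.normal(0.0, np.sqrt(P**alpha1), size=200_000)

checks = []
for x in (5.0, 10.0, 20.0):
    empirical = np.mean(np.abs(Y - Y.mean()) > x)  # empirical tail probability
    bound = P**alpha1 / x**2                        # Chebyshev's upper bound
    checks.append(empirical <= bound)
print(all(checks))  # True: every empirical tail sits under its bound
```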
It is hard for me to reach any definitive conclusions from this, but here are my ideas:
As $P \to \infty$, obviously the variances of $Y$ and $n$ will blow up. Depending specifically on what the distributions $P_Y$ and $P_n$ are (for example, they could have a point mass sitting at some predetermined value while the rest of the probability mass escapes off to infinity as $P$ increases without bound and the spread of the distribution, measured by $P^{\alpha_1}$ or $P^{\alpha_2}$, blows up), generically we would expect the probability that $Y$ or $n$ equals any given value $x$ to decrease. But again, this requires more information about the distributions: if they were absolutely continuous with respect to Lebesgue measure to begin with (as your comments about densities seem to indicate), then the probability that either random variable equals any given value $x$ was already $0$; likewise, if certain fixed point masses hold onto their probability indefinitely as the spread of the distribution increases, then we could see no change for certain values $x$.

Also, you have not imposed any condition on the families $P_n$, $P_Y$ demanding that they be tight as $P \to \infty$ (i.e. that no mass escapes to infinity, as in the possible example I mentioned earlier). This doesn't necessarily prevent the limit you mentioned from existing, since the distributions only have to be proper probability distributions (as opposed to possibly subprobability distributions) for every finite $P$ in order for the limit to have a chance of existing, but it does make it unclear to me why you are comfortable letting your variance blow up.
If we assume that $P_Y$ and $P_n$ obey the "generic case" (which, as mentioned before, is in general a very brave and bold assumption), then the spread of the distribution of $Y+n$ will blow up much faster than that of the distribution of $Y$, provided that $Y$ and $n$ are not negatively correlated. You don't say whether this is the case, so one can't say much in general; for the sake of being able to conclude something, let's assume briefly that they are at least uncorrelated or positively correlated. Then, as Chebyshev's inequality quantifies to some extent, the spread of the distribution of $Y+n$ will blow up much faster than the spread of the distribution of $Y$. This means, in particular, that the function $Q_{\alpha_1,\alpha_2}$ would have to "compress" the range of $Q_{\alpha_1,\alpha_2}(Y+n)$ relative to the range of $Y+n$ by an ever-increasing, hence arbitrarily large, amount in order for it to coincide often enough with the range of $Q_{\alpha_1,\alpha_2}(Y)$ for the probability in (1) to increase to $1$ as $P \to \infty$.
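To illustrate this compression point numerically: with independent Gaussian $Y$ and $n$ (purely illustrative choices on my part) and a uniform quantizer whose step does *not* grow with $P$, the agreement probability in (1) degrades as $P$ grows, which is consistent with the cells having to widen without bound.

```python
import numpy as np

# Illustration of the "compression" point: for a uniform quantizer whose
# step does NOT grow with P, Pr[Q(Y+n) = Q(Y)] decays as the variances blow
# up, suggesting the cells must widen with P for (1) to hold in the limit.
# Independent Gaussian Y and n are purely illustrative choices.
rng = np.random.default_rng(3)
alpha1, alpha2, step = 1.0, 0.5, 4.0

def agreement_prob(P, size=200_000):
    Y = rng.normal(0.0, np.sqrt(P**alpha1), size)
    n = rng.normal(0.0, np.sqrt(P**alpha2), size)
    cell = lambda v: np.round(v / step)  # index of the quantizer cell
    return np.mean(cell(Y + n) == cell(Y))

probs = [agreement_prob(P) for P in (10.0, 100.0, 1000.0)]
print(probs[0] > probs[1] > probs[2])  # True: agreement degrades as P grows
```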
I don't quite know how to prove or state this rigorously, but seemingly the only way in which $Q_{\alpha_1,\alpha_2}$ could be guaranteed to achieve "arbitrarily large compression" is if its range were fixed to be a subset of the real line with finite Lebesgue measure (but not necessarily finite cardinality). So in this special case it seems like your intuition might have some validity.
In conclusion, it seems more likely than not that the claim that the cardinality of $\hat{\mathcal{Y}}$ must be finite is false. However, since there are too few assumptions (for instance, you don't even specify whether $Y$ or $n$ are discrete, continuous, or mixed, something which has an enormous bearing on the question), it would be too laborious to go through all of the myriad possibilities and give detailed examples of how and why things go right or wrong in each and every case.