I have a normally distributed random variable $X$ with mean $\mu$ and standard deviation $\sigma$. I don't believe it matters, but this distribution was obtained by summing a large number of independent, identically distributed random variables with finite variance (hence invoking the central limit theorem).
It seems intuitive that $X - \lfloor X \rfloor$ should come closer and closer to a uniform random variable on $(0,1)$ as the variance of $X$ increases, and that in the limit it should become exactly uniform. Is there a proof of this claim, or a refutation of it?
Context: this is going to help "complete" the accepted answer here: As the variance of a random variable grows, the conditional distribution of it residing in an interval of length $1$ becomes uniform. Larger picture, I'm trying to prove Blackwell's theorem from renewal theory. See here for details: Going "well into the lifetime" of a renewal process means the time until the next event will be uniform conditional on inter-arrival?
You are wrong. It is of crucial importance how you obtain this approximately Gaussian distribution: you are looking at the fine structure of the distribution, so the CLT is of no help here. A counter-example: let $(X_k)$ be an i.i.d. sequence of integer-valued random variables with variance $0<\sigma^2<+\infty$. Then $S_n=X_1+\cdots+X_n$ satisfies the CLT, i.e. $(S_n-{\Bbb E}(S_n))/\sqrt{n} \to {\cal N}(0,\sigma^2)$ in distribution, but $S_n-\lfloor S_n\rfloor$ is identically zero.
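To see the counter-example numerically, here is a minimal Python sketch (the function name `fractional_parts` is mine, chosen for illustration): with Bernoulli$(1/2)$ steps, which are integer-valued, every fractional part of $S_n$ is exactly zero no matter how large $n$ is, even though $S_n$ is approximately Gaussian.

```python
import random

random.seed(0)

def fractional_parts(n_steps, n_samples):
    """Sample S_n = X_1 + ... + X_n for integer-valued Bernoulli steps
    and return the fractional parts S_n - floor(S_n)."""
    parts = []
    for _ in range(n_samples):
        s = sum(random.randint(0, 1) for _ in range(n_steps))  # S_n is an integer
        parts.append(s % 1)
    return parts

parts = fractional_parts(n_steps=1000, n_samples=200)
print(all(p == 0 for p in parts))  # every fractional part is exactly 0
```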
Thus you may not have a normally distributed variable in the sense you would like. In the generality stated, the claim does not hold. Other posts deal with various ways of taking the limit. In view of your description, I believe the relevant problem you want to address is: under what conditions on the distribution of an i.i.d. sequence $(X_k)$ does $S_n \ {\rm mod}\ 1$ converge in distribution to ${U}([0,1))$? For this we have the following complete characterization:
Theorem: Let $(X_k)_k$ be a sequence of i.i.d. real valued random variables and define for each $m\in {\Bbb Z}$: $\gamma_m = {\Bbb E} \left( e^{2\pi i m X_1} \right)$. Set $S_n=X_1+\cdots+X_n$. Then the following are equivalent:

1. $S_n \ {\rm mod}\ 1$ converges in distribution to ${U}([0,1))$.
2. $|\gamma_m|<1$ for every non-zero $m\in{\Bbb Z}$.
3. There is no non-zero $m\in{\Bbb Z}$ and no $\theta\in[0,1)$ for which $mX_1 \in \theta+{\Bbb Z}$ almost surely.
Proof: By the i.i.d. condition, $${\Bbb E} \left( e^{2\pi i m S_n} \right) = {\Bbb E} \left( e^{2\pi i m X_1} \right)^n = \gamma_m^n.$$ Thus if $g$ is a $1$-periodic trigonometric polynomial, then ${\Bbb E}(g(S_n))\to \int_0^1 g$ whenever $|\gamma_m|<1$ for every non-zero $m$. Conversely, if $\gamma_m=e^{2\pi i\theta}$ for some non-zero $m$, then ${\Bbb E}(g(S_n))=e^{2\pi i n\theta}$ for $g(x)=e^{2\pi i m x}$, which has modulus $1$ and does not converge to $\int_0^1 g=0$ (as in the above counter-example). As trigonometric polynomials are dense in the $1$-periodic continuous functions, we get $1\Leftrightarrow 2$. To see that $2$ and $3$ are equivalent, simply note that for non-zero $m$, $${\Bbb E} \left( e^{2\pi i m X_1} \right) = e^{2 \pi i \theta}$$ iff $mX_1 \in \theta +{\Bbb Z}$ almost surely.//
Note that the above holds without any further assumptions on $X_1$: it need not even be integrable, so in particular the usual CLT need not apply. I imagine this result is well known to specialists.