A Minitab blog post claims that:
"As you increase the sample size, the sampling error decreases and the intervals become narrower. If you could increase the sample size to equal the population, there would be no sampling error. In this case, the confidence interval would have a width of zero and be equal to the true population parameter."
That makes sense intuitively, but I don't see it mathematically. Let's say I had a population of size 100 with a known mean and known population standard deviation. If I were to sample the entire population, my confidence interval would not have zero width:
$$\text{CI} = \bar{x}_{\text{sample}} \pm z \cdot \frac{\sigma}{\sqrt{n}}$$

The sample mean would become the population mean, but $z \cdot \sigma/\sqrt{n}$ would still be some nonzero number (depending on the confidence level chosen for $z$), since $n = 100$ is finite. Why doesn't the $z \cdot \sigma/\sqrt{n}$ term go to zero for a finite population?
The quantity that goes to zero is not $\sigma$ itself but the standard error of the sample mean. The usual $\sigma/\sqrt{n}$ formula assumes sampling with replacement (or an effectively infinite population). When you sample without replacement from a finite population, the standard error picks up a finite population correction, and that correction is exactly zero when $n = N$: once you have observed the whole population, there is no sampling uncertainty left.
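To make that concrete, here is the standard error with the finite population correction, a standard result for simple random sampling without replacement, written with $N$ for the population size and $n$ for the sample size:

$$\operatorname{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}} \sqrt{\frac{N - n}{N - 1}}$$

With $N = n = 100$, the correction factor is $\sqrt{(100 - 100)/99} = 0$, so the interval $\bar{x} \pm z \cdot \operatorname{SE}(\bar{x})$ collapses to the single point $\bar{x} = \mu$, which is exactly what the Minitab quote describes.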