Consider the following situation: margin of error = 2%, confidence level = 95%.
Why do you require a smaller sample size to produce the same margin of error in a confidence interval when you have a lower probability of success / sample proportion? Shouldn't it be a higher sample proportion that needs fewer observations?
Can someone give me an example to understand this better?
A $C\%$ confidence interval around a parameter is constructed so that if you drew many samples and built an interval from each, about $C\%$ of those intervals would capture the true value of the parameter. The not-quite-right way to understand this is "I am $C\%$ confident that my interval has captured the true value".
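This repeated-sampling interpretation is easy to check by simulation. Here is a minimal sketch for the 95% case (assuming normally distributed data with a known standard deviation, so the simple z-interval applies): draw many samples, build an interval from each, and count how often the true mean is captured.

```python
import random

random.seed(0)
true_mean, sigma, n, z = 10.0, 2.0, 30, 1.96
trials, captured = 10_000, 0

for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half_width = z * sigma / n ** 0.5  # known-sigma z-interval
    if xbar - half_width <= true_mean <= xbar + half_width:
        captured += 1

print(captured / trials)  # close to 0.95
```

The long-run capture rate hovers around 0.95, even though any single interval either contains the true mean or it doesn't.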
If you have a fixed sample size, then making the confidence interval wider increases the probability that you've captured the true value of the parameter, because you're casting a wider net: if you extend the interval all the way to infinity in both directions, you'll be 100% confident that it contains the true value.
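To see that tradeoff concretely: for a normal-based interval the half-width is a z-multiplier times the standard error, and the multiplier grows with the confidence level. A quick sketch using the standard normal quantile function:

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

for conf in (0.80, 0.90, 0.95, 0.99):
    # two-sided multiplier: z such that P(-z < Z < z) = conf
    z = std_normal.inv_cdf((1 + conf) / 2)
    print(f"{conf:.0%} interval: half-width = {z:.3f} standard errors")
```

Pushing the confidence level toward 100% forces the multiplier (and hence the interval width) toward infinity.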
If you keep the width of the interval fixed but change the sample size, then you're changing how much information you have about the variability of the data. With only 10 sample units, you might not be very confident that your guess at the true value is particularly good, but if you have 1000 sample units and make the same guess, you have much stronger evidence that you're in the right vicinity.
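Coming back to the original question about the sample proportion: under the usual normal approximation, the margin of error for a proportion is $z\sqrt{p(1-p)/n}$, so the sample size needed for a given margin of error scales with $p(1-p)$. That product peaks at $p = 0.5$ and shrinks as $p$ moves toward 0 or 1, which is why a lower (or higher) sample proportion needs fewer observations, not more. A quick sketch for the 2%, 95% setting from the question:

```python
# Solve moe = z * sqrt(p * (1 - p) / n) for n:
#   n = z^2 * p * (1 - p) / moe^2
def required_n(p, moe=0.02, z=1.96):
    return z ** 2 * p * (1 - p) / moe ** 2  # round up in practice

for p in (0.5, 0.3, 0.1):
    print(f"p = {p}: n ~ {required_n(p):.0f}")
# p = 0.5 needs ~2401 observations; p = 0.1 needs only ~864
```

Intuitively, a proportion near 0.5 is the hardest case: outcomes are maximally unpredictable there, so the sampling variability is largest and the most data is needed to pin the proportion down.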