I have a basic question about the Binomial distribution:
If I understand correctly, the standard deviation is given by $\text{std} = \sqrt{np(1-p)}$, where $p$ is the success probability and $n$ is the number of trials. But something is bothering me about this formula: if $\text{std} = \sqrt{np(1-p)}$, then it increases with $n$. Shouldn't the std decrease when $n$ increases?
I have to analyze the success-rate error in a session of trials. I have a binary vector describing the success/failure of each trial, and I am trying to work out the error of my session mean, which I think should be denoted $\hat p$. So, should I take $p$ as 0.5? The task counts as learned if the success rate is 1 (success in 100% of trials), but I can't take $p = 1$ because then I would have $q = 0$. I guess $H_0$ is $p = 0.5$ and $H_1$ is $p > 0.5$? But I also need to check how close $p$ is to 1. I mean, the closer my sample gets to 1, the bigger my error will be, and I need my error to get smaller as my trials vector approaches $(1,1,1,1,\dots)$.
In short: how do I analyze the error of a sample in which the following is known: the session mean, the number of trials in the session, and the success/failure of each trial?
Thanks

I think you are confusing this with the standard deviation of the sample mean. The standard deviation of the sample mean does decrease with larger sample size: for the sample proportion $\hat p$ it is $\sqrt{p(1-p)/n}$. The formula $\sqrt{np(1-p)}$, on the other hand, is the standard deviation of the *number of successes* out of $n$ trials. Naturally, the more trials there are, the wider the range of possible success counts. E.g. if $n = 5$, the possible success counts are $\{0, 1, 2, 3, 4, 5\}$; if $n = 500$, they are $\{0, 1, \dots, 500\}$.