Autocorrelation test for Bernoulli distribution


I'm running a Bernoulli experiment with success probability P at discrete time intervals. After t = 100 trials I have an average success rate of 0.6 (60 successes and 40 failures), so I set P = 0.6. (?)

Now, I assume I'm doing something wrong, but this is/was my approach:

I take the mean to be 0.6, i.e. P = 0.6, which means the variance is P*(1-P) = 0.24. (I assume this doesn't change over time, because I assume P is constant over all times t.)

If I want to check the autocorrelation between t = 0 and t = 1, I try to follow the formula on Wikipedia: R(s,t) = E[(X_s - mu_s)(X_t - mu_t)] / (sigma_s * sigma_t).
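To make the question concrete, here is a minimal sketch (Python; the function name and the toy sequence are my own, not from any library) of what I understand the sample version of that formula to be at lag k:

```python
import numpy as np

def sample_autocorr(x, k):
    """Sample autocorrelation of the 0/1 sequence x at lag k:
    average product of mean-deviations k steps apart, divided by the variance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()
    var = x.var()  # biased (divide-by-n) variance; close to P*(1-P) for large n
    if var == 0:
        return float("nan")  # constant sequence: autocorrelation is undefined
    return float(np.sum((x[:n - k] - mu) * (x[k:] - mu)) / (n * var))

# A perfectly alternating sequence should come out close to -1 at lag 1:
print(sample_autocorr([0, 1] * 50, 1))  # -0.99
```

As I understand it, the key point is the averaging over all pairs, not evaluating the formula on a single pair of outcomes.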

Assuming both trials were successful, I plug in and get:

((1-0.6)*(1-0.6)) / (SQRT(0.24)*SQRT(0.24)) = 0.4^2 / 0.24 ≈ 0.6667

Similarly, if both trials were unsuccessful (both 0s), I get:

((0-0.6)*(0-0.6)) / (SQRT(0.24)*SQRT(0.24)) = (-0.6)^2 / 0.24 = 1.5

If one is successful and the other isn't, I get:

((1-0.6)*(0-0.6)) / (SQRT(0.24)*SQRT(0.24)) = -0.24 / 0.24 = -1

Neither of these values makes sense to me: R(s,t) should be in the range [-1, 1], and when both trials have the same outcome (both successful or both unsuccessful), shouldn't the autocorrelation be 1?

What am I doing wrong? Am I missing something fundamental in my approach?

Any help, or a pointer in the direction of the right resources, would be greatly appreciated.

Tomas

EDIT: I have a large data set of tennis points, and I'm trying to see if there is any correlation between two points played one after the other (or between a point and the point played two points later, etc.). So I have a player's won percentage on his own serve (which I take as P), and I have an outcome X_i in {0, 1} for each point.
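In case it helps, this is the kind of check I'm trying to run, sketched here with simulated data standing in for the real tennis points (the variable names and the simulated sequence are placeholders, not my actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the real serve outcomes: 1000 independent points won with P = 0.6.
points = rng.binomial(1, 0.6, size=1000)

n = len(points)
mu = points.mean()
var = points.var()

# Sample autocorrelation at lags 1, 2, 3; for truly independent points
# these should all be close to 0.
for k in (1, 2, 3):
    r = np.sum((points[:n - k] - mu) * (points[k:] - mu)) / (n * var)
    print(f"lag {k}: r = {r:+.3f}")
```

With the real data, a value clearly away from 0 at some lag would be the evidence of dependence between consecutive points that I'm looking for.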