I am fairly new to statistics and just recently encountered queueing theory.
I have programmed a simulation of an $M/M/1$ queue in which I specify the inter-arrival times and the service times. As input I use, say, an exponential distribution with a mean of $1$ for both the inter-arrival and the service time.
I also measure the effective arrival rate: I run for, say, $1000$ time steps, and at each time step a random value is drawn from the exponential distribution. I collect these random values in a list and then take the mean of the list as the effective mean inter-arrival time. In theory this value should converge to the mean of the distribution, yet in practice I end up with values that are not so close to it.
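For reference, a minimal sketch of this measurement (assuming NumPy; the variable names are my own) looks like:

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility

n_steps = 1000
# One exponential inter-arrival time (mean 1) drawn per time step
inter_arrivals = rng.exponential(scale=1.0, size=n_steps)

# Effective mean inter-arrival time = sample mean of the draws
effective_mean = inter_arrivals.mean()
print(effective_mean)  # close to 1, but not exactly 1
```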
My question is: how many random values from an exponential distribution should I draw so that their sample mean is close to the mean of the distribution?
The exponential distribution with mean $1$ also has standard deviation $1$. If you collect statistics for $1000$ time units, you should get a sample mean of about $1$ with a standard error of about $\sigma/\sqrt{n} = 1/\sqrt{1000} \approx 0.032$. When you say your values are not so close to the mean, how far off are they?
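To make this concrete, here is a sketch (NumPy assumed) that repeats the experiment many times and checks that the sample mean of $n = 1000$ exponential draws fluctuates around $1$ with standard error close to $1/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 1000     # draws per experiment
reps = 2000  # number of repeated experiments

# Each row is one experiment of n exponential(mean=1) draws
samples = rng.exponential(scale=1.0, size=(reps, n))
sample_means = samples.mean(axis=1)

empirical_se = sample_means.std(ddof=1)  # spread of the sample means
theoretical_se = 1.0 / np.sqrt(n)        # sigma / sqrt(n), with sigma = 1

print(empirical_se, theoretical_se)  # both approximately 0.032
```

Since the standard error shrinks only like $1/\sqrt{n}$, getting one more decimal digit of accuracy requires roughly $100$ times as many draws.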
By the way, you should be aware that if both the inter-arrival and the service times have mean $1$, then the utilization is $\rho = \lambda/\mu = 1$ and the queue is not stable: the queue length has no steady-state distribution and keeps growing as the simulation runs longer.
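You can see the instability directly in a simulation. Below is a hedged sketch (NumPy assumed, function name my own) using the Lindley recursion $W_{k+1} = \max(0,\, W_k + S_k - A_{k+1})$ for the waiting times of successive customers in a single-server FIFO queue; at $\rho = 0.5$ the mean wait settles down, while at $\rho = 1$ it keeps growing with the run length:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def mean_wait(arrival_rate, service_rate, n_customers, rng):
    """Average waiting time over n_customers, via the Lindley recursion
    W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
    inter_arrivals = rng.exponential(1.0 / arrival_rate, size=n_customers)
    services = rng.exponential(1.0 / service_rate, size=n_customers)
    w = 0.0
    total = 0.0
    for k in range(n_customers - 1):
        w = max(0.0, w + services[k] - inter_arrivals[k + 1])
        total += w
    return total / (n_customers - 1)

# rho = 0.5: stable, the mean wait settles near its steady-state value
print(mean_wait(0.5, 1.0, 200_000, rng))
# rho = 1: no steady state -- the mean wait keeps growing with run length
print(mean_wait(1.0, 1.0, 200_000, rng))
```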