How to model server load?


I want to model a server under load.

I'm using the following assumptions:

  • The server serves only one request at a time, and every request takes exactly 100 ms to process.
  • Requests that arrive while the server is processing another request are placed into an unbounded queue and then processed in FIFO order.
  • Load is generated by "users", each of which makes a request, waits for it to be served, then waits 5-15 s (uniformly distributed) before making the next request.
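The setup above can be sketched as a small discrete-event simulation. This is just a sanity-check sketch under my own assumptions: the function name, the 10,000 s horizon, and the seed are arbitrary choices, not part of the model.

```python
import heapq
import random

def simulate(n_users, service=0.1, think=(5.0, 15.0), horizon=10_000.0, seed=0):
    """Closed-queue simulation: n_users alternate between thinking 5-15 s
    and issuing a request to a single FIFO server with 100 ms service time.

    Returns the mean time from request arrival to completion of service.
    """
    rng = random.Random(seed)
    # Event heap of (arrival_time, user_id); each user starts after one think.
    events = [(rng.uniform(*think), u) for u in range(n_users)]
    heapq.heapify(events)
    server_free_at = 0.0           # time at which the server next becomes idle
    total_wait, served = 0.0, 0
    while events:
        t, u = heapq.heappop(events)
        if t > horizon:
            break
        start = max(t, server_free_at)   # queue in FIFO order if server busy
        finish = start + service
        server_free_at = finish
        total_wait += finish - t         # waiting = queueing delay + service
        served += 1
        # After being served, the user thinks 5-15 s, then requests again.
        heapq.heappush(events, (finish + rng.uniform(*think), u))
    return total_wait / served
```

Since the heap pops requests in arrival-time order and each user's next request is always later than their last completion, the FIFO assumption is preserved automatically. Running `simulate(n)` for a range of `n` gives an empirical f(n) to compare any analytical formula against.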

The parameter I am most interested in is the mean time a user waits for a request to complete. Ideally, I want to find a function f(n), where n is the number of users and f(n) is the mean waiting time.

Modeling this for n = 1 is easy: f(1) = 0.1. It is also easy for two users: the probability that the second user's request arrives while the first is being served is 0.1/10 = 0.01, and in that case the remaining service time is uniform on (0, 100 ms), so the second user waits 50 ms extra on average. Hence f(2) = 0.99 * 0.1 + 0.01 * 0.15 = 100.5 ms.
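The n = 2 arithmetic can be written out as a quick check (the variable names are mine):

```python
# A request's 100 ms service window falls inside the other user's
# uniform 5-15 s think-time gap with probability 0.1 / 10 = 0.01.
p_collision = 0.1 / 10.0

# On a collision the remaining service time is uniform on (0, 0.1 s),
# so the colliding user waits 0.05 s extra on average plus 0.1 s service.
f2 = (1 - p_collision) * 0.1 + p_collision * (0.1 + 0.05)  # ~0.1005 s = 100.5 ms
```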

But I'm stuck when I try to model more users.

Any suggestions?