I'm trying to model the Linux process scheduling mechanism using queueing theory.
Assuming that process arrivals form a Poisson process, that service times are exponentially distributed, and that the system contains c independent logical cores, this scenario approximates an M/M/c model.
The challenge, however, is that the M/M/c model assumes each task runs to completion in first-come-first-served order. In practice, Linux uses round-robin scheduling (ignoring the effects of the Completely Fair Scheduler): the system sets a time slice of length s, and no task runs continuously for longer than s. If a task has not finished when its slice expires, it is preempted and placed back at the end of the waiting queue.
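To make the setup concrete, here is a small discrete-event simulation of the round-robin variant that I use as a numerical baseline (the function name and parameters `lam`, `mu`, `c`, `s` are my own naming, not from any library); what I'm after is an analytical counterpart to this:

```python
import heapq
import random
from collections import deque

def simulate_rr_mmc(lam, mu, c, s, n_jobs=20000, seed=1):
    """Simulate an M/M/c queue under round-robin scheduling with time
    slice s.  Arrivals are Poisson(lam); total service demands are
    exponential(mu).  Returns the mean time a job spends waiting in the
    ready queue (time spent running on a core is excluded)."""
    rng = random.Random(seed)
    # Pre-generate arrival times and total service demands.
    t = 0.0
    arrivals = []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append(t)
    remaining = [rng.expovariate(mu) for _ in range(n_jobs)]

    waited = [0.0] * n_jobs      # accumulated ready-queue time per job
    enq_time = [0.0] * n_jobs    # when the job last entered the ready queue
    ready = deque()              # FIFO ready queue of job ids
    free_cores = c
    # Event heap: (time, kind, job id); kind 0 = arrival, 1 = slice end.
    events = [(a, 0, j) for j, a in enumerate(arrivals)]
    heapq.heapify(events)

    def dispatch(now):
        nonlocal free_cores
        while free_cores and ready:
            j = ready.popleft()
            waited[j] += now - enq_time[j]
            free_cores -= 1
            heapq.heappush(events, (now + min(remaining[j], s), 1, j))

    while events:
        now, kind, j = heapq.heappop(events)
        if kind == 0:                     # arrival: join the tail of the queue
            enq_time[j] = now
            ready.append(j)
        else:                             # slice expired (or job finished)
            free_cores += 1
            remaining[j] -= min(remaining[j], s)
            if remaining[j] > 0.0:        # preempted: back to the tail
                enq_time[j] = now
                ready.append(j)
        dispatch(now)

    return sum(waited) / n_jobs
```

With a very large s the simulation degenerates to plain FCFS M/M/c, which gives me a sanity check against the standard formulas; with small s it exhibits the round-robin behavior described above.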
Given this context, how should I calculate the average waiting time for each process?
I know that this scenario can be approximated by a processor-sharing (PS) model, but the PS models in the references I've found all assume M/M/1. How can they be extended to the multi-processor M/M/c case?
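For reference, my current line of reasoning, which I'd like checked: with exponential service times, the total service rate when n jobs are in the system is min(n, c)·μ under both FCFS and PS, so the number-in-system process should be the same birth-death chain in both cases, and by Little's law the mean waiting time would then equal the ordinary M/M/c (Erlang C) value. This is the baseline I'd compare the simulation against (the function name is mine):

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean queueing delay Wq of an M/M/c FCFS queue (Erlang C formula)."""
    a = lam / mu                  # offered load in Erlangs
    rho = a / c                   # per-core utilisation
    if rho >= 1.0:
        raise ValueError("unstable: need lam < c * mu")
    idle_sum = sum(a**k / factorial(k) for k in range(c))
    tail = (a**c / factorial(c)) / (1.0 - rho)
    p_wait = tail / (idle_sum + tail)   # P(an arrival has to wait)
    return p_wait / (c * mu - lam)      # mean wait in queue, Wq
```

Is this equivalence argument actually valid for round-robin with a finite slice s, or does it only hold in the s → 0 (ideal PS) limit?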