How long does it take for me to get food at the buffet line if there are "cheaters"?


This is a real life problem. On my way home from work I have to clear a traffic jam at the 605-405 on-ramp in California. At this junction there are 4 lanes, the two on the left merge onto 405, the two on the right go straight and exit into the city. Cheaters like to use the right 2 lanes (which are not congested) to get to the front of the line and then merge in. At peak hours, it can be backed up for 3/4 of a mile.

The issue is: the more people cheat, the longer it takes for honest people in the back to clear the jam. But as they spend more time in the back, even more cheaters get to cheat on them. So there is compounding.

I would like to find the formula for how long it takes to get to the front of the line f(h,d,x) where:

  • h is the throughput at the front of the line (cars/minute)
  • d is my distance from the front of the line (cars)
  • x is the rate of cheaters (cars/minute)

Technically the cheaters often start merging in around 1/8 mile from the front of the line. But for simplicity let's assume they always merge in at the very front. And yes, this injustice bothers me to no end.

ANALOGY

How long does it take for me to get my food at the buffet line if there are cheaters?

Assuming cheaters always jump in at the front of the line, I'd like to find the formula for how long it takes for me to get food f(h,d,x) where:

  • h is the throughput at the front of the line (persons per minute)
  • d is my distance from the front of the line (persons)
  • x is the rate of cheaters (persons per minute)

There are 3 best solutions below

On BEST ANSWER

In your model, the cheaters are stealing processing time. The $h$ persons per minute of throughput is shared between the cheaters and the noncheaters. Since $x$ cheaters per minute are served, $h-x$ noncheaters per minute are served, so you will be served in $\frac{d}{h-x}$ minutes. If people didn't regard one line as cheating, the equilibrium would be for the noncheater line and the cheater line to have the same length: each arrival would choose a line at random, and their expected service time would be the same as if there were no cheaters and only one line. At that point it is hard to call the people in the second lane cheaters.
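This can be sanity-checked with a short simulation. A minimal Python sketch (function names and the step size `dt` are my own choices, not from the question):

```python
def wait_time(d, h, x):
    """Closed-form wait from the answer: cheaters steal x of the h
    servings per minute, so the honest queue drains at h - x per minute."""
    if x >= h:
        raise ValueError("if cheaters arrive as fast as service, you never eat")
    return d / (h - x)

def simulate(d, h, x, dt=0.001):
    """Step the line forward in small time increments: each step, the
    server clears h*dt people while x*dt cheaters cut in ahead of you."""
    ahead, t = float(d), 0.0
    while ahead > 0:
        ahead += x * dt   # cheaters jump in at the very front
        ahead -= h * dt   # front of the line is served
        t += dt
    return t

# 30 people ahead, 5 served/minute, 2 cheaters/minute
print(wait_time(30, 5, 2))           # → 10.0 minutes
print(round(simulate(30, 5, 2), 1))
```

The simulation agrees with $\frac{d}{h-x}$ whenever $x < h$; as $x$ approaches $h$ the wait diverges, which matches the compounding intuition in the question.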

The presence of two processing lanes on the freeway changes things from your analogy. Now you have a choice of two noncheating lanes approaching the intersection, one of which has cheaters merging in. It is natural to assume that cars leave the intersection at the same rate in each lane, so you should be in the lane not affected by the cheaters' merge. In the real case, where cheaters merge in over a range, the lane affected by cheaters speeds up as it nears the intersection, because fewer cheaters remain to merge in beyond any given point. And because the lane next to the cheaters is slower, people will be merging from that lane into the other one. Now you need to observe where people merge from one lane to another, and the merge rate as a function of distance from the intersection. It gets hard.

I find local observation is quite valuable. Different freeway areas have different lanes that are advantageous.


Assume all cheaters come from behind you. Then, based on your estimated percentage of cheaters, they add some fixed fraction (say 1%) to the throughput needed at the front of the line. Even if this effect compounds, the system stays bounded as long as the throughput can absorb a factor of $(1.01)^z$, where $z$ takes some natural number as its value. It's like an amortized mortgage: you pay a constant amount to pay off a compounding loan.


One approach is to regard the exit lane as a queueing system with two classes of customers, one having higher priority than the other, and a single server, the exit. If the higher-priority customers, i.e. the cheaters, are class $1$, then the average time they spend in the queue waiting for the exit is $$W_{q_1} = \frac{\lambda}{\mu (\mu-\lambda_1)}$$ and the average time spent waiting in the queue by the class $2$ customers, i.e. the good citizens, is $$W_{q_2} = \frac{\lambda}{(\mu-\lambda)(\mu-\lambda_1)}$$ where $\lambda_i$ is the arrival rate for customers of class $i$ for $i=1,2$, $\lambda = \lambda_1+\lambda_2$, and $\mu$ is the service rate.

This all assumes the system has a steady state and that times between arrivals and service times are exponentially distributed, which is probably false here, but it still gives you an idea.

By contrast, if all customers are a single class, then the average time in queue is $$W_q = \frac{\lambda}{\mu(\mu-\lambda)}$$
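As a quick numerical illustration of these three formulas, here is a short Python sketch (the rates are made-up examples, not from the question):

```python
def priority_waits(lam1, lam2, mu):
    """Mean queue waits in an M/M/1 queue with two non-preemptive
    priority classes sharing a common service rate mu.
    lam1: arrival rate of class 1 (cheaters),
    lam2: arrival rate of class 2 (good citizens)."""
    lam = lam1 + lam2
    assert lam < mu, "queue is unstable unless total arrivals < service rate"
    wq1 = lam / (mu * (mu - lam1))           # high-priority wait
    wq2 = lam / ((mu - lam) * (mu - lam1))   # low-priority wait
    wq  = lam / (mu * (mu - lam))            # single-class M/M/1 wait
    return wq1, wq2, wq

# e.g. 2 cheaters/min, 6 good citizens/min, exit clears 10 cars/min
wq1, wq2, wq = priority_waits(2.0, 6.0, 10.0)
print(wq1, wq2, wq)   # → 0.1 0.5 0.4 (minutes)
```

Note the conservation law $\lambda_1 W_{q_1} + \lambda_2 W_{q_2} = \lambda W_q$: priority does not change the total waiting in the system, it only shifts it from the cheaters onto the good citizens.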

Reference: Fundamentals of Queueing Theory by Donald Gross and Carl M. Harris.