Please see Charles Hudgins' answer below for a beautiful solution.
I have encountered a problem that I'm not sure I can solve myself. I've spent countless hours spinning my wheels on it, and I've begun to wonder whether what I'm trying to do is even possible (or logical).
The nature of the problem: I have a simulation of an object in motion that is both decelerating and experiencing a drag force that depends on the object's velocity at any given point in time. I want to find the time required for the object to come to a complete stop (TTS). The equation for updating the velocity from frame to frame is:
$$ V_f = (V_i + at)(1 - rt) $$ $V_f$ is the next simulation step's velocity, $V_i$ is the current simulation step's velocity, $a$ is a constant acceleration on the object (slowing it down), $r$ is the drag coefficient (between $0$ and $1$), and $t$ is the simulation time step (generally $0.02$ seconds). Whenever the drag coefficient is $0$, this model reduces to the constant-acceleration kinematic equation $V_f = V_i + at$. In that case I can reliably use the standard equations of motion to solve for the time to stop with ease.
Now, when the drag coefficient is not $0$, many things obviously change. The main change is that the drag term modifies the velocity at every point in time. At any given point I know $V_i$, $a$, and $r$. It seemed obvious to me that plugging in $0$ for $V_f$ and then solving for $t$ would give the correct result, but it is very wrong.
Can anyone point me in the right direction to derive the time to stop from this equation? And/or sanity-check me on whether this is even feasible given the time-stepped nature of the simulation. Since Newton's laws work without drag, I am optimistic there is a continuous solution that includes drag. I have tried integrating so I could solve over a given range, but it gets weird because the drag term depends on the instantaneous velocity.
Note - I have implemented an iterative solver that simply steps this equation forward, but for most of my use cases that is not ideal due to the sheer number of simulations running at any given time. I have included the simulation (which works) below for reference. I would much prefer an integral or closed-form expression that solves this exactly. Cubic and/or quartic is no problem.
private float SimulateTimeToStop(float vi, float a, float r, float timeStep)
{
    var tts = 0f;
    while (vi > 0)
    {
        // Apply the constant acceleration, then the velocity-dependent drag.
        vi = (vi + a * timeStep) * (1 - r * timeStep);
        tts += timeStep;
    }
    return tts;
}
Here are some (simulated) sample results.
Scenario 1: Vi = 15, r = 0.5, a = -6
TTS = 1.64 seconds
Scenario 2: Vi = 45, r = 1, a = -15
TTS = 1.4 seconds
Scenario 3: Vi = 28, r = 0.75, a = -3
TTS = 2.78 seconds
Scenario 4: Vi = 35, r = 0.85, a = -5
TTS ??
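For anyone who wants to reproduce the sample numbers, here is a direct Python port of the C# `SimulateTimeToStop` above (my own translation; Python's double-precision floats stand in for C#'s `float`, which doesn't change the first two decimals in these scenarios):

```python
def simulate_time_to_stop(vi, a, r, time_step=0.02):
    """Direct port of the C# SimulateTimeToStop simulation."""
    tts = 0.0
    while vi > 0:
        # Apply the constant acceleration, then the velocity-dependent drag.
        vi = (vi + a * time_step) * (1 - r * time_step)
        tts += time_step
    return tts

print(round(simulate_time_to_stop(15, -6, 0.5), 2))   # Scenario 1 -> 1.64
print(round(simulate_time_to_stop(45, -15, 1), 2))    # Scenario 2 -> 1.4
print(round(simulate_time_to_stop(28, -3, 0.75), 2))  # Scenario 3 -> 2.78
```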
Set $V_f = v + dv$ and $V_i = v$. Then
$$ v + dv = (v + a\,dt)(1 - r\,dt) $$
Expanding,
$$ v + dv = v - rv\,dt + a\,dt + O(dt^2) $$
so
$$ dv = (a - rv)\,dt + O(dt^2) $$
which gives the differential equation
$$ \frac{dv}{dt} = a - rv $$
This integrates to
$$ v = \frac{1}{r} \left(a - Ce^{-rt}\right) $$
for some constant $C$.
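To finish the computation (this last step is my own addition to the answer above, so double-check the algebra): imposing the initial condition $v(0) = V_i$ gives
$$ V_i = \frac{1}{r}(a - C) \quad\Longrightarrow\quad C = a - rV_i $$
and setting $v(t) = 0$ gives $Ce^{-rt} = a$, so
$$ t_{\text{stop}} = \frac{1}{r}\ln\frac{C}{a} = \frac{1}{r}\ln\!\left(1 - \frac{rV_i}{a}\right) $$
Since $a < 0$ and $V_i > 0$, the argument of the logarithm exceeds $1$, so $t_{\text{stop}}$ is positive. As a sanity check against Scenario 1 ($V_i = 15$, $r = 0.5$, $a = -6$): $t_{\text{stop}} = 2\ln(2.25) \approx 1.62$ seconds, which agrees with the simulated $1.64$ seconds up to the discretization error of the time-stepped loop (it overshoots by up to one time step).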