Infinitesimally small time intervals


When we say that in a small time interval $dt$ the velocity changes by $d\vec v$, so that the acceleration is $\vec a = d\vec v/dt$, are we not assuming that $\vec a$ is constant over that small interval $dt$? Otherwise, accounting for a change in acceleration $d\vec a$, the expression should have been $\vec a = \frac{d\vec v}{dt} - \frac{d\vec a}{2}$ (again assuming the rate of change of acceleration is constant). By the same argument, I could say that $\vec v$ is also constant in that time interval, and so $\vec a = \vec 0$.

Can someone point out where exactly I have gone wrong? This was just an example; my question is general.

There are 5 answers below.

Answer (score 0):

In your suggested answer, $d\vec v/dt$ is the ratio of two infinitesimals, so it can be finite and non-zero. However, $d\vec a/2$ is a lone infinitesimal, so you can treat it as zero when compared to the first term.

(If there were infinite acceleration at that moment, it could be an exception, but we normally assume the acceleration is finite.)
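A quick numerical sketch of this point (the velocity $v = bt^2 + ct$ and the values $b=3$, $c=2$ are made-up for illustration): as $dt$ shrinks, the ratio $dv/dt$ stays finite while the lone infinitesimal $da/2$ vanishes.

```python
def v(t):
    # velocity with non-constant acceleration: v = b*t**2 + c*t (made-up b, c)
    b, c = 3.0, 2.0
    return b * t**2 + c * t

def a(t):
    # exact acceleration: a = 2*b*t + c
    b, c = 3.0, 2.0
    return 2 * b * t + c

t = 1.0
for dt in (1e-1, 1e-3, 1e-5):
    dv_over_dt = (v(t + dt) - v(t)) / dt  # ratio of two infinitesimals: stays finite
    da_over_2 = (a(t + dt) - a(t)) / 2    # a single infinitesimal: goes to 0
    print(dt, dv_over_dt, da_over_2, dv_over_dt - da_over_2)
```

The last column shows that the questioner's corrected expression $dv/dt - da/2$ gives exactly $a(t)$, while the correction term itself becomes negligible.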

Answer (score 0):

You wrote that you haven't studied calculus.

Okay, then. Do not think of $dt$ and $dv$ as numbers. Instead, think of the whole expression $dv/dt=a$ as an abbreviation for "if $\Delta t$ is a very short time interval during which velocity changes by $\Delta v$, then at any time in that interval, $\Delta v/\Delta t$ is very close to $a$." Now you might worry about making precise sense out of things like "very small" and "very close". This is exactly what a good calculus course will teach you.
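A small numeric illustration of "very close" (using an arbitrary example, $v(t) = \sin t$, so $a(t) = \cos t$; none of this is from the answer itself): as $\Delta t$ shrinks, the quotient $\Delta v/\Delta t$ gets as close to $a$ as we like.

```python
import math

t = 0.5
a_exact = math.cos(t)  # derivative of sin at t
for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    quotient = (math.sin(t + dt) - math.sin(t)) / dt
    # the error shrinks roughly in proportion to dt
    print(dt, quotient, abs(quotient - a_exact))
```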

Answer (score 1):

You are right. Infinitesimals are an imprecise way of talking about limits. It is a little odd that mathematicians use them, because math is all about rigor.

Physicists tend to be looser about tiny mathematical details because they are interested in modeling the behavior of the universe. They can be satisfied with approximations. Sometimes that is the best they can get.

But mathematicians are interested in modeling ideas. They build up theorem after theorem to describe complex ideas. One false theorem can be used to prove literally anything and destroy the entire structure of math. It is built into the structure of "if a then b" statements. If a is false, then the statement is true no matter what b says. Mathematicians are nitpicky because they have to be.

A derivative is the slope of a function at a point. You get at it by choosing two nearby points and approximating it with $a = \Delta v/\Delta t$. As $\Delta t$ gets small, the approximation gets better. There is a bit of a conceptual difficulty: given any number that is almost right, you can find an approximation so good, using intervals so small, that you can show the number is wrong. But you never run out of values that are even closer to right.

Taking the limit is the way to rigorously show that all numbers but one can be eliminated as not good enough approximations. The only number that can't be eliminated is defined to be the derivative. This is how mathematicians think.

But then they see how easy it is to think about $\Delta v/\Delta t$, and they essentially cheat. In part, I think it has to do with traditional notation that goes all the way back to Newton.

If you use $\Delta v/\Delta t$, there are problems like the ones you raise. So they try to sweep the problems under the rug by making $\Delta v$ and $\Delta t$ so small that the error is $0$.

But that means $\Delta t$ would have to be $0$, which won't work. So they make $\Delta v$ and $\Delta t$ "infinitely small" and yet not $0$. And this works well enough. You have to ignore the fact that you can't say exactly what number actually is that small.

The best thing you can do is recognize that infinitesimals are an imprecise mental shortcut. They are a great tool if you ignore the logical nitpicks. There is a rigorous way to get the same answer, but you would have to use the language of limits all the time. It would be cumbersome.

Answer (score 7):

It is important to be careful when working with infinitesimals. The answer by @mmesser314 is a good answer (+1 from me) which is described in terms of limits and the so-called standard analysis. In that analysis an infinitesimal is not a number. More specifically, an infinitesimal is not a real number.

However, it is not the only possible rigorous approach. If we use the hyperreal numbers then we can indeed treat infinitesimals as actual numbers. In the hyperreal numbers an infinitesimal is a positive number that is smaller than any positive real number. (That statement can be made precise, but I am going for the concept rather than for mathematical rigor). A finite hyperreal is then a real number, $x$, plus an infinitesimal, $\epsilon$. If you take the "standard part", denoted $\mathrm{st}$, of a hyperreal $x+\epsilon$ then you get the real number without the infinitesimal, $\mathrm{st}(x+\epsilon)=x$.

Now, with that you can think of $dv$ and $dt$ as being legitimate infinitesimal numbers. The derivative of a function, $f$, is then defined as: $$\dot f(x) = \mathrm{st}\left( \frac{f(x+dx)-f(x)}{dx} \right)$$

So, let's see how this applies for your example of a non-constant acceleration. Let's say that we have $v(t)=b t^2 + c t$ and $a(t)=\dot v(t)$. Now, we will not assume that $a$ is constant but we will apply the definition above: $$\dot v(t) = \mathrm{st}\left( \frac{v(t+dt)-v(t)}{dt} \right)=\mathrm{st}\left( \frac{2 \ b \ t \ dt +c \ dt + b \ dt^2}{dt} \right)= \mathrm{st}\left(2bt+c+b \ dt \right)$$ Now, notice that the last term inside the $\mathrm{st}$ is infinitesimal, so it is dropped and we are left with $$\dot v(t)=2bt+c$$

So even treating infinitesimals as valid numbers and not treating the acceleration as constant we are able to get the correct result. This is because of the way that the $\mathrm{st}$ function chops off any remaining infinitesimals. Roughly using your original terminology if we are treating infinitesimals as valid hyperreal numbers then $$\vec a = \mathrm{st}\left(\frac{d\vec v}{dt} - \frac{d\vec a}{2}\right) = \frac{d\vec v}{dt}$$
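One way to mimic this mechanically on a computer (a sketch, not an implementation of the hyperreals) is with dual numbers: pairs $x + \epsilon x'$ where $\epsilon^2$ is always dropped. Multiplying and adding these behaves like "finite part plus one infinitesimal", and reading off the $\epsilon$-coefficient after the increment plays the role of $\mathrm{st}$ applied to the difference quotient. The function $v(t) = bt^2 + ct$ and the values $b=3$, $c=2$ follow the answer's example.

```python
class Dual:
    """A number x + eps*x', where eps**2 is treated as exactly 0."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (x + eps*x')(y + eps*y') = xy + eps*(x*y' + x'*y), the eps**2 term dropped
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)
    __rmul__ = __mul__

def v(t):
    # the answer's example v(t) = b*t**2 + c*t, with made-up b = 3, c = 2
    b, c = 3.0, 2.0
    return b * t * t + c * t

def derivative(f, t):
    # evaluate f at t + dt with dt "infinitesimal"; the eps-coefficient of the
    # result is st((f(t+dt) - f(t))/dt), with the leftover infinitesimal chopped off
    return f(Dual(t, 1.0)).eps

print(derivative(v, 1.0))  # 2*b*t + c at t=1, i.e. 8.0
```

Note that no limit is taken anywhere: the $b\,dt$ leftover term is discarded algebraically by the rule $\epsilon^2 = 0$, exactly as $\mathrm{st}$ discards it in the derivation above.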

Answer (score 0):

Since you are a physicist (and I am an engineer), to avoid getting entangled in the centuries-long debate on infinitesimals on one side, as well as with the uncertainty principle on the other (!), let's go back to the approach that Newton himself took to justify his laws of motion: finite differences, i.e., Newton's series.

You are measuring the position $s$ of an object every $\tau$ seconds and come up with a record $$(s_0, 0\tau),(s_1,1 \tau), \cdots , (s_n, n \tau) , \cdots$$
You have reason to believe that the phenomenon is repeatable and want to model its position-vs-time behaviour.

You might realize that the ratio $$ v_k = \frac{{\Delta s_k }}{{\Delta t_k }} = \frac{{s_{k + 1} - s_k }}{\tau } $$ is not constant, and so a linear model $s_n = s_0 + v n \tau$ is not satisfactory.

But then you might realize that the second-order difference $$ a_k = \frac{{\Delta v_k }}{\tau } = \frac{{\Delta ^2 s_k }}{{\tau ^2 }} = \frac{{s_{k + 2} - 2s_{k + 1} + s_k }}{{\tau ^2 }} $$ is nearly constant, which means that the Newton series truncated at the second degree is a quite satisfactory model.
But such a Newton series is just the second-degree polynomial that interpolates the measured points.
And since it "interpolates", it also gives a prediction for smaller $\tau$'s, which you can verify if you have a clock sensitive enough. Mathematically it gives you a law that is continuous in time, and thus has a continuous second derivative.
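The scheme above can be checked in a few lines (the values $s_0=1$, $v_0=2$, $a=4$, $\tau=0.5$ are made-up): sample $s(t) = s_0 + v_0 t + \tfrac12 a t^2$ every $\tau$ seconds, then form the first and second finite differences.

```python
# made-up motion parameters for a uniformly accelerated object
s0, v0, a, tau = 1.0, 2.0, 4.0, 0.5

# the measurement record: s_k sampled at t = k*tau
s = [s0 + v0 * (k * tau) + 0.5 * a * (k * tau) ** 2 for k in range(6)]

# first differences: v_k = (s_{k+1} - s_k) / tau -- not constant,
# so a linear model s_n = s0 + v*n*tau would not be satisfactory
v_k = [(s[k + 1] - s[k]) / tau for k in range(5)]

# second differences: a_k = (s_{k+2} - 2*s_{k+1} + s_k) / tau**2 -- constant
a_k = [(s[k + 2] - 2 * s[k + 1] + s[k]) / tau ** 2 for k in range(4)]

print(v_k)  # increases linearly from step to step
print(a_k)  # equal to a = 4.0 for every k
```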