Halving the speed of a video vs. doubling the time between each frame?


I learned about "limits" and have a question.

I know that if I halve the speed of a video every minute, then I will only ever get to watch 2 minutes of that video no matter how long I wait. But suppose that instead of slowing the video down by half every minute, I double the time it takes for each frame to display. Wouldn't I then be able to watch the whole video in a finite amount of time, even though nothing actually changed between the two scenarios?

What am I misunderstanding/missing?


There are 2 answers below.

Answer 1:

The idea of showing half the video in one minute, then half the remaining video in the next minute, and so on (halving the speed every minute) ad infinitum, relies on the notion that one can show an arbitrarily small fraction of a video and take one minute to do so.

Suppose a video is shot at the rate of $30$ frames per second. So in the first $60$ seconds (one minute) we show $60\times 30 = 1800$ frames. In the next minute we cut the speed in half and show only $900$ frames. In the third minute $450$ frames. In the fourth minute $225$ frames. In the fifth minute ... half of $225$ is $112.5$ frames, I can show $112$ frames or $113$, but what is half a frame?
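As a quick sanity check (my own Python sketch, not part of the original answer), here is the halving computation carried out minute by minute; the fractional frame count appears exactly where the answer says it does:

```python
# Halve the playback speed every minute for a 30 fps video and record
# how many frames are shown in each successive minute.
def frames_shown_per_minute(fps=30, minutes=6):
    """Frames shown in each minute when the speed halves every minute."""
    per_minute = []
    rate = fps * 60  # frames shown in the first minute: 30 * 60 = 1800
    for _ in range(minutes):
        per_minute.append(rate)
        rate /= 2  # halve the speed for the next minute
    return per_minute

print(frames_shown_per_minute())
# [1800, 900.0, 450.0, 225.0, 112.5, 56.25] -- minute 5 would need half a frame
```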

Changing the specification so that you double the screen time of each frame instead of halving the speed just shows how the original idea is incompatible with the actual way videos work, which is that they have a finite number of frames. Suppose you eventually get to the point where one frame is supposed to display for two minutes; how do you "double the time it takes for each frame to display" in the middle of displaying a frame that came on-screen a minute earlier?

One way you could resolve how to lengthen the on-screen time of each frame is to say that since the frame showing right now has one minute of screen time left, I'm going to double the remaining screen time. That is, instead of showing the same frame for just one additional minute, I'll show it for two minutes.

But when another minute has elapsed, oh look, I have a frame with one minute left on the display, but I have to double its remaining screen time, so it gets an extra minute. I'm never going to be able to get to the next frame, am I?
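This loop never escapes. A tiny sketch (my own formalization of the scenario above, under the stated rule) makes the invariant explicit:

```python
# Rule from the answer: whenever the current frame has 1 minute of screen
# time left, double its remaining screen time. Track the remaining time
# after each elapsed minute of real time.
remaining = 2.0  # minutes left after the first doubling (1 min doubled to 2)
for minute in range(1, 6):
    remaining -= 1.0  # one minute of real time passes
    remaining *= 2    # the rule fires: double the remaining screen time
    print(f"after minute {minute}: {remaining} min left")
# remaining stays at exactly 2.0 minutes forever, so the next frame never arrives
```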

If you carry out the idea that way, it may be even easier to set up the limit than if you try to slow down the rate of showing a discrete number of frames.

Answer 2:

This was too long for a comment: I'm for some reason a bit uncomfortable with saying that only 2 minutes of the video can be seen, but that's semantics (for why, read on). However, yes, I believe that your explanation is correct.

If we phrase everything in terms of frames and assume that the video was shot at a framerate of $1000$ frames per second, then in minute 1 we see 1000 frames, in minute 2 we see 500 frames, etc., so in total we see the first $$1000\cdot 60+500\cdot 60+250\cdot 60+\cdots =1000\cdot 60\left(1+\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^n}+\cdots\right)=120000$$ frames. At the original framerate it would take us $120$ seconds to view those first $120000$ frames.
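For a concrete check (a small sketch of my own, using the answer's $1000$ fps assumption), the partial sums of this geometric series approach, but never reach, the $120000$-frame limit:

```python
# Partial sums of 1000*60*(1 + 1/2 + 1/4 + ... + 1/2^n + ...),
# which converges to 120000 frames.
def frames_after_minutes(n, fps=1000):
    """Total frames seen after n minutes of the speed-halving scheme."""
    return sum(fps * 60 / 2**k for k in range(n))

for n in (1, 2, 5, 20):
    print(n, frames_after_minutes(n))
# 1 60000.0
# 2 90000.0
# 5 116250.0
# 20 119999.88555908203
```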

In the other case, frame 1 is displayed for 1 millisecond. Then frame 2 is displayed for 2 milliseconds, frame 3 for 4 milliseconds, etc. So, yes, a video with a finite number of frames can still be displayed in a finite amount of time, even when each frame is displayed for twice as long as the one before it.
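The total is just a finite geometric sum: with frame $k$ shown for $2^{k-1}$ milliseconds, $N$ frames take $1+2+4+\cdots+2^{N-1}=2^N-1$ milliseconds, which can be enormous but is always finite. A quick sketch (my own, with made-up frame counts):

```python
# Total display time when frame k is shown for 2^(k-1) milliseconds.
def total_display_ms(num_frames):
    """Sum 1 + 2 + 4 + ... over num_frames terms, i.e. 2^num_frames - 1 ms."""
    return sum(2**k for k in range(num_frames))

print(total_display_ms(10))  # 1023 ms
print(total_display_ms(20))  # 1048575 ms, roughly 17.5 minutes
```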

The difference between the two is that in the first case you're watching a video (with a finite count of frames) for an infinite amount of time, infinitely subdividing the frames and frame rates (hence my discomfort with saying that only 2 minutes of the video can be seen, when we're actually watching for all of infinity; maybe "the first 2 minutes of the content" would be better phrasing, but again, semantics). In the second case you're watching a video with a finite number of frames, each taking a finite amount of time to appear, with no infinite subdivision.