I’m looking at the following lemma.
Suppose $\gamma_{0}, \gamma_{1}: \mathbb{S}^{1} \rightarrow \mathbb{R}^{2} \setminus \{0\}$ are two closed curves in the punctured plane. If $$ \left\|\gamma_{0}(z)-\gamma_{1}(z)\right\| \leq\left\|\gamma_{0}(z)\right\| $$ for all $z \in \mathbb{S}^{1}$, then $\gamma_{0} \simeq \gamma_{1}$ in $\mathbb{R}^{2} \setminus \{0\}$.
The lecturer provides the following analogy to illustrate the idea behind the lemma.
Suppose you are walking your dog and you pass near a tree. To ensure that you don't get the leash tangled around the tree, it suffices to adjust its length so that, at any instant, it is shorter than your distance to the tree. Mathematically, we prove this "dog-on-a-leash" lemma simply by considering a linear homotopy.
I can see how the condition in the statement of the lemma prevents $\gamma_1$ from “wrapping around” the missing point relative to $\gamma_0$. But beyond that I don’t see how the analogy translates into a linear homotopy. Some more elaboration on the idea and the proof would be appreciated.
When you hear or read "linear homotopy", your mind should automatically and immediately go to $H:S^1\times[0,1]\to \Bbb R^2$ given by $$ H(z,t)=(1-t)\gamma_1(z)+t\gamma_0(z), $$ because that's what "linear homotopy" means. It is often the simplest homotopy between two maps, and therefore also often the first thing to try when you want a homotopy between two maps, at least as long as the codomain of the two maps is a vector space or a subset of a vector space.
And indeed, the given requirement on $\|\gamma_0(z)-\gamma_1(z)\|$ guarantees that $H(z,t)\neq 0$ everywhere: at $t=0$ we have $H(z,0)=\gamma_1(z)\neq 0$, and for $t\in(0,1]$ the reverse triangle inequality gives $$ \|H(z,t)\| = \|\gamma_0(z)-(1-t)(\gamma_0(z)-\gamma_1(z))\| \geq \|\gamma_0(z)\|-(1-t)\|\gamma_0(z)-\gamma_1(z)\| \geq t\,\|\gamma_0(z)\| > 0, $$ so $H$ is a valid homotopy in $\Bbb R^2\setminus\{0\}$.
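As a concrete sanity check, here is a small numerical experiment; the specific curves $\gamma_0(z)=z$ and $\gamma_1(z)=z+\frac12 z^2$ (viewing $\Bbb R^2$ as $\Bbb C$) are my own illustrative choice, picked so that the lemma's hypothesis holds:

```python
import cmath
import math

# Illustrative curves on the unit circle, identifying R^2 with C
# (my own choice, not from the lemma):
#   gamma0(z) = z,  gamma1(z) = z + 0.5 z^2,
# so ||gamma0(z) - gamma1(z)|| = 0.5 <= 1 = ||gamma0(z)|| for |z| = 1.
def gamma0(z):
    return z

def gamma1(z):
    return z + 0.5 * z**2

def H(z, t):
    """The linear homotopy: gamma1 at t = 0, gamma0 at t = 1."""
    return (1 - t) * gamma1(z) + t * gamma0(z)

# Sample (z, t) on a grid and record how close H ever gets to the origin.
min_norm = min(
    abs(H(cmath.exp(2j * math.pi * k / 500), t / 100))
    for k in range(500)
    for t in range(101)
)
print(min_norm)  # bounded away from 0: the homotopy stays in the punctured plane
```

For these particular curves the estimate above gives $\|H(z,t)\|\geq t$, but in fact $\|H(z,t)\| = |1+\tfrac12(1-t)z| \geq \tfrac12$ on the unit circle, which the grid search confirms.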
PS. This exact argument lies at the heart of my personal favourite proof of the fundamental theorem of algebra. For a monic polynomial $p$ of degree $n\geq 1$, take $\gamma_1(z)=p(z)$ and $\gamma_0(z)=z^n$ on a circle $|z|=R$. Once $R$ is large enough, the lower-order terms of $p$ are dominated by $z^n$, so $\|p(z)-z^n\|\leq\|z^n\|$ on that circle, and by the lemma $p$ goes around the origin the same number of times as $z^n$ does, namely $n$ times. On the other hand, unless $0$ is a root of $p$, a small enough circle around the origin is mapped by $p$ to a closed curve near $p(0)\neq 0$ that doesn't go around the origin at all. The winding number cannot jump from $n$ to $0$ as the radius shrinks unless the image of some circle in between passes through the origin, so $p$ has a root.
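To see those winding numbers numerically, here is a quick sketch; the particular monic cubic $p(z)=z^3-2z+5$ and the sampling resolution are arbitrary choices of mine. The winding number is approximated by accumulating the change of argument of $p(z)$ as $z$ runs once around a circle:

```python
import cmath
import math

def winding_number(f, radius, samples=2000):
    """Approximate the winding number around 0 of the closed curve
    traced by f(z) as z runs once around the circle |z| = radius."""
    total = 0.0
    prev = f(radius)  # start at angle 0 on the circle
    for k in range(1, samples + 1):
        z = radius * cmath.exp(2j * math.pi * k / samples)
        cur = f(z)
        total += cmath.phase(cur / prev)  # small change of argument per step
        prev = cur
    return round(total / (2 * math.pi))

def p(z):
    # an arbitrary monic cubic with p(0) != 0, for illustration
    return z**3 - 2*z + 5

print(winding_number(p, 100.0))  # large circle: 3, same as z^3
print(winding_number(p, 0.1))    # small circle: 0, since p stays near p(0) = 5
```

On the large circle the $z^3$ term dominates and the curve winds $3$ times; on the small circle the image stays near $5$ and doesn't wind at all, so a root of $p$ lies at some radius in between.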