Intuition behind the proof of Tietze Extension Theorem (or a weaker version thereof)

I am going through the proof of a weaker version of the Tietze extension theorem:

If $f : K \rightarrow \mathbb{R}^d$ is a continuous function, where $K \subseteq \mathbb{R}^n$ is compact, then it can be extended to a continuous function $g: \mathbb{R}^n \rightarrow \mathbb{R}^d$, i.e., $g(x) = f(x)$ for all $x \in K$.

I am looking at the proof, which is essentially as described here. I understand the proof by just going through the motions, but what is the intuition behind it? What motivates us to define $\varphi_i$ and $f$ on $K^c$ as it is done here?


Well, the basic idea is to define $g(x)$ to be a weighted average of $f(y)$ for points $y$ of $K$ that are near $x$. That's exactly what the formula for $g$ on $K^c$ does: it takes an average of the values $f(a_i)$, weighted by $2^{-i}\varphi_i(x)$. The trick is then to figure out how to choose the weighting functions $\varphi_i$ so that the resulting function is continuous on the boundary of $K$. The idea is that as you approach a point $a\in K$, the weights $\varphi_i$ corresponding to points near $a$ should dominate all the other weights, so the weighted average you compute approaches $f(a)$.
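Concretely (reconstructing the formula from this description; the linked proof may normalize slightly differently), with $(a_i)_{i \ge 1}$ a countable dense subset of $K$, the extension on $K^c$ is the normalized weighted average

$$g(x) \;=\; \frac{\sum_{i=1}^{\infty} 2^{-i}\varphi_i(x)\, f(a_i)}{\sum_{i=1}^{\infty} 2^{-i}\varphi_i(x)}, \qquad x \in K^c.$$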

So what does the function $\varphi_i$ do? It asks, "how close is $a_i$ to being the closest point of $K$ to $x$?" If $a_i$ actually is a closest point, then $|x-a_i|=\rho(x,K)$ and so $\varphi_i(x)=1$. If $a_i$ is not a closest point, then $\varphi_i(x)$ decreases in proportion to how far $a_i$ is from being one, reaching $0$ once $a_i$ is twice as far from $x$ as the closest point. This way, the weighted sum we use to define $g(x)$ depends only on the values of $f$ at nearby points of $K$ (with the threshold for "nearby" getting stricter and stricter as $x$ approaches $K$).
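In code, this weighting function can be sketched as follows (a hypothetical helper `phi`, assuming the normalization $\varphi_i(x) = \max(0,\ 2 - |x-a_i|/\rho(x,K))$, which matches the behavior described above; the linked proof may use a different but equivalent formula):

```python
def phi(x, a, rho):
    """Weight for a sample point a of K, seen from a point x outside K.

    rho is the distance from x to K.  Returns 1 when a is a closest
    point of K to x (|x - a| == rho), decreasing linearly to 0 once
    a is twice as far from x as the closest point.
    """
    return max(0.0, 2.0 - abs(x - a) / rho)

# Toy check with K = [0, 1] and x = -0.5, so rho(x, K) = 0.5:
print(phi(-0.5, 0.0, 0.5))  # closest point gets full weight: prints 1.0
print(phi(-0.5, 0.5, 0.5))  # twice as far as the closest: prints 0.0
```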

Another important technical point is the fact that we are using a countable dense subset. Naively, you might like to take a weighted average simply over all points of $K$, weighted according to how close they are. Of course, this runs into the technical issue of how to define such an average: you would need some sort of nice measure on $K$. The trick of using the countable dense subset lets you simply add up all the weights instead of having to come up with a measure, as long as you multiply them by $2^{-i}$ to make sure the sum converges.