How gradients transform under homothety


Let $\Sigma^n \subseteq \mathbb{R}^{n+1}$ be a smooth hypersurface. Let $\lambda > 0$ be a constant and define $\tilde{\Sigma} := \lambda \Sigma$.

Let $f$ be a smooth function on $\Sigma$. This defines a function $\tilde{f}$ on $\tilde{\Sigma}$ as follows: for every $p \in \tilde{\Sigma}$, $$ \tilde{f}(p) := f\Big(\frac{p}{\lambda}\Big). $$ Let $V$ be a constant vector field in $\mathbb{R}^{n+1}$. I would like to express $$ \langle \nabla^{\Sigma} f, V \rangle_{\mathbb{R}^{n+1}} $$ in terms of $\nabla^{\tilde{\Sigma}} \tilde{f}$.

At a point $p \in \tilde{\Sigma}$ it should hold that \begin{equation} \nabla^{\tilde{\Sigma}}\tilde{f}(p) = \frac{1}{\lambda^2} \nabla^{\Sigma}f\Big(\frac{p}{\lambda}\Big). \tag{1} \end{equation} I obtain this formula by expressing the gradient in local coordinates and from the observation that the pull-back to $\Sigma$ of the metric on $\tilde{\Sigma}$ is $\tilde{g}_{ij} = \lambda^2 g_{ij}$, where $g_{ij}$ is the metric on $\Sigma$ induced by the ambient Euclidean metric.
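(The metric claim itself is easy to check numerically. Here is a minimal sketch, with $\Sigma = S^2 \subset \mathbb{R}^3$, a finite-difference Jacobian and $\lambda = 3$ all chosen just for illustration: the first fundamental form of $\lambda X$ comes out as $\lambda^2$ times that of $X$.)

```python
import numpy as np

# Minimal sketch (made-up example): check that the metric induced on
# lambda * Sigma is lambda^2 times the metric induced on Sigma.
# Here Sigma = S^2 in R^3, parametrized by spherical coordinates.

def X(u):
    th, ph = u
    return np.array([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)])

def first_fundamental_form(F, u, h=1e-6):
    # Jacobian by central differences, then g_ij = <dF/du_i, dF/du_j>.
    J = np.empty((3, 2))
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        J[:, i] = (F(u + e) - F(u - e)) / (2 * h)
    return J.T @ J

lam = 3.0
u = np.array([0.7, 1.2])                      # a generic point (theta, phi)
g = first_fundamental_form(X, u)
g_tilde = first_fundamental_form(lambda v: lam * X(v), u)

print(np.allclose(g_tilde, lam**2 * g))       # True
```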

But somehow I feel that something is fishy here. I would expect the gradient to scale like $\frac{1}{\lambda}$, and I would expect a formula of the form:

$$ \langle \nabla^{\Sigma} f(\frac{p}{\lambda}), V \rangle_{\mathbb{R}^{n+1}} = \langle \nabla^{\tilde{\Sigma}} \tilde{f}(p), \lambda V \rangle_{\mathbb{R}^{n+1}}. \tag{2} $$

Can anyone help me? I'm getting very confused...

EDIT: Consider the case where $f$ is the restriction to $\Sigma$ of a function $F : \mathbb{R}^{n+1} \to \mathbb{R}$; we can always assume this, at least locally. Then it is known that $\nabla^{\Sigma} f = \left( \nabla^{\mathbb{R}^{n+1}} F \right)^{\top}\big|_{\Sigma}$, the tangential projection of the ambient gradient. Therefore, given $p \in \tilde{\Sigma}$, and identifying the tangent spaces $T_p \tilde{\Sigma}$ and $T_{\frac{p}{\lambda}} \Sigma$, we have: $$ \nabla^{\tilde{\Sigma}} \tilde{f}(p) = \left( \nabla^{\mathbb{R}^{n+1}} \big[ F(\tfrac{\cdot}{\lambda}) \big](p) \right)^{\top} = \frac{1}{\lambda} \left(\nabla^{\mathbb{R}^{n+1}} F \right)^{\top}\!\Big(\frac{p}{\lambda}\Big) = \frac{1}{\lambda} \nabla^{\Sigma}f\Big(\frac{p}{\lambda}\Big). \tag{3} $$ I now believe that $(3)$ is correct, and $(2)$ follows from it. Therefore $(1)$ should be wrong.

I think I was misled by the intrinsic approach shown in the answer by Trevis. In the intrinsic approach one thinks of $\Sigma$ and $\tilde{\Sigma}$ as the same manifold equipped with different Riemannian metrics. Trevis's equation is of course correct, but my equation $(1)$ is wrong, probably because one has to be careful when translating the intrinsic equation back into the extrinsic setting: the same abstract coordinates generate two different local frames on $\Sigma$ and $\tilde{\Sigma}$ (as hypersurfaces), one frame being the other rescaled by a factor of $\lambda$. Indeed, if $X$ parametrizes $\Sigma$ locally and $\tilde{X} = \lambda X$ parametrizes $\tilde{\Sigma}$, then $\nabla^{\tilde{\Sigma}}\tilde{f} = \tilde{g}^{ij}\,\partial_i \tilde{f}\,\partial_j \tilde{X} = \lambda^{-2} g^{ij}\,\partial_i f\,\lambda\,\partial_j X = \lambda^{-1}\nabla^{\Sigma} f$, consistent with $(3)$.
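As a sanity check of $(3)$, here is a minimal numerical sketch. The surface $\Sigma = S^2$, the function $F(x) = x_0 x_1 + x_2$, the chosen point and $\lambda = 3$ are all made-up test data; the surface gradients are computed extrinsically, as tangential projections of the ambient gradient.

```python
import numpy as np

# Numerical check of (3) on a made-up example:
# Sigma = S^2, f = F|_Sigma with F(x) = x0*x1 + x2, lambda = 3.
# The surface gradient is the tangential projection of the ambient gradient.

def grad_F(x):                      # ambient gradient of F(x) = x0*x1 + x2
    return np.array([x[1], x[0], 1.0])

def tangential(v, normal):          # project v onto the tangent space
    n = normal / np.linalg.norm(normal)
    return v - np.dot(v, n) * n

lam = 3.0
q = np.array([0.6, 0.0, 0.8])       # a point of Sigma = S^2 (|q| = 1)
p = lam * q                         # the corresponding point of lambda*Sigma

# grad^Sigma f at q: project grad F; the outward normal at q is q itself.
grad_Sigma = tangential(grad_F(q), q)

# tilde f(p) = F(p/lam) extends to tilde F(y) = F(y/lam), whose ambient
# gradient at p is (1/lam) * grad F(p/lam); the normal at p is p/|p|.
grad_tilde = tangential(grad_F(p / lam) / lam, p)

print(np.allclose(grad_tilde, grad_Sigma / lam))    # True: formulas (3)/(2)
```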

Best answer:

The gradient is intrinsic to (depends only on) the metric structure of the hypersurfaces, so considering $\Bbb R^{n+1}$ is perhaps a distraction, and probably a source of confusion.

It's simpler just to consider a fixed abstract Riemannian manifold $(M, g)$ and a metric $\tilde g := \lambda^2 g$, $\lambda > 0$, homothetic to $g$. In particular, this has the advantage that you can think just about smooth functions on $M$, rather than worrying about and comparing gradients of corresponding functions on different surfaces. Now, the respective gradients $\operatorname{grad} f, \widetilde{\operatorname{grad}} f$ of $f$ w.r.t. $g, \tilde g$ are related by $$\boxed{\widetilde{\operatorname{grad}} f = \tilde g^{-1}(df,\,\cdot\,) = (\lambda^2 g)^{-1} (df,\,\cdot\,) = \lambda^{-2} g^{-1} (df,\,\cdot\,) = \lambda^{-2} \operatorname{grad} f} .$$ NB this computation doesn't depend on $\lambda$ being constant, so in fact this formula applies to a conformal rescaling of a metric by a general positive function $\lambda$.
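For a concrete check of the boxed formula, here is a minimal numerical sketch at a single point; the chart, the metric $g$, the components of $df$ and the value of $\lambda$ are arbitrary made-up data.

```python
import numpy as np

# Sanity check of the boxed formula in an arbitrary chart (made-up data):
# with g~ = lam^2 * g, the gradient components satisfy grad~ f = lam^-2 grad f.

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
g = A @ A.T + 2 * np.eye(2)            # a random positive-definite metric at a point
df = rng.normal(size=2)                # components of df in the same chart
lam = 3.0

grad = np.linalg.solve(g, df)                   # grad f  = g^{-1} df
grad_tilde = np.linalg.solve(lam**2 * g, df)    # grad~ f = (lam^2 g)^{-1} df

print(np.allclose(grad_tilde, grad / lam**2))   # True
```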