Prove that if $\phi: \mathbb{S}^n \longrightarrow \mathbb{R}$ is differentiable then its derivative is zero at two points


I'm a beginning student of differential topology, and I'm trying to understand this question:

Prove that if $\phi: \mathbb{S}^n \longrightarrow \mathbb{R}$ is differentiable, then there exist two distinct points $p$ and $q$ such that the two linear mappings $\phi_*: T_p\mathbb{S}^n \longrightarrow T_{\phi(p)}\mathbb{R}$ and $\phi_*: T_q\mathbb{S}^n \longrightarrow T_{\phi(q)}\mathbb{R}$ are zero.

How is $\phi_*$ defined? I tried to show that a point in the preimage of the maximum could work, but I don't really understand the objects I'm working with. Can anyone help me? Maybe by suggesting some bibliography? I'd really appreciate that.

On BEST ANSWER

There are multiple ways to define $\phi_*$, depending on the level of generality you are working at. One of the most common ways to work with this object is the following:

When you work with a manifold, in this case $\mathbb{S}^n$, you have a coordinate neighborhood around each point. For the sphere, these can be the neighborhoods where the stereographic projections are defined, but they could also be the regions where the manifold is the graph of some function.

To give an example: on the circle $\mathbb{S}^1$, coordinates around the point $(0, 1)$ can be given by $\psi(t) = (t, \sqrt{1 - t^2})$ with $-1 < t < 1$.
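If it helps to see this concretely, here is a small check with Python's sympy (the names `t` and `psi` are my own choices, mirroring the notation above) that this chart really parametrizes part of the circle:

```python
import sympy as sp

# `t` is the chart coordinate, `psi` the chart map psi(t) = (t, sqrt(1 - t^2)).
t = sp.symbols('t')
psi = (t, sp.sqrt(1 - t**2))  # graph coordinates near (0, 1), valid for -1 < t < 1

# The chart lands on the unit circle: x^2 + y^2 simplifies to 1.
print(sp.simplify(psi[0]**2 + psi[1]**2))  # 1

# The point (0, 1) corresponds to the parameter value t = 0.
print([c.subs(t, 0) for c in psi])  # [0, 1]
```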

These coordinates are a map from some open set into the space your manifold sits in (assuming, of course, that your manifold sits explicitly somewhere, as is the case for the sphere). To say that a function $\phi:\mathbb{S}^n\rightarrow\mathbb{R}$ is smooth means that when you compose it with the coordinates, you obtain a smooth function in the classical multivariable sense.

For example, the function $\phi:\mathbb{S}^1\rightarrow\mathbb{R}$ given by $$\begin{equation} \phi(x, y) = xy^2 \end{equation}$$ becomes, in the coordinates given above,

$$\begin{equation} t\mapsto t(1 - t^2), \quad -1 < t < 1, \end{equation}$$ and all questions about whether a point is a local maximum, a local minimum, and so on, can be read off from this expression in coordinates.
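The substitution above can be verified symbolically; here is a sketch with sympy, composing $\phi$ with the chart (variable names are mine):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
phi = x * y**2                    # phi(x, y) = x*y^2 on the circle
psi = (t, sp.sqrt(1 - t**2))      # the chart around (0, 1)

# Express phi in the chart coordinate: substitute x = t, y = sqrt(1 - t^2).
phi_local = phi.subs({x: psi[0], y: psi[1]})

# This agrees with t*(1 - t^2): the difference simplifies to 0.
print(sp.simplify(phi_local - t*(1 - t**2)))  # 0
```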

The main point, when you set up your manifold theory, is to make sure that the conclusions you obtain are independent of the coordinates you use.

If we now return to your question of how to prove and/or find that maxima and minima exist, it lives at two different levels. When you deal with local objects, such as local maxima or minima, everything can be read off from the coordinates and computations in them.

For example, in the chart we gave above, if we differentiate we find that the derivative, in the coordinate $t$, is $1 - 3t^2$, which vanishes only at $t = \pm\frac{1}{\sqrt{3}}$, both of which lie in the interval of definition. Furthermore, the second derivative is $-6t$, so one point is a maximum ($t = \frac{1}{\sqrt{3}}$) and the other a minimum ($t = -\frac{1}{\sqrt{3}}$).

Returning to the circle, this says that the corresponding points $(\frac{1}{\sqrt{3}}, \frac{\sqrt{2}}{\sqrt{3}})$ and $(-\frac{1}{\sqrt{3}}, \frac{\sqrt{2}}{\sqrt{3}})$ are a local maximum and a local minimum of this function on the circle.
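The whole critical-point computation can be replayed mechanically; a sympy sketch (again with names of my own choosing):

```python
import sympy as sp

t = sp.symbols('t')
phi_local = t * (1 - t**2)       # phi in the chart coordinate

d1 = sp.diff(phi_local, t)       # first derivative: 1 - 3*t**2
d2 = sp.diff(phi_local, t, 2)    # second derivative: -6*t
crit = sp.solve(d1, t)           # the two critical values t = ±1/sqrt(3)

for c in sorted(crit):
    # second-derivative test: negative means local maximum
    kind = 'local max' if d2.subs(t, c) < 0 else 'local min'
    point = (c, sp.sqrt(1 - c**2))   # the corresponding point on the circle
    print(point, kind)
```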

All of this, though, happens only in the neighborhood where these coordinates are valid, and it tells you nothing about what happens beyond it (in the example above, these coordinates tell us nothing about the function on the lower half of the circle).

That's the second level at which this question happens, which is the global perspective. When we say, as was mentioned in the comments, that the sphere is compact and hence the function attains a maximum, that is a topological argument that you cannot capture a priori with local charts. The local coordinates do not know whether the manifold you are on is compact or not. They cannot give you global arguments, just as the derivative tests of multivariable calculus could not tell you which of your critical points was a global maximum; you needed extra arguments for that.

So, now that we have said all of this, we can return and explain what $\phi_*$ is. Your coordinates are a map from an open subset of one Euclidean space into another Euclidean space (in our example, from a subset of $\mathbb{R}$ into $\mathbb{R}^2$), and you know from multivariable calculus that such a map has a derivative, which is a linear map, at any given point.

If we are interested in the point $(0, 1)$, which corresponds to the value $t = 0$, we compute that derivative in the ordinary calculus sense, and it gives a linear map from $\mathbb{R}$ into $\mathbb{R}^2$. The image of $\mathbb{R}$ under this linear map at $t = 0$ is the tangent line, i.e. the tangent space. You could object that this depends on the coordinates, and that is true, but what changes in the end is not the image in $\mathbb{R}^2$ (notice that in any coordinates you will get subspaces of the same Euclidean space) but rather the basis in which you represent this map as a matrix.
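As a concrete check, a sympy sketch computing the derivative of the chart at $t = 0$: a $2\times 1$ Jacobian whose image is the horizontal tangent line at $(0, 1)$.

```python
import sympy as sp

t = sp.symbols('t')
psi = sp.Matrix([t, sp.sqrt(1 - t**2)])   # the chart around (0, 1)

# The derivative of the chart is a linear map R -> R^2, here a 2x1 Jacobian.
J = psi.jacobian([t])
print(J.subs(t, 0))   # Matrix([[1], [0]])

# Its image at t = 0 is spanned by (1, 0): the horizontal line tangent
# to the circle at (0, 1).
```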

In the same way, to define $\phi_*$ you consider the map in coordinates (in the example above, $t \mapsto t(1 - t^2)$) and take its derivative, as a linear map, at the point you are interested in.

In our example the map was $t\mapsto t(1 - t^2)$, and its derivative, as a linear map, goes from $\mathbb{R}$ to $\mathbb{R}$ and, again by multivariable calculus, is represented by the Jacobian matrix; in this case the $1\times 1$ matrix $[1 - 3t^2]$. So, for example, at our maximum/minimum this matrix is precisely the zero matrix, corresponding to the fact that the derivative, as a linear map, is the zero linear transformation.
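The $1\times 1$ Jacobian can be checked the same way; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
phi_local = sp.Matrix([t * (1 - t**2)])   # phi expressed in the chart coordinate

# phi_* in these coordinates: the 1x1 Jacobian, the polynomial 1 - 3*t**2.
J = phi_local.jacobian([t])
print(sp.expand(J[0, 0]))

# At the critical value t = 1/sqrt(3), phi_* is the zero linear map.
print(J.subs(t, 1/sp.sqrt(3)))            # Matrix([[0]])
```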

In general, to define $\phi_*$ you pick a coordinate system around the relevant point and compute the Jacobian matrix of $\phi$ expressed in these coordinates. In our example there was only one coordinate, $t$, but in general you have as many as the dimension of the manifold, so when you repeat all we did above, the Jacobian matrix you get is no longer $1\times 1$. It represents the derivative in a certain basis, and that is why, when you want to understand your map $\phi$ locally and study its derivative, you recover all your tools of calculus.
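In higher dimensions the same recipe applies. Here is an illustrative sketch on $\mathbb{S}^2$, using a graph chart over the upper hemisphere and the height function (both my own hypothetical choices, not from the question), where $\phi_*$ becomes a $1\times 2$ Jacobian:

```python
import sympy as sp

u, v = sp.symbols('u v')
# Graph chart over the upper hemisphere of S^2: (u, v) -> (u, v, sqrt(1 - u^2 - v^2)).
chart = (u, v, sp.sqrt(1 - u**2 - v**2))

# Height function phi(x, y, z) = z, expressed in the chart:
phi_local = sp.Matrix([chart[2]])

# phi_* in these coordinates is a 1x2 Jacobian (one row, two chart coordinates).
J = phi_local.jacobian([u, v])
print(J.shape)                 # (1, 2)
print(J.subs({u: 0, v: 0}))    # Matrix([[0, 0]]) -- the north pole is a critical point
```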

This is just one approach. There is another approach via curves, which is often easier for computations, and there are more theoretical approaches via derivations for when your manifold does not already live inside a big Euclidean space; but all of them are equivalent once the correct identifications have been made.

To read on this there are many sources:

  1. Guillemin and Pollack, Differential Topology.
  2. Differential Geometry of Curves and Surfaces and Riemannian Geometry, both by do Carmo, are great sources to read and have exercises.
  3. Differential Geometry by Loring Tu is a good introduction to all this theory in a more abstract sense.
  4. Introduction to Smooth Manifolds by John Lee is, in my opinion, the best for the abstract theory.
  5. Topology from the Differentiable Viewpoint by John Milnor is a marvel too.