Is the space of maps which satisfy this vanishing condition finite-dimensional?


Let $\mathbb{D}^n \subseteq \mathbb{R}^n$ be the closed $n$-dimensional unit ball. Let $h:\mathbb{D}^n \to \mathbb{R}^{k}$ be smooth, and suppose that $h(x) \neq 0$ a.e. on $\mathbb{D}^n$. Set $$V_h=\{\, f \in C^{\infty}(\mathbb{D}^n;\mathbb{R}^{k}) \;\big|\; (df_x)^T\big(h(x)\big)=0 \, \text{ for every }\, x \in \mathbb{D}^n \,\}$$

$V_h$ is a real vector space. Is it always finite-dimensional? Can it be infinite-dimensional for some $h$?

Edit:

Pozz showed nicely that when $k=1$, $V_h$ always coincides with the space of constant functions, and that for $k>1$, $V_h$ might be infinite-dimensional (e.g. if $h$ is a constant function).

Is there ever a case where $V_h$ is finite-dimensional when $k>1$? I suspect that the answer is negative, but I don't know how to prove this.



Accepted answer (by Pozz):

Let us write the condition $(df_x)^T(h(x))=0$ more explicitly. We can write $$ (df_x)^T=\bigg(\nabla f^1(x)\,\bigg|\,...\,\bigg|\,\nabla f^k(x)\bigg), $$ where $\nabla f^i(x)$ is the column vector given by the Euclidean gradient of $f^i$, the $i$-th component of $f$, for $i=1,...,k$. Hence the condition defining $V_h$ becomes $$ (df_x)^T(h(x))=0\quad\forall x \qquad\Leftrightarrow\qquad \langle \partial_j f(x), h(x) \rangle =0\quad\forall j=1,...,n\quad \forall x, $$ where $\langle\cdot,\cdot\rangle$ denotes the Euclidean inner product.
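This componentwise reformulation is easy to sanity-check symbolically. Below is a small sketch using sympy; the particular $f$ and $h$ are arbitrary sample choices (here $n=2$, $k=3$), not anything forced by the problem:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Hypothetical sample data: f maps R^2 -> R^3 (so n = 2, k = 3), h: R^2 -> R^3.
f = sp.Matrix([x1*x2, sp.sin(x1), x2**2])
h = sp.Matrix([x2, 1, x1])

J = f.jacobian([x1, x2])                 # df_x as a 3x2 matrix
lhs = J.T * h                            # (df_x)^T h(x), an n-vector
rhs = sp.Matrix([f.diff(v).dot(h) for v in (x1, x2)])  # <partial_j f, h>, j = 1, 2
```

Both sides agree identically, confirming that the $j$-th entry of $(df_x)^T h(x)$ is exactly $\langle \partial_j f(x), h(x)\rangle$.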

If $k=1$, we can then prove that $V_h$ is the $1$-dimensional vector space of constant functions. Indeed, if $f\in V_h$ then $h(x)\,\partial_j f(x)=0$ for every $j=1,...,n$ and every $x$. Since $h(x)\neq0$ almost everywhere, $\partial_j f(x)=0$ for every $j=1,...,n$ and almost every $x$. Since $f$ is smooth, $\nabla f$ is continuous, hence identically zero, and thus $f$ is constant.

If $k>1$ we can find an example of $h$ such that $V_h$ is infinite-dimensional. Consider $h(x)=(1,0,...,0)$, which is smooth and nowhere zero. In this case, if $f\in V_h$ then $$ \langle\partial_j f(x),h(x)\rangle=\partial_jf^1(x)=0 \quad \forall j=1,...,n\quad\forall x. $$ This implies that any function $f=(0,f^2,...,f^k)$ belongs to $V_h$, for any choice of smooth $f^2,...,f^k$. Thus $V_h$ is infinite-dimensional.
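A quick symbolic check of this example (a sketch with sympy; the second component $g$ is an arbitrary smooth sample, any choice works):

```python
import sympy as sp

x, y = sp.symbols('x y')
# h is the constant field (1, 0) on D^2 (n = k = 2); g is an arbitrary
# smooth sample for the second component of f = (0, g).
h = sp.Matrix([1, 0])
g = sp.exp(x) * sp.sin(y)
f = sp.Matrix([0, g])

# The defining condition <partial_j f, h> = 0 for j = x, y:
residuals = [sp.simplify(f.diff(v).dot(h)) for v in (x, y)]
```

Both residuals vanish identically, so $f=(0,g)$ lies in $V_h$ no matter which smooth $g$ we pick.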

Second answer:

TL;DR: Yes, it can be finite-dimensional. I think that this is possible only due to "global obstructions".

Let's consider the case $n = 2, k = 2$. Writing $f = (f^1,f^2)$ and $h = (h^1,h^2)$, we get the system

$$ f^1_x h^1 + f^2_x h^2 = 0, \\ f^1_y h^1 + f^2_y h^2 = 0. $$

Differentiating the first equation with respect to $y$ and the second with respect to $x$, we also get $$ f^1_{yx} h^1 + f^1_x h^1_y + f^2_{yx} h^2 + f^2_x h^2_y = 0, \\ f^1_{xy} h^1 + f^1_y h^1_x + f^2_{xy} h^2 + f^2_y h^2_x = 0. $$

Comparing the two equations and using the equality of mixed partial derivatives, we get the equation $$ f^1_x h^1_y + f^2_x h^2_y = f^1_y h^1_x + f^2_y h^2_x. $$
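This cancellation can be verified symbolically for arbitrary smooth $f$ and $h$ (a sketch with sympy, using undetermined functions):

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2, h1, h2 = (sp.Function(n)(x, y) for n in ('f1', 'f2', 'h1', 'h2'))

eq1 = f1.diff(x)*h1 + f2.diff(x)*h2   # first equation of the system
eq2 = f1.diff(y)*h1 + f2.diff(y)*h2   # second equation

# d/dy(eq1) - d/dx(eq2): the mixed second derivatives of f cancel,
# leaving exactly the first-order compatibility equation.
compat = sp.expand(eq1.diff(y) - eq2.diff(x))
expected = sp.expand(f1.diff(x)*h1.diff(y) + f2.diff(x)*h2.diff(y)
                     - f1.diff(y)*h1.diff(x) - f2.diff(y)*h2.diff(x))
```

The difference `compat - expected` simplifies to zero: no second derivatives of $f$ survive, which is why the compatibility condition is again first order in $f$.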

This gives us three linear equations for $(f^1_x,f^1_y,f^2_x,f^2_y)$ which are generically independent and so will leave us with one degree of freedom (ignoring questions of integrability). Now, let's analyze a specific example:

Take $h(x,y) = (x,y)$. Then we get the system $$ f^1_x x + f^2_x y = 0, \\ f^1_y x + f^2_y y = 0, \\ f^2_x = f^1_y. $$ Plugging the third equation into the first two allows us to "decouple" the system and get two identical equations for $f^1,f^2$: $$ f^1_x x + f^1_y y = 0, \\ f^2_x x + f^2_y y = 0. $$

Let's see if we can find a global solution. Geometrically, the first equation says that $\nabla f^1$ is perpendicular to $(x,y)$. Hence, on $\mathbb{D}^2 \setminus \{ (0,0) \}$, we must have that $$ \nabla f^1(x,y) = a(x,y)(-y,x) $$ for some uniquely determined smooth function $a$. That is, $\nabla f^1$ is a multiple of $\partial_{\theta}$ (or, dually, $df^1$ is a multiple of the famous $d\theta$). However, not all possible multiples are legal -- the mixed second partial derivatives of $f^1$ should agree, and we get an equation for $a$: $$ f^1_{yx} = -a_y y - a = a_x x + a = f^1_{xy} \iff 2a = -(a_x x + a_y y). $$

This is a linear first-order PDE for $a$ which can be solved explicitly using the method of characteristics. Fix $(x_0,y_0) \in \partial{\mathbb{D}^2}$ and set $u(t) := a(e^{-t}(x_0,y_0))$. Differentiating, we get $$ u'(t) = a_x(e^{-t}(x_0,y_0))(-e^{-t} x_0) + a_y(e^{-t}(x_0,y_0))(-e^{-t} y_0) = 2a(e^{-t}(x_0,y_0)) = 2u(t), $$ which implies that $$ u(t) = e^{2t} u(0) = e^{2t} a(x_0,y_0). $$

Hence, we see that $$ a(x,y) = a \left( \frac{(x,y)}{\| (x,y)\|} \right) \frac{1}{\| (x,y) \|^2}, \qquad \nabla f^1(x,y) = \frac{(-y,x)}{\| (x,y) \|^2} \, a \left( \frac{(x,y)}{\| (x,y) \|} \right). $$ On each ray through the origin, the length of $\nabla f^1$ grows like $\frac{1}{r}$ as $r \to 0$, and so in order for $\nabla f^1$ to have a limit at the origin, we must have $a \equiv 0$, and so $f^1$ must be constant (and similarly for $f^2$).
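The scaling law for $a$ can be spot-checked symbolically. The characteristics argument says $a$ scales like $1/r^2$ along rays; the simplest instance (taking boundary value $a \equiv 1$ on $\partial\mathbb{D}^2$, a sample choice) is $a = 1/(x^2+y^2)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Sample solution predicted by the characteristics argument: a ~ 1/r^2
# (boundary value a = 1 on the unit circle).
a = 1 / (x**2 + y**2)

# Residual of the compatibility PDE  2a = -(a_x x + a_y y):
residual = sp.simplify(2*a + a.diff(x)*x + a.diff(y)*y)
```

The residual is identically zero, so this $a$ does satisfy $2a = -(a_x x + a_y y)$ away from the origin, while blowing up like $1/r^2$ at the origin, in line with the argument above.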

Note that over $\mathbb{D}^2 \setminus \{ (0,0) \}$ there is an infinite-dimensional family of solutions to your equation. One non-constant solution is the "obvious" solution $$ f = \frac{h}{\| h \|} = \left( \frac{x}{\sqrt{x^2+y^2}}, \frac{y}{\sqrt{x^2+y^2}} \right). $$
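That $h/\|h\|$ solves the system on the punctured disk is a one-line symbolic check (a sketch with sympy; the positivity assumptions just keep us away from the origin):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # work away from the origin
r = sp.sqrt(x**2 + y**2)
f1, f2 = x/r, y/r                        # the "obvious" solution f = h/||h||

# The defining equations <partial_j f, h> = 0 with h = (x, y):
res_x = sp.simplify(f1.diff(x)*x + f2.diff(x)*y)
res_y = sp.simplify(f1.diff(y)*x + f2.diff(y)*y)
```

Both residuals vanish, confirming $h/\|h\| \in V_h$ on $\mathbb{D}^2 \setminus \{(0,0)\}$, even though (by the argument above) it cannot extend smoothly across the origin.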

In general, if you follow the details of my analysis, you can show that any solution on (an open subset or the whole of) $\mathbb{D}^2 \setminus \{ (0,0) \}$ has the form $$ f^1 = -\int \varphi(\theta) \sin \theta \, d\theta, \,\,\, f^2 = \int \varphi(\theta) \cos \theta \, d\theta. $$

If $\varphi \equiv 1$ then you get the "obvious" solution $$ f^1 = \cos \theta = \frac{x}{\sqrt{x^2 + y^2}}, \,\,\, f^2 = \sin \theta = \frac{y}{\sqrt{x^2 + y^2}} $$

but you can take any other $\varphi$ and obtain infinitely many other solutions. If the resulting integrals are periodic, you get a solution on the whole of $\mathbb{D}^2 \setminus \{ (0,0) \}$, but none of the non-constant solutions will extend to the whole of $\mathbb{D}^2$.
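To illustrate, the recipe can be verified symbolically in polar coordinates; here with the sample choice $\varphi(\theta)=\cos 2\theta$, picked because its antiderivatives are periodic, so the resulting $f$ lives on all of $\mathbb{D}^2 \setminus \{(0,0)\}$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = r*sp.cos(th), r*sp.sin(th)        # polar coordinates on the punctured disk

phi = sp.cos(2*th)                       # sample phi whose integrals are periodic
f1 = sp.integrate(-phi*sp.sin(th), th)   # f^1 = -∫ φ(θ) sin θ dθ
f2 = sp.integrate(phi*sp.cos(th), th)    # f^2 =  ∫ φ(θ) cos θ dθ

# f depends only on θ, so <∂_j f, h> = θ_j · (f1'(θ) x + f2'(θ) y) for h = (x, y);
# the whole system reduces to the single condition f1'(θ) x + f2'(θ) y = 0.
residual = sp.simplify(f1.diff(th)*x + f2.diff(th)*y)
```

The residual is zero for this $\varphi$, and the same cancellation $-\varphi\sin\theta\cdot r\cos\theta + \varphi\cos\theta\cdot r\sin\theta = 0$ goes through for every $\varphi$, which is why the family of solutions is infinite-dimensional on the punctured disk.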