Putnam Problem A3 2010: if $h=a\frac{\partial h}{\partial x}+b\frac{\partial h}{\partial y}$ and $h$ is bounded then $h\equiv0$


I found a possible solution to the following problem from the 2010 edition of the Putnam competition, and I was wondering whether my approach is correct, whether I have made some assumptions that need to be proven, or whether I made a mistake somewhere.

The problem is:

Suppose that the function $ h : \mathbb R ^ 2 \to \mathbb R $ has continuous partial derivatives and satisfies the equation $$ h ( x , y ) = a \frac { \partial h } { \partial x } ( x , y ) + b \frac { \partial h } { \partial y } ( x , y ) $$ for some constants $ a , b $. Prove that if there is a constant $ M $ such that $ | h ( x , y ) | \le M $ for all $ ( x , y ) \in \mathbb R ^ 2 $, then $ h $ is identically zero.

My solution is as follows:

Since the partial derivatives are continuous, $ h ( x , y ) $ is also continuous and well defined on all of $ \mathbb R ^ 2 $. Therefore we may assume that if $ h $ has a maximum or minimum point at $ ( X _ 0 , Y _ 0 ) $, its partial derivatives will be equal to $ 0 $ there. We then follow this reasoning: let $ M $ be the maximum/minimum value of $ h $ on its domain, and assume $ | h ( X _ 0 , Y _ 0 ) | = M $. Then: $$ | h ( x , y ) | \le M = h ( X _ 0 , Y _ 0 ) = a \frac { \partial h } { \partial x } ( X _ 0 , Y _ 0 ) + b \frac { \partial h } { \partial y } ( X _ 0 , Y _ 0 ) = 0 \text , $$ and so: $$ | h ( x , y ) | \le 0 \text . $$ Therefore $ h ( x , y ) $ is identically equal to $ 0 $.

1 Answer
First of all, note that a bounded and continuously differentiable $ h : \mathbb R ^ 2 \to \mathbb R $ need not have a minimizer/maximizer. For example, consider $ h ( x , y ) = \arctan ( x + y ) $. We have $ | h ( x , y ) | \le \frac \pi 2 $ and $ \frac { \partial h } { \partial x } ( x , y ) = \frac { \partial h } { \partial y } ( x , y ) = \frac 1 { ( x + y ) ^ 2 + 1 } $ for all $ x , y \in \mathbb R $, and $ - \frac \pi 2 $ and $ \frac \pi 2 $ are the infimum and supremum of the values of $ h $, but there are no $ x , y \in \mathbb R $ with $ h ( x , y ) = \pm \frac \pi 2 $, and the partial derivatives never vanish. Of course, this particular $ h $ does not satisfy the partial differential equation $$ h ( x , y ) = a \frac { \partial h } { \partial x } ( x , y ) + b \frac { \partial h } { \partial y } ( x , y ) \tag 0 \label 0 $$ for any constants $ a , b \in \mathbb R $ (if it did, the statement of the problem would be wrong), but this example shows that your reasoning is invalid. You seem to have confused the situation with that of a continuous function defined on a compact domain, which is guaranteed to achieve its minimum/maximum. As $ \mathbb R ^ 2 $ is not compact, that is not the case here.
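As a sanity check on this counterexample (not part of the argument itself), a short Python snippet can confirm numerically that the partial derivatives of $ h ( x , y ) = \arctan ( x + y ) $ match the closed form above and never vanish, while $ | h | < \frac \pi 2 $; the sample points are arbitrary.

```python
import math

def h(x, y):
    return math.atan(x + y)

def dh(x, y):
    # common value of both partial derivatives of h
    return 1.0 / ((x + y) ** 2 + 1.0)

eps = 1e-6
for x, y in [(0.0, 0.0), (1.5, -0.3), (-2.0, 4.0), (10.0, 10.0)]:
    # finite-difference check of the partial derivatives
    fd_x = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    fd_y = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    assert abs(fd_x - dh(x, y)) < 1e-6
    assert abs(fd_y - dh(x, y)) < 1e-6
    # bounded by pi/2, with a derivative that is strictly positive
    assert abs(h(x, y)) < math.pi / 2
    assert dh(x, y) > 0
```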

But that aside, let's take a look at how the problem can be solved correctly. The expression on the right-hand side of \eqref{0} suggests that the problem is meant to examine your knowledge of the chain rule for multi-variable functions. For any differentiable $ f , g : \mathbb R \to \mathbb R $, if you define $ H : \mathbb R \to \mathbb R $ with $ H ( t ) = h \bigl ( f ( t ) , g ( t ) \bigr ) $ for all $ t \in \mathbb R $, then the chain rule implies that $ H $ is differentiable and $$ H ' ( t ) = f ' ( t ) \frac { \partial h } { \partial x } \bigl ( f ( t ) , g ( t ) \bigr ) + g ' ( t ) \frac { \partial h } { \partial y } \bigl ( f ( t ) , g ( t ) \bigr ) \tag 1 \label 1 $$ for all $ t \in \mathbb R $. We only need to choose $ f $ and $ g $ wisely, and take advantage of \eqref{0}. For that, consider the constants $ a , b \in \mathbb R $ as given in the statement of the problem, and fix $ x _ 0 , y _ 0 \in \mathbb R $ arbitrarily. Define $ f , g : \mathbb R \to \mathbb R $ respectively with $ f ( t ) = x _ 0 + a t $ and $ g ( t ) = y _ 0 + b t $ for all $ t \in \mathbb R $. Then $ f $ and $ g $ are differentiable, and we have $ f ' ( t ) = a $ and $ g ' ( t ) = b $ for all $ t \in \mathbb R $. By \eqref{1} we get $$ H ' ( t ) = a \frac { \partial h } { \partial x } \bigl ( f ( t ) , g ( t ) \bigr ) + b \frac { \partial h } { \partial y } \bigl ( f ( t ) , g ( t ) \bigr ) \tag 2 \label 2 $$ for all $ t \in \mathbb R $, while at the same time, substituting $ f ( t ) $ for $ x $ and $ g ( t ) $ for $ y $ in \eqref{0}, we have $$ H ( t ) = a \frac { \partial h } { \partial x } \bigl ( f ( t ) , g ( t ) \bigr ) + b \frac { \partial h } { \partial y } \bigl ( f ( t ) , g ( t ) \bigr ) \tag 3 \label 3 $$ for all $ t \in \mathbb R $. Comparing \eqref{2} and \eqref{3}, we get the ordinary differential equation $$ H ' ( t ) = H ( t ) \tag 4 \label 4 $$ for all $ t \in \mathbb R $.
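To see this reduction to a one-variable ODE in action, here is a small numerical check; the constants $ a = 1 $, $ b = 2 $, the base point, and the test solution $ h ( x , y ) = \exp \bigl ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \bigr ) $ are illustrative choices, not part of the proof.

```python
import math

a, b = 1.0, 2.0      # illustrative constants
x0, y0 = 0.3, -0.8   # arbitrary base point

def h(x, y):
    # a known solution of h = a h_x + b h_y, used only to test the reduction
    return math.exp((a * x + b * y) / (a * a + b * b))

def H(t):
    # restriction of h to the line t -> (x0 + a t, y0 + b t)
    return h(x0 + a * t, y0 + b * t)

eps = 1e-6
for t in [-1.0, 0.0, 2.0]:
    # finite-difference derivative of H should match H itself: H'(t) = H(t)
    Hprime = (H(t + eps) - H(t - eps)) / (2 * eps)
    assert abs(Hprime - H(t)) < 1e-6
```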
This is a first-order homogeneous linear differential equation with constant coefficients, whose well-known solutions can be obtained as follows. We have \begin{align*} \frac { \mathrm d } { \mathrm d t } \bigl ( \exp ( - t ) H ( t ) \bigr ) & = \exp ( - t ) H ' ( t ) + \frac { \mathrm d } { \mathrm d t } \bigl ( \exp ( - t ) \bigr ) H ( t ) \\ & = \exp ( - t ) \bigl ( H ' ( t ) - H ( t ) \bigr ) \\ & = 0 \text , \end{align*} and thus $ \exp ( - t ) H ( t ) $ is constant. As $ f ( 0 ) = x _ 0 $ and $ g ( 0 ) = y _ 0 $, we have $$ \exp ( - 0 ) H ( 0 ) = 1 \cdot h \bigl ( f ( 0 ) , g ( 0 ) \bigr ) = h ( x _ 0 , y _ 0 ) \text , $$ and therefore $ \exp ( - t ) H ( t ) $ is equal to $ h ( x _ 0 , y _ 0 ) $ for all $ t \in \mathbb R $. Equivalently, we have $$ h ( x _ 0 + a t , y _ 0 + b t ) = h ( x _ 0 , y _ 0 ) \exp t \tag 5 \label 5 $$ for all $ t \in \mathbb R $. Now, if $ h ( x _ 0 , y _ 0 ) \ne 0 $, then the right-hand side of \eqref{5} is unbounded, and therefore, the left-hand side must be unbounded, too, contradicting the assumption. More precisely, if $ | h ( x , y ) | \le M $ for all $ x , y \in \mathbb R $ but $ h ( x _ 0 , y _ 0 ) \ne 0 $, then letting $ t = \log \left ( 1 + \frac M { | h ( x _ 0 , y _ 0 ) | } \right ) $ in \eqref{5} we get \begin{align*} M & < | h ( x _ 0 , y _ 0 ) | + M \\ & = | h ( x _ 0 , y _ 0 ) | \exp \Biggl ( \log \left ( 1 + \frac M { | h ( x _ 0 , y _ 0 ) | } \right ) \Biggr ) \\ & = \left | h \Biggl ( x _ 0 + a \log \left ( 1 + \frac M { | h ( x _ 0 , y _ 0 ) | } \right ) , y _ 0 + b \log \left ( 1 + \frac M { | h ( x _ 0 , y _ 0 ) | } \right ) \Biggr ) \right | \\ & \le M \text , \end{align*} which is a contradiction. Therefore, we must have $ h ( x _ 0 , y _ 0 ) = 0 $. As we did everything for arbitrary $ x _ 0 , y _ 0 \in \mathbb R $, this holds on the whole $ \mathbb R ^ 2 $, which is what was desired.
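The identity \eqref{5} and the contradiction step can also be checked numerically; again, the constants, the base point, and the unbounded sample solution below are arbitrary illustrations rather than part of the argument.

```python
import math

a, b = 1.0, 2.0  # illustrative constants

def h(x, y):
    # a concrete unbounded solution of the PDE: h(x, y) = exp((a x + b y)/(a^2 + b^2))
    return math.exp((a * x + b * y) / (a * a + b * b))

x0, y0 = 0.7, -1.3
h0 = h(x0, y0)

# identity (5): h(x0 + a t, y0 + b t) = h(x0, y0) * e^t
for t in [-2.0, 0.5, 3.0]:
    assert abs(h(x0 + a * t, y0 + b * t) - h0 * math.exp(t)) < 1e-9

# the contradiction step: t = log(1 + M/|h0|) pushes |h| above any proposed bound M
M = 100.0
t_star = math.log(1 + M / abs(h0))
assert abs(h(x0 + a * t_star, y0 + b * t_star)) > M
```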


Further remarks:

In fact, we can characterize all the continuously differentiable solutions of \eqref{0}. Note that in the case $ a = b = 0 $, \eqref{0} itself implies that $ h $ is constantly zero, and that's the only solution, which already falls in the category of bounded solutions of interest in this particular problem. So, let's focus on the case where at least one of $ a $ and $ b $ is nonzero. Recall that up to deriving \eqref{5} we did not use the boundedness condition.

Define $ u : \mathbb R \to \mathbb R $ with $ u ( s ) = h ( - b s , a s ) $ for all $ s \in \mathbb R $. Note that since $ h $ is continuously differentiable, so is $ u $. Now, use \eqref{5} to get \begin{align*} h ( x , y ) & = h \Biggl ( x - a \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) , y - b \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) \Biggr ) \exp \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) \\ & = h \Biggl ( - b \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) , a \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) \Biggr ) \exp \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) \text , \end{align*} and therefore $$ h ( x , y ) = u \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) \exp \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) \tag 6 \label 6 $$ for all $ x , y \in \mathbb R $. Therefore, we've found out that any $ h : \mathbb R ^ 2 \to \mathbb R $ satisfying \eqref{0} for all $ x , y \in \mathbb R $ must be of the form \eqref{6} for some continuously differentiable $ u : \mathbb R \to \mathbb R $. Conversely, given any such $ u $, if we define $ h : \mathbb R ^ 2 \to \mathbb R $ with \eqref{6} for all $ x , y \in \mathbb R $, then $ h $ will be continuously differentiable. Also, $$ \frac \partial { \partial x } h ( x , y ) = \Biggl ( \frac a { a ^ 2 + b ^ 2 } u \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) - \frac b { a ^ 2 + b ^ 2 } u ' \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) \Biggr ) \exp \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) $$ and $$ \frac \partial { \partial y } h ( x , y ) = \Biggl ( \frac b { a ^ 2 + b ^ 2 } u \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) + \frac a { a ^ 2 + b ^ 2 } u ' \left ( \frac { a y - b x } { a ^ 2 + b ^ 2 } \right ) \Biggr ) \exp \left ( \frac { a x + b y } { a ^ 2 + b ^ 2 } \right ) \text , $$ which together imply \eqref{0} for all $ x , y \in \mathbb R $. 
Hence, every $ h $ obtained via \eqref{6} from a continuously differentiable $ u $ satisfies \eqref{0} for all $ x , y \in \mathbb R $, and thus we've characterized all the solutions.
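The converse direction lends itself to a quick numerical check: picking an arbitrary continuously differentiable $ u $ (here $ u ( s ) = \sin s $) and building $ h $ from \eqref{6} should give a solution of \eqref{0}. The constants and sample points below are illustrative.

```python
import math

a, b = 2.0, -1.0  # arbitrary constants with (a, b) != (0, 0) (illustration)

def u(s):
    # any continuously differentiable function of one variable works here
    return math.sin(s)

def h(x, y):
    # the general solution (6): u((a y - b x)/(a^2 + b^2)) * exp((a x + b y)/(a^2 + b^2))
    d = a * a + b * b
    return u((a * y - b * x) / d) * math.exp((a * x + b * y) / d)

eps = 1e-5
for x, y in [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5)]:
    # finite-difference check of the PDE h = a h_x + b h_y
    hx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    hy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    assert abs(h(x, y) - (a * hx + b * hy)) < 1e-6
```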

One can look at the original problem from another angle, using this characterization. As the exponential function only takes nonzero values, \eqref{6} tells us that $ h $ is identically zero if and only if $ u $ is. For any $ s , s _ 0 \in \mathbb R $, we have $$ h ( a s - b s _ 0 , b s + a s _ 0 ) = u ( s _ 0 ) \exp s \text . $$ Thus, if there is $ s _ 0 \in \mathbb R $ with $ u ( s _ 0 ) \ne 0 $, then $ h $ is unbounded, since the exponential function is.
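This exponential growth along the line $ s \mapsto ( a s - b s _ 0 , b s + a s _ 0 ) $ can likewise be checked numerically; the constants and the nowhere-vanishing $ u $ below are arbitrary illustrations.

```python
import math

a, b = 1.0, 3.0  # illustrative nonzero constants

def u(s):
    # a u that never vanishes, so the resulting h is unbounded
    return 1.0 + s * s

def h(x, y):
    # h built from u via the characterization (6)
    d = a * a + b * b
    return u((a * y - b * x) / d) * math.exp((a * x + b * y) / d)

# along the line (a s - b s0, b s + a s0), h grows like u(s0) * e^s
s0 = 0.4
for s in [0.0, 2.0, 5.0, 10.0]:
    expected = u(s0) * math.exp(s)
    assert abs(h(a * s - b * s0, b * s + a * s0) - expected) < 1e-6 * math.exp(s)
```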