Does the "concavity delta" necessarily decrease for monotonously increasing functions?


Consider a function $u:\mathbb{R}\rightarrow\mathbb{R}$ with the following properties: \begin{align} \forall x\in\mathbb{R}:&u'(x)>0 \quad &\text{(strictly increasing)}\\ \forall x\in\mathbb{R}:&u''(x)<0 \quad &\text{(concave)} \end{align} Now the second property leads to: $$\forall\lambda\in (0,1),x\in\mathbb{R},\delta>0:\quad \lambda u(x)+(1-\lambda)u(x+\delta)< u(\lambda x+(1-\lambda)(x+\delta))=u(x+(1-\lambda)\delta)$$

If you are not interested in the economic interpretation you can skip to the italic text:

These are the usual assumptions about utility functions in a context where a simple ordering is not enough. Strictly increasing makes sense, because having more is always better than having less. And concavity means that the marginal increase in utility shrinks the more you already have. (E.g., if you earn 10k a month, an extra 10k on top of that is gigantic; if you earn a million, 10k doesn't seem like much.)

At the same time, concavity also leads to risk aversion: you would always take the expected value over the gamble. If you could choose between a gamble in which you get $\delta$ with probability $(1-\lambda)$ (and nothing with probability $\lambda$) and the expected value $(1-\lambda)\delta$ for sure, you would rather take the sure payment. This is expressed in the inequality above and stems from the concavity of the utility function.

The difference between the expected utility (i.e., the linear combination of the utilities of the outcomes) and the utility of the expected outcome is a rough measure of how risk-averse you are. Given that, under the assumptions above, the utility function becomes flatter and flatter, I wonder whether your risk aversion decreases when you have more wealth initially. It would be better to compare risk premiums (how much fixed money you have to add to the gamble to make its expected utility equal to the utility of the expected outcome) for different amounts of wealth, but with decreasing marginal utility the risk premium probably increases for wealthier people, so I thought I would start with the easier part.

Welcome back everyone who is not interested in economics!

If you try to draw some functions of this kind, you will notice that they all start very steep and then flatten towards higher $x$. Now the concavity inequality holds for all $x$, but it looks as if the difference between the curve and the line below it decreases for higher $x$.

So the question is: is it true that \begin{align} &\forall\lambda\in[0,1],\ x_2>x_1,\ \delta>0:\\ &u(x_1+(1-\lambda)\delta)-\big(\lambda u(x_1)+(1-\lambda)u(x_1+\delta)\big) \\ \ge\ &u(x_2+(1-\lambda)\delta)-\big(\lambda u(x_2)+(1-\lambda)u(x_2+\delta)\big) \end{align}

Maybe I need more assumptions, like a condition on the third derivative, or maybe I can even relax the strict inequalities above. Maybe it just doesn't make sense.

I am not quite sure how to tackle this. Usually I look for similar proofs, but the proof from our analysis lecture, which derives the inequality above from the negative second derivative, is not very instructive. It basically fell from the sky.

I think I will try playing with Taylor expansions for a bit now. But I am not sure if that will get me anywhere.

One more idea: given that $u$ has to be strictly increasing, $u''(x)$ probably has to tend towards $0$. If it stays away from $0$ on the negative side for too long, then the first derivative falls for long enough to become negative at some point. I guess I will need the second derivative to be continuous for a proof of that, though.

But maybe this will get me somewhere.

Sanity check with MATLAB: I tested it on the log function with randomly generated values of $x$ between 1 and 2, random deltas, and random lambdas. It at least doesn't give a counterexample, but that doesn't mean the inequality is true.
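For reproducibility, here is a Python analogue of that MATLAB sanity check (my sketch; the ranges for $x$, $\delta$, $\lambda$ mirror the ones described above, and the tolerance is an arbitrary guard against floating-point noise):

```python
import math
import random

def gap(u, x, delta, lam):
    # concavity gap: u(x + (1-lam)*delta) - [lam*u(x) + (1-lam)*u(x+delta)]
    return u(x + (1 - lam) * delta) - (lam * u(x) + (1 - lam) * u(x + delta))

random.seed(0)
violations = 0
for _ in range(10_000):
    x1 = random.uniform(1.0, 2.0)
    x2 = x1 + random.uniform(0.0, 1.0)      # x2 > x1
    delta = random.uniform(0.0, 1.0)
    lam = random.random()
    # small tolerance to absorb floating-point noise
    if gap(math.log, x1, delta, lam) < gap(math.log, x2, delta, lam) - 1e-12:
        violations += 1
print(violations)  # → 0: no counterexample for u = log
```

For $u=\log$ in particular, $u'=1/x$ is convex, which is exactly why no violation turns up here.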

There are 2 answers below.

Best answer:

The limit $L=\lim_{x\to\infty}u'(x)$ must exist, with $L\geq 0$. Indeed, $u'$ is strictly decreasing $($because $u''<0)$ and bounded below by $0$.

Moreover, we have

$$u'(x)-u'(x_0)=\int_{x_0}^x\,u''(t)\,dt,\tag{1}$$

so that $\int_{x_0}^\infty\,u''(t)\,dt=L-u'(x_0)$ is finite.


Now, suppose that $v:[x_0,\infty)\longrightarrow\mathbb{R}$ is negative and $\int_{x_0}^\infty\, v(t)\,dt$ is finite (hence negative). For any choice of constant $c>-\int_{x_0}^\infty\, v(t)\,dt$, the antiderivative $w(x)=\int_{x_0}^x\,v(t)\,dt+c$ is positive.

At this point we have everything we need, for any antiderivative of $w$ will be like the $u$ in the statement: strictly increasing (since $u'=w>0$) and strictly concave (since $u''=v<0$). Therefore, all we need to do is consider second derivatives like $v$.
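As a concrete instance of this recipe (my example, not the answerer's): take $x_0=1$ and $v(t)=-1/t^2$, so $\int_1^\infty v\,dt=-1$ and any $c>1$ works; with $c=2$ we get $w(x)=1+1/x$ and $u(x)=\ln x + x - 1$. A quick finite-difference check:

```python
import math

# A concrete instance of the recipe, with x0 = 1 (my choice of v and c):
v = lambda t: -1.0 / t**2            # negative, with finite tail integral -1
c = 2.0                              # needs c > 1 = -∫_1^∞ v(t) dt
w = lambda x: (1.0 / x - 1.0) + c    # w(x) = ∫_1^x v(t) dt + c = 1/x + 1 > 0
u = lambda x: math.log(x) + x - 1.0  # an antiderivative of w, so u'' = v

# finite-difference sanity checks: u' = w > 0 and u'' = v < 0
h = 1e-5
for x in (1.5, 5.0, 20.0):
    assert w(x) > 0 and v(x) < 0
    assert abs((u(x + h) - u(x - h)) / (2 * h) - w(x)) < 1e-6
    assert abs((u(x + h) - 2 * u(x) + u(x - h)) / h**2 - v(x)) < 1e-3
```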


Because of equation $(1)$, $v$ cannot stay uniformly away from $0$ for large $x$; that is, there cannot be $M>0$ and $\varepsilon>0$ such that $v(x)<-\varepsilon$ for all $x>M$. However, $v(x)$ need not converge to $0$ as $x\to\infty$.

There can be arbitrarily deep bumps in $v$ along the tail, as long as the integral remains finite. In particular, $v$ need not be eventually increasing; there may be arbitrarily large $x$ for which $v$ is decreasing on some neighborhood $(x,x+\xi)$.
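To make those bumps concrete, here is one possible construction (mine, not the answerer's): take $v(t)=-1/t^2$ minus triangular spikes of depth $1$ and width $2^{-n}$ centered at the integers $n\ge 2$. Each spike contributes only $2^{-n-1}$ to the integral, so $\int v$ stays finite, yet $v(n)\le -1$ at every such integer, so $v$ does not tend to $0$; and each spike's left flank gives arbitrarily large $x$ where $v$ is decreasing.

```python
def spike(t, n):
    # triangular dip of depth 1 and width 2**-n, centered at the integer n
    half = 2.0 ** (-n) / 2
    return max(0.0, 1.0 - abs(t - n) / half)

def v(t, N=60):
    # negative everywhere, integrable, but v(n) <= -1 for every integer n >= 2
    return -1.0 / t**2 - sum(spike(t, n) for n in range(2, N))

print(v(10.0))  # → -1.01 : bounded away from 0 at the spike centers
# total spike area: sum of (base * height / 2) = sum of 2**-(n+1) over n >= 2
area = sum(2.0 ** (-n) / 2 for n in range(2, 60))
print(round(area, 6))  # → 0.25 : the spikes add only finitely much
```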


Consider $f_i:[0,1]\longrightarrow\mathbb{R}$ given by

$$f_i(\lambda)=u(x_i+(1-\lambda)\delta)-\Big(\lambda\,u(x_i)+(1-\lambda)\,u(x_i+\delta)\Big).$$

By inspection, we have $f_i(0)=f_i(1)=0$ and $f_i''(\lambda)=\delta^2\,u''(x_i+(1-\lambda)\delta)$.

Let $g:[0,1]\longrightarrow\mathbb{R}$ be given by $g(\lambda)=f_1(\lambda)-f_2(\lambda)$. From our previous calculations, it follows that $g(0)=g(1)=0$ and that

$$\frac1{\delta^2}\,g''(\lambda)=u''(x_1+(1-\lambda)\delta)-u''(x_2+(1-\lambda)\delta).$$

We will show that there are choices of $u$ $($or rather, of $v\simeq u'')$, $\delta$, and $x_1<x_2$ such that $g<0$ on $(0,1)$, which shows that what you wish to prove needs additional hypotheses.

Let $u''$ be as in the previous section, with bumps along its tail, and let $I$ be an interval on which $u''$ is decreasing. Let $x_1<x_2$ with $x_1,x_2\in I$, and let $\delta>0$ be such that $x_2+\delta \in \overline{I}$.

Under these conditions, it's easy to check that $g''(\lambda)>0$ for all $\lambda \in (0,1)$: both $x_1+(1-\lambda)\delta$ and $x_2+(1-\lambda)\delta$ lie in $\overline{I}$, where $u''$ is decreasing, so the first term above exceeds the second. Hence $g$ is strictly convex on $[0,1]$. Since $g$ is $0$ at the endpoints, it must be negative on $(0,1)$, which concludes the proof.
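The construction can also be checked numerically. The sketch below uses my own choice of bump function, not one given in the answer: $u''(x)=-(2+\sin x)/x^2$ is negative and integrable on $[1,\infty)$, and it is decreasing near $x=4\pi\approx 12.57$, where the $x\cos x$ term dominates. Up to an affine term, which cancels in the concavity gap (so positivity of $u'$ can be arranged by adding $cx$ for large $c$ without changing anything), Cauchy's repeated-integration formula gives $u(x)=\int_1^x (x-t)\,u''(t)\,dt$:

```python
import numpy as np

def u2(t):
    # bumpy, negative, integrable second derivative; decreasing near x = 4*pi
    return -(2.0 + np.sin(t)) / t**2

def U(x, n=200_001):
    # u(x) up to an affine term, via Cauchy's formula: ∫_1^x (x - t) u''(t) dt
    t = np.linspace(1.0, x, n)
    f = (x - t) * u2(t)
    h = (x - 1.0) / (n - 1)
    return h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # trapezoid rule

def gap(x, delta, lam):
    # concavity gap; affine terms of u cancel here, so U can stand in for u
    return U(x + (1 - lam) * delta) - (lam * U(x) + (1 - lam) * U(x + delta))

x1, x2, delta, lam = 12.35, 12.55, 0.2, 0.5   # u'' is decreasing on [12.35, 12.75]
g1, g2 = gap(x1, delta, lam), gap(x2, delta, lam)
print(g1 > 0 and g2 > 0, g1 < g2)  # → True True
```

Both gaps are positive (concavity), but the gap at the larger wealth $x_2$ is *bigger*, so the conjectured inequality fails without extra hypotheses.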

Second answer:

If we assume that $u''(x)$ increases monotonically, which in most cases it does (it probably has to increase towards zero without actually reaching it), then this statement is probably true, because of the Taylor approximation: \begin{align} u(x_i+h)= u(x_i)+hu'(x_i)+\frac{h^2}{2}u''(x_i)+o(h^2) \end{align} thus: \begin{align} \forall \lambda\in[0,1],\ x_2>x_1,\ \delta>0:&\\ u(x_1+\lambda\delta)-[(1-\lambda)u(x_1)+\lambda u(x_1+\delta)] &= u(x_1+\lambda\delta)-u(x_1)+\lambda[u(x_1)-u(x_1+\delta)]\\ &\sim \lambda\delta u'(x_1)+\frac{(\lambda\delta)^2}{2}u''(x_1)-\lambda\left[\delta u'(x_1)+\frac{\delta^2}{2}u''(x_1)\right]\\ &=\left(\frac{(\lambda\delta)^2-\lambda\delta^2}{2}\right)u''(x_1)\\ &=\frac{\lambda\delta^2}{2}(1-\lambda)(-u''(x_1))\\ &\ge \frac{\lambda\delta^2}{2}(1-\lambda)(-u''(x_2))\\ &\sim u(x_2+\lambda\delta)-[(1-\lambda)u(x_2)+\lambda u(x_2+\delta)] \end{align}
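The leading term of that expansion can be sanity-checked numerically: for small $\delta$, the left-hand side should approach $\lambda(1-\lambda)\delta^2(-u''(x))/2$. A quick sketch with $u=\log$ (my choice of test point and tolerance):

```python
import math

def lhs(u, x, delta, lam):
    # u(x + lam*delta) - [(1 - lam) u(x) + lam u(x + delta)]
    return u(x + lam * delta) - ((1 - lam) * u(x) + lam * u(x + delta))

# compare against the predicted leading term lam*(1-lam)*delta^2*(-u''(x))/2
x, lam = 5.0, 0.3
neg_u2 = 1.0 / x**2                       # -u''(x) for u = log
ratios = []
for delta in (1e-2, 1e-3):
    predicted = lam * (1 - lam) * delta**2 * neg_u2 / 2
    ratios.append(lhs(math.log, x, delta, lam) / predicted)
print([round(r, 2) for r in ratios])      # → [1.0, 1.0]
```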

I'll see if I can get rid of the residual term. But it seems the statement is at least not completely wrong.

About the question whether $u''(x)$ converges to zero: since $\forall x\in\mathbb{R}:u'(x)>0$, we have \begin{align} &u'(0)\ge u'(0)-u'(x)=\int_0^x(-u''(t))\,dt \\ \Rightarrow\ &u'(0)\ge \int_0^\infty (-u''(t))\,dt =\sum_{n=0}^\infty\int_n^{n+1}(-u''(t))\,dt\\ \Rightarrow\ & \int_m^\infty (-u''(t))\,dt=\sum_{n=m}^\infty\int_n^{n+1}(-u''(t))\,dt \rightarrow 0\quad (m\rightarrow\infty) \end{align}

With uniform continuity, or with monotonicity of $u''$, you get that it has to converge to $0$. I am not sure whether continuity alone is enough; I can't find a proof of that. Oh, and if $u'(0)$ is not finite, then you just take any point where $u'$ is finite and do the same thing.
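As a small illustration of the vanishing tail (my example): for $u=\log$ we have $-u''(t)=1/t^2$, so the tail integral is exactly $\int_m^\infty dt/t^2 = 1/m \to 0$. A midpoint-rule check against the closed form:

```python
# For u = log, -u''(t) = 1/t**2, so the tail integral has closed form 1/m.
def tail(m, M=1e4, n=200_000):
    # midpoint-rule approximation of ∫_m^M dt/t^2  (≈ ∫_m^∞ for large M)
    h = (M - m) / n
    return sum(h / (m + (k + 0.5) * h) ** 2 for k in range(n))

print([round(tail(m), 3) for m in (10, 100, 1000)])  # tends to 0 like 1/m
```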