Can a function be shown positive without derivative?


The following problem appeared in a Finnish matriculation examination:

Let $a_1,a_2,\ldots,a_n$ be real numbers. What value should the parameter $x$ be given if one wants to minimize the sum $(x-a_1)^2+\cdots + (x-a_n)^2$? This is easy to compute using derivatives, but is there an alternative proof that does not use calculus?


4 Answers

BEST ANSWER

Yes, it can be done without calculus: expand and complete the square. $$\sum_{k=1}^n(x-a_k)^2=nx^2-2x\sum_{k=1}^na_k+\sum_{k=1}^na_k^2=n\left(x^2-\frac{2x}n\sum_{k=1}^na_k+C\right)=n\left(x-\frac1n\sum_{k=1}^na_k\right)^2+D$$ where $C$ and $D$ are appropriate constants. Since the squared term is nonnegative, the minimum is attained at $x=\displaystyle\frac{a_1+\ldots+a_n}n$.
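As a quick numerical sanity check (a sketch in Python; the sample list `a` and helper `S` are illustrative, not from the answer), the completed-square identity and the minimizer can be verified directly:

```python
# Check that sum((x - a_k)^2) == n*(x - mean)^2 + D, where the
# constant is D = sum(a_k^2) - n*mean^2, and that the mean minimizes it.

a = [2.0, 5.0, 7.0, 10.0]          # arbitrary sample values (illustrative)
n = len(a)
mean = sum(a) / n
D = sum(ak * ak for ak in a) - n * mean * mean

def S(x):
    """The sum (x - a_1)^2 + ... + (x - a_n)^2."""
    return sum((x - ak) ** 2 for ak in a)

# The identity holds for several test points.
for x in [-3.0, 0.0, mean, 4.5, 12.0]:
    assert abs(S(x) - (n * (x - mean) ** 2 + D)) < 1e-9

# The value at the mean is no larger than at nearby points.
assert S(mean) <= S(mean + 0.1) and S(mean) <= S(mean - 0.1)
```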


Just treat it as an ordinary quadratic in $x$:

$$ (x-a_1)^2+\cdots + (x-a_n)^2 = nx^2-(2\sum_{i=1}^{n}a_i)x+\sum_{i=1}^{n}a_i^2 $$

Since the leading coefficient $n$ is positive, this quadratic attains its minimum at its vertex, $x = -b/(2a) = \frac1n\sum_{i=1}^{n}a_i$.


You can do it geometrically. Consider the point $p=(a_1,\dots,a_n)\in\mathbb R^n$ (set $n=3$ for ease of visualization).

Then, by the Pythagorean theorem, what you want is to find the point $(x,x,\dots,x)$ that minimizes the distance from $p$. This is easily done by projecting $p$ onto the line $L=\{x_1=x_2=\dots=x_n\}$, which is achieved by intersecting $L$ with the hyperplane through $p$ orthogonal to $L$.

The hyperplane through the origin orthogonal to $L$ is $\sum x_i=0$, so the hyperplane orthogonal to $L$ passing through $p$ is $\pi=\{\sum x_i=\sum a_i\}$.

So $\pi\cap L$ is the point $(x,\dots,x)$ with $nx=\sum a_i$, i.e. $x=\frac1n\sum a_i$.
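This intersection can be sketched numerically in Python (the point `p` is an illustrative choice, not from the answer): the hyperplane condition $\sum x_i = \sum a_i$ with all coordinates equal to $x$ gives $nx = \sum a_i$, and the resulting point on $L$ differs from $p$ by a vector orthogonal to $L$'s direction $(1,\dots,1)$.

```python
# Project p = (a_1, ..., a_n) onto the diagonal line L = {(x, ..., x)}
# by intersecting L with the hyperplane sum(x_i) = sum(a_i).

p = [1.0, 4.0, 4.0]                 # illustrative point in R^3
n = len(p)
x = sum(p) / n                      # from nx = sum(a_i)
q = [x] * n                         # the intersection point pi ∩ L

# q - p is orthogonal to the direction (1, ..., 1) of L,
# confirming q is the orthogonal projection of p onto L.
direction = [1.0] * n
dot = sum((qi - pi) * di for qi, pi, di in zip(q, p, direction))
assert abs(dot) < 1e-9
```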


Use orthogonal projection in vector algebra. The essence of this proof is that you want to minimize the Euclidean distance between the vectors $\bar x = (x, \ldots, x) \in \mathbb R^n$ and $\bar a = (a_1, \ldots, a_n) \in \mathbb R^n$.

Premise 1 (assignment)

You have $f: x \in \mathbb R \mapsto \sum_{k=1}^n (a_k - x)^2 \in \mathbb R$ and want to find $x$ such that $f(x)$, or equivalently $\sqrt{f(x)}$, is minimal (equivalent because the square root is monotonic and $f(x) \ge 0$ for all $x$).

It is easy to show that $\sqrt{f(x)} = \lVert (a_1, \ldots, a_n) - (x, \ldots, x)\rVert_2$, where $\lVert \cdot \rVert_2$ is the Euclidean norm (both vectors have $n$ components). We can write simply $\lVert \bar a - \bar x \rVert$ and note that $\bar x = (x, \ldots, x)$ lies in the subspace $\{(r, \ldots, r) \mid r \in \mathbb R\} = \langle (1, \ldots, 1) \rangle \subset \mathbb R^n$.

Premise 2 (lemma)

One can show that if $V$ is an inner product space with norm $\lVert v \rVert = \sqrt{(v \mid v)}$, and $W = \langle \{w_1, \ldots, w_m\} \rangle$ is a subspace spanned by an orthogonal set $\{w_1, \ldots, w_m\}$, then, for $v \in V$ and $w \in W$,

$\lVert v - w \rVert$ is minimal (that is, $\lVert v - w \rVert \le \lVert v - w'\rVert$ for all $w' \in W$) if and only if $w = \sum_{k=1}^m \frac{(v \mid w_k)}{\lVert w_k\rVert^2}w_k$.

$w$ is then called the orthogonal projection of $v$ onto $W$.

Solution

Using the lemma with $m = 1$ and $w_1 = (1, \ldots, 1)$, $\lVert \bar a - \bar x\rVert$ is minimal if and only if $\bar x = \frac{(\bar a \mid (1, \ldots, 1))}{\lVert (1, \ldots, 1) \rVert_2^2}(1, \ldots, 1) = \frac{\sum_{k=1}^n a_k}{n}(1, \ldots, 1),$

or equivalently, $x = \frac1n\sum_{k=1}^n a_k$.
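The lemma's projection formula can be sketched in Python for the one-dimensional subspace spanned by $(1,\ldots,1)$ (the vector `a` and helper names are illustrative, not from the answer):

```python
# Orthogonal projection of a onto span{(1, ..., 1)} via the formula
# w = ((a | w1) / ||w1||^2) * w1 from the lemma above.

def project(a):
    """Project a onto the diagonal subspace spanned by (1, ..., 1)."""
    n = len(a)
    w1 = [1.0] * n
    coeff = sum(ai * wi for ai, wi in zip(a, w1)) / sum(wi * wi for wi in w1)
    return [coeff * wi for wi in w1]

a = [3.0, 1.0, 8.0]                 # illustrative data
proj = project(a)

# Every component of the projection equals the mean of a.
mean = sum(a) / len(a)
assert all(abs(c - mean) < 1e-9 for c in proj)

# The projection minimizes ||a - x*(1,...,1)|| over x.
def dist(x):
    return sum((ai - x) ** 2 for ai in a) ** 0.5

assert all(dist(mean) <= dist(mean + d) for d in (-1.0, 0.5, 2.0))
```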