We know that minimizing the squared error (objective function)
$J = \big(y(t) - \theta(t)\,u(t)\big)^2$ (eq. 1)
can be done with the recursive least-squares (gradient) update:
$\theta(t) = \theta(t-1) + \gamma\, u(t)\big(y(t) - \theta(t-1)\,u(t)\big)$ (eq. 2)
where $u$ = input; $y$ = output; $\theta$ = input gain; $t$ = time.
But the question is: how do we prove that eq. 2 minimizes eq. 1 with respect to $\theta$?
And which term shows that $J$ is being minimized?
*Let's say all variables are scalars.
Thanks in advance.
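For concreteness, here is a minimal numeric sketch of an update of the form in eq. 2, with made-up scalar values for $y$, $u$, and the step size $\gamma$ (all my own choices, not from the question). Repeating the step drives the error, and hence $J$, toward zero, with $\theta$ approaching $y/u$:

```python
# Toy check of the scalar gradient-descent update on J = (y - theta*u)^2.
# y, u, gamma, and the iteration count are assumed example values.
y, u = 2.0, 0.5
gamma = 0.1
theta = 0.0
J_history = []
for _ in range(200):
    e = y - theta * u               # prediction error
    theta = theta + gamma * u * e   # step along -dJ/dtheta (up to a factor of 2)
    J_history.append(e ** 2)        # J after each step

# theta converges toward y/u = 4.0, and J_history is decreasing.
```

This is only a sanity check, of course, not the proof you asked for; the proof comes from differentiating $J$ with respect to $\theta$.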
It's a bit difficult to untangle this question. It should have a form where we start with a sequence of $m$ measurements $$\left\{ x_{k}, y_{k} \right\}_{k=1}^{m}$$ and a trial function $$ y(x) $$ which has parameters $a_{1}, a_{2}, \dots, a_{n}$.

The method of least squares finds the vector $a$ which minimizes the total error $$ r^{2}(a) = \sum_{k=1}^{m} \left( y_{k} - y(x_{k}) \right)^{2}. $$ In fact, the least-squares solution is defined as $$ a_{LS} = \left\{ a \in \mathbb{R}^{n} \colon r^{2}(a) \text{ is minimized} \right\}. $$

This sets up the following $n$ equations: $$ \begin{align} \frac{\partial} {\partial a_{1}} r^{2}(a) & = 0 \\ \frac{\partial} {\partial a_{2}} r^{2}(a) & = 0 \\ \vdots \\ \frac{\partial} {\partial a_{n}} r^{2}(a) & = 0 \end{align} $$
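As a concrete illustration of those $n$ stationarity equations (with example data I made up), take the linear trial function $y(x) = a_1 + a_2 x$. The conditions $\partial r^2 / \partial a_i = 0$ reduce to the normal equations $X^{\mathsf T} X\, a = X^{\mathsf T} y$, and the gradient of $r^2$ vanishes at the solution:

```python
import numpy as np

# Example measurements (assumed, for illustration only).
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 1.9, 3.2, 3.8])

# Design matrix for the trial function y(x) = a1 + a2*x.
X = np.column_stack([np.ones_like(xs), xs])

# Solve the normal equations (X^T X) a = X^T y.
a_ls = np.linalg.solve(X.T @ X, X.T @ ys)

# Gradient of r^2(a) = ||y - X a||^2 is -2 X^T (y - X a);
# at a_ls it is zero, which is exactly the n stationarity conditions.
grad = -2 * X.T @ (ys - X @ a_ls)
```

The point is that "minimized" here is certified by the gradient being zero (together with $r^2$ being convex in $a$), which is the structure your question about eq. 1 and eq. 2 should be cast into.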
It would help to have your question nudged into something close to this format.