Regarding machine learning: in the Adaline rule we say that
$$ J(w)=\frac{1}{2} \sum_{i} (\mbox{target}^{(i)} - \mbox{output}^{(i)})^2, \quad \mbox{output}^{(i)} \in \mathbb{R} $$
is convex. I'd like to know how we can say that. Is there a proof that it is always convex?
Thank you in advance.
Check that the Hessian matrix is positive semidefinite.
For a more concrete answer, please state precisely what you understand by $w$.
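Assuming the standard Adaline linear activation $\mbox{output}^{(i)} = w^{T} x^{(i)}$ (an assumption, since the question does not define $w$), the Hessian is $\sum_i x^{(i)} (x^{(i)})^{T} = X^{T} X$, which is independent of $w$ and satisfies $v^{T} X^{T} X\, v = \|Xv\|^{2} \ge 0$ for every $v$, so $J$ is convex. A minimal numerical sketch of this check, using made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 examples with 3 features (chosen only for illustration).
X = rng.normal(size=(50, 3))

# For J(w) = 1/2 * sum_i (target_i - w @ x_i)**2 with linear activation,
# the Hessian with respect to w is H = X^T X; it depends on neither w
# nor the targets, so checking it once settles convexity everywhere.
H = X.T @ X

# H is symmetric, so eigvalsh applies and its eigenvalues are real;
# nonnegative eigenvalues mean H is positive semidefinite.
eigenvalues = np.linalg.eigvalsh(H)
print("min eigenvalue:", eigenvalues.min())
```

Since every eigenvalue of $X^{T} X$ is nonnegative, the printed minimum is $\ge 0$ and $J$ is convex regardless of the targets.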