I have a polynomial regression model whose hypothesis function is defined as: $$ h( \theta) := \theta_0 + \theta_1 x_1^2 + \theta_2 x_1^3 + \theta_3 x_2^2 $$ I wish to derive the gradient descent algorithm for this model.
First of all, my main concern is why the hypothesis function is defined as a function of $\theta$ rather than of $x$. My attempted derivation is as follows: define the cost \begin{align*} J(\theta_{0},\theta_{1},\theta_{2},\theta_{3})&=\frac{1}{2}\sum_{i=1}^{2}\left(h(x_{i})-y_{i}\right)^{2}\\ &=\frac{1}{2}(h(x_{1})-y_{1})^{2}+\frac{1}{2}(h(x_{2})-y_{2})^{2}, \end{align*} and then seek $\underset{\theta}{\operatorname{argmin}}\, J(\theta)$. I cannot proceed without knowing how the hypothesis function is really defined, because as far as I know a hypothesis function is a function of $x$ with $\theta$ as parameters, which is not the case with the hypothesis function defined above.
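For concreteness, here is a minimal sketch of what I believe the intended algorithm looks like, assuming the notation $h(\theta)$ really means $h_\theta(x) = \theta_0 + \theta_1 x_1^2 + \theta_2 x_1^3 + \theta_3 x_2^2$ (a function of $x$ parameterized by $\theta$). The training points, learning rate, and iteration count are made up purely for illustration:

```python
import numpy as np

def features(x1, x2):
    # Map raw inputs to the basis [1, x1^2, x1^3, x2^2]
    # implied by the hypothesis above.
    return np.array([1.0, x1**2, x1**3, x2**2])

def gradient_descent(X, y, alpha=0.05, iters=2000):
    # Batch gradient descent on J(theta) = (1/2) * sum_i (h(x_i) - y_i)^2.
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        residuals = X @ theta - y   # h(x_i) - y_i for each sample
        grad = X.T @ residuals      # gradient of the squared-error cost
        theta -= alpha * grad
    return theta

# Two illustrative training points (m = 2, matching the sum in J).
raw = [(1.0, 2.0), (0.5, -1.0)]
X = np.stack([features(x1, x2) for x1, x2 in raw])
y = np.array([3.0, 0.5])
theta = gradient_descent(X, y)
```

Under this reading, the gradient with respect to each $\theta_j$ is $\sum_i (h_\theta(x_i) - y_i)\,\phi_j(x_i)$, where $\phi_j$ is the $j$-th basis function; whether that is what the problem intends is exactly my question.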