I am a bit confused about a concept that was recently covered in my stats class. My professor said that there is never any real need to do nonlinear regression, because a function can always be transformed to a linear form. For example, he said that if:
$Y = \beta_0 + \beta_1 x^2$,
Then, we can let $Z = x^2$, and now:
$Y = \beta_0 + \beta_1 Z$, which is a linear function of $Z$.
Although this seems algebraically correct to me, it comes across as suspicious. If one can always do this, what does that mean for nonlinear functions? I.e., is there a theorem, or a set of conditions, that says when such transformations are allowed and when they are not?
There is nothing wrong with the substitution of $Z$. Indeed, any time we want to fit a model of the form $Y = \beta_0 + \beta_1 f(X)$, we can regress $Y$ on $f(X)$ with ordinary linear regression, because the model is still linear in the *parameters* $\beta_0$ and $\beta_1$. That is the condition the trick relies on: it works whenever the parameters enter linearly, no matter how nonlinear $f$ is in $X$. By contrast, a model such as $Y = \beta_0 e^{\beta_1 X}$ with additive error is nonlinear in $\beta_1$, and no substitution of the predictor alone will linearize it; that is where genuine nonlinear regression is needed. The catch with the substitution is that the fitted regression has no straightforward interpretation in terms of $X$ and $Y$; we may only speak in terms of $f(X)$ and $Y$. For example, the $r^2$ of the fit measures the linear association between $f(X)$ and $Y$, not between $X$ and $Y$.
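To make the substitution concrete, here is a small sketch in Python/NumPy (the true coefficients $\beta_0 = 2$, $\beta_1 = 3$ and the noise level are made up for illustration): we generate data from $Y = \beta_0 + \beta_1 x^2 + \varepsilon$, set $Z = x^2$, and recover the coefficients with an ordinary least-squares fit of $Y$ on $Z$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true model: Y = 2 + 3*x^2 + noise
x = np.linspace(-5, 5, 200)
y = 2 + 3 * x**2 + rng.normal(scale=0.5, size=x.size)

# Substitute Z = x^2, then fit an ordinary linear regression of Y on Z
z = x**2
design = np.column_stack([np.ones_like(z), z])  # intercept column + Z
(beta0_hat, beta1_hat), *_ = np.linalg.lstsq(design, y, rcond=None)

print(beta0_hat, beta1_hat)  # estimates close to the true values 2 and 3
```

Note that the fit, and any summary of it such as $r^2$, describes the linear relationship between $Z$ and $Y$; the relationship between $x$ and $Y$ remains quadratic.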