We are currently using Least Squares to calculate the error: $$\min_{a,b}\sum_{k=1}^n(ax_k+b-y_k)^2$$
Least squares magnifies the error, making it disproportionately larger for bigger residuals, and this magnification acts as a kind of "leverage" that lets iterative methods (like Levenberg-Marquardt, for example) correct large errors faster.
But why don't we magnify the error even more and use a least-quartic objective?
$$\min_{a,b}\sum_{k=1}^n(ax_k+b-y_k)^4$$
Would this make iterative methods like Levenberg-Marquardt more efficient and lead to fewer iterations?
I think the main motivation comes from what we know how to solve well.
We mostly know how to solve linear problems.
Linear least squares has two great properties: the objective is convex, and its derivative is linear in the parameters $a$ and $b$, so setting the gradient to zero yields a linear system (the normal equations) with a closed-form solution.
Your least-quartic objective keeps convexity but loses the linear derivative: its gradient is cubic in $a$ and $b$, so there is no one-shot linear solve and we are left with a harder problem.
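A minimal numpy sketch of this difference, on toy data of my own (not from the question): the squared-loss fit drops out of a single linear solve, while for the quartic loss I fall back on plain gradient descent purely for illustration (Levenberg-Marquardt or Newton would also work, but no closed form exists).

```python
import numpy as np

# Toy data: y is roughly 2x + 1 with small, fixed "noise"
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Squared loss: the gradient is linear in (a, b), so the minimizer
# solves the normal equations -- one linear solve, no iteration.
A = np.vstack([x, np.ones_like(x)]).T
a_ls, b_ls = np.linalg.solve(A.T @ A, A.T @ y)

# Quartic loss: the gradient is cubic in (a, b), so there is no
# closed form; iterate instead (step size chosen by hand here).
a, b = 0.0, 0.0
lr = 1e-4
for _ in range(20000):
    r = a * x + b - y
    a -= lr * np.sum(4 * r**3 * x)
    b -= lr * np.sum(4 * r**3)

print(a_ls, b_ls)  # one-shot solution
print(a, b)        # reached only after many iterations
```

Note how much machinery the quartic loss already needs for a two-parameter line, compared to one call to a linear solver.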
Specifically regarding making the error large: it is not a good property, because it makes the method very sensitive to outliers; a single bad point can dominate the fit. See robust regression in that regard.
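To see the outlier sensitivity concretely, take hypothetical residuals of my own choosing: two well-fit points and one outlier. The outlier's share of the total loss grows with the power, so a quartic fit bends even harder toward the bad point than a squared-loss fit does.

```python
import numpy as np

# Residuals for some hypothetical fit: two good points, one outlier
r = np.array([1.0, 1.0, 10.0])

share_sq = r[-1] ** 2 / np.sum(r**2)  # outlier's share under squared loss
share_q4 = r[-1] ** 4 / np.sum(r**4)  # outlier's share under quartic loss

print(f"squared loss: {share_sq:.4f}")  # ~0.9804
print(f"quartic loss: {share_q4:.4f}")  # ~0.9998
```

Robust losses go in the opposite direction (Huber, absolute value), growing *slower* than squared for large residuals precisely to cap this effect.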