My question arises after reading this link, which presents seven methods of regression. Each of these methods is determined by a loss function, and my question is: what is the intuition behind these loss functions?
I arrived at that link because in my study I am trying to find a loss function that is concave with respect to the random variables. Recall that a regression problem consists of estimating the parameter $\beta_{*}\in \mathbb{R}^{d}$ in the model $$ Y_{i} = \beta_{*}^{T}X_{i} + e_{i}, $$ where $Y_{i} \in \mathbb{R}$ and $X_{i} \in \mathbb{R}^{d}$, $i = 1, \ldots, n$. To estimate the parameter $\beta_{*}$, one minimizes a loss function $l(x, y, \beta)$; for example, the quadratic loss $l(x, y, \beta) = (y - \beta^{T} x)^{2}$ is the standard choice in least-squares techniques. The reason the quadratic loss is intuitive is obvious from the point of view of linear algebra, but note that it is convex with respect to $w = (x, y)$.
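To make the convexity claim concrete: writing $w = (x, y)$ and $a = (-\beta, 1)$, we have $y - \beta^{T}x = a^{T}w$, so the quadratic loss is $l = (a^{T}w)^{2}$ and its Hessian in $w$ is the constant matrix $2aa^{T}$, which is rank-one positive semidefinite. A small numerical sketch (hypothetical values of $\beta$, just to illustrate) checks this:

```python
import numpy as np

# Verify that l(x, y, beta) = (y - beta^T x)^2 is convex in w = (x, y).
# With a = (-beta, 1), the residual is y - beta^T x = a^T w, so
# l = (a^T w)^2 and its Hessian in w is 2 * a a^T (constant in w).

rng = np.random.default_rng(0)
d = 3
beta = rng.standard_normal(d)        # arbitrary illustrative parameter

a = np.concatenate([-beta, [1.0]])   # a^T w = y - beta^T x
H = 2.0 * np.outer(a, a)             # Hessian of l with respect to w

eigvals = np.linalg.eigvalsh(H)
print(eigvals.min() >= -1e-12)       # all eigenvalues >= 0: positive semidefinite
```

Since the Hessian is positive semidefinite everywhere, $l$ is convex in $w$ for every fixed $\beta$, which is exactly the obstacle to finding a concave loss of this form.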
My intention is to construct or find a loss function $l(x, y, \beta)$ that is concave with respect to $w = (x, y)$ and that I can justify within the context of regression methods.