How to solve linear regression with an uncommon error function?


For common linear regression problems, the error term is an $\ell_2$ norm. In other words, the error between the measurements (the dependent variable) $y$ and the estimate $\hat{y}=X\beta$ is measured as $||y-\hat{y}||_{2}$.

Least-squares regression's objective is then to find the $\beta$ that minimizes the above error term.
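For concreteness, here is a minimal sketch of the standard least-squares problem in NumPy (the data and variable names are my own, made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # design matrix (samples x features)
beta_true = np.array([1.0, -2.0, 0.5])   # hypothetical true coefficients
y = X @ beta_true + 0.1 * rng.normal(size=100)  # noisy measurements

# Find the beta minimizing ||y - X beta||_2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```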

Now, I have an uncommon objective, which is a maximization problem:

My estimate $b$, derived from $X\beta$, is a vector in which each entry is a binary scalar $b_{j} \in \{-1, 1\}$. In other words, for each sample, our prediction is either positive or negative. Then we want to find the $\beta$ that maximizes

$y^{T} b$
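To make the objective concrete, here is how I would evaluate it for a candidate $\beta$ (a sketch; binarizing the linear predictions via $b_j = \operatorname{sign}(x_j^T \beta)$ is my assumption). Since $y^T b = \sum_j y_j b_j$, a prediction that agrees with the sign of $y_j$ contributes $|y_j|$ to the objective, which is what makes this look like a classification problem with per-sample weights:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)        # measurements, arbitrary sign and magnitude
beta = rng.normal(size=3)      # some candidate coefficient vector

# Binarize the linear predictions (assumption: b_j = sign(x_j^T beta))
b = np.sign(X @ beta)

# The objective y^T b
objective = y @ b

# Equivalent view: weighted sign agreement with labels sign(y_j)
# and sample weights |y_j|
labels, weights = np.sign(y), np.abs(y)
weighted = np.sum(weights * labels * b)
```

The two quantities coincide because $|y_j|\,\operatorname{sign}(y_j) = y_j$, so maximizing $y^T b$ is the same as maximizing weighted agreement between $b_j$ and the labels $\operatorname{sign}(y_j)$.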

It seems that I have a classification problem with a per-sample weight term. Can you think of a way to transform this into a more common quadratic or cone optimization problem, or to reformulate it as a classification problem?