I understand this mathematically: we have a function of two variables that represents the sum of squared errors, and we have to find the $a$ and $b$ that minimize it. There is only one minimum point.
But when I think about it, I can't see why two different lines couldn't bring the function to the same minimum value. We have two degrees of freedom, so why can't we find a new pair $(a,b)$ that gives the same value?
You have two degrees of freedom (say, the slope $m$ and the $y$-intercept $b$ of the regression line), but also two constraints: the partial derivatives of the total squared error $E$ with respect to $m$ and with respect to $b$ must both vanish. That means you should expect only finitely many (local) minima. (As A.E. says, $E$ is strictly convex, so there is at most one minimum.)
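To make the strict-convexity claim concrete, here is a small numeric sketch (the data points are made up for illustration). For $E(m,b)=\sum_i (mx_i+b-y_i)^2$, the Hessian is the constant matrix $2\begin{pmatrix}\sum x_i^2 & \sum x_i\\ \sum x_i & n\end{pmatrix}$, which is positive definite whenever the $x_i$ are not all equal, so the two constraints (the normal equations) have exactly one solution:

```python
import numpy as np

# Made-up sample data; any x with at least two distinct values works.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])
n = len(x)

# Hessian of E(m, b) = sum((m*x + b - y)^2); it does not depend on (m, b).
H = 2 * np.array([[np.sum(x**2), np.sum(x)],
                  [np.sum(x),    n        ]])

# Positive-definite Hessian => E is strictly convex => unique minimizer.
eigvals = np.linalg.eigvalsh(H)
print(eigvals)                 # both eigenvalues are positive
assert np.all(eigvals > 0)

# The unique (m, b) solves the normal equations (H/2) @ [m, b] = [sum(x*y), sum(y)].
m, b = np.linalg.solve(H / 2, np.array([np.sum(x * y), np.sum(y)]))
print(m, b)
```

Setting both partial derivatives to zero gives a linear system with an invertible matrix, so no second line can achieve the same (minimal) error.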