My question applies to graph questions in general. Given that the equation for a straight line is $y=mx+c$, wouldn't it be easier to calculate the value of a gradient using algebra: $$ \frac{y-c}{x}=m $$ instead of using the formula $\frac{y_2-y_1}{x_2-x_1}$? It just seems unnecessarily complicated and also requiring more than one data point when the gradient of a line can be calculated using only the coordinates of one point.
Ex:
Let's pick one data point on this line, say $(1,4)$. We also know that the y-intercept is $1$. Substituting into my formula gives $\frac{4-1}{1}=3$, which is in fact the gradient of this line. Why should we go to the trouble of using two data points?
The formula $$m=\frac{y_2-y_1}{x_2-x_1}$$ works for *any* two points on the line; neither of them needs to be an intercept. Your formula is indeed correct, but it is just a special case of this one: choosing the y-intercept $(0,c)$ or the x-intercept $(b,0)$ as one of the two points gives $$m=\frac{y_2-c}{x_2} \quad\text{or}\quad m=\frac{y_2}{x_2-b}.$$
But the question is: how will you find $b$ or $c$ in the first place, before you can use the formulas above?
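To make the point concrete, here is the substitution written out as a sketch, using the asker's example point $(1,4)$ with y-intercept $c=1$:

```latex
\begin{aligned}
% Take (x_1, y_1) = (0, c), the y-intercept, in the two-point formula:
m &= \frac{y_2 - y_1}{x_2 - x_1} = \frac{y_2 - c}{x_2 - 0} = \frac{y_2 - c}{x_2} \\
% With the example point (x_2, y_2) = (1, 4) and c = 1:
m &= \frac{4 - 1}{1} = 3 \\
% But c itself is typically recovered from a known point and gradient,
% i.e. c = y_1 - m x_1, so the two-point formula is the more primitive tool.
c &= y_1 - m x_1
\end{aligned}
```

So the one-point formula only shortcuts the computation when $c$ is already known, e.g. read off a graph; given raw data points alone, the two-point formula is what lets you find $c$ at all.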