I am told that if we have some linear function $f$ defined over an interval $[a,b]$, then the fact that $f$ is linear implies that, for all $\lambda$ between $0$ and $1$ exclusive, the following property holds: $$f((1-\lambda)a+\lambda b) = (1-\lambda)f(a)+\lambda f(b)$$
Why is this the case? How can I see that the two expressions are equivalent to each other? My issue isn't in understanding what's being said here, but rather in understanding why it's true.
For context, I am trying to understand the definition of concave and convex functions, and this property is given as a minor step in the build-up towards the definition, with no further elaboration.
EDIT: In the answer below, I am told that this property is taken as the *definition* of a linear function, but in the resource I'm using, it says that this property holds *because* $f$ is a linear function, so I feel quite confused.
Wouldn't it be possible to show that this property is implied by a more immediately intuitive definition? Am I thinking about this in the wrong way? How should I view this property/definition?
Any help in clearing up my confusion would be greatly appreciated.
Using "linear" for "affine", as the text does, the "if and only if" follows by double implication.
If $f(x) = \alpha + \beta x$, then $f\left((1−\lambda)a + \lambda b\right) = (1 − \lambda)f(a) + \lambda f(b)$ follows for all $\lambda \in (0, 1)$: $$ \begin{align} f\left((1−\lambda)a + \lambda b\right) &= \alpha + \beta \left((1−\lambda)a + \lambda b\right) \\ &= \alpha \cdot \left(\color{red}{(1-\lambda)} + \color{blue}{\lambda}\right) + \color{red}{\beta \cdot (1-\lambda) a} + \color{blue}{\beta \cdot \lambda b} \\ &= \color{red}{(1-\lambda)\cdot(\underbrace{\alpha + \beta a}_{\color{black}{=\,f(a)}})} + \color{blue}{\lambda (\underbrace{\alpha + \beta b}_{\color{black}{=\,f(b)}})} \\ &= (1-\lambda) f(a) + \lambda f(b) \end{align} $$
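As a quick sanity check (not part of the proof), the identity can be verified numerically for a sample affine function; the values of `alpha`, `beta`, `a`, `b` below are arbitrary illustrative choices:

```python
# Check f((1-λ)a + λb) = (1-λ)f(a) + λf(b) for an affine f(x) = alpha + beta*x.
alpha, beta = 2.0, -3.0          # arbitrary sample coefficients
f = lambda x: alpha + beta * x

a, b = 1.0, 5.0                  # arbitrary interval [a, b]
for lam in (0.1, 0.25, 0.5, 0.9):
    lhs = f((1 - lam) * a + lam * b)
    rhs = (1 - lam) * f(a) + lam * f(b)
    assert abs(lhs - rhs) < 1e-12, (lam, lhs, rhs)
```

Note that the identity holds for *every* $\lambda$, not just those sampled; the loop merely spot-checks a few interior values.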
Conversely, if $f\left((1−\lambda)a + \lambda b\right) = (1 − \lambda)f(a) + \lambda f(b)$ for all $\lambda \in (0, 1)$, then $f(x) = \alpha + \beta x$ for all $x \in [a,b]$ (assuming $a \neq b$; the endpoints $x = a$ and $x = b$ hold trivially). For $x$ strictly between $a$ and $b$, let $\lambda = \frac{x-a}{b-a} \in (0,1)$, equivalently $x = (1-\lambda)a + \lambda b$; then: $$ \begin{align} f(x) = f\left((1−\lambda)a + \lambda b\right) &= (1 − \lambda)f(a) + \lambda f(b) \\ &= \left(1 - \frac{x-a}{b-a}\right) f(a) + \frac{x-a}{b-a} f(b) \\ &= \frac{b-x}{b-a}f(a) + \frac{x-a}{b-a}f(b) \\ &= \underbrace{\frac{bf(a)-af(b)}{b-a}}_{=\,\alpha} + \underbrace{\frac{f(b)-f(a)}{b-a}}_{=\,\beta}\,x \\ &= \alpha + \beta x \end{align} $$
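The converse computation says that the two values $f(a)$ and $f(b)$ alone pin down the coefficients $\alpha$ and $\beta$. A numerical illustration (again with arbitrary sample values):

```python
# Recover alpha = (b*f(a) - a*f(b))/(b - a) and beta = (f(b) - f(a))/(b - a)
# from two sample point values, then check that interpolation gives alpha + beta*x.
a, b = 1.0, 5.0
fa, fb = 4.0, -2.0               # sample values for f(a), f(b); arbitrary

alpha = (b * fa - a * fb) / (b - a)
beta = (fb - fa) / (b - a)

for x in (1.0, 2.0, 3.5, 5.0):
    lam = (x - a) / (b - a)
    interpolated = (1 - lam) * fa + lam * fb
    assert abs(interpolated - (alpha + beta * x)) < 1e-12
```

With these sample values, $\alpha = \frac{5\cdot 4 - 1\cdot(-2)}{4} = 5.5$ and $\beta = \frac{-2-4}{4} = -1.5$, and indeed $\alpha + \beta a = 4 = f(a)$ and $\alpha + \beta b = -2 = f(b)$.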
This is precisely the "if" part proved at the previous step, just with $h_{a,b}$ instead of $f$.