I see the expression $(1-\alpha )\cdot f(x) + \alpha \cdot f(y)$ in many places:
In the definition of a concave function (for $\alpha \in [0,1]$):
$$ {\displaystyle f((1-\alpha )x+\alpha y)\geq (1-\alpha )f(x)+\alpha f(y)}$$
In reinforcement learning, where it also seems to have its own special notation, as well as in batch normalization for neural networks, and in other places. I was wondering if there's a special meaning behind it: an intuitive way of looking at it that offers some insight.



It's a cheap way of forming a (weighted) convex combination of two values. The freedom to choose $\lambda \in [0,1]$ lets you bias the result toward either $f(x)$ or $f(y)$: when $\lambda = 1/2$ you get the usual average, and when $\lambda$ is close to $0$ or $1$ you strongly prefer one point over the other. So it's commonly used when you want to balance the contributions of two terms and want the result to land between them, rather than outside their interval. In a sense, $\lambda$ specifies how much you trust one point over the other.
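Numerically, this is just linear interpolation. A minimal sketch (the name `lerp` is my own choice here, not a standard-library function):

```python
def lerp(a, b, lam):
    """Convex combination: returns a at lam=0, b at lam=1,
    and a value strictly between them for 0 < lam < 1."""
    return (1 - lam) * a + lam * b

print(lerp(0.0, 10.0, 0.5))  # midpoint: 5.0
print(lerp(0.0, 10.0, 0.9))  # strongly biased toward b: 9.0
```

The endpoints are recovered exactly: `lerp(a, b, 0) == a` and `lerp(a, b, 1) == b`.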
For example, this is useful in machine learning and statistics, because you can treat $\lambda$ as a tunable hyperparameter and see which value makes the model perform best.
As an example, elastic-net regularization can take the form $\lambda \|w\|_1+(1-\lambda)\|w\|_2^2$, which balances $L_1$ (lasso) and $L_2$ (ridge) regularization.
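A concrete sketch of that penalty, using the squared $L_2$ norm as most implementations do (`elastic_net_penalty` is a hypothetical helper, not a library function):

```python
import numpy as np

def elastic_net_penalty(w, lam):
    # Convex combination of the L1 and squared-L2 penalties:
    # lam = 1 recovers pure lasso, lam = 0 recovers pure ridge.
    return lam * np.abs(w).sum() + (1 - lam) * np.square(w).sum()

w = np.array([1.0, -2.0, 0.5])
print(elastic_net_penalty(w, 1.0))  # pure L1 penalty: 3.5
print(elastic_net_penalty(w, 0.0))  # pure squared-L2 penalty: 5.25
print(elastic_net_penalty(w, 0.5))  # an even balance of the two
```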
It's also common wherever moving averages are involved, which covers your reinforcement learning example as well as the momentum (first-moment) update in the Adam optimizer: $m_t=\lambda m_{t-1}+(1-\lambda)g_t$, where $m_t$ is the moving average of the gradients and $g_t$ is the gradient of the current batch.
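A hand-rolled sketch of that exponential moving average (not Adam itself). Note how early values are pulled toward the zero initialization, which is why Adam also applies a bias correction:

```python
def ema(gradients, lam):
    """m_t = lam * m_{t-1} + (1 - lam) * g_t, with m_0 = 0."""
    m = 0.0
    history = []
    for g in gradients:
        m = lam * m + (1 - lam) * g
        history.append(m)
    return history

# A constant "gradient" stream: the average only slowly approaches 1.0.
print(ema([1.0] * 5, lam=0.9))
```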
In probability, it's the easiest way of building a new distribution (called a mixture distribution) from two distributions $P$ and $Q$, via $R(x):=\lambda P(x)+(1-\lambda)Q(x)$.
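To see that this really behaves like a new distribution, here's a sketch that samples from a two-component Gaussian mixture (the component parameters are arbitrary choices for illustration):

```python
import random

def sample_mixture(sample_p, sample_q, lam):
    # With probability lam draw from P, otherwise from Q;
    # this samples exactly from R(x) = lam*P(x) + (1-lam)*Q(x).
    return sample_p() if random.random() < lam else sample_q()

random.seed(0)
draws = [sample_mixture(lambda: random.gauss(-2, 1),  # P = N(-2, 1)
                        lambda: random.gauss(3, 1),   # Q = N(3, 1)
                        lam=0.3)
         for _ in range(10_000)]

# The mixture's mean is the same convex combination of the component means:
# 0.3 * (-2) + 0.7 * 3 = 1.5
print(sum(draws) / len(draws))
```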