There are two definitions of the sample mean. The first:
Given $n$ independent random variables $X_1, X_2, \dots, X_n$, each corresponding to a randomly selected observation, the sample mean is defined as $$\bar{X}=\frac{1}{n}(X_1+X_2+\dots+X_n)$$
The second defines the sample mean as the arithmetic average of the sample values, where $x_i$ is a particular outcome of a random variable, i.e. a number:
$$\bar{X}=\frac{1}{n}(x_1+x_2+\dots+x_n)$$
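The difference between the two definitions can be sketched in code. This is a minimal illustration of my own (the names `xs` and `means` are mine, and the standard normal distribution is an arbitrary choice), not part of either definition:

```python
import random

random.seed(0)
n = 10

# Second definition: a fixed list of observed numbers x_1, ..., x_n;
# the sample mean is just their arithmetic average -- a single number.
xs = [random.gauss(0, 1) for _ in range(n)]
x_bar = sum(xs) / n

# First definition: X-bar is itself random -- repeating the whole
# experiment gives a (generally) different value each time.
means = [sum(random.gauss(0, 1) for _ in range(n)) / n for _ in range(1000)]
```

Here `x_bar` is one number, while the list `means` shows the sample mean behaving as a random variable with its own distribution.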
Is the second a shorthand notation for the first definition? I believe the first is more correct in the formal sense. For instance, we can calculate the expected value using the first one but not the second (the expected value of a constant $c$ is just $c$).
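To make the expected-value point concrete: under the first definition, if each $X_i$ has mean $\mu$, linearity of expectation gives a nontrivial result,
$$E[\bar{X}] = \frac{1}{n}\left(E[X_1]+\dots+E[X_n]\right) = \frac{1}{n}\,n\mu = \mu,$$
whereas applying $E[\cdot]$ to the second definition just returns the same number back.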
By convention, the first form is often used to denote a random variable, while the second denotes a realized value.
For example, $X \sim N(0,1)$ denotes a random variable, that is, a function from the sample space to the reals, while $x = 0.26$ is a value obtained from an observation of that random variable. Generally you don't have to draw such a clear distinction, the same way you don't distinguish the fixed variable $\pi$ from the value it represents. The distinction only starts to matter when you need to think of the random variable as a function, for example when making sense of "convergence with probability 1". Then you must remember that random variables are neither random nor variables.
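The "random variable is a function" view can be shown directly. This is a toy construction of my own (two fair coin flips, with $X$ counting heads), chosen only because the sample space is small enough to write out:

```python
# A random variable is a function from the sample space to the reals.
# Sample space: all outcomes of two fair coin flips.
sample_space = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def X(omega):
    # X maps each outcome omega to a real number -- nothing random here.
    return sum(1 for flip in omega if flip == "H")

# The "randomness" lives in which omega occurs, not in X itself.
values = [X(omega) for omega in sample_space]

# Each outcome has probability 1/4, so E[X] is a plain average here.
expected_value = sum(values) / len(sample_space)
```

Note that `X` is an ordinary, deterministic function, which is exactly the sense in which random variables are "neither random nor variables".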
If this confuses you, don't worry about it, and treat them as equivalent definitions. If you want to be really precise, you can look up the strict definition of a random variable on Wikipedia.