In the linear regression model $$y = \beta_1 X_1 + \cdots + \beta_p X_p + \varepsilon \, ,$$ can the regressors $\{X_i\}_{i \in \{1, \ldots, p\}}$ be considered as random variables?
I know that what we really have is $n$ observations of the relationship between the dependent variable and the regressors, i.e., $$y_i = \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i \, ,$$ but does it make sense to consider quantities like $\Pr[X_1 = 0]$, where $X_1$ is the same $X_1$ as in the first equation? Would that equal the empirical frequency $\frac{1}{n} \sum_{i = 1}^n [x_{i1} = 0]$, or does it refer to a probability under some theoretical distribution?
Yes, it is meaningful to imagine the regressors are sampled from a theoretical distribution. The quantity $P(X_1 = 0)$ would then refer to the theoretical distribution of $X_1$, not to its $n$ observed values $x_{11}, \ldots, x_{n1}$; the empirical frequency $\frac{1}{n} \sum_{i=1}^n [x_{i1} = 0]$ is only an estimate of that theoretical probability. Note that in elementary treatments of regression we assume the regressors are non-random, i.e., they are known constants (equivalently, we condition on the observed values of the regressors).
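To illustrate the distinction, here is a minimal sketch (assuming, purely for the example, that $X_1$ follows a Bernoulli distribution with success probability 0.3, so the theoretical $P(X_1 = 0) = 0.7$). The empirical frequency computed from the observed $x_{i1}$ estimates, but does not equal, the theoretical probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical theoretical distribution: X1 ~ Bernoulli(0.3),
# so P(X1 = 0) = 0.7 under this assumed model.
p_zero_theoretical = 0.7

# Draw n observed values x_{11}, ..., x_{n1} of the regressor X1.
n = 10_000
x1 = rng.binomial(1, 0.3, size=n)

# Empirical frequency (1/n) * sum over i of [x_{i1} = 0].
p_zero_empirical = np.mean(x1 == 0)

print("theoretical P(X1 = 0):", p_zero_theoretical)
print("empirical frequency:  ", p_zero_empirical)
```

By the law of large numbers the empirical frequency converges to the theoretical probability as $n \to \infty$, but for any finite sample the two generally differ.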