In classical mechanics, states can be formalized as probability measures on phase space, i.e. maps $\pi_t:\operatorname{Bor}(\mathscr F)\to[0,1]$ (parametrized by time, defined on the Borel $\sigma$-algebra of the phase space $\mathscr F$) satisfying the usual probability axioms for all $t$. The number $\pi_t(E)$ can be interpreted as the probability that the actual representative point of the system, $(\boldsymbol q(t),\boldsymbol p(t))$, is contained in the set $E$.
Some of these measures are absolutely continuous at all times w.r.t. the Liouville volume measure (stemming from the standard symplectic form $\omega = dp_i \wedge dq^i$), so they admit a Radon–Nikodym derivative $\rho_t$ with the meaning of a probability density; other states do not, and among these are the Dirac states (one could call them pure states), i.e. $$\delta_x(E) = \begin{cases} 1 & x \in E,\\ 0 & x \notin E, \end{cases} $$ for a point $x$ in phase space, which take the meaning of "perfectly localized" states (states with zero variance). It is not hard to convince oneself that the set of states is convex: it contains every convex combination, i.e. incoherent superposition, of its elements.
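To make the convexity claim concrete, here is a minimal Python sketch (the representation of measures as set functions, and all names, are my own illustrative choices): two Dirac measures on a one-dimensional phase space and a convex combination of them, evaluated on a region given as a membership predicate.

```python
def dirac(x):
    """The Dirac measure delta_x, as a function of sets E.

    A set E is represented by its membership predicate E(x) -> bool,
    so delta_x only ever attains the values 0 and 1.
    """
    return lambda E: 1.0 if E(x) else 0.0

def mixture(weights, points):
    """A convex combination of Dirac measures: an incoherent superposition
    of perfectly localized states, which is again a probability measure."""
    measures = [dirac(p) for p in points]
    return lambda E: sum(w * m(E) for w, m in zip(weights, measures))

# Two candidate phase-space points (q, p).
x1, x2 = (0.0, 1.0), (2.0, -1.0)
pi = mixture([0.3, 0.7], [x1, x2])

# E = "the region q >= 1", as a membership predicate.
E = lambda x: x[0] >= 1.0
print(pi(E))                  # → 0.7
print(pi(lambda x: True))     # whole space gets measure 1.0
```

Only the mixture has nontrivial variance; each Dirac component assigns every set either $0$ or $1$.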
What I just described is the formalization of an epistemic uncertainty about the "true state of the system": classically speaking, the system traces out a trajectory in phase space under the action of Hamilton's equations, so it really does admit a representative point $(\boldsymbol q(t),\boldsymbol p(t))$ at all times; I just don't always know it exactly (as is typically the case in the statistical mechanics of systems with a large number of degrees of freedom).
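A toy numerical illustration of this picture (my own sketch, using a harmonic oscillator with $H=(p^2+q^2)/2$ in natural units): every candidate representative point follows its own definite Hamiltonian trajectory, and the epistemic state $\pi_t$ is just the initial measure pushed forward along the flow; by Liouville's theorem the dynamics conserves energy along each trajectory.

```python
import numpy as np

def flow(q0, p0, t):
    """Exact Hamiltonian flow of the harmonic oscillator H = (p^2 + q^2)/2:
    qdot = p, pdot = -q, i.e. a rotation in phase space."""
    return (q0 * np.cos(t) + p0 * np.sin(t),
            p0 * np.cos(t) - q0 * np.sin(t))

rng = np.random.default_rng(0)
# An epistemic state: a cloud of candidate initial points (q, p),
# only one of which is the actual representative point.
ensemble = rng.normal(size=(1000, 2))

# Deterministic evolution of every candidate point up to time t;
# pi_t is the pushforward of pi_0 along this map.
t = 1.5
evolved = np.array([flow(q, p, t) for q, p in ensemble])

# Each trajectory conserves its energy exactly.
E0 = 0.5 * (ensemble ** 2).sum(axis=1)
Et = 0.5 * (evolved ** 2).sum(axis=1)
print(np.allclose(E0, Et))  # → True
```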
In quantum mechanics, on the other hand, states are formalized as measures on the lattice of projectors of a Hilbert space $\mathscr H$, or equivalently (by Gleason's theorem, for $\dim\mathscr H\geq 3$), as positive trace-class operators $\rho$ with unit trace (suppressing the time dependence for simplicity). These also form a convex set under incoherent superposition, and by the spectral theorem they always admit a decomposition $$\rho = \sum_n p_n (\psi_n,–)\psi_n, \tag{$\star$} $$ for some orthonormal eigenbasis $(\psi_n)$ of $\mathscr H$, with $p_n\geq 0$ and $\sum_n p_n =1$. When $\rho = (\psi,–)\psi$ for some normalized $\psi \in \mathscr H$, it describes a pure quantum state (in one-to-one correspondence with a ray in $\mathscr H$); however, $(\star)$ does not describe an epistemic uncertainty, because in general the eigenbasis is not unique (think of $\rho=\mathbb{I}_{\mathscr H}/\dim\mathscr H$ when $\dim\mathscr H<\infty$).
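The non-uniqueness for the maximally mixed state is easy to check numerically; here is a minimal numpy sketch for a qubit ($\dim\mathscr H = 2$), with helper names of my own choosing: two different orthonormal eigenbases produce the same density operator, so $(\star)$ cannot be read as "the system is really in one of the $\psi_n$ and I merely don't know which".

```python
import numpy as np

def density(basis, probs):
    """Build rho = sum_n p_n |psi_n><psi_n| from the columns of `basis`."""
    return sum(p * np.outer(v, v.conj())
               for p, v in zip(probs, basis.T))

z_basis = np.eye(2)                                 # |0>, |1>
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # |+>, |->

rho_z = density(z_basis, [0.5, 0.5])
rho_x = density(x_basis, [0.5, 0.5])

print(np.allclose(rho_z, rho_x))  # → True: both equal I/2
```

The same equal-weight mixture arises from the $z$ eigenbasis and the $x$ eigenbasis (and indeed from any orthonormal basis), which is what blocks the ignorance interpretation of $(\star)$.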
Is there a mathematical definition, or a characterization, of a (convex?) family of probability measures that establishes when its elements can be interpreted as representing epistemic uncertainty and when they cannot?
To me, it looks like it should have something to do with the fact that classical systems admit perfectly localized states (Dirac measures), which could be seen as "deterministic" (they only attain the values $0$ and $1$), whereas quantum systems do not (by the Kochen–Specker theorem). However, I don't know whether this property already has a name in measure theory.