I have the following matrix $M$, where the first column is a score vector and the second column is a vector with the corresponding probabilities: $$ M = \begin{pmatrix} 0 & \psi_0\\ 1 & \psi_1\\ \vdots & \vdots\\ i & \psi_i\\ \vdots & \vdots\\ n & \psi_n\\ \end{pmatrix} = \begin{pmatrix} 0 & \prod\limits_{k=1}^{n}p_k\\ 1 & \sum\limits_{s\in S_{n,1}} \left(\prod\limits_{k \in\mathcal{N}\setminus s}p_{k} \prod\limits_{l\in s}\left(1-p_{l}\right)\right)\\ \vdots & \vdots\\ i & \sum\limits_{s\in S_{n,i}} \left(\prod\limits_{k \in\mathcal{N}\setminus s}p_{k} \prod\limits_{l\in s}\left(1-p_{l}\right)\right)\\ \vdots & \vdots\\ n & \prod\limits_{k=1}^{n}(1-p_k)\\ \end{pmatrix}, \quad \sum\limits_{i=0}^{n} \psi_i = 1 $$
Here $S_{n,i}$ denotes the set of the $\binom{n}{i}$ subsets $s \subseteq \mathcal{N} = \{1,\dots,n\}$ of size $i$, for a given score value $0<i<n$. For example:
if $n=2$: $\text{score}=1$ gives $\psi_1 = p_1(1-p_2) + p_2(1-p_1)$;
if $n=3$: $\text{score}=1$ gives $\psi_1 = p_1 p_2(1-p_3)+ p_1(1-p_2)p_3+(1-p_1)p_2 p_3$; $\text{score}=2$ gives $\psi_2 = (1-p_1)(1-p_2)p_3+ p_1(1-p_2)(1-p_3)+(1-p_1)p_2(1-p_3)$; and so on.
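For general $p_k$, the whole second column can be built by a simple convolution over the trials rather than by summing over all $\binom{n}{i}$ subsets. Here is a minimal Python sketch of that idea (the function name `score_pmf` is mine, not from the question); it reproduces the $n=2$ example above:

```python
def score_pmf(p):
    """Return [psi_0, ..., psi_n]: the distribution of the score,
    where the score counts the (1 - p_l) factors ("failures")."""
    psi = [1.0]  # score distribution after 0 trials
    for pk in p:
        new = [0.0] * (len(psi) + 1)
        for i, prob in enumerate(psi):
            new[i] += prob * pk            # trial contributes p_k: score unchanged
            new[i + 1] += prob * (1 - pk)  # trial contributes (1 - p_k): score + 1
        psi = new
    return psi

# n = 2 check against the example: psi_1 = p1(1-p2) + p2(1-p1)
p = [0.3, 0.6]
psi = score_pmf(p)
# psi[1] should equal 0.3*0.4 + 0.6*0.7 = 0.54 (up to rounding)
```

Each pass of the loop convolves the current score distribution with one Bernoulli factor, so the full column costs $O(n^2)$ instead of exponential subset enumeration.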
Using the cumulative probabilities from the second column I can find the $m$-th percentile of the score values in the first column. Is it possible to show that, as the number of observations $n$ increases, the difference between, e.g., the 10th and the 50th percentile becomes smaller (using $\lim\limits_{n \rightarrow \infty} M$ or something similar)?
Thanks in advance.
Assuming that the $p_k$ are bounded away from $0$ and $1$, the answer to this is "No", as shown by the following argument.
The second column of matrix $M$ is the probability mass function of the so-called Poisson-Binomial distribution, which is the distribution of a sum of independent but not necessarily identically distributed Bernoulli random variables. Thus, let $S_n = X_1+...+X_n$ where the $X_k\sim\text{Bernoulli}(p_k)$ are independent with $0<p_k<1$. Since $\psi_i$ contains exactly $n-i$ factors $p_k$ and $i$ factors $(1-p_l)$, we have $\Pr(S_n=n-i)=\psi_i$ for $i=0,...,n$; that is, the score in the first column is $n-S_n$, so the distance between any two percentiles of the score equals the distance between the corresponding percentiles of $S_n$.
Now it can be shown, as an application of Lyapunov's Central Limit Theorem, that if the $p_k$ are bounded away from $0$ and $1$, then $$\lim\limits_{n\to\infty}\Pr\left({S_n-\sum_{k=1}^np_k\over\sqrt{\sum_{k=1}^np_k(1-p_k)} }\le z \right)\,=\,\Pr(Z\le z), $$ where $Z$ is a standard normal random variable. Hence, for sufficiently large $n$, the $q$th quantile of $S_n$ is $$(S_n)_q\,\approx\,\sum_{k=1}^np_k+\sqrt{\sum_{k=1}^np_k(1-p_k)}\,z_q,$$ where $z_q$ is the $q$th standard normal quantile, and the difference between two quantiles of $S_n$ (say the $q_1$th and $q_2$th, with $q_1>q_2$), when $n$ is sufficiently large, is $$(S_n)_{q_1}-(S_n)_{q_2}\,\approx\,\sqrt{\sum_{k=1}^np_k(1-p_k)}\,(z_{q_1}-z_{q_2}). $$ Therefore, $(S_n)_{q_1}-(S_n)_{q_2}$ eventually increases as $n$ increases, because $\sqrt{\sum_{k=1}^np_k(1-p_k)}$ can only increase as $n$ increases.
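A quick numerical check of this conclusion (my own sketch, not part of the argument above): in the special case of identical $p_k = 1/2$ the Poisson-Binomial reduces to a $\text{Binomial}(n, 1/2)$, and the gap between the 10th and 50th percentiles should widen as $n$ grows, roughly like $(z_{0.5}-z_{0.1})\sqrt{n/4}$.

```python
from math import comb

def binom_quantile(n, q):
    """Smallest i with Pr(S_n <= i) >= q for S_n ~ Binomial(n, 1/2)."""
    cum = 0.0
    for i in range(n + 1):
        cum += comb(n, i) * 0.5 ** n
        if cum >= q:
            return i
    return n

# gap between the 50th and 10th percentiles for growing n
gaps = [binom_quantile(n, 0.5) - binom_quantile(n, 0.1) for n in (10, 100, 1000)]
# the gaps should be strictly increasing, tracking sqrt(n)/2 * (z_0.5 - z_0.1)
```

With $p_k=1/2$ the standard deviation is $\sqrt{n}/2$, so the observed gaps should scale by roughly $\sqrt{10}\approx 3.16$ at each step, consistent with the quantile-difference formula above.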