I roll the dice until the average of the outcomes is $2.5$. Once the average is $2.5$, I will stop rolling. For example, if I get the sequence of outcomes 2, 6, 4, 1, 1, 1, I will stop rolling the dice at the sixth trial.
Let $E$ be the event that I eventually stop rolling the dice and $E_n$ be the event of stopping after exactly $n$ rolls, i.e., the average first equals $2.5$ at time $n$. Then, $E = E_1 \cup E_2 \cup E_3 \cup E_4 \cup \cdots$. Since the sum of $n$ outcomes would have to equal $2.5n$, which is not an integer for odd $n$, the process cannot stop after an odd number of trials, so I can rewrite it as $E = E_2 \cup E_4 \cup \cdots$.
Since the process stops at the first time the average equals $2.5$, we have ${E_{2n}}^\complement \supseteq \bigcup_{k=n+1}^{\infty} E_{2k}$, so the $E_{2k}$'s are disjoint. Thus, $Pr(E) = \sum_{k=1}^{\infty} Pr(E_{2k})$. However, I am stuck with calculating the exact probability of each event $E_{2k}$ for large $k$. How can I calculate the probability of each event $E_{2k}$ ($Pr(E_{2k})$ for $k>1$) and finally the probability of stopping rolling the dice $Pr(E)$?
It seems that $Pr(E_{2k})$ goes to zero as $k\rightarrow \infty$. Then, would $Pr(E)<1$? Or is the probability of eventually stopping equal to one?
Also, I wonder if there is any related theorem that I can use to calculate such events.
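As a numerical sanity check, one can estimate this stopping probability by simulation before attempting an exact computation. The Python sketch below (my own illustration, not part of the question's setup) truncates each run at 150 rolls, since runs that would stop later are vanishingly rare, and uses a fixed seed for reproducibility:

```python
import random

def stops(rng, max_rolls=150):
    """Roll a fair die repeatedly; return True if the running average hits 2.5.

    Truncating at max_rolls loses only a negligible tail probability.
    """
    total = 0
    for n in range(1, max_rolls + 1):
        total += rng.randint(1, 6)
        if 2 * total == 5 * n:  # running average equals 5/2 exactly
            return True
    return False

rng = random.Random(0)  # fixed seed for reproducibility
trials = 50_000
estimate = sum(stops(rng) for _ in range(trials)) / trials
print(estimate)  # Monte Carlo estimate of Pr(E); should land near 0.21
```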
In this answer, we show that $\mathbf{P}(E) < 1$ and explain how to compute its value to any desired accuracy; numerically,
$$\mathbf{P}(E) \approx 0.21257819505882691800$$
Setting.
Let $X_n$ denote the outcome of the $n$th roll, and let $S_n = \sum_{k=1}^{n} X_k$ be the partial sum. Also, fix a constant $\alpha$ and define $E_n$ to be the event that $n$ is the first time $S_n = \alpha n$ holds, excluding $n=0$. That is,
$$ E_n = \{ \text{$S_n = \alpha n$ and $S_k \neq \alpha k$ for all $k \in [n-1]$} \},$$
where $[n-1] = \{1, 2, \ldots, n-1\}$. Then we are interested in the value of $\mathbf{P}(E)$, where
$$ E = \bigcup_{n=1}^{\infty} E_n = \{\text{$S_n = \alpha n$ for some $n\geq 1$} \}. $$
Finding a representation of $\mathbf{P}(E)$.
To simplify this, we write $A_n = \{S_{n} = \alpha n\}$. Then by the inclusion-exclusion principle,
\begin{align*} \mathbf{P}(E_n) = \mathbf{P}(A_n \cap A_{n-1}^c \cap \cdots \cap A_1^c) = \sum_{J \subseteq [n-1]} (-1)^{|J|} \mathbf{P}\biggl(A_n \cap \bigcap_{j \in J} A_j \biggr). \end{align*}
To further simplify the last sum, we write $l = |J|+1$ and $J = \{j_1 < \ldots < j_{l-1}\}$, together with $j_l = n$. Then
\begin{align*} (-1)^{|J|} \mathbf{P}\biggl(A_n \cap \bigcap_{j \in J} A_j \biggr) &= (-1)^{l-1} \mathbf{P}(A_{j_1} \cap A_{j_2} \cap \cdots \cap A_{j_l} ) \\ &= (-1)^{l-1} \mathbf{P}(A_{j_1})\mathbf{P}(A_{j_2-j_1}) \cdots \mathbf{P}(A_{j_l-j_{l-1}}) \\ &= (-1)^{l-1} \mathbf{P}(A_{d_1})\mathbf{P}(A_{d_2}) \cdots \mathbf{P}(A_{d_l}), \end{align*}
where $d_1 = j_1$, $d_2 = j_2 - j_1$, $\ldots$, and $d_l = j_l - j_{l-1}$. (In the second step, we utilized the fact that $S_n$ is a random walk, having stationary and independent increments.) As $J$ runs over the $(l-1)$-subsets of $[n-1]$, the tuple $(d_1, \ldots, d_l)$ runs over all the positive values satisfying $d_1, \ldots, d_l \geq 1$ and $d_1 + \cdots + d_l = n$. Hence,
\begin{align*} \mathbf{P}(E_n) &= \sum_{l=1}^{n} (-1)^{l-1} \sum_{\substack{d_1, \ldots, d_l \geq 1 \\ d_1 + \cdots + d_l = n}} \mathbf{P}(A_{d_1})\mathbf{P}(A_{d_2}) \cdots \mathbf{P}(A_{d_l}). \end{align*}
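This composition formula is easy to test numerically. The Python sketch below (an illustration, not part of the proof) evaluates it with exact rational arithmetic and compares it against a brute-force enumeration of all $6^n$ outcome sequences for small $n$, with $\alpha = 5/2$:

```python
from fractions import Fraction
from functools import lru_cache
from itertools import product

ALPHA = Fraction(5, 2)  # the target average

@lru_cache(maxsize=None)
def p_A(d):
    """P(A_d) = P(S_d = alpha * d), exact, by convolving d die rolls."""
    target = ALPHA * d
    if target.denominator != 1:
        return Fraction(0)
    counts = {0: 1}  # counts[s] = number of length-k sequences with sum s
    for _ in range(d):
        new = {}
        for s, c in counts.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0) + c
        counts = new
    return Fraction(counts.get(int(target), 0), 6 ** d)

def compositions(n):
    """Yield all tuples of positive integers summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def p_E_formula(n):
    """P(E_n) via the alternating sum over compositions derived above."""
    total = Fraction(0)
    for comp in compositions(n):
        term = Fraction(-1) ** (len(comp) - 1)
        for d in comp:
            term *= p_A(d)
        total += term
    return total

def p_E_bruteforce(n):
    """P(E_n) by enumerating all 6^n sequences (first hit exactly at time n)."""
    hits = 0
    for seq in product(range(1, 7), repeat=n):
        partial, first = 0, None
        for k, x in enumerate(seq, start=1):
            partial += x
            if partial * ALPHA.denominator == ALPHA.numerator * k:
                first = k
                break
        hits += (first == n)
    return Fraction(hits, 6 ** n)

for n in range(1, 8):
    assert p_E_formula(n) == p_E_bruteforce(n)
print(p_E_formula(2), p_E_formula(4))  # 1/9 4/81
```

Note that any composition containing an odd part contributes zero here, since $\mathbf{P}(A_d) = 0$ when $\alpha d$ is not an integer.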
Now we want to plug this into $\mathbf{P}(E) = \sum_{n=1}^{\infty} \mathbf{P}(E_n)$ and rearrange the sum, but such a rearrangement is not always justified. To overcome this issue, we consider the regularized sums
\begin{align*} f(z) &= \sum_{n=1}^{\infty} \mathbf{P}(E_n) z^n, \\ g(z) &= \sum_{n=1}^{\infty} \mathbf{P}(A_n) z^n. \end{align*}
Note that each of $f(z)$ and $g(z)$ defines an analytic function on the region $|z| < 1$. Moreover, $\mathbf{P}(E) = \lim_{z\uparrow 1} f(z)$ by monotone convergence, since the coefficients are nonnegative. Now let $\delta > 0$ be small enough so that $g(\delta) < 1$. Then for $z \in [0, \delta]$, Fubini's theorem is applicable, yielding
\begin{align*} f(z) &= \sum_{n=1}^{\infty} \biggl[ \sum_{l=1}^{n} (-1)^{l-1} \sum_{\substack{d_1, \ldots, d_l \geq 1 \\ d_1 + \cdots + d_l = n}} \mathbf{P}(A_{d_1})\mathbf{P}(A_{d_2}) \cdots \mathbf{P}(A_{d_l}) \biggr] z^n \\ &= \sum_{l=1}^{\infty} (-1)^{l-1} \sum_{d_1, \ldots, d_l \geq 1} \mathbf{P}(A_{d_1})\mathbf{P}(A_{d_2}) \cdots \mathbf{P}(A_{d_l}) z^{d_1 + \cdots + d_l} \\ &= \sum_{l=1}^{\infty} (-1)^{l-1} g(z)^l \\ &= \frac{g(z)}{1 + g(z)}. \end{align*}
Although this identity has been proved a priori only for $z \in [0, \delta]$, both sides are analytic functions on $|z| < 1$, and hence the equality extends to all of $|z| < 1$ by the principle of analytic continuation. In particular,
$$ \bbox[color:navy; border:1px dotted navy; padding:8px;]{ \mathbf{P}(E) = \lim_{z\uparrow 1} f(z) = \lim_{z\uparrow 1} \frac{g(z)}{1 + g(z)} = \frac{g(1)}{1+g(1)}, } $$
where $g(1) = \sum_{n=1}^{\infty} \mathbf{P}(A_n)$ and we adopt the convention that $\frac{g(1)}{1+g(1)} = 1$ if $g(1) = \infty$. Consequently, $\mathbf{P}(E) < 1$ if and only if $g(1) < \infty$.
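This dichotomy can be seen numerically (an aside, not part of the proof): partial sums of $g(1)$ for the target $\alpha = 5/2$ settle down quickly, whereas for $\alpha = 7/2$, which is the mean of one roll so that the local limit theorem gives $\mathbf{P}(A_{2d}) \asymp d^{-1/2}$, they grow without bound and hence $\mathbf{P}(E) = 1$ in that case. A Python sketch comparing the two:

```python
from fractions import Fraction

def p_hit(num_rolls, target):
    """P(sum of num_rolls fair dice == target), exact via convolution."""
    counts = {0: 1}
    for _ in range(num_rolls):
        new = {}
        for s, c in counts.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0) + c
        counts = new
    return Fraction(counts.get(target, 0), 6 ** num_rolls)

D = 40
terms_below_mean = [p_hit(2 * d, 5 * d) for d in range(1, D + 1)]  # alpha = 5/2
terms_at_mean = [p_hit(2 * d, 7 * d) for d in range(1, D + 1)]     # alpha = 7/2

# First column of sums stabilizes (near g(1) ~ 0.27); second keeps growing.
for cut in (10, 20, 40):
    print(cut, float(sum(terms_below_mean[:cut])), float(sum(terms_at_mean[:cut])))
```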
Estimating $\mathbf{P}(E)$ for $\alpha = 5/2$. Now we turn to the OP's problem. Let $\alpha = 5/2$. Then $\mathbf{P}(A_n) = 0$ for odd $n$, hence
$$ g(1) = \sum_{d=1}^{\infty} \mathbf{P}(A_{2d}) = \sum_{d=1}^{\infty} \mathbf{P}(S_{2d} = 5d). $$
By a simple combinatorial argument,
\begin{align*} \mathbf{P}(S_{2d} = 5d) &= \frac{1}{6^{2d}} \#\{ (x_1, \ldots, x_{2d}) \in [6]^{2d} : x_1 + \cdots + x_{2d} = 5d \} \\ &\leq \frac{1}{6^{2d}} \#\{ (x_1, \ldots, x_{2d}) \in \mathbb{N}_1^{2d} : x_1 + \cdots + x_{2d} = 5d \} \\ &= \frac{1}{6^{2d}} \binom{5d-1}{2d-1}. \tag{*} \end{align*}
Also, it is not hard to prove that $\binom{5(d+1)-1}{2(d+1)-1}\big/\binom{5d-1}{2d-1} \leq \frac{5^5}{2^2 \cdot 3^3}$ for all $d \geq 1$. Hence
$$ \mathbf{P}(S_{2d} = 5d) \leq \frac{1}{6^{2d}} \binom{5-1}{2-1} \left(\frac{5^5}{2^2 \cdot 3^3}\right)^{d-1} = \frac{432}{3125} \left( \frac{3125}{3888} \right)^d. $$
This shows that $g(1) < \infty$, hence $\mathbf{P}(E) < 1$ as required.
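Both inequalities above can be verified numerically with exact rational arithmetic. The Python sketch below (an illustrative check, not part of the argument) compares the exact probability, the composition bound $(*)$, and the geometric bound for small $d$:

```python
from fractions import Fraction
from math import comb

def p_exact(d):
    """P(S_{2d} = 5d) exactly, by convolving the distribution of 2d die rolls."""
    counts = {0: 1}
    for _ in range(2 * d):
        new = {}
        for s, c in counts.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0) + c
        counts = new
    return Fraction(counts.get(5 * d, 0), 6 ** (2 * d))

for d in range(1, 31):
    composition_bound = Fraction(comb(5 * d - 1, 2 * d - 1), 6 ** (2 * d))
    geometric_bound = Fraction(432, 3125) * Fraction(3125, 3888) ** d
    assert p_exact(d) <= composition_bound <= geometric_bound
print("bounds verified for d = 1..30")
```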
This estimate can also be used to approximate $\mathbf{P}(E)$. To this end, we first derive an exact formula for $\mathbf{P}(S_{2n} = 5n)$ via an inclusion-exclusion type computation:
\begin{align*} \mathbf{P}(S_{2n} = 5n) &= \frac{1}{6^{2n}} \sum_{x_1, \ldots, x_{2n} \geq 1} \mathbf{1}[x_1 + \cdots + x_{2n} = 5n] \prod_{j=1}^{2n} \mathbf{1}[x_j \leq 6] \\ &= \frac{1}{6^{2n}} \sum_{x_1, \ldots, x_{2n} \geq 1} \mathbf{1}[x_1 + \cdots + x_{2n} = 5n] \prod_{j=1}^{2n} (1 - \mathbf{1}[x_j > 6]) \\ &= \frac{1}{6^{2n}} \sum_{J \subseteq [2n]} (-1)^{|J|} \sum_{x_1, \ldots, x_{2n} \geq 1} \mathbf{1}[x_1 + \cdots + x_{2n} = 5n] \prod_{j \in J} \mathbf{1}[x_j > 6] \\ &= \frac{1}{6^{2n}} \sum_{J \subseteq [2n]} (-1)^{|J|} \sum_{x_1, \ldots, x_{2n} \geq 1} \mathbf{1}[x_1 + \cdots + x_{2n} = 5n - 6|J|] \\ &= \frac{1}{6^{2n}} \sum_{J \subseteq [2n]} (-1)^{|J|} \binom{5n-6|J|-1}{2n-1} \cdot \mathbf{1}[5n-6|J|\geq 2n] \\ &= \frac{1}{6^{2n}} \sum_{l=0}^{\lfloor n/2 \rfloor} (-1)^{l} \binom{5n-6l-1}{2n-1} \binom{2n}{l} \end{align*}
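As a sanity check on the closed form just derived, one can compare it against a direct convolution. The following Python sketch does this with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def p_closed_form(n):
    """P(S_{2n} = 5n) from the inclusion-exclusion formula derived above."""
    s = sum((-1) ** l * comb(5 * n - 6 * l - 1, 2 * n - 1) * comb(2 * n, l)
            for l in range(n // 2 + 1))
    return Fraction(s, 6 ** (2 * n))

def p_convolution(n):
    """P(S_{2n} = 5n) by direct convolution, as an independent check."""
    counts = {0: 1}
    for _ in range(2 * n):
        new = {}
        for s, c in counts.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0) + c
        counts = new
    return Fraction(counts.get(5 * n, 0), 6 ** (2 * n))

for n in range(1, 21):
    assert p_closed_form(n) == p_convolution(n)
print(p_closed_form(1), p_closed_form(2))  # 1/9 5/81
```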
Moreover, using the bound on $\mathbf{P}(S_{2n} = 5n)$ above, we get
$$ \epsilon_N := \sum_{n=N+1}^{\infty} \mathbf{P}(S_{2n} = 5n) \leq \frac{432}{763} \left(\frac{3125}{3888}\right)^N. $$
Altogether, when $\alpha = 5/2$ we can decompose $g(1)$ into
$$ \bbox[color:navy; border:1px dotted navy; padding:8px;]{ g(1) = \sum_{n=1}^{N} \frac{1}{6^{2n}} \sum_{l=0}^{\lfloor n/2 \rfloor} (-1)^{l} \binom{5n-6l-1}{2n-1} \binom{2n}{l} + \epsilon_N, } $$
which then allows us to produce rigorous bounds on the value of $\mathbf{P}(E) = \frac{g(1)}{1+g(1)}$ up to any desired accuracy. For example, one can prove that
$$\mathbf{P}(E) \approx 0.21257819505882691800$$
by choosing $N = 208$ and using $\epsilon_{208} < 10^{-20}$.
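For reference, the whole computation fits in a few lines of Python. The sketch below uses exact rational arithmetic with a smaller cutoff ($N = 120$, which gives roughly 12 correct digits rather than 20) and brackets $\mathbf{P}(E)$ between rigorous lower and upper bounds:

```python
from fractions import Fraction
from math import comb

def g_partial(N):
    """sum_{n=1}^{N} P(S_{2n} = 5n), computed exactly from the closed form above."""
    total = Fraction(0)
    for n in range(1, N + 1):
        s = sum((-1) ** l * comb(5 * n - 6 * l - 1, 2 * n - 1) * comb(2 * n, l)
                for l in range(n // 2 + 1))
        total += Fraction(s, 6 ** (2 * n))
    return total

N = 120
eps_bound = Fraction(432, 763) * Fraction(3125, 3888) ** N  # tail bound eps_N
g_lo = g_partial(N)
g_hi = g_lo + eps_bound
# x -> x/(1+x) is increasing, so these enclose P(E) = g(1)/(1 + g(1))
p_lo, p_hi = g_lo / (1 + g_lo), g_hi / (1 + g_hi)
print(float(p_lo), float(p_hi))  # rigorous lower and upper bounds for P(E)
```

Increasing $N$ to $208$ shrinks the enclosure below $10^{-20}$ and recovers the twenty digits quoted above.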