Assume $\{S_n\}_{n=1}^\infty$ is a random walk, where $S_n=\sum_{i=1}^n X_i$ and the $X_i$ are i.i.d. $N(1,1)$, i.e., normally distributed with mean $1$ and variance $1$. Define the stopping times at which the walk first crosses a positive threshold $a$ or a negative threshold $-a$: $$T^+(a)=\inf\{t\in \mathbb{Z}^+ : S_t>a\}$$ $$T^-(a)=\inf\{t\in \mathbb{Z}^+ : S_t<-a\}$$ The crossing time of the random walk is defined as the earlier of the two: $T(a)=\min\{T^-(a),T^+(a)\}$. We already know that $\lim_{a\to+\infty}E[T^+(a)]/a=1$ (the limit is $1/E[X_1]$, which equals $1$ here since $E[X_1]=1$).
The first question is how to show rigorously that $\lim_{a\to+\infty}E[T(a)]/a=1$. Since the probability of hitting the negative threshold first is positive (although very small), it must be accounted for when quantifying $E[T(a)]$.
The second question is whether this result still holds for the minimal or maximal stopping time when there is more than one independent random walk. For example, given two independent random walks $\{S^1_n\}_{n=1}^\infty$ and $\{S^2_n\}_{n=1}^\infty$, define $T_1(a)$ and $T_2(a)$ in the same way as $T(a)$. The minimal and maximal stopping times are $$\underline{T}(a)=\min\{T_1(a),T_2(a)\},\qquad \overline{T}(a)=\max\{T_1(a),T_2(a)\}$$
Do we have $\lim_{a\to+\infty}E[\overline{T}(a)]/a=1$ and $\lim_{a\to+\infty}E[\underline{T}(a)]/a=1$ ? Can we prove it?
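As a sanity check (not a proof), here is a minimal Monte Carlo sketch that estimates the three ratios for a moderate threshold; the function name `crossing_time` and all parameter values ($a=50$, 2000 trials) are my own choices, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossing_time(a, rng, max_steps=100_000):
    """First n with |S_n| > a, for S_n = X_1 + ... + X_n, X_i ~ N(1,1)."""
    s = 0.0
    for t in range(1, max_steps + 1):
        s += rng.normal(1.0, 1.0)
        if s > a or s < -a:
            return t
    raise RuntimeError("no crossing within max_steps")

a, trials = 50.0, 2000
t1 = np.array([crossing_time(a, rng) for _ in range(trials)])
t2 = np.array([crossing_time(a, rng) for _ in range(trials)])

# All three ratios should be close to 1 for large a.
print("E[T(a)]/a        ~", t1.mean() / a)
print("E[min(T1,T2)]/a  ~", np.minimum(t1, t2).mean() / a)
print("E[max(T1,T2)]/a  ~", np.maximum(t1, t2).mean() / a)
```

For a fixed finite $a$ the min and max ratios deviate from $1$ by roughly the standard deviation of $T(a)$ divided by $a$, which vanishes as $a\to\infty$; this is exactly the mechanism the Claim below exploits.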
Here is a related result:
Claim:
Fix $k$ as a positive integer. Let $(X_1(n), ..., X_k(n))$ be a collection of $k$-dimensional random vectors that are parameterized by $n \in [0, \infty)$. The components of the random vectors can be dependent and can have different distributions. Fix $c \in \mathbb{R}$. Suppose that
$\lim_{n\rightarrow\infty} E[X_i(n)]=c$ for all $i \in \{1, \ldots, k\}$.
$\lim_{n\rightarrow\infty} Var(X_i(n))=0$ for all $i \in \{1, \ldots, k\}$.
Then for all $\epsilon>0$ and all $i \in \{1, \ldots, k\}$ we have $$ \lim_{n\rightarrow\infty} P[|X_i(n)-c|>\epsilon] = 0 \quad (Eq. 1)$$ Further $$ \lim_{n\rightarrow\infty} E[\min[X_1(n), ..., X_k(n)]] = \lim_{n\rightarrow\infty} E[\max[X_1(n), ..., X_k(n)]] = c \quad (Eq. 2) $$
Proof:
Fix $\epsilon>0$. Fix $i \in \{1, \ldots, k\}$. By the Chebyshev inequality $$ P[|X_i(n)-c|>\epsilon] \leq \frac{E[(X_i(n)-c)^2]}{\epsilon^2} = \frac{Var(X_i(n)) + (E[X_i(n)]-c)^2}{\epsilon^2} $$ Taking a limit as $n\rightarrow\infty$ proves (Eq. 1).
To prove (Eq. 2), note that it suffices to prove $$ \lim_{n\rightarrow\infty} E[\max[X_1(n), ..., X_k(n)]] = c $$ This is because $$ \min[X_1(n), ..., X_k(n)] = -\max[-X_1(n), ..., -X_k(n)]$$
To this end, define $M_n=\max[X_1(n), ..., X_k(n)]$. Since $X_1(n)\leq M_n$ for all $n$ we have $$E[X_1(n)] \leq E[M_n]$$ Taking a $\liminf$ of both sides as $n\rightarrow\infty$ gives $$ c \leq \liminf_{n\rightarrow\infty} E[M_n]$$ It suffices to prove that $\limsup_{n\rightarrow\infty} E[M_n]\leq c$. We have \begin{align} E[M_n] &\leq (c+\epsilon) + E[M_n|M_n>c+\epsilon]P[M_n>c+\epsilon]\\ &\overset{(a)}{\leq}(c+\epsilon) + \sum_{i=1}^kE[|X_i(n)| |M_n>c+\epsilon]P[M_n>c+\epsilon] \\ &\overset{(b)}{\leq} (c+\epsilon) + \sum_{i=1}^k\sqrt{E[X_i(n)^2]P[M_n>c+\epsilon]}\\ &= (c+\epsilon) + \sqrt{P[M_n>c+\epsilon]}\sum_{i=1}^k\sqrt{Var(X_i(n)) + E[X_i(n)]^2}\\ &\overset{(c)}{\leq} (c+\epsilon) + \left(\sqrt{\sum_{i=1}^kP[X_i(n)>c+\epsilon]}\right)\sum_{i=1}^k\sqrt{Var(X_i(n)) + E[X_i(n)]^2} \end{align} where (a) holds because $M_n\leq \sum_{i=1}^k|X_i(n)|$; (b) holds by Fact 1 stated below; (c) holds by the union bound: $$P[M_n>c+\epsilon] = P[\cup_{i=1}^k\{X_i(n)>c+\epsilon\}]\leq \sum_{i=1}^kP[X_i(n)>c+\epsilon]$$
Taking a limit as $n\rightarrow\infty$ proves $$ \limsup_{n\rightarrow\infty} E[M_n] \leq c+\epsilon$$ This holds for all $\epsilon>0$, and so $$ \limsup_{n\rightarrow\infty} E[M_n]\leq c$$ $\Box$
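To illustrate (Eq. 2) numerically, here is a small sketch with a hypothetical family $X_i(n)=c+(Z_0+Z_i)/\sqrt{n}$, where $Z_0,Z_1,\ldots,Z_k$ are i.i.d. standard normal; the shared term $Z_0$ makes the components dependent, while $E[X_i(n)]=c$ and $\mathrm{Var}(X_i(n))=2/n\to 0$, so the hypotheses of the Claim hold:

```python
import numpy as np

rng = np.random.default_rng(1)
c, k = 2.0, 3

def sample(n, size):
    # Shared noise Z_0 plus individual noise Z_i: components are
    # dependent, with E[X_i(n)] = c and Var(X_i(n)) = 2/n.
    shared = rng.normal(size=(size, 1))
    indiv = rng.normal(size=(size, k))
    return c + (shared + indiv) / np.sqrt(n)

for n in [1, 10, 100, 1000]:
    x = sample(n, 100_000)
    print(n, x.min(axis=1).mean(), x.max(axis=1).mean())
```

Both the min and max columns approach $c=2$ as $n$ grows, as (Eq. 2) predicts.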
Fact 1:
If $X$ is a random variable with finite variance, and if $B$ is an event, then $$ |E[X|B] P[B]| \leq \sqrt{E[X^2]P[B]}$$
Proof:
$$E[X^2] = E[X^2|B^c]P[B^c] + E[X^2|B]P[B]\geq E[X^2|B]P[B]\geq E[X|B]^2P[B]$$ where the last inequality is by Jensen's inequality. Multiplying both sides by $P[B]$ and taking square roots gives the result. $\Box$
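A quick numerical check of Fact 1 on one example; the distribution $X\sim N(0.5,\,4)$ and the event $B=\{X>1.5\}$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.5, 2.0, size=1_000_000)   # X ~ N(0.5, 4)
b = x > 1.5                                # event B = {X > 1.5}

p_b = b.mean()                             # P[B]
lhs = abs(x[b].mean() * p_b)               # |E[X|B] P[B]|
rhs = np.sqrt(np.mean(x**2) * p_b)         # sqrt(E[X^2] P[B])
print(lhs, "<=", rhs, ":", lhs <= rhs)
```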