Given: Fix $n, k \in \mathbb{N}$. We generate $n$ independent random vectors $X_i \in \mathbb{R}^k$, each uniformly distributed on the unit sphere, so that $\|X_i\|_2 = 1$.
We want to compute: The expected max Euclidean inner product between a random vector and the sum of the random vectors, i.e., $$ \mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle. $$
What I have tried:
- When $n = 2$, $\langle X_1, X_1 + X_2 \rangle = \langle X_2, X_1 + X_2 \rangle = 1 + \langle X_1, X_2 \rangle$; $\mathbb E \langle X_1, X_2 \rangle = 0$ due to symmetry, and thus $\mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle = 1$.
- When $n \geq 3$, it becomes unclear to me.
- Intuitively it might get larger as $n$ increases since we have more options, and would approach some limit, possibly related to $\| \sum_{j \in [n]} X_j \|$.
- Simulations (1,000,000 trials) with $n = 3$ and $k = 2$ give $1.5319441791547532 \pm 0.7104635978561801$.
- Simulations (1,000,000 trials) with $n = 4$ and $k = 2$ give $1.6632600316282025 \pm 0.9042611238131294$.
- Simulations (1,000,000 trials) with $n = 5$ and $k = 2$ give $1.9472506556915035 \pm 0.9787530817053821$.
- Simulations (1,000,000 trials) with $n = 100$ and $k = 2$ give $8.86393808099978 \pm 4.6286431725480295$.
- Simulations (1,000,000 trials) with $n = 100$ and $k = 3$ give $9.083624932218164 \pm 3.8388222137766603$.
- Simulations (1,000,000 trials) with $n = 10,000$ and $k = 2$ give $88.64022827148438 \pm 46.3698844909668$.
- Simulations (1,000,000 trials) with $n = 10,000$ and $k = 3$ give $92.19273147896439 \pm 38.86980525584397$.
- Simulations (1,000,000 trials) with $n = 1,000,000$ and $k = 2$ give $886.809001225146 \pm 463.51510862527164$.
- Simulations (1,000,000 trials) with $n = 1,000,000$ and $k = 3$ give $921.3826932102985 \pm 388.68428531397456$.
- Simulations (1,000,000 trials) with $n = 1,000,000$ and $k = 4$ give $939.8279114022998 \pm 341.27201929744365$.
- The growth looks like order $\sqrt{n}$, so I conjecture the expectation is asymptotically $c \sqrt{n}$ for some constant $c$ (apparently depending on $k$).
- The constant for $k = 2$ appears to be about $0.886$, and $\sqrt{\pi}/2 \approx 0.8862$; I am not sure whether this is related.
- The Gaussian integral over $[0, \infty)$, $\int_0^\infty e^{-x^2}\,dx$, equals $\sqrt{\pi}/2$:
- https://en.wikipedia.org/wiki/Gaussian_integral
- I also suspect that it is related to the sphere-cylinder ratio; see, e.g., Volume of Region in 5D Space
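For reference, the simulation estimates above can be reproduced with a short stdlib-only script (a sketch; the trial count is reduced here for speed, and the function names are mine):

```python
import math
import random

def unit_vector(k, rng):
    # Uniform point on the unit sphere S^{k-1}: normalize a standard Gaussian vector.
    v = [rng.gauss(0.0, 1.0) for _ in range(k)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def expected_max_inner_product(n, k, trials=20000, seed=0):
    # Monte Carlo estimate of E max_i <X_i, sum_j X_j>.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [unit_vector(k, rng) for _ in range(n)]
        s = [sum(col) for col in zip(*xs)]
        total += max(sum(a * b for a, b in zip(x, s)) for x in xs)
    return total / trials

print(expected_max_inner_product(3, 2))  # roughly 1.53, matching the n = 3, k = 2 entry above
```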
As @Sal pointed out, this is related to random walks (see, e.g., Expected Value of Random Walk), and seemingly $$ \mathbb E \| \sum_{j \in [n]} X_j \| \approx \sqrt{\dfrac{2n}{k}} \dfrac{\Gamma(\frac{k+1}{2})}{\Gamma(\frac{k}{2})}, $$ which is
- $\sqrt{n} \frac{\Gamma(1.5)}{\Gamma(1)} = \frac{\sqrt{\pi}}{2}\sqrt{n} \approx 0.886227 \sqrt{n}$ for $k = 2$ as I suspected,
- $\sqrt{\frac{8}{3\pi}} \sqrt{n} \approx 0.921318 \sqrt{n}$ for $k = 3$,
- $\frac{3}{4} \sqrt{\frac{\pi}2} \sqrt{n} \approx 0.939986 \sqrt{n}$ for $k = 4$, and
- indeed approaches $\sqrt{n}$ as $k \to \infty$.
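For what it's worth, the three constants listed above can be checked directly with `math.gamma` (a minimal sketch; `walk_norm_coefficient` is just an illustrative name):

```python
import math

def walk_norm_coefficient(k):
    # c_k in E||S|| ~ c_k * sqrt(n), i.e. c_k = sqrt(2/k) * Gamma((k+1)/2) / Gamma(k/2).
    return math.sqrt(2.0 / k) * math.gamma((k + 1) / 2) / math.gamma(k / 2)

for k in (2, 3, 4):
    print(k, walk_norm_coefficient(k))  # approx. 0.8862, 0.9213, 0.9400
```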
However, this only gives us the asymptotic results as $n \to \infty$, while the cases for small $n$ are still unclear.
Large $n$ asymptotics
Let $S$ be the position of a random walk with steps $X_j$ which are uniformly distributed unit vectors
$$\tag{1} S=\sum\limits_{j=1}^nX_j $$
Then, by the multivariate CLT, the distribution of $S$ as $n\to\infty$ approaches a multivariate Gaussian
$$\tag{2} f_S(s)=\frac{\exp\left(-s^\top M^{-1}s/2 \right)}{\sqrt{(2\pi)^k|M|}} $$
where $M$ is the covariance matrix, $|M|$ is the determinant, and $s \in \mathbb{R}^k$. By symmetry we have for the components $X_j^m$ of each vector $X_j$
$$\tag{3} \mathbb{E}(X_j^mX_j^{m'})=\alpha\delta_{mm'} $$
where $\delta$ is the Kronecker delta and $\alpha$ is a constant. Using (3) we have $M=\alpha\,\mathbb{I}$. To find the radial distribution of $r=|s|$ we integrate (2) over the hypersphere $S_{k-1}(r)$, yielding
$$\tag{4} f_R(r)=\frac{2\pi^{k/2}r^{k-1}}{\Gamma(k/2)}\frac{\exp\left(-r^2/2\alpha \right)}{\sqrt{(2\pi\alpha)^k}} $$
To determine $\alpha$ we use $\mathbb{E}(X_i\cdot X_j)=\delta_{ij}$ with (1) to show $\mathbb{E}(|S|^2)=n$, then compare to $\mathbb{E}(r^2)=\int_0^\infty dr \ r^2 f_R(r)$ using (4). The result is $\alpha=n/k$. Finally we may evaluate
$$\tag{6} \mathbb{E}(|S|)=\mathbb{E}(r)=\int\limits_0^\infty dr \ r f_R(r) = \sqrt{\frac{2n}{k}}\frac{\Gamma((1+k)/2)}{\Gamma(k/2)} $$
which is identical to the analogous result for the random walk on a lattice!
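As a numerical sanity check, integrating $r\,f_R(r)$ from (4) by the trapezoidal rule reproduces the closed form in (6) (a sketch; the function names and the truncation point `r_max` are ad hoc choices):

```python
import math

def f_R(r, k, alpha):
    # Radial density (4): hypersphere surface area times the Gaussian density.
    area = 2.0 * math.pi ** (k / 2) * r ** (k - 1) / math.gamma(k / 2)
    gauss = math.exp(-r * r / (2.0 * alpha)) / (2.0 * math.pi * alpha) ** (k / 2)
    return area * gauss

def mean_radius(k, alpha, r_max=100.0, steps=200000):
    # Trapezoidal estimate of E(r) = integral of r * f_R(r) over [0, r_max].
    h = r_max / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * r * f_R(r, k, alpha)
    return total * h

n, k = 100, 3
alpha = n / k
closed_form = math.sqrt(2.0 * n / k) * math.gamma((k + 1) / 2) / math.gamma(k / 2)
print(mean_radius(k, alpha), closed_form)  # both come out near 9.21
```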
If we argue that your expression: $\mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle$ is asymptotically equal to $\mathbb{E}(|S|)$ then we have the large $n$ behavior (as done in the answer by @Chris Sanders).
Finite $n$
We can expand the quantity $Q$ within the maximum
$$\tag{7} Q=X_i\cdot S = 1 + \sum\limits_{j\neq i} X_i \cdot X_j $$
If $L=X_i\cdot X_j$, then by rotational symmetry $L_j=Y\cdot X_j$ has the same distribution as $L$, where $Y$ is the fixed unit vector $(1,0,0,\cdots)$, so that
$$\tag{8} Q=1+\sum\limits_{j=1}^{n-1} L_j $$
I will describe in principle how I think we can find the distribution of $\max_i Q_i$. The distribution of the dot product of two independent uniform unit vectors is known:
$$\tag{9} f_L(l)=\frac{\Gamma \left(\frac{k}{2}\right)}{\sqrt{\pi }\, \Gamma \left(\frac{k-1}{2}\right)}\,\left(1-l^2\right)^{\frac{k-3}{2}}\qquad , \qquad |l|<1 $$
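Density (9) can be sanity-checked by sampling: by rotational symmetry, $L$ has the same distribution as the first coordinate of a uniform unit vector, with $\mathbb{E}(L)=0$ and $\mathbb{E}(L^2)=1/k$. A stdlib-only sketch:

```python
import math
import random

def sample_L(k, rng):
    # First coordinate of a uniform unit vector: distributed with density f_L in (9).
    v = [rng.gauss(0.0, 1.0) for _ in range(k)]
    return v[0] / math.sqrt(sum(x * x for x in v))

rng = random.Random(1)
k, trials = 3, 50000
samples = [sample_L(k, rng) for _ in range(trials)]
print(sum(samples) / trials)                 # close to E(L) = 0
print(sum(l * l for l in samples) / trials)  # close to E(L^2) = 1/k = 0.333...
```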
To find the distribution of a sum, we can use the characteristic function
$$\tag{10} \varphi_L(t)=\mathbb{E}(e^{itL})=\int\limits_{-1}^1 dl \ f_L(l) e^{itl} $$
which a CAS evaluates in terms of generalized hypergeometric ${}_pF_q$ functions$^\dagger$. The characteristic function of $Q$ is
$$\tag{11} \varphi_Q(t)=e^{it}[\varphi_L(t)]^{n-1} $$
so that we have
$$\tag{12} f_Q(q)=\frac{1}{2\pi}\int\limits_{-\infty}^\infty dt \ e^{-iqt} \varphi_Q(t) $$
from which the CDF $F_Q$ may be found. Then we need to take care of the maximum. If it were the case that we draw $n$ times from $Q$, the distribution of the maximum would be
$$\tag{13} f_+(q)=nf_Q(q)[F_Q(q)]^{n-1} $$
from which the expectation is
$$\tag{14} \mathbb{E}(Q_\text{max})=\int dq \ q f_+(q) $$
which is correct if the draws are independent. I am uncertain whether this last step is the right model for the problem in the OP: the $n$ quantities $X_i \cdot S$ all involve the same sum $S$, so the $n$ draws from $Q$ there are not independent.
From (12) onwards, I suspect that analytical progress will be very difficult, but at least the steps are possible numerically.
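To probe the independence caveat numerically, one can take $k=3$, where (9) reduces to the uniform density $\tfrac12$ on $[-1,1]$, and compare the true (dependent) maximum against the independent-draws model behind (13)-(14). A Monte Carlo sketch (function names are mine):

```python
import math
import random

def unit_vector(k, rng):
    # Uniform unit vector via a normalized Gaussian.
    v = [rng.gauss(0.0, 1.0) for _ in range(k)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dependent_max(n, k, trials, rng):
    # E max_i <X_i, S> with the actual coupled draws of the OP.
    total = 0.0
    for _ in range(trials):
        xs = [unit_vector(k, rng) for _ in range(n)]
        s = [sum(col) for col in zip(*xs)]
        total += max(sum(a * b for a, b in zip(x, s)) for x in xs)
    return total / trials

def independent_model_max(n, trials, rng):
    # Model (13)-(14): max of n independent copies of Q = 1 + sum of (n-1)
    # iid L_j, with L_j uniform on [-1, 1] (the k = 3 case of (9)).
    total = 0.0
    for _ in range(trials):
        total += max(1.0 + sum(rng.uniform(-1.0, 1.0) for _ in range(n - 1))
                     for _ in range(n))
    return total / trials

rng = random.Random(2)
print(dependent_max(3, 3, 20000, rng), independent_model_max(3, 20000, rng))
```

Any gap between the two printed values indicates how much the independence assumption in (13) matters at small $n$.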
$\dagger$ Actually the result simplifies to Bessel $J$ functions in even dimensions and to sums of trigonometric functions in odd dimensions. I do not dwell on this because of the formidable integral awaiting in (12).
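The odd-dimensional claim in the footnote can be spot-checked: at $k=3$ a direct quadrature of (10) gives $\varphi_L(t)=\sin(t)/t$. A trapezoidal sketch (valid for $k \ge 3$, where the integrand is bounded; `phi_L` is my name for it):

```python
import cmath
import math

def phi_L(t, k, steps=20000):
    # Trapezoidal evaluation of the characteristic function (10) for k >= 3.
    c = math.gamma(k / 2) / (math.sqrt(math.pi) * math.gamma((k - 1) / 2))
    h = 2.0 / steps
    total = 0.0 + 0.0j
    for i in range(steps + 1):
        l = -1.0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * c * (1.0 - l * l) ** ((k - 3) / 2) * cmath.exp(1j * t * l)
    return total * h

t = 2.0
print(phi_L(t, 3).real, math.sin(t) / t)  # both near 0.4546
```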