Suppose I have a series of normal variables $Y_i \sim \mathcal N(\theta, 1)$ for $1 \leq i \leq N$. Define:
$$S_k = \sum\limits_{i=1}^kY_i$$
Since each $S_k$ is a sum of $k$ i.i.d. normal variables, we know that $S_k \sim \mathcal N(k\theta, k)$.
Now, I want to calculate $$p(S_N \geq 2\sqrt N, \forall i < N: S_i < 2 \sqrt i|\theta)$$
Is there any non-brute-force way of calculating this? That is, is there any way to calculate it other than computing the chain
$$p(S_1<2\mid\theta)\,p(S_2<2\sqrt 2\mid S_1<2, \theta)\cdots p(S_{N-1}<2\sqrt{N-1}\mid S_{N-2}<2\sqrt{N-2},\ldots, \theta)\,p(S_N\geq 2\sqrt N\mid S_{N-1}<2\sqrt{N-1},\ldots, \theta)$$
term by term?
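To make the brute-force baseline concrete, here is a minimal Monte Carlo sketch of the target probability (the function name `mc_stopping_prob` and all tuning parameters are my own illustrative choices, not anything from the problem statement):

```python
import numpy as np

def mc_stopping_prob(theta, N, n_paths=20000, seed=0):
    """Monte Carlo estimate of
    p(S_N >= 2*sqrt(N), and S_i < 2*sqrt(i) for all i < N | theta)."""
    rng = np.random.default_rng(seed)
    # Simulate n_paths random walks of N steps with N(theta, 1) increments.
    steps = rng.normal(theta, 1.0, size=(n_paths, N))
    S = np.cumsum(steps, axis=1)                   # S[:, k-1] holds S_k
    bound = 2.0 * np.sqrt(np.arange(1, N + 1))     # boundary 2*sqrt(i)
    below_before = np.all(S[:, :-1] < bound[:-1], axis=1)  # S_i < 2*sqrt(i), i < N
    cross_at_N = S[:, -1] >= bound[-1]             # S_N >= 2*sqrt(N)
    return np.mean(below_before & cross_at_N)

# e.g. mc_stopping_prob(0.0, 10) estimates the probability for theta = 0, N = 10
```

This is only an estimator, of course; the question is whether the probability admits a closed form or a cheaper exact recursion.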
Furthermore, let $y = (Y_1, Y_2, \ldots, Y_N)$ be the vector of observations that characterises the series. What would the conditional density $p(y \mid S_N \geq 2\sqrt N, \forall i < N: S_i < 2 \sqrt i, \theta)$ be?
ETA (reason for this question):
http://scientiststhesis.tumblr.com/post/118798407605/a-problem-with-the-likelihood-principle
Basically, I want to see whether knowing that i.i.d. data were collected under a data-dependent stopping rule, as opposed to a neutral one, can affect my conclusions about the parameter.
Using regular Bayesian statistics and supposing a neutral stopping point, it was shown that if $S_n \geq 2\sqrt n$ then the lower bound of the 95% Bayesian credible interval of the posterior for $\theta$ is $\geq 0$; it was also shown that if you keep collecting data without stopping, this happens almost surely at some point.
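The "happens almost surely at some point" claim can be illustrated (not proved, since it concerns the infinite horizon) by counting how many simulated $\theta = 0$ paths hit the boundary within a finite horizon; the horizon, path count, and helper name below are arbitrary choices of mine:

```python
import numpy as np

def crossing_fraction(n_steps=5000, n_paths=500, theta=0.0, seed=1):
    """Fraction of simulated random walks that reach S_n >= 2*sqrt(n)
    at least once within the first n_steps steps."""
    rng = np.random.default_rng(seed)
    S = np.cumsum(rng.normal(theta, 1.0, size=(n_paths, n_steps)), axis=1)
    bound = 2.0 * np.sqrt(np.arange(1, n_steps + 1))
    crossed = np.any(S >= bound, axis=1)   # did the path ever touch the boundary?
    return np.mean(crossed)
```

A finite simulation can only show this fraction growing with the horizon, so it is suggestive rather than conclusive.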
If the Likelihood Principle applies, and the likelihood of the observed data under one stopping rule is proportional to the likelihood under the other, then we can't draw different inferences depending on the stopping rule; but I think that may not be the case here.
ETA:
Okay, so apparently brute-forcing the $N = 2$ case shows that the likelihood functions are different: http://scientiststhesis.tumblr.com/post/118806588610/a-problem-with-the-likelihood-principle
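One way to see this numerically for $N = 2$ (a sketch under my own reading of the event as $\{S_1 < 2,\ S_2 \geq 2\sqrt 2\}$): the conditioned density is the unconditioned joint density truncated to the event and divided by $P(\text{event} \mid \theta)$, so if that normaliser varies with $\theta$, the two likelihoods are not proportional. Computing the normaliser:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def event_prob(theta):
    """P(S_1 < 2, S_2 >= 2*sqrt(2) | theta), by integrating over y1 = S_1
    and using the normal survival function for Y_2 = S_2 - S_1."""
    b2 = 2.0 * np.sqrt(2.0)
    integrand = lambda y1: norm.pdf(y1 - theta) * norm.sf(b2 - y1 - theta)
    val, _ = quad(integrand, -np.inf, 2.0)
    return val
```

Since `event_prob(theta)` is not constant in $\theta$, dividing the joint density by it changes the shape of the likelihood function, consistent with the brute-force result in the linked post.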
My original question still stands, but I think I've answered my underlying one.