$\newcommand{\pr}{\mathrm{Pr}}\newcommand{\d}{\,\mathrm{d}}$Let me begin by saying I really do not know anything about Brownian motion apart from the most basic definitions. I was reading this answer and came upon a seemingly simple step which I wanted to check in detail.
Let $W$ be a linear standard Brownian motion defined on the probability space $(\Omega,\mathcal{A},\mathrm{Pr})$. With no loss of generality take $W$ to have everywhere continuous sample paths. The answer states (paraphrasing):
For $x>0$ and $0<\epsilon<1$, put $A^\epsilon_x:=\{\max_{t\in[\epsilon,1]}W_t=x\}$, $H^\epsilon_z:=\{\max_{t\in[0,1-\epsilon]}W_t=z\}$ and let $\mu$ be the distribution of $W_\epsilon$. Then by independence of increments: $$\tag{$\ast$}\pr(A^\epsilon_x)=\int_\Bbb R\pr(H^\epsilon_{x-y})\d\mu(y)$$
I want to know how to justify equation $(\ast)$. The TL;DR is that I think I can do this, except that I can't justify that $(W_t)_t$ and $(W_{t+\epsilon}-W_\epsilon)_t$ have the same joint distribution, in a sense to be made precise below. I would appreciate any hints or answers in that direction.
It is simple to show the following lemma:
Let $E_1,E_2,E_3$ be measurable spaces and $X:\Omega\to E_1,Y:\Omega\to E_2$ independent random variables. Then for any measurable $f:E_1\times E_2\to E_3$ and any measurable $A\subseteq E_3$ we have: $$\pr(f(X,Y)\in A)=\int_{E_1}\pr(f(x,Y)\in A)\d X_\ast(x)$$
I should say that I have never seen this lemma stated explicitly, but I have seen it "used" often, and I believe I have proved it to myself. If it is wrong, please let me know!
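As a quick Monte Carlo sanity check of the lemma (a sketch, not a proof - the choice of $f$, the Gaussian laws, the seed and the tolerance are all my own arbitrary assumptions), one can compare both sides for a simple measurable $f$:

```python
import numpy as np

# Sanity check of the lemma for X, Y independent and f(x, y) = x + max(y_1, y_2).
# The right-hand side, the integral of Pr(f(x, Y) in A) against the law of X, is
# realised by pairing each sample of X with a *fresh, independent* sample of Y.
rng = np.random.default_rng(0)
n = 200_000
a = 1.0

X = rng.standard_normal(n)
Y = rng.standard_normal((n, 2))               # Y valued in R^2
f = lambda x, y: x + np.max(y, axis=-1)

lhs = np.mean(f(X, Y) <= a)                   # Pr(f(X, Y) <= a)

Y_fresh = rng.standard_normal((n, 2))         # independent copies of Y
rhs = np.mean(f(X, Y_fresh) <= a)             # estimates the integral on the RHS

assert abs(lhs - rhs) < 0.01                  # agree up to Monte Carlo error
```

The two estimates target the same number precisely because $X$ and $Y$ are independent; with a dependent pair the first estimate would generally differ from the second.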
I believe Prof. Peres has used some version of this here. Here is my attempt, but there are a few points I am unsure of.
Define $S:=[0,1-\epsilon]\cap\Bbb Q$. Note that: $$\begin{align}A^\epsilon_x&=\{\max_{t\in[0,1-\epsilon]}W_{t+\epsilon}=x\}\\&=\{W_\epsilon+\max_{t\in[0,1-\epsilon]}(W_{t+\epsilon}-W_\epsilon)=x\}\\&=\{W_\epsilon+\max_{t\in S}(W_{t+\epsilon}-W_\epsilon)=x\}\\&=\{f(W_\epsilon,(W_{t+\epsilon}-W_\epsilon)_{t\in S})\in\{x\}\}\end{align}$$
where we consider $(W_{t+\epsilon}-W_\epsilon)_{t\in S}$ as a random variable $\Omega\to\Bbb R^S$, the right-hand side carrying the Borel $\sigma$-algebra of the product topology, and $f:\Bbb R\times\Bbb R^S\to\overline{\Bbb R}$ is defined by $(x,(y_t)_{t\in S})\mapsto x+\sup_{t\in S}y_t$. The map $f$ is measurable since $S$ is countable, and the supremum equals the maximum because $S$ is dense in $[0,1-\epsilon]$ and the sample paths are continuous.
We want $W_\epsilon$ to be independent of $(W_{t+\epsilon}-W_\epsilon)_{t\in S}$. To that end, I'm pretty sure it suffices to check independence on the cylinder sets generating the product $\sigma$-algebra of $\Bbb R^S$, as these form a generating $\pi$-system. So take some such event $U=U_{t_1}\times U_{t_2}\times\cdots\times U_{t_n}\times\Bbb R^{S\setminus\{t_1,\ldots,t_n\}}$ for distinct $t_i$, and some $V\subseteq\Bbb R$, where $V$ and the $U_\bullet$ are open subsets of the real line. Clearly: $$\{(W_{t+\epsilon}-W_\epsilon)_{t\in S}\in U\}=\bigcap_{i=1}^n\{W_{t_i+\epsilon}-W_\epsilon\in U_{t_i}\}$$
Thus $(W_0=0)$: $$\begin{align}\pr(W_\epsilon\in V\text{ and }(W_{t+\epsilon}-W_\epsilon)_{t\in S}\in U)&=\pr(W_\epsilon-W_0\in V\text{ and }W_{t_1+\epsilon}-W_\epsilon\in U_{t_1}\text{ and }\cdots\text{ and }W_{t_n+\epsilon}-W_\epsilon\in U_{t_n})\\&=\pr(W_\epsilon\in V)\prod_{i=1}^n\pr(W_{t_i+\epsilon}-W_\epsilon\in U_{t_i})\\&=\pr(W_\epsilon\in V)\pr(W_{t_1+\epsilon}-W_\epsilon\in U_{t_1}\text{ and }\cdots)\\&=\pr(W_\epsilon\in V)\pr((W_{t+\epsilon}-W_\epsilon)_{t\in S}\in U)\end{align}$$
So they are independent random variables (or are they? See bottom of the post). Here, the "independent increments" axiom of Brownian motion is used. Thus we can apply the lemma to $E_1=\Bbb R,\,E_2=\Bbb R^S$ etc. and find: $$\pr(A^\epsilon_x)=\int_\Bbb R\pr(f(y,(W_{t+\epsilon}-W_\epsilon)_{t\in S})=x)\d\mu(y)$$
But there is one last thing to check. The integrand is, for any $y$: $$\pr(\max_{t\in[0,1-\epsilon]}(W_{t+\epsilon}-W_\epsilon)=x-y)$$ which is almost $\pr(H^\epsilon_{x-y})$, as claimed by Prof. Peres.
This is fixed if $(W_{t+\epsilon}-W_\epsilon)_{t\in S}$ has distribution equal to that of $(W_t)_{t\in S}$, considered on $\Bbb R^S$. To that end, I think I need to use the same finite-cylinder trick.
Again take some basic event $U=U_{t_1}\times U_{t_2}\times\cdots\times U_{t_n}\times\Bbb R^{S\setminus\{t_1,\ldots,t_n\}}$ for distinct $t_i$. It suffices to show the distributions coincide on such a set $U$. We have: $$\begin{align}\pr((W_t)_{t\in S}\in U)&=\pr(W_{t_1}\in U_{t_1}\text{ and }\cdots)\\&\overset{???}{=}\pr(W_{t_1+\epsilon}-W_\epsilon\in U_{t_1}\text{ and }\cdots)\\&=\cdots\\&=\pr((W_{t+\epsilon}-W_\epsilon)_{t\in S}\in U)\end{align}$$Thus their distributions are equal $\blacksquare$.
I am stuck on justifying step $(???)$. I can note that $W_{t_k}$ and $W_{t_k+\epsilon}-W_\epsilon$ are equal in distribution by the stationary-increments axiom, and I would like to just swap every appearance of $W_{t_k}$ for $W_{t_k+\epsilon}-W_\epsilon$ and conclude, but that's not valid. It could be made valid if I could express the probability as $\pr(W_{t_1}\in U_{t_1})\times\cdots$, but in fact we don't have independence. Damn. Well, I would also love to try: $$\pr(W_{t_1}\in U_{t_1}\text{ and }\cdots)=\pr(W_{t_1}\in U_{t_1}\text{ and }W_{t_2}-W_{t_1}\in U_{t_2}-U_{t_1}\text{ and }\cdots)$$(where $U_{t_2}-U_{t_1}:=\{u-v:u\in U_{t_2},\,v\in U_{t_1}\}$ is the Minkowski difference) and use independence of increments, but I'm pretty sure that doesn't work. Of course, if $W_{t_1}\in U_{t_1}$ and $W_{t_2}\in U_{t_2}$ then $W_{t_2}-W_{t_1}\in U_{t_2}-U_{t_1}$, but it is not true that $W_{t_1}\in U_{t_1}$ and $W_{t_2}-W_{t_1}\in U_{t_2}-U_{t_1}$ together imply $W_{t_2}\in U_{t_2}$. I can't think of a good alternative to $U_{t_2}-U_{t_1}$ that allows a similar trick to work.
EDIT: I now realise that my other work has the same flaw, namely where I write: $$\pr(W_\epsilon-W_0\in V\text{ and }W_{t_1+\epsilon}-W_\epsilon\in U_{t_1}\text{ and }\cdots\text{ and }W_{t_n+\epsilon}-W_\epsilon\in U_{t_n})\\\overset{??!}{=}\pr(W_\epsilon\in V)\prod_{i=1}^n\pr(W_{t_i+\epsilon}-W_\epsilon\in U_{t_i})$$
While it is true that $W_\epsilon-W_0$ is independent of each $W_{t_k+\epsilon}-W_\epsilon$, it is not true that $W_{t_k+\epsilon}-W_\epsilon$ is independent of $W_{t_{k'}+\epsilon}-W_\epsilon$ for $k\neq k'$: the two increments overlap. So I need some kind of extra 'trick' to finish the job. What is that trick?
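To see concretely that the overlapping increments really are dependent (so the naive factorisation genuinely fails), here is a small simulation sketch (numpy; the times, seed and tolerance are my own arbitrary choices). For $t_1<t_2$ the covariance of $W_{t_1+\epsilon}-W_\epsilon$ and $W_{t_2+\epsilon}-W_\epsilon$ is $t_1\neq 0$:

```python
import numpy as np

# Sketch: the increments W_{t1+e} - W_e and W_{t2+e} - W_e overlap on [e, t1+e],
# so they are correlated, not independent.  For t1 < t2 their covariance is t1.
rng = np.random.default_rng(1)
n = 200_000
t1, t2 = 0.3, 0.6

inc1 = np.sqrt(t1) * rng.standard_normal(n)        # W_{t1+e} - W_e
inc2 = np.sqrt(t2 - t1) * rng.standard_normal(n)   # W_{t2+e} - W_{t1+e}

A = inc1            # W_{t1+e} - W_e
B = inc1 + inc2     # W_{t2+e} - W_e

cov = np.cov(A, B)[0, 1]
assert abs(cov - t1) < 0.02   # close to t1 = 0.3, hence far from 0
```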
How do I finish this off? Oh, and - novice that I am - that felt like an unreasonable amount of work to claim such a simple seeming integral equality. Was there a more efficient way, or are the arguments I make just standard details everyone acknowledges?
It's a lot to ask, but I'd also appreciate any comments on the correctness of the other arguments I made. It's easy to make measurability mistakes with product spaces!
I think this is a resolution to the problem.
Lemma: if $X,Y,Z-Y$ are independent then $X$ is independent of the pair $(Y,Z)$. Proof sketch: since $\{Y\in A\text{ and }Z-Y\in B\text{ and }Y\in C\}=\{Y\in(A\cap C)\text{ and }Z-Y\in B\}$, we can deduce that $X$ is independent of the pair of pairs $((0,Y),(Y,Z-Y))$, hence independent of the sum $(Y,Z)=(0,Y)+(Y,Z-Y)$.
Generalising, I'm pretty sure that if $X,Y_1,\ldots,Y_n$ are independent then $X$ is independent of the vector of partial sums: iterating the same proof-sketch strategy - at each stage adding a shifted copy such as $(0,Y_1,Y_1+Y_2,\ldots)$ - eventually gets $X$ independent of $(Y_1,\,Y_1+Y_2,\,\ldots,\,Y_1+Y_2+\cdots+Y_n)$.
Applying this with $X=W_\epsilon$, $Y_1=W_{t_1+\epsilon}-W_\epsilon$, $Y_2=W_{t_2+\epsilon}-W_{t_1+\epsilon}$, etc. (ordering the $t_i$ increasingly), whose independence is exactly the independent-increments axiom, finds $W_\epsilon$ independent of any tuple $(W_{t_k+\epsilon}-W_\epsilon)_{k=1}^n$. So we might not get full independence of the coordinates among themselves, but we can conclude: $$\mathrm{Pr}(W_\epsilon\in V\text{ and }\cdots)=\mathrm{Pr}(W_\epsilon\in V)\mathrm{Pr}(W_{t_1+\epsilon}-W_\epsilon\in U_{t_1}\text{ and }\cdots)$$which is all I ever needed.
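A Monte Carlo sketch of this conclusion (not a proof; the test event, times, seed and tolerance are arbitrary choices of mine): $W_\epsilon$ factorises against a joint event of the dependent tuple:

```python
import numpy as np

# Sketch: W_e should be independent of the tuple (W_{t1+e}-W_e, W_{t2+e}-W_e),
# even though that tuple's coordinates are dependent on each other.
# We check one factorisation event by Monte Carlo.
rng = np.random.default_rng(2)
n = 400_000
eps, t1, t2 = 0.25, 0.3, 0.6

X = np.sqrt(eps) * rng.standard_normal(n)         # W_e
Y1 = np.sqrt(t1) * rng.standard_normal(n)         # W_{t1+e} - W_e
Y2 = np.sqrt(t2 - t1) * rng.standard_normal(n)    # W_{t2+e} - W_{t1+e}

A, B = Y1, Y1 + Y2                                # the dependent tuple

E1 = X > 0                                        # event about W_e
E2 = (A < 0.2) & (B > -0.1)                       # joint event about the tuple

joint = np.mean(E1 & E2)
prod = np.mean(E1) * np.mean(E2)
assert abs(joint - prod) < 0.005                  # factorises up to MC error
```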
For the second problem, it's probably very simple. It's easy to see that $t\mapsto W_{t+\epsilon}-W_\epsilon$ is again a standard linear Brownian motion. Assuming standard linear Brownian motion has a well-defined, unique, model-independent law on (the topological space) $\Bbb R^{[0,\infty)}$ (this must be true, since every textbook assumes it implicitly, but I'm not yet fully convinced of it myself; see my comments under Peres' answer), it follows that the distributions are the same 'automatically'. No need to fret about independence.
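Here is a hedged numerical illustration of that last claim (a discrete-grid approximation; the grid, seed and tolerance are my own choices): the running maximum of the re-rooted process $(W_{t+\epsilon}-W_\epsilon)_t$ matches in distribution that of an independent copy of $W$ on $[0,1-\epsilon]$:

```python
import numpy as np

# Sketch: the re-rooted process (W_{t+e} - W_e) on [0, 1-e] should have the same
# law as (W_t) on [0, 1-e].  Compare empirical distributions of running maxima.
rng = np.random.default_rng(3)
n, m = 50_000, 100             # number of paths, grid points on (0, 1]
eps = 0.25
dt = 1.0 / m

def paths():
    # W at times dt, 2*dt, ..., 1 via cumulative sums of N(0, dt) increments
    return np.cumsum(np.sqrt(dt) * rng.standard_normal((n, m)), axis=1)

W = paths()
k = int(eps / dt)                          # column k-1 holds W_eps
shifted = W[:, k:] - W[:, k - 1:k]         # (W_{t+eps} - W_eps), t = dt..1-eps
fresh = paths()[:, : m - k]                # independent copy of W, t = dt..1-eps

M1 = shifted.max(axis=1)                   # running maxima of the two processes
M2 = fresh.max(axis=1)

q = [0.25, 0.5, 0.75]                      # compare a few empirical quantiles
assert np.allclose(np.quantile(M1, q), np.quantile(M2, q), atol=0.02)
```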