The problem is to find the distribution of $X_1 \mid M$, where $M = \max(X_1,\dots,X_n)$ and $X_1,\dots,X_n$ are i.i.d. $U(0,\theta)$ random variables. I have a complete solution but am having trouble justifying one step. We use Bayes' theorem for CDFs to get started:
$$ P(X_1 < x_1 \mid M < m) = \frac{P(M < m \mid X_1 < x_1) P(X_1 < x_1)}{P(M < m)} $$
The CDFs of $M$ and $X_1$ are $(m/\theta)^n$ (by independence) and $x_1/\theta$, respectively. For the CDF of $M \mid X_1$ I use $(m/\theta)^{n-1}\,{\bf 1}[x_1 \leq m]$. My justification is that if the observed value $x_1$ is greater than $m$, then the maximum cannot be below $m$, so I put the indicator on the CDF; on the event $x_1 \leq m$, the distribution of $M \mid X_1$ is just that of the maximum of the remaining $n-1$ variables. So,
$$ \frac{P(M < m \mid X_1 < x_1) P(X_1 < x_1)}{P(M < m)} = \frac{(x_1/\theta) (m/\theta)^{n-1}}{(m/\theta)^n} = \frac{x_1}{m} $$
It follows that $X_1\mid M \sim U(0,m)$.
Is my justification for the distribution of $M \mid X_1$ correct? At least the final answer seems intuitively right.
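As a quick sanity check of the computation above, here is a short Monte Carlo simulation (the values $n=5$, $\theta=1$, $m=0.6$ are arbitrary illustration choices, not part of the derivation). Conditioning on the event $\{M < m\}$ by rejection, the empirical CDF of $X_1$ should match $x/m$:

```python
import numpy as np

# Simulate n i.i.d. U(0, theta) draws per row; n, theta, m are arbitrary.
rng = np.random.default_rng(0)
n, theta, m = 5, 1.0, 0.6
samples = rng.uniform(0, theta, size=(200_000, n))

# Condition on the event {M < m} by keeping only the qualifying rows.
keep = samples[samples.max(axis=1) < m]
x1 = keep[:, 0]

# Compare the empirical CDF of X_1 | M < m with the claimed x/m.
for x in (0.1, 0.3, 0.5):
    print(f"x={x}: empirical {np.mean(x1 < x):.3f} vs x/m = {x / m:.3f}")
```

The empirical values agree with $x/m$ to within Monte Carlo error, which at least supports the $U(0,m)$ conclusion for conditioning on the event $\{M < m\}$.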
We want the CDF of $X_1|M=m$. One knows that
\begin{align}
F_{X_1|M}(x|m)=\int^x_{-\infty} \frac{f_{X_1,M}(u,m)}{f_M(m)}\,du.
\end{align}
It is easy to find
\begin{align}
F_{X_1,M}(x,m)=P(X_1<x,\, M<m)=
\begin{cases}
\left(\frac{m}{\theta}\right)^n & \text{ if } 0\leq m \leq x \leq \theta,\\[4pt]
\frac{x}{\theta}\left(\frac{m}{\theta}\right)^{n-1} & \text{ if } 0\leq x < m \leq \theta.
\end{cases}
\end{align}
Note that $F_{X_1,M}$ is not differentiable everywhere, but differentiating where it is differentiable yields
\begin{align}
f_{X_1,M}(x,m)=\frac{(n-1)m^{n-2}}{\theta^n}\,\mathbf{1}_{0\leq x<m\leq\theta}.
\end{align}
We also know that $f_M(m)=\frac{nm^{n-1}}{\theta^n}$; just differentiate the CDF you already have. So we get
\begin{align}
F_{X_1|M}(x|m)=
\begin{cases}
0 & \text{ if } x<0,\\
\frac{x(n-1)}{mn} & \text{ if } 0\leq x<m,\\
1 & \text{ if } m\leq x.
\end{cases}
\end{align}
In particular, $X_1\mid M=m$ is *not* $U(0,m)$: the CDF jumps from $(n-1)/n$ to $1$ at $x=m$, so there is an atom of mass $1/n$ at $m$. That is exactly the probability that $X_1$ is itself the maximum, which by symmetry is $1/n$; conditionally on not being the maximum, $X_1$ is uniform on $(0,m)$.
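The atom can also be seen numerically. Below is a simulation sketch (with arbitrary illustration values $n=5$, $\theta=1$, $m=0.6$, and a thin slab of half-width $\varepsilon$ standing in for the zero-probability event $\{M=m\}$):

```python
import numpy as np

# Approximate conditioning on {M = m} by the thin slab {|M - m| < eps}.
# n, theta, m, eps are arbitrary illustration values.
rng = np.random.default_rng(1)
n, theta, m, eps = 5, 1.0, 0.6, 0.005
samples = rng.uniform(0, theta, size=(2_000_000, n))
keep = samples[np.abs(samples.max(axis=1) - m) < eps]
x1 = keep[:, 0]

# Mass of the atom: X_1 equals the maximum with probability 1/n.
atom = np.mean(keep[:, 0] == keep.max(axis=1))
print(f"atom mass {atom:.3f} vs 1/n = {1 / n:.3f}")

# Below m, the CDF should be x(n-1)/(mn), not x/m.
x = 0.3
print(f"F({x}) empirical {np.mean(x1 < x):.3f} vs {x * (n - 1) / (m * n):.3f}")
```

With these values the simulated atom mass is close to $1/n = 0.2$ and the empirical CDF below $m$ tracks $x(n-1)/(mn)$ rather than $x/m$, matching the piecewise CDF above.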