From an interview book, where I believe the intended solution is not so obvious. You keep generating $\mathcal U_{[0,1]}$ iid random variables until their sum exceeds 1, then compute the expected value of the last random variable, i.e. the one responsible for pushing the sum over 1.
My idea (not working):
The $i$-th draw from $\mathcal U_{[0,1]}$ is called $X_i$, and $S_N:=\sum_{i=1}^N X_i$.
I aim to compute: $$\mathbb E\left[X_{N}\right], \qquad N:=\min \left\{n:\sum_{i=1}^n X_i > 1\right\}.$$
Rewrite it as: $$\mathbb E\left[X_{N}\right] = \sum_{i=2}^\infty \mathbb E\left[X_{N}|N=i\right]\mathbb P[N=i].$$
From this question I know that $\mathbb P[N=i] = (i-1)/i!$.
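As a sanity check on this formula, here is a quick simulation sketch (`draw_N` is my own helper name, not from the book):

```python
import random
from math import factorial

def draw_N():
    """Draw uniforms until their running sum exceeds 1; return how many were needed."""
    s, n = 0.0, 0
    while s <= 1.0:
        s += random.random()
        n += 1
    return n

random.seed(0)
trials = 200_000
counts = {}
for _ in range(trials):
    n = draw_N()
    counts[n] = counts.get(n, 0) + 1

# Compare empirical frequencies with P[N=i] = (i-1)/i!
for i in range(2, 6):
    print(f"i={i}: empirical {counts.get(i, 0) / trials:.4f}, exact {(i - 1) / factorial(i):.4f}")
```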
I know that $X_N$ takes positive values between 0 and 1, so I use the expectation of the tail function: $$\mathbb E\left[X_{N}|N=i\right]=\int_0^1 \mathbb P[X_N>t|N=i]\ \text d t= 1-\int_0^1 \mathbb P[X_N\leq t|N=i]\ \text d t.$$
Now, some relabeling, using $X$ for the generic $\mathcal U_{[0,1]}$ and $Y$ for $S_{i-1}$: $$\mathbb P[X_N\leq t|N=i]=\mathbb P[X_i\leq t|S_{i-1}<1 \cap S_{i-1}+X_i> 1]=\mathbb P[X\leq t|Y<1 \cap X> 1-Y].$$
Now reversing the conditioning: $$\mathbb P[X\leq t|Y<1 \cap X> 1-Y] = \frac{\mathbb P[X\leq t\cap X> 1-Y|Y<1]}{\mathbb P[X> 1-Y|Y<1]}.$$
Now, from the same interview book I know that $\mathbb P[S_n\leq y|S_n < 1]=y^n$, so the density of $Y=S_{i-1}$ conditional on $Y<1$ ends up being $(i-1)y^{i-2}\ \text dy$.
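This conditional density can be checked by rejection sampling: simulate $Y=S_{i-1}$, keep only the samples below 1, and compare the sample mean with $\int_0^1 y\,(i-1)y^{i-2}\,\text dy=(i-1)/i$ (a sketch; the variable names are mine):

```python
import random

random.seed(1)
i = 3                      # so Y = S_{i-1} is a sum of 2 uniforms
samples = []
while len(samples) < 100_000:
    y = sum(random.random() for _ in range(i - 1))
    if y < 1.0:            # condition on S_{i-1} < 1
        samples.append(y)

mean_y = sum(samples) / len(samples)
# density (i-1) * y**(i-2) on [0,1] gives E[Y | Y < 1] = (i-1)/i
print(mean_y, (i - 1) / i)
```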
By total probability, conditioning over the value of $Y$, I write: $$\mathbb P[X\leq t|Y<1 \cap X> 1-Y] = \frac{\int_0^1\mathbb P[X\leq t\cap X> 1-Y|Y<1, Y=y](i-1)y^{i-2}\ \text dy}{\int_0^1\mathbb P[X> 1-Y|Y<1, Y=y](i-1)y^{i-2}\ \text dy}.$$
The numerator leads to the integral: \begin{equation}\tag{error is here!} \int_0^1\mathbb P[X\leq t\cap X> 1-Y|Y<1, Y=y](i-1)y^{i-2}\ \text dy=\\ \int_0^1(t-1+y)(i-1)y^{i-2}\ \text dy=\dots=t-1+\frac{i-1}i. \end{equation} The denominator similarly: $$\int_0^1y(i-1)y^{i-2}\ \text dy=\dots=\frac{i-1}i.$$
Substituting: $$\mathbb P[X>t|Y<1 \cap X> 1-Y] =1 - \mathbb P[X_N\leq t|N=i]=i\frac{1-t}{i-1},$$
$$E\left[X_{N}|N=i\right]=\int_0^1 \mathbb P[X_N>t|N=i]\ \text d t=\int_0^1 i\frac{1-t}{i-1}\ \text d t=\frac{i}{2(i-1)}$$
$$\mathbb E[X_{N}] = \sum_{i=2}^\infty \mathbb E[X_{N}|N=i]\mathbb P[N=i]=\sum_{i=2}^\infty \frac{i}{2(i-1)}(i-1)/i!=\dots=\frac{e-1}2.$$
The answer should be (verified via MC) $2-\frac e2$.
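For reference, the Monte Carlo check can be sketched like this (`last_uniform` is my own helper name):

```python
import random
from math import e

def last_uniform():
    """Run the process once; return the draw that pushes the sum past 1."""
    s = 0.0
    while True:
        x = random.random()
        s += x
        if s > 1.0:
            return x

random.seed(2)
trials = 400_000
estimate = sum(last_uniform() for _ in range(trials)) / trials
print(estimate, 2 - e / 2)   # simulated value vs. the claimed answer
```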
Would you mind checking my procedure and letting me know where it is wrong?
EDIT: Found the problem thanks to two answers below, adding for completeness.
The probability $P[X\leq t\cap X> 1-Y|Y<1, Y=y]$ should actually be written as (answer by Noble Mushtak):
$$P[X\leq t\cap X> 1-Y|Y<1, Y=y]=(t-1+y)\,\mathbf 1_{\{y> 1-t\}},$$ so the numerator integral only runs over $y\in(1-t,1)$: $$\int_{1-t}^1(t-1+y)(i-1)y^{i-2}\ \text dy=\dots$$
If so, then (also found by Amir):
$$\mathbb E[X_{N}|N=i]=\frac{i+2}{2(i+1)}.$$
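With this corrected conditional expectation, the infinite sum can be checked numerically against the Monte Carlo answer (a sketch; the series is truncated, which is harmless since the terms decay factorially):

```python
from math import factorial, e

total = 0.0
for i in range(2, 40):                          # truncated series; tail is negligible
    cond_exp = (i + 2) / (2 * (i + 1))          # corrected E[X_N | N = i]
    total += cond_exp * (i - 1) / factorial(i)  # weight P[N = i] = (i-1)/i!
print(total, 2 - e / 2)
```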
This step is wrong:
$$\int_0^1\mathbb P[X\leq t\cap X> 1-Y|Y<1, Y=y](i-1)y^{i-2}\ \text dy= \int_0^1(t-1+y)(i-1)y^{i-2}\ \text dy$$
You can't say $\mathbb P[X\leq t\cap X> 1-Y|Y<1, Y=y]=t-1+y$ because sometimes $t-1+y$ is negative. Instead, for this probability to be nonzero, we need $1-y \leq t$, which is the same as $y \geq 1-t$, so we need to adjust the integral limits accordingly. We then get:
$$ \begin{align*} \int_0^1\mathbb P[X\leq t\cap X> 1-Y|Y<1, Y=y](i-1)y^{i-2}\ \text dy &= \int_{1-t}^1(t-1+y)(i-1)y^{i-2}\ \text dy \\ &=(t-1)(1-(1-t)^{i-1})+\frac{i-1}{i}(1-(1-t)^i) \end{align*} $$
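This closed form is easy to cross-check numerically, e.g. with a midpoint-rule approximation of the integral (a sketch; helper names are mine):

```python
def integrand(y, t, i):
    # (t - 1 + y) * density (i-1) * y**(i-2), valid on [1-t, 1]
    return (t - 1 + y) * (i - 1) * y ** (i - 2)

def closed_form(t, i):
    return (t - 1) * (1 - (1 - t) ** (i - 1)) + (i - 1) / i * (1 - (1 - t) ** i)

# midpoint rule on [1-t, 1] for sample values of t and i
t, i, n = 0.7, 4, 20_000
h = t / n
approx = sum(integrand(1 - t + (k + 0.5) * h, t, i) for k in range(n)) * h
print(approx, closed_form(t, i))
```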
You also have a second mistake later on: you are substituting $(it-1)/(i-1)$ for $\mathbb{E}[X_N \mid N=i]$, but this isn't right; you should use your integral from before to compute this expectation: $$\mathbb E[X_{N}|N=i]= 1-\int_0^1 \mathbb P[X_N\leq t|N=i]\ \text d t.$$ In fact, I'm not sure how you figured out the value of that infinite sum, since that infinite sum still uses $t$ even though $t$ does not mean anything in that context.
Unfortunately, I don't know how to correct this reasoning. I think the problem can be solved the way you are doing it, but I am not good at infinite sums. Instead, I'll use differential equations.
For any constant $r\in[0,1]$, I define the following program $P(r)$: generate $q\sim\mathcal U_{[0,1]}$; if $q\geq r$, output $q$ and stop; otherwise, output $P(r-q)$.
This is the same random process that you are describing, except that I start with a target sum $r$ and subtract from it, whereas you start with $0$ and keep adding until you reach the target sum of $1$. These processes are clearly equivalent: subtracting from $r$ down past $0$ is the same as adding from $0$ up past $r$. I am also using a parameter $r$ for the target sum, whereas in your program the target sum is fixed at $1$. Let $f(r)$ be the expected value of the output of $P(r)$. We will solve for $f(r)$, and then the final answer is just $f(1)$.
Now, in this program, there are basically two cases: either $q\geq r$, in which case the program stops and outputs $q$; or $q<r$, in which case the output is that of the recursive call $P(r-q)$.
Using these two cases, we get this equation for the expected output of $f(r)$:
$$f(r)=\int_0^r f(r-q)dq + \int_r^1 qdq$$
Use $u=r-q$ in the first integral and simplify the second integral.
$$f(r)=\int_0^r f(u)du+\frac{1-r^2}{2}$$
Take the derivative of both sides:
$$f'(r)=f(r)-r$$
By guess and check, we find that one solution to this differential equation is $f(r)=r+1$. The general solution to the homogeneous equation $f'(r)=f(r)$ is $f(r)=Ce^r$ for arbitrary $C$, so the general solution to this differential equation is $f(r)=Ce^r+r+1$ for arbitrary $C$.
Now, we can find $C$ using $f(0)$: $P(0)$ just generates $q$ and always outputs it, so $f(0)$ is just the expectation of $q$, which is $1/2$. Ergo, we have: $$ f(0)=Ce^0+0+1=\frac{1}{2}, $$ so $C=-1/2$. Ergo, $$ f(1)=-\frac{1}{2}e^1+1+1=2-\frac{e}{2}. $$
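The whole argument can be checked end-to-end by simulating the recursive program and comparing against $f(r)=-\tfrac12 e^r+r+1$ for several values of $r$ (a sketch; `P` and `f` mirror the definitions above):

```python
import random
from math import exp

def P(r):
    """Draw q ~ U[0,1]; output q if it clears the remaining target r, else recurse."""
    q = random.random()
    return q if q >= r else P(r - q)

def f(r):
    return -0.5 * exp(r) + r + 1

random.seed(3)
trials = 200_000
results = {}
for r in (0.3, 0.7, 1.0):
    results[r] = sum(P(r) for _ in range(trials)) / trials
    print(f"r={r}: simulated {results[r]:.4f}, formula {f(r):.4f}")
```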