Let $X$ be the number of tosses we need to perform to get H(eads), where $p(H)=p$. This is the geometric distribution, for which $p_X(k)=(1-p)^{k-1}p$.
There's this way to find ${\bf E}[X]$ by conditioning on the result of the first toss. It looks like this:
$${\bf E}[X]={\bf E}[X|toss_{1}=H]*P(H)+{\bf E}[X|toss_{1}=T]*P(T)={\bf E}[X|toss_{1}=H]*p+{\bf E}[X|toss_{1}=T]*(1-p)$$
${\bf E}[X|toss_{1}=H]=1$, because if we get H at the first toss, we stop tossing.
${\bf E}[X|toss_{1}=T]*(1-p)={\bf E}[X+1]*(1-p)$ and here comes the question:
Why ${\bf E}[X+1]$ and not ${\bf E}[X-1]$? We are looking for the overall expected number of tosses, and we know that one toss has already been made, so shouldn't we be looking for the expected number of tosses that remain? I've already taken the first 'unfavorable' toss into account; why then add it to $X$ instead of subtracting it?
No. $\mathsf{E}(X \mid \text{toss}_1 = \mathsf{T})$ is the expected number of tosses needed to get an $\mathsf{H}$, *including* the first toss, which is $\mathsf{T}$. Since we know the first toss is $\mathsf{T}$, the process effectively restarts: by the memoryless property, we still need $\mathsf{E}(X)$ tosses in expectation. Therefore, in total, the expected number is $$ \mathsf{E}(X) + 1 = \mathsf{E}(X + 1). $$
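(Note that substituting this back into the first-step equation gives ${\bf E}[X]=p+(1-p)({\bf E}[X]+1)$, which solves to ${\bf E}[X]=1/p$.) Both facts are easy to check numerically. Below is a quick Monte Carlo sketch (the choice $p=0.3$ and the sample size are arbitrary): it estimates ${\bf E}[X]$, and also ${\bf E}[X \mid \text{toss}_1=T]$ by keeping only the runs where the first toss was tails, i.e. $X>1$.

```python
import random

def geometric_sample(p, rng):
    """Toss a p-biased coin until the first heads; return the toss count."""
    k = 1
    while rng.random() >= p:  # rng.random() < p counts as heads
        k += 1
    return k

rng = random.Random(0)
p = 0.3
samples = [geometric_sample(p, rng) for _ in range(200_000)]

# Unconditional mean: should be close to 1/p.
mean_all = sum(samples) / len(samples)

# Conditioning on toss_1 = T is the same as keeping samples with X > 1.
tails_first = [k for k in samples if k > 1]
mean_given_tail = sum(tails_first) / len(tails_first)

print(f"E[X]            ~ {mean_all:.3f}  (theory: {1/p:.3f})")
print(f"E[X | toss1=T]  ~ {mean_given_tail:.3f}  (theory: {1/p + 1:.3f})")
```

The second estimate should sit near $1/p + 1$, not $1/p - 1$, which is exactly the point of the answer above.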