I am trying to understand the derivation of the mean of a geometric distribution from the example in Bertsekas' *Introduction to Probability*.
He uses conditional expectations $$E(X \mid A_i)$$ to find $$E(X)$$, relying on the identity
$$ E[X \mid X > n] = n + E[X]. $$
I don't understand the above argument. Are they using some independence/memorylessness concept? If I know that the first $n$ tries are failures, do the remaining tries still follow the same geometric distribution?
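As a sanity check (my own code, not from the book), the identity can be verified numerically for a geometric $X$ on $\{1, 2, \dots\}$ with success probability $p$; the values of $p$, $n$, and the truncation point $K$ below are arbitrary choices:

```python
# Numerical check of E[X | X > n] = n + E[X] for X geometric on {1, 2, ...}.
p, n = 0.3, 5
K = 10_000  # truncation point; the tail beyond K is negligible for this p

pmf = [p * (1 - p) ** (k - 1) for k in range(1, K + 1)]  # P(X = k)
ex = sum(k * pmf[k - 1] for k in range(1, K + 1))        # E[X], should be ~1/p
tail = (1 - p) ** n                                      # P(X > n)
ex_cond = sum(k * pmf[k - 1] for k in range(n + 1, K + 1)) / tail

print(ex, ex_cond)  # ex_cond should agree with n + ex
```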

Yes, they are using the memorylessness property. A direct verification goes as follows:
$$ \mathbb E(X\mid X>1) = \sum_{k=0}^\infty \mathbb P(X > k \mid X > 1) = 2 + \sum_{k=2}^\infty \mathbb P(X > k \mid X > 1) = 2 + \sum_{k=2}^\infty \mathbb P(X > k-1), $$ where the $k=0$ and $k=1$ terms each equal $1$ (since $X>1$ is given), and the last equality is memorylessness. With the change of variables $u = k-1$:
$$ \mathbb E(X\mid X>1) = 2 + \sum_{u=1}^\infty \mathbb P(X > u) = 2 + \sum_{u=0}^\infty \mathbb P(X > u) - \mathbb P(X > 0) = 1 + \mathbb E(X), $$
where I used the fact that $\mathbb P(X > 0) = 1$ (the number of trials is at least $1$) and that $\mathbb E(X) = \sum_{n=0}^\infty \mathbb P(X>n)$ for a non-negative integer-valued random variable.
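Both facts used above can be checked numerically (a quick sketch in Python, with an arbitrary $p$ and truncation point $K$, not tied to the book):

```python
# For X geometric on {1, 2, ...}: check the tail-sum formula
# E[X] = sum_{n>=0} P(X > n), and the identity E[X | X > 1] = 1 + E[X].
p = 0.25
K = 10_000  # truncation point; remaining tail mass is negligible

surv = [(1 - p) ** n for n in range(K)]  # P(X > n) = (1-p)^n
ex_tail = sum(surv)                      # tail-sum formula, should be ~1/p

pmf = lambda k: p * (1 - p) ** (k - 1)   # P(X = k)
ex = sum(k * pmf(k) for k in range(1, K + 1))
ex_cond = sum(k * pmf(k) for k in range(2, K + 1)) / (1 - p)  # divide by P(X > 1)

print(ex_tail, ex, ex_cond)  # ex_tail agrees with ex; ex_cond with 1 + ex
```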