This question comes from my study of microeconomics, where I sometimes have to calculate an average payoff. My book defines it as:
$$(1-x) \sum_{i=1}^\infty v_i x^{i-1}$$
where $v_i$ is the payoff in period $i$ and $x$ is some discount factor with $0 < x < 1$.
For example, suppose I want to calculate the average payoff for the repeated outcome $(5,0)$: the player receives $5$ in the first period, $0$ in the second, $5$ in the third, and so on. Then I can show that the above formula gives an average payoff of:
$$(1-x) \sum_{i=1}^\infty v_i x^{i-1} = (1-x)\left(5 \cdot \frac{1}{1-x^2} + 0 \cdot \frac{x}{1-x^2}\right)$$
But if I instead had a finite sequence of payoffs, how would I derive an expression for the above? I know it is simply a geometric sum, but I find it difficult to manipulate. And just to be sure: is the reason we get $1/(1-x^2)$ (times either $5$ or $0$) that each payoff occurs only every second period? If we instead received $5$ every period, I assume the sum would be $5 \cdot \frac{1}{1-x}$, so the average payoff would be exactly $5$. Is this correct? Thanks in advance for any help.
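To check my understanding, I compared a large partial sum of the series against the closed forms above (a quick numerical sanity check in Python, not from my book; the choice $x = 0.9$ is arbitrary):

```python
# Sanity check of the average-payoff formulas for a sample discount factor.
x = 0.9

# Alternating payoffs 5, 0, 5, 0, ...: truncate the infinite sum at 10,000 terms,
# which is effectively exact since x^10000 is negligible.
partial = (1 - x) * sum((5 if i % 2 == 1 else 0) * x**(i - 1)
                        for i in range(1, 10001))
closed_form = (1 - x) * 5 / (1 - x**2)  # simplifies to 5 / (1 + x)
print(partial, closed_form)             # the two values agree

# Constant payoff 5 every period: the average payoff should be exactly 5,
# since (1 - x) * 5 / (1 - x) = 5.
partial_const = (1 - x) * sum(5 * x**(i - 1) for i in range(1, 10001))
print(partial_const)
```

Both checks match the closed forms, which at least confirms the alternating-period reasoning numerically.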