I recently encountered a problem as follows:
You start with \$100. You flip a coin 4 times. Each time, if you get heads, you gain \$100; if you get tails, you lose half of your current money. What is the expected amount of money you have after 4 flips?
The idea is that you can use the expected value after the previous flip to get the expected value after the current flip. In particular, let $f_i$ represent the amount of money you have after flip $i$.
The problem can be represented as the recurrence $E[f_i] = \frac{1}{2} (100 + E[f_{i-1}]) + \frac{1}{2} \left( \frac{1}{2} E[f_{i-1}] \right)$.
You start with the base case $f_0 = 100$, and work your way to $f_4$.
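As a quick sketch (in Python, purely for illustration), the recurrence can be iterated directly from the base case:

```python
# Iterate the recurrence E[f_i] = 1/2 * (100 + E[f_{i-1}]) + 1/2 * (1/2 * E[f_{i-1}]),
# starting from the base case E[f_0] = f_0 = 100.
e = 100.0  # E[f_0]
for i in range(1, 5):
    e = 0.5 * (100 + e) + 0.5 * (0.5 * e)
    print(f"E[f_{i}] = {e}")
```

This prints $E[f_1] = 125$ up through $E[f_4] = 168.359375$.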
Why can you just plug in the expected value of the previous state? How is this equivalent to the manual approach of enumerating all of the outcomes and taking the weighted sum?
Nice question. It's the kind of thing you might just take on faith, but it's not obvious that you're allowed to substitute expectations on the right-hand side.
Note that if $f_{i-1}$ is known, then $$E[f_i | f_{i-1}] = \frac{1}{2}(100 + f_{i-1}) + \frac{1}{2}\left(\frac{1}{2} f_{i-1}\right). \quad \quad \quad \quad \quad (*)$$ This is your equation, except it is conditioned on the random variable $f_{i-1}$ instead of using its expectation. This ($*$) equation seems more obvious, in my opinion, than the expectation version that you have in your prompt.
To answer your question "Why can you just plug in the expected value of the previous state?": it's a consequence of the law of total expectation. It essentially says that we can average the equation ($*$) over the outcomes for $f_{i-1}$, and we'll get the average for $f_i$. Formally, $$E[f_i] = E[E[f_i | f_{i-1}]] = \frac{1}{2}(100 + E[f_{i-1}]) + \frac{1}{2}\left(\frac{1}{2} E[f_{i-1}]\right),$$ where the first equality is the law of total expectation and the second uses ($*$) and the linearity of expectation.
As a side note, this example is small enough that you could expand both sides over all $2^4$ outcomes and confirm that they agree. But if you just want a slick justification for why we can average on the right instead of using the plain random variable $f_{i-1}$, I think you're looking for the law of total expectation.
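To make that side note concrete, here is a brute-force check (a Python sketch, not part of the original question) that enumerates all $2^4$ equally likely flip sequences and averages the final amounts; it agrees with the value produced by the recurrence:

```python
from itertools import product

# Brute force: average the final amount over all 2^4 equally likely flip sequences.
total = 0.0
for flips in product("HT", repeat=4):
    money = 100.0
    for f in flips:
        money = money + 100 if f == "H" else money / 2
    total += money

brute_force = total / 2**4

# Compare with iterating the recurrence E[f_i] = 1/2*(100 + E[f_{i-1}]) + 1/2*(E[f_{i-1}]/2).
e = 100.0
for _ in range(4):
    e = 0.5 * (100 + e) + 0.5 * (0.5 * e)

print(brute_force, e)  # the two values agree
```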