I'm studying randomized algorithms and I sometimes come across results like
(1) The algorithm has an expected $O(f(n))$ cost.
and
(2) With constant probability, the cost is bounded by $O(f(n))$.
I'm perfectly fine with statements like (2), but I'm puzzled as to what extent a statement like (1) is useful: for some probability distributions, a random variable stays near its expected value with less than constant probability; for others, it does so with probability $1-1/n$. Of course, in many cases (1) is strengthened via concentration bounds to a high-probability statement, but in cases where this isn't done, it doesn't seem that a statement about the "expected cost" lets us draw any conclusions about the actual performance of the algorithm, right?
Well, no. Usually the cost is non-negative, so you can apply Markov's inequality and get
$$P(X_n \geq f(n)) \leq \frac{\mathbb{E}X_n}{f(n)}.$$
For example, if $\mathbb{E}X_n = 2013$, then the cost of the algorithm on inputs of size $n$ is bounded by $4026$ with probability at least $\frac{1}{2}$. Of course, with better machinery you can obtain better bounds, but expected cost is often good enough. One example of such a setting is running an algorithm sequentially many times: the total cost then concentrates around the sum of the expectations, so you get concentration, and hence stronger bounds, essentially for free.
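As a quick sanity check of the Markov bound above, here is a small simulation. The exponential cost distribution with mean $2013$ is purely an illustrative assumption (Markov's inequality holds for any non-negative cost, regardless of distribution):

```python
import random

random.seed(0)

# Hypothetical cost distribution: exponential with mean 2013.
# Chosen only for illustration; Markov's inequality applies to
# any non-negative random variable.
mean_cost = 2013
trials = 100_000

costs = [random.expovariate(1 / mean_cost) for _ in range(trials)]

# Markov: P(X >= 2 * E[X]) <= E[X] / (2 * E[X]) = 1/2.
threshold = 2 * mean_cost  # 4026
empirical = sum(c >= threshold for c in costs) / trials
markov_bound = mean_cost / threshold  # 1/2

print(f"empirical P(X >= {threshold}) ~ {empirical:.3f}")
print(f"Markov bound: {markov_bound}")
```

For this particular distribution the true tail probability is $e^{-2} \approx 0.135$, comfortably below the Markov bound of $\frac{1}{2}$; the bound is loose, as expected, but it is distribution-free.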
I hope this helps $\ddot\smile$