In probability theory, it is (as far as I am aware) universal to equate "probability" with a probability measure in the sense of measure theory (possibly a particularly well-behaved measure, but never mind). In particular, we assume $\sigma$-additivity, but nothing more (say, additivity over families of cardinality $\mathfrak{c}$, which would of course make things break down).
For me, as a mathematician, this is completely satisfactory, and until recently I hardly realised that it may not be entirely obvious that probability should behave this way. A sufficiently convincing justification for working with measures is that integration theory is precious, and we want to be able to use integrals to compute expected values, variances, moments and so on. And we cannot require any "stronger" kind of additivity, since then things already fall apart for the uniform distribution on $[0,1]$.
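(To spell out the failure, via the standard argument: writing $\lambda$ for the uniform distribution on $[0,1]$, every singleton has $\lambda(\{x\}) = 0$, and the singletons partition $[0,1]$, so additivity over a family of cardinality $\mathfrak{c}$ would force
$$1 = \lambda([0,1]) = \lambda\Bigl(\,\bigcup_{x \in [0,1]} \{x\}\Bigr) \overset{?}{=} \sum_{x \in [0,1]} \lambda(\{x\}) = \sum_{x \in [0,1]} 0 = 0,$$
a contradiction.)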
However, recently I have had some interactions with non-mathematicians, who approach "higher" mathematics with some understandable uncertainty, but who still find the notion of probability relevant. One of the things this made me realise is that I am not myself fully aware of why, in principle, we define things this way and not otherwise. Hence, after this overlong introduction, here is the question. Is there a fundamental reason why measure theory is the "only right way" to deal with probabilities (as opposed to, e.g., declaring probabilities to be merely finitely additive)? If so, is there a "spectacular" example showing why any other approach would not work? If not, is there an alternative approach (with any research behind it)?
Terence Tao's free book on measure theory spends some time near the beginning developing "Jordan measure", which is a sort of finitely-additive version of Lebesgue measure.
As he points out, that theory is mostly fine as long as one only works with sets that happen to be Jordan measurable. However, as Tao proves in Remark 1.2.8, there are even open sets on the real line that are not Jordan measurable. Similarly, it turns out that $[0,1]^2 \setminus \mathbb{Q}^2$ is not Jordan measurable (Exercise 1.1.8).
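As a rough illustration of what goes wrong (using the one-dimensional cousin $[0,1] \setminus \mathbb{Q}$ for brevity, and writing $\underline{m}$, $\overline{m}$ for inner and outer Jordan measure and $\lambda$ for Lebesgue measure; the two-dimensional version is the content of the cited exercise): the irrationals contain no interval, so every elementary set inside them has measure $0$, while every elementary set covering them must cover all of $[0,1]$ except an elementary subset of the rationals, hence has measure at least $1$. Thus
$$\underline{m}\bigl([0,1]\setminus\mathbb{Q}\bigr) = 0 \;\neq\; 1 = \overline{m}\bigl([0,1]\setminus\mathbb{Q}\bigr), \qquad \text{whereas} \qquad \lambda\bigl([0,1]\setminus\mathbb{Q}\bigr) = 1 - \sum_{q \in \mathbb{Q}\cap[0,1]} \lambda(\{q\}) = 1$$
by countable additivity.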
In general, I think Tao's presentation shows clearly the similarities and differences between Lebesgue and Jordan measure, although it takes some mathematical maturity to read it, so it might not help your friends.
Separately, one reason other than integration that countable additivity is important is that many sets of interest in probability theory are $G_\delta$ (countable intersections of open sets) or $F_\sigma$ (countable unions of closed sets), and we want such sets to be measurable.
For a very specific example, it should be the case that a random real number in $[0,1]$ almost surely has infinitely many $3$s in its decimal expansion. Formally, this means that the set $U$ of irrationals in $[0,1]$ that have only finitely many $3$s in their decimal expansion should have measure $0$. Now, for each $k$, the set of irrationals in $[0,1]$ with $k$ or more $3$s in their decimal expansion is open as a subset of the irrationals; its complement, the set with fewer than $k$ $3$s, is therefore closed, and $U$ is the countable union of these closed sets. So $U$ is $F_\sigma$ in $[0,1]\setminus \mathbb{Q}$, but it is not open or closed. So, if we did not have countable additivity of the measure, $U$ might not be measurable at all.
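Here is a sketch of the measure computation behind this, which is exactly where countable (sub)additivity gets used (ignoring the measure-zero ambiguity of decimal expansions): let $A_k$ denote the set of $x \in [0,1]$ whose expansion has no $3$ after the $k$-th digit, so that $U \subseteq \bigcup_k A_k$. Constraining the $n$ digits after position $k$ gives $\mu(A_k) \le (9/10)^n$ for every $n$, hence $\mu(A_k) = 0$, and then
$$\mu(U) \;\le\; \mu\Bigl(\bigcup_{k=1}^{\infty} A_k\Bigr) \;\le\; \sum_{k=1}^{\infty} \mu(A_k) \;=\; 0.$$
The final inequality is countable subadditivity; with only finite additivity we could conclude nothing about the infinite union (nor would $U$ even be guaranteed to be measurable).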
This phenomenon happens more generally when we use the Baire category theorem to construct some type of real number; that theorem naturally constructs $G_\delta$ sets, not open or closed sets. The key benefit of countable additivity is that once open intervals are measurable, all Borel sets are measurable (and, moreover, all analytic sets, i.e. continuous images of Borel sets, are Lebesgue measurable). So, unless we really try, we are unlikely to construct nonmeasurable sets.
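One compact way to express that benefit (a standard fact, not specific to Tao's book): given finite additivity, countable additivity is equivalent to continuity from below, i.e. for any increasing sequence of measurable sets
$$A_1 \subseteq A_2 \subseteq \cdots \quad\Longrightarrow\quad \mu\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr) = \lim_{n \to \infty} \mu(A_n),$$
and it is precisely this compatibility with countable limits that lets the measure, once defined on intervals, follow along the countable unions and intersections used to build $F_\sigma$, $G_\delta$, and general Borel sets.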