When considering a binomial distribution with large $n$, it is (was?) useful to use the Poisson or normal distribution instead. One of the reasons is the difficulty of computing the binomial coefficient for large $n$. Computers have since grown in performance and algorithms in efficiency.
Is it still worth teaching students about approximating $B(n;p)$ with $Po(np)$ for small values of $np$ (or small values of $n(1-p)$) and with $N(np;np(1-p))$ otherwise?
An ideal answer would include reference (personal or theoretical) to what is actually done in practice and to the applications of such approximations in undergraduate or graduate courses.
They are indeed no longer so useful as computational tools, but they remain important for the theory. The approximation of the binomial by the Poisson motivates many applications of the Poisson distribution. The approximation of the binomial by the normal is a special case of the Central Limit Theorem (the de Moivre–Laplace theorem).
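To see how good these approximations are in the two regimes mentioned in the question, here is a quick numerical sketch using only the Python standard library (the function names and the particular $n$, $p$ values are my own choices for illustration):

```python
import math

def binom_pmf(k, n, p):
    # Exact binomial probability P(X = k); math.comb needs Python 3.8+
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # Poisson probability with rate lam = n*p
    return math.exp(-lam) * lam**k / math.factorial(k)

def normal_cdf(x, mu, sigma):
    # Normal CDF expressed via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def normal_approx_pmf(k, n, p):
    # N(np, np(1-p)) approximation with continuity correction
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    return normal_cdf(k + 0.5, mu, sigma) - normal_cdf(k - 0.5, mu, sigma)

# Small np: the Poisson approximation is close.
n, p = 1000, 0.002                       # np = 2
print(binom_pmf(2, n, p), poisson_pmf(2, n * p))

# np and n(1-p) both large: the normal approximation is close.
n, p = 1000, 0.4                         # np = 400
print(binom_pmf(400, n, p), normal_approx_pmf(400, n, p))
```

Note that the "exact" computation poses no difficulty at all for a modern machine even at $n = 1000$, which is rather the point of the question: the approximations survive for theoretical rather than computational reasons.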
EDIT: A classic example of a practical application of the Poisson distribution (from von Bortkiewicz in 1898) is the number of deaths in a year in the Prussian army due to accidental horse-kicks. We may imagine that there are a large number $n$ of Prussian soldiers, each of whom, independently of the others, has a small probability $p$ of being accidentally kicked to death by a horse in a given year. The total number of such deaths would then be binomial with parameters $n$ and $p$. Since $n$ is large and $p$ small, the result should be well approximated by a Poisson distribution, and indeed this is what von Bortkiewicz found. In reality the probability will vary from one type of soldier to another, but this does not affect the result: a sum of many independent Bernoulli variables with small, possibly unequal, probabilities is still approximately Poisson.