Say an iterative approximation algorithm is "hyperstable" if roundoff error at one step simply doesn't matter, because it automatically gets corrected in succeeding steps.
For example, we approximate $\sqrt 2$ by setting $x_0=1$, $x_{n+1}=\frac12(x_n+2/x_n)$. Then $x_1=1.5$; but if roundoff error gives us $x_1=1.49$ or $x_1=1.51$ instead, that doesn't matter: $x_n$ will still converge to $\sqrt 2$.
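A quick sketch in Python (my own illustration, not from any reference): corrupt $x_1$ and the iteration still lands on $\sqrt 2$.

```python
# Heron/Newton iteration for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2.
# An error injected into x_1 is automatically corrected in later steps.
def heron_sqrt2(x1, steps=10):
    x = x1
    for _ in range(steps):
        x = 0.5 * (x + 2.0 / x)
    return x

print(heron_sqrt2(1.50))   # exact x_1
print(heron_sqrt2(1.49))   # corrupted x_1
print(heron_sqrt2(1.51))   # corrupted x_1
# all three print 1.4142135623730951, i.e. sqrt(2) to machine precision
```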
Q: What's the right word for "hyperstable"?
My work so far: Not much; I have a hard time figuring out an appropriate search term.
Context: Teaching Linear 101. A student asked if we were going to be doing Gauss-Seidel or Gauss-Jacobi. I had no idea - and, being a typical student in that class, he was of course unable to clarify. I looked it up after class, and the answer is neither: we're doing Gaussian elimination, which is simply not an iterative approximation algorithm.
So I'll be answering the question today, and it seems appropriate to talk a bit about stability.
Q: I have only a vague idea how G-S and G-J work. They are in fact hyperstable, yes?
Note: The algorithm I think of as Gaussian elimination is definitely unstable. I gather there are tricks one can use to enhance its stability, but that's not what I'm talking about here - call mine "naive Gaussian elimination": just carry out the eliminations as if the arithmetic were exact, without worrying about numerical issues.
So Gaussian elimination is certainly unstable, meaning that tiny roundoff errors can lead to huge errors in the answer. I realized the other day that it's actually "hyper-unstable", meaning that roundoff errors can take a system with a unique solution and convert it to a system with no solution at all!
Q: Is there a standard term for "hyper-unstable"?
Example. Say $\delta>0$ is so small that $1+\delta=1$ in floating-point. Consider the system $x+y+z=0$, $-x+\delta z=1$, $-x=2$. If we don't notice that it's silly because $x$ is already given, and we blindly apply $R_2=R_2+R_1$, $R_3=R_3+R_1$, then the coefficient $1+\delta$ on $z$ in the new second row rounds to $1$ and we get $y+z=1$, $y+z=2$: no solution, even though the original system is uniquely solvable.
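The same example in double precision, as a Python sketch (the particular value of $\delta$ is just my choice of something below machine epsilon):

```python
# delta small enough that 1 + delta == 1 in IEEE double precision
delta = 1e-20

# augmented rows [a, b, c | rhs] of  x+y+z=0,  -x+delta*z=1,  -x=2
R1 = [1.0, 1.0, 1.0, 0.0]
R2 = [-1.0, 0.0, delta, 1.0]
R3 = [-1.0, 0.0, 0.0, 2.0]

R2 = [p + q for p, q in zip(R2, R1)]   # R2 <- R2 + R1
R3 = [p + q for p, q in zip(R3, R1)]   # R3 <- R3 + R1

print(R2)   # [0.0, 1.0, 1.0, 1.0]  i.e.  y + z = 1  (the 1+delta rounded to 1)
print(R3)   # [0.0, 1.0, 1.0, 2.0]  i.e.  y + z = 2  -- inconsistent
# yet the exact system has the unique solution x = -2, z = -1/delta, y = 2 + 1/delta
```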
I have seen the term "self-correcting" used to describe what you call "hyper-stable". Invariably, the term is used in the context of fixed point iterations. The rounding errors are still relevant because they determine how accurately we can compute the limit, but they do not keep us from obtaining a good approximation.
As stated by @Lutz, the Gauss-Seidel and Jacobi iterations are examples of fixed point iterations and we could label them "self-correcting".
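For concreteness, here is a small Jacobi sketch in Python/NumPy (the matrix and the injected perturbation are my own illustrative choices): a gross error injected mid-iteration is washed out, and the iteration still converges to the solution of the system.

```python
import numpy as np

# small, strictly diagonally dominant system -> Jacobi converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))     # diagonal part
R = A - D                   # off-diagonal part
x = np.zeros(3)

for k in range(60):
    x = np.linalg.solve(D, b - R @ x)   # Jacobi step: x <- D^{-1}(b - R x)
    if k == 10:
        x += 0.5                        # simulate a large "roundoff" error

print(x)
print(np.linalg.solve(A, b))            # same answer: the error was corrected
```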
There are even cases where rounding errors bring actual benefits. The power method for computing a dominant eigenpair of a matrix is the only example that I can recall right now. In exact arithmetic you never converge if the initial guess is orthogonal to the dominant eigenspace. In floating-point arithmetic, rounding errors will sooner rather than later give you a component in the right direction, and subsequent iterations will blow it up until it dominates.
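A rough NumPy sketch of that observation (the matrix size and eigenvalues are my own choices): the start vector is orthogonalized against the dominant eigenvector, yet rounding errors leak a tiny component back in, and the power iteration amplifies it until it dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))        # random orthogonal matrix
A = Q @ np.diag([3.0, 2.0, 1.5, 1.0, 0.5]) @ Q.T         # dominant eigenvector ~ Q[:, 0]

v1 = Q[:, 0]
x = rng.standard_normal(5)
x -= (v1 @ x) * v1                  # remove the component along v1 (up to roundoff)
x /= np.linalg.norm(x)

for _ in range(300):                # power iteration
    x = A @ x
    x /= np.linalg.norm(x)

print(abs(v1 @ x))                  # ~1: the iterate has aligned with the dominant eigenvector
print(x @ A @ x)                    # Rayleigh quotient ~ 3, the dominant eigenvalue
```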
I know of no term equivalent to "hyper-unstable", nor have I ever needed such a term. A problem is either solvable or unsolvable. If it is solvable, then the distance to the nearest unsolvable problem is relevant. A solvable problem is either ill-conditioned or well-conditioned. If it is ill-conditioned, then I need to run the entire computation using, say, double precision rather than single precision arithmetic. In any case, I will be using stable algorithms. If an algorithm is unstable for the given problem class, then it is useless for practical calculations. The degree of instability is irrelevant.
The label: "It works 99.9999% of the time" is a useless property for a piece of software whose errors can lead to loss of life, say, an auto-pilot, the controller of a close-in weapons system or the operating system for a power plant.