I have a question regarding these two terms. I understand that for a problem to be well-posed, it needs to satisfy three conditions:
- Existence of solution
- Unique solution
- Solution depends continuously on the data (i.e. small perturbations of the data lead to small perturbations in the solution). I will refer to this as stability.
Then, once we have a well-posed problem, we can ask whether it is well- or ill-conditioned, which, like stability, measures how much the solution changes under perturbations of the data.
This is where I get confused: why aren't all ill-conditioned problems ill-posed?
As an example, say we have to solve for $x$ where $f$ is a nonlinear continuous function, so our problem is $f(x)=0$. If the condition number tends to infinity, how could the problem ever be well-posed? My professor used this example in class and it left me baffled.
Note: I have already gone through the post Well-posed vs Well-conditioned, which asks a slightly different question and has helped me a lot, but I still have a lot of confusion in me.
We can see a problem as a function $f: X \rightarrow Y$ from a space of data $X$ to a solution space, $Y$. When solving a problem, we typically only concern ourselves with a solution at a particular data point $x \in X$. This combination of problem and data can be called a problem instance. The behavior of the problem may vary greatly from one instance to another.
Consider as an example solving the linear system $Ax = b$, where $A$ is nonsingular. Since $A$ is invertible, this is a well-posed problem. If we perturb the data $b$ a little bit,
$$ A(x+\delta x) = b + \delta b $$
we get
$$ \begin{align} A\delta x &= \delta b\\ \implies \delta x &= A^{-1}\delta b\\ \implies \|\delta x\| &\le \|A^{-1}\|\|\delta b\| \end{align} $$
Since $b = Ax$, we also have
$$ \|A\|\|x\| \ge \|b\| $$
Then the relative condition number of our problem is estimated by
$$ \frac{\|\delta x\|}{\|x\|} \le \|A\|\|A^{-1}\|\frac{\|\delta b\|}{\|b\|} $$
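This bound is easy to verify numerically. The sketch below (using NumPy; the matrix and perturbation are arbitrary illustrative choices, not taken from the question) builds a random well-posed system, perturbs $b$, and checks that the relative change in $x$ stays within $\|A\|\|A^{-1}\|$ times the relative change in $b$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))   # almost surely invertible
x = rng.standard_normal(5)
b = A @ x

# small perturbation of the data b
db = 1e-8 * rng.standard_normal(5)
dx = np.linalg.solve(A, db)       # A (x + dx) = b + db  =>  A dx = db

kappa = np.linalg.cond(A)         # ||A|| ||A^{-1}|| in the 2-norm
lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(db) / np.linalg.norm(b)
assert lhs <= rhs                 # the relative condition number bound holds
```

The bound is a worst case over all $b$ and $\delta b$; for most data the actual amplification is much smaller than $\kappa(A)$.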
But consider as a particular instance, the $n \times n$ matrix
$$ A = \left(\begin{matrix}1&\alpha&0&\cdots&0\\ 0&1&\alpha&\cdots&0\\ \vdots & \vdots &\vdots &\ddots &\vdots\\ 0&0 & 0& 1& \alpha\\ 0&0&0&0&1\end{matrix}\right) $$
For large $n$ and $|\alpha| > 1$, this matrix is ill-conditioned, since the entries of the inverse grow like $\alpha^{n-1}$. So systems with such "bad" matrices can be considered practically unstable, although formally the problem is well-posed and the stability condition $\|A^{-1}\| \lt \infty$ holds.