Norm for estimating the error of a numerical method


In most books on numerical methods and finite difference methods, the error is measured in a discrete $L^2$ norm. I was wondering whether people also do this in a Sobolev norm. I have never seen that done, and I want to know why no one uses it.

To be more specific, consider the system $$Au=f,$$ where $A_h$ is some approximation of $A$ and $U$ is the numerical solution of the discrete system $A_hU=f$. If we plug the actual solution $u$ into the discrete operator and subtract, we get $$A_h(u-U)=\tau,$$ where $\tau$ is the local truncation error. This gives the error equation $$e = u - U = A_h^{-1}\tau.$$ What problems would I face if I measured $e$ in a discrete Sobolev norm?
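For what it's worth, measuring the error in a discrete Sobolev norm is mechanically straightforward. Below is a minimal sketch (my own illustrative setup, not from the question): it solves the model problem $-u''=f$ on $(0,1)$ with second-order central differences, using the manufactured solution $u=\sin(\pi x)$, and reports $\|u-U\|$ in both the discrete $L^2$ norm and a discrete $H^1$ norm. The function names and the problem choice are mine.

```python
import numpy as np

def solve_poisson_fd(n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with n interior points,
    using the standard second-order central-difference matrix A_h."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)              # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)          # manufactured right-hand side
    # A_h = (1/h^2) * tridiag(-1, 2, -1)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    U = np.linalg.solve(A, f)
    u = np.sin(np.pi * x)                     # exact solution at the nodes
    return h, u - U

def discrete_l2(e, h):
    """Discrete L^2 norm: (h * sum_i e_i^2)^(1/2)."""
    return np.sqrt(h * np.sum(e**2))

def discrete_h1(e, h):
    """Discrete H^1 norm: L^2 part plus the L^2 norm of the
    forward-difference gradient. Boundary values of e are zero,
    so pad before differencing."""
    e_pad = np.concatenate(([0.0], e, [0.0]))
    de = np.diff(e_pad) / h
    return np.sqrt(discrete_l2(e, h)**2 + h * np.sum(de**2))

for n in (50, 100, 200):
    h, e = solve_poisson_fd(n)
    print(f"h={h:.4f}  ||e||_L2={discrete_l2(e, h):.2e}  "
          f"||e||_H1={discrete_h1(e, h):.2e}")
```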

Accepted answer:

For one thing, it's a question of which norm measures how "accurate" the solution is. Which of the two error terms would you rather have: $0.1\sin(x)$ or $0.0001\sin(10000x)$? The first is smaller in the Sobolev norm, because its derivative is small as well; the second is smaller in the $L^2$ norm, but its derivative has size one.
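To see this concretely, take the interval $[0,2\pi]$ (my choice of domain for the computation) and use the $H^1$ norm $\|e\|_{H^1}^2 = \|e\|_{L^2}^2 + \|e'\|_{L^2}^2$. With $e_1 = 0.1\sin(x)$ and $e_2 = 0.0001\sin(10000x)$, and using $\int_0^{2\pi}\sin^2(kx)\,dx = \int_0^{2\pi}\cos^2(kx)\,dx = \pi$ for integer $k$:
$$\|e_1\|_{L^2} = 0.1\sqrt{\pi} \approx 0.18, \qquad \|e_2\|_{L^2} = 10^{-4}\sqrt{\pi} \approx 1.8\times 10^{-4},$$
but $e_2' = \cos(10000x)$, so
$$\|e_1\|_{H^1} = 0.1\sqrt{2\pi} \approx 0.25, \qquad \|e_2\|_{H^1} \approx \sqrt{\pi} \approx 1.8.$$
The tiny but wildly oscillating error dominates in $H^1$, while the smooth error dominates in $L^2$.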