Information loss function for numerical approximations


The other day I was reading about some aspects of information theory, including Shannon's source coding theorem and other interesting ideas. Coming from a computational/numerical mathematics background, I was wondering about the following:

When formulating a numerical approximation, say approximating a function by a polynomial, or solving a differential equation with a finite difference scheme, one loses information about the original function or the true solution of the differential equation (assuming, for the sake of argument, that a solution exists). Now my question:

Could there be a way to write down a function that quantifies the loss of information incurred by applying an approximation?

I understand that such a function could have numerous inputs and is perhaps not the most practical way to measure how good an approximation is. However, I would like to get a general idea of whether this is possible in principle, and how one might construct such a function.
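To make the question concrete, here is one hypothetical way such a functional might look (this is my own illustrative sketch, not an established construction): normalize both the true function and its approximation into probability densities, then take the Kullback-Leibler divergence between them as a candidate "information loss". The choice of $f(x) = e^x$ on $[0,1]$ and a degree-2 least-squares polynomial are arbitrary assumptions for the example.

```python
import numpy as np

# "True" function sampled on a grid (hypothetical example: f(x) = e^x on [0, 1])
x = np.linspace(0.0, 1.0, 1000)
f = np.exp(x)

# A crude approximation: a low-degree least-squares polynomial fit
coeffs = np.polyfit(x, f, deg=2)
p = np.polyval(coeffs, x)

# Treat both (positive) functions as unnormalized densities and normalize them
pf = f / np.trapz(f, x)
pp = p / np.trapz(p, x)

# One candidate "information loss" functional: the KL divergence D(pf || pp),
# which vanishes exactly when the approximation reproduces the normalized shape
kl = np.trapz(pf * np.log(pf / pp), x)
print(kl)
```

Of course this only compares normalized shapes, so it is blind to an overall scaling error, which already hints at why a single universal information-loss function may be hard to pin down.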