Why is the idea of measurement errors captured by integrating with a function? (Context of distribution theory)


Suppose that $f(x)$ represents the temperature at a point $x$ in a room (or, if you prefer, let $f(x,t)$ be the temperature at point $x$ and time $t$). You can measure temperature with a thermometer, placing the bulb of the thermometer at the point $x$. Unlike the point, the bulb of the thermometer has a nonzero size, so what you measure is more an average temperature over a small region of space (again, if you think of temperature as varying with time also, then you are also averaging over a small time interval preceding the time $t$ when you actually read the thermometer). Now there is no reason to believe the average is "fair" or "unbiased". In mathematical terms, a thermometer measures $$ \int f(x)\,\phi(x)\,dx $$ where $\phi(x)$ depends on the nature of the thermometer and where you place it: $\phi(x)$ will tend to be "concentrated" near the location of the thermometer bulb and will be nearly zero once you are sufficiently far away from the bulb.

*A Guide to Distribution Theory and Fourier Transforms* by Strichartz, p. 1, Sect. 1.1
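To make the quoted model concrete, here is a minimal numerical sketch (my own illustration, not from the book): the temperature profile `f`, the choice of a normalized Gaussian bump for $\phi$, the bulb position `x0`, and the width `eps` are all illustrative assumptions. It simply computes $\int f(x)\,\phi(x)\,dx$ on a grid and compares the result to the pointwise value $f(x_0)$.

```python
import numpy as np

# Hypothetical 1-D temperature profile f(x) in the room (assumed for illustration).
def f(x):
    return 20.0 + 3.0 * np.sin(x)

# A weight phi concentrated near the bulb position x0; modeled here
# (as an assumption) by a Gaussian of width eps, normalized so its integral is 1.
def phi(x, x0=1.0, eps=0.05):
    w = np.exp(-((x - x0) ** 2) / (2 * eps ** 2))
    return w / np.trapz(w, x)

x = np.linspace(0.0, np.pi, 10_001)
reading = np.trapz(f(x) * phi(x), x)  # what the thermometer "reports"

print(f"pointwise value f(1.0): {f(1.0):.4f}")
print(f"thermometer reading   : {reading:.4f}")
```

For a smooth $f$ and a small width the reading is close to $f(x_0)$, but it is a weighted local average, not the exact point value; making $\phi$ wider or off-center shifts the reading, which is the "bias" the passage alludes to.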

Could someone explain why integrating against this function $\phi$ is the way to quantify the measurement error occurring here?