If we have a function $\phi(x)$ we can determine the corresponding distribution $\phi^D$ such that:
$$\forall f:L_{\phi^D}(f)=\langle\phi^D|f\rangle=\int_\mathbb{R}\phi(x) f(x) dx$$
for every test function $f$.
Conversely, if we know the functional $L_{\phi^D}$, how can we recover the function $\phi(x)$?
If a distribution is known to come from a function (which is not always the case: the Dirac delta, for example, does not), we can apply it to test functions that approximate, ever more closely, the Dirac delta shifted over a distance $x$. Think of a Gaussian with mean $x$ and variance $1/n$.
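Concretely, the Gaussian family just described (written out here for definiteness; the variance $1/n$ gives $\sigma = 1/\sqrt{n}$) is

$$g_n(y)=\sqrt{\frac{n}{2\pi}}\,e^{-n(y-x)^2/2},$$

and applying the distribution to it yields the numbers $L_{\phi^D}(g_n)=\int_\mathbb{R}\phi(y)\,g_n(y)\,dy$, whose limit as $n\to\infty$ we examine next.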
If the original function is continuous, this converges to its value at $x$. If the original function is merely $L^p$, this converges to its value at almost every $x$.
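This recovery can be sketched numerically. Below, the sample function $\phi(y)=y^2$, the integration window $[-10, 10]$, and the Riemann-sum pairing are all illustrative assumptions, not taken from the text:

```python
import math

def phi(y):
    # illustrative choice of "original" function: phi(y) = y^2
    return y * y

def gauss(y, mean, var):
    # Gaussian density with the given mean and variance
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def pair(phi, f, a=-10.0, b=10.0, steps=20001):
    # <phi^D | f> = integral of phi(y) f(y) dy, approximated by a Riemann sum
    h = (b - a) / (steps - 1)
    return sum(phi(a + i * h) * f(a + i * h) for i in range(steps)) * h

x = 1.5
for n in (1, 10, 100, 1000):
    # pair phi^D with a Gaussian of mean x and variance 1/n;
    # as n grows this approaches phi(x) = 2.25
    print(n, pair(phi, lambda y: gauss(y, x, 1.0 / n)))
```

For $\phi(y)=y^2$ the pairing evaluates to $x^2 + 1/n$ (the second moment of the Gaussian), which makes the convergence to $\phi(x)$ easy to verify by hand.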