I'm working on a couple of proofs of Dirac delta identities:
$$x \delta '(x)=-\delta (x)$$ and
$$x^{2}\delta''(x)=2\delta(x).$$
I managed to prove them both without using a test function, but every source I find says that a test function is needed to prove them, without actually explaining why. Can someone provide some intuition for this?
As a side note, I know I can define the delta function as an operator (within an integral), but that doesn't explain why I didn't need to "operate" on a function to prove these identities. Thanks in advance.
Edit: My proofs (where all integrals run from $-\infty$ to $\infty$):
Integrating by parts: $\int x\delta'(x)\,dx=x\delta(x)|^{\infty}_{-\infty}-\int\delta(x)\,dx=0-1=-1=-\int \delta(x)\,dx$
$\int x^{2}\delta''(x)dx=x^{2}\delta'(x)|^{\infty}_{-\infty}-2\int x\delta'(x)dx=0-2(x\delta(x)|^{\infty}_{-\infty}-\int \delta(x)dx)=-2(0-1)=2=2\int \delta(x)dx$
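For what it's worth, the two integral values above can be reproduced with a computer algebra system. Here is a minimal sympy sketch (sympy's `DiracDelta(x, n)` denotes the $n$-th derivative of the delta; this only checks the arithmetic of the two integrals, not the identities as statements about distributions):

```python
from sympy import symbols, integrate, DiracDelta, oo

x = symbols('x', real=True)

# Integrating by parts once: int x delta'(x) dx = -int delta(x) dx = -1
assert integrate(x * DiracDelta(x, 1), (x, -oo, oo)) == -1

# Integrating by parts twice: int x^2 delta''(x) dx = 2 int delta(x) dx = 2
assert integrate(x**2 * DiracDelta(x, 2), (x, -oo, oo)) == 2
```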
A distribution $u\in {\cal D}^{\prime}$ is by definition a continuous linear functional on the vector space ${\cal D}$ of test functions.
Therefore to e.g. prove that a distribution $u=0$ is the zero distribution, one must in principle check that $u[f]=0$ for all test functions $f\in{\cal D}$.
OP wants to prove two distributional identities. If we (in a slight abuse of notation) denote the identity map $x\mapsto x$ by just $x$, then OP's distributional identities read $$\delta +x \delta^{\prime}~=~0, \qquad x^2 \delta^{\prime\prime}-2\delta ~=~0.\tag{1}$$
Let us list a few definitions from distribution theory for clarity. Here $\delta\in {\cal D}^{\prime}$ denotes the Dirac delta distribution $$\delta[f]~:=~f(0), \qquad f~\in~{\cal D}.\tag{2}$$ Multiplication $gu$ of a distribution $u$ with a smooth function $g$ is defined as $$ (gu)[f]~:=~u[g f], \qquad f~\in~{\cal D},\qquad g~\in~C^{\infty}(\mathbb{R}), \qquad u~\in~{\cal D}^{\prime}. \tag{3} $$
Also the derivative $u^{\prime}$ is defined as $$ u^{\prime}[f]~:=~-u[f^{\prime}], \qquad f~\in~{\cal D}, \qquad u~\in~{\cal D}^{\prime}. \tag{4} $$ (The minus in definition (4) is inspired by the well-known minus from integration by parts.)
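As a concrete illustration of definitions (2)-(4), one can evaluate $\delta$ and $\delta^{\prime}$ on an explicit function in sympy (a sketch only; a Gaussian-type $f$ is used as a rapidly decaying stand-in for a compactly supported test function):

```python
from sympy import symbols, exp, integrate, diff, DiracDelta, oo

x = symbols('x', real=True)
f = (1 + x) * exp(-x**2)   # rapidly decaying stand-in for a test function

# Definition (2): delta[f] = f(0)
assert integrate(DiracDelta(x) * f, (x, -oo, oo)) == f.subs(x, 0)

# Definition (4): delta'[f] = -delta[f'] = -f'(0)
assert integrate(DiracDelta(x, 1) * f, (x, -oo, oo)) == -diff(f, x).subs(x, 0)

# Definitions (3)+(4) with g(x) = x: (x delta')[f] = delta'[x f] = -(x f)'(0) = -f(0)
assert integrate(x * DiracDelta(x, 1) * f, (x, -oo, oo)) == -f.subs(x, 0)
```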
In the presumably most favorable distributional interpretation of OP's flawed proof (v2) (i.e. using evaluation à la definitions (2)-(4), thereby eliminating the need for actual integrations), OP is essentially evaluating the distributional identities (1) on the constant function $f=1$, which is not a valid test function. (We typically demand that a test function $f$ has compact support, or alternatively, we use Schwartz test functions. Definition (3) is valid in the former case.)
It is not always necessary to spell out the test functions explicitly. E.g. the distributional identities (1) can be seen as derivative consequences $$ 0~=~(x \delta)^{\prime}~=~\delta +x \delta^{\prime},\tag{5} $$ $$ 0~=~(x ^2 \delta)^{\prime\prime}-4(x \delta)^{\prime} ~=~x^2 \delta^{\prime\prime}-2\delta,\tag{6} $$ of the distribution identities $$ x \delta~=~0, \qquad x^2 \delta~=~0 .\tag{7}$$ We leave it to the reader to prove (7).
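The identities (1) and (7) can also be spot-checked against a concrete test function with sympy (again a sketch; a rapidly decaying $f$ replaces a compactly supported one, and sympy's `DiracDelta(x, n)` stands for $\delta^{(n)}$):

```python
from sympy import symbols, exp, integrate, DiracDelta, oo

x = symbols('x', real=True)
f = (1 + x) * exp(-x**2)   # rapidly decaying stand-in for a test function

def pair(u):
    """Evaluate the distribution u on f, i.e. u[f] = int u(x) f(x) dx."""
    return integrate(u * f, (x, -oo, oo))

d0, d1, d2 = DiracDelta(x), DiracDelta(x, 1), DiracDelta(x, 2)

# Identities (7): x*delta = 0 and x^2*delta = 0
assert pair(x * d0) == 0 and pair(x**2 * d0) == 0

# Identities (1): delta + x*delta' = 0 and x^2*delta'' - 2*delta = 0
assert pair(d0 + x * d1) == 0
assert pair(x**2 * d2 - 2 * d0) == 0
```

Of course, a single $f$ does not constitute a proof — by the definition of a distribution, the identities must hold for all test functions, which is exactly the point of the answer above.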