I always like to have more than one proof of the same theorem. The other day I was browsing through my copy of Lars Hörmander's book on PDE (volume 1). When proving the Fourier inversion formula (on $\mathcal{S}(\mathbb{R}^n)$) he makes use of the following lemma:
If $T \colon \mathcal{S}(\mathbb R^n) \to \mathcal S (\mathbb R^n)$ is a linear map such that $$TD_j \phi = D_j T \phi$$ and $$Tx_j \phi = x_j T \phi$$ for all $j \in \{ 1, \ldots, n\}$ and $\phi \in \mathcal S (\mathbb R^n)$, then $T \phi = c \phi$ for some constant $c$.
In the proof of this lemma he shows that if $\phi (y)=0$ for some $y\in \mathbb R^n$, then $\phi$ can be written in the form $$\phi(x) = \sum_{j=1}^n {(x_j -y_j)\phi_j(x)}\quad \mbox{with } \phi_j \in \mathcal S (\mathbb R^n).$$ (This part is not the problem, as he gives a good hint about how to construct the $\phi_j$'s.)
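For completeness, here is one standard way to carry out that construction; this is my guess at the argument behind the hint (a Hadamard-type formula plus a cutoff), and may not match Hörmander's intended route exactly:

```latex
% Assume \phi(y) = 0. By the fundamental theorem of calculus,
%   \phi(x) = \phi(x) - \phi(y)
%           = \int_0^1 \tfrac{d}{dt}\,\phi\bigl(y + t(x-y)\bigr)\,dt
%           = \sum_{j=1}^n (x_j - y_j) \int_0^1 (\partial_j\phi)\bigl(y + t(x-y)\bigr)\,dt,
% which suggests taking
\phi_j(x) = \int_0^1 (\partial_j \phi)\bigl(y + t(x-y)\bigr)\,dt .
% These \phi_j are smooth but need not decay at infinity, so they are not
% Schwartz in general. To fix this, pick \chi \in C_c^\infty(\mathbb{R}^n)
% with \chi \equiv 1 near y, apply the formula above to \chi\phi, and
% handle the remainder (1-\chi)\phi, which vanishes near y, via
(1-\chi(x))\,\phi(x)
  = \sum_{j=1}^n (x_j - y_j)\,
    \frac{(x_j - y_j)\,(1-\chi(x))\,\phi(x)}{|x-y|^2} .
% Each factor multiplying (x_j - y_j) here is Schwartz, since 1-\chi kills
% the singularity of |x-y|^{-2} at x = y. Adding the two decompositions
% gives \phi_j \in \mathcal{S}(\mathbb{R}^n) as required. (The same works
% with D_j = -i\,\partial_j up to constant factors.)
```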
He then shows that $$T \phi(x) = \sum_{j=1}^n(x_j-y_j)T\phi_j(x),$$ so in particular $T\phi(y) = 0$. (This is also really simple, but now comes the tricky part.)
He goes on to conclude that there exists a function $c(x)$ such that $T\phi(x) = c(x) \phi(x)$ for every $\phi$, with $c$ independent of $\phi$. I simply can't see how he arrives at this.