The question is (1):
Prove that $ \text{supp}\,\delta = \{0\}, $ where $ \delta $ denotes the Dirac delta distribution.
(2): Show that if $ f \in C(\mathbb{R}^{n}) $ is a function, then its support is equal to the support of the distribution $$ \langle f, \psi \rangle := \int f \psi \,dx, \qquad \psi \in C_{c}^{\infty}(\mathbb{R}^{n}). $$
Attempt:
Proof of (1): We know that $ \delta = 0 $ on $ \mathbb{R}^{n} \setminus \{0\}. $ Therefore, by the definition of the support of a distribution, $ \text{supp}\,\delta \subseteq \{0\}.$
For the other direction, observe that $ \forall \epsilon > 0$ there exists $ \Psi_{\epsilon} \in C_{c}^{\infty}(B_{\epsilon}(0)) $ with $ \Psi_{\epsilon}(0) \neq 0. $
Therefore, since $ \langle \delta, \Psi_{\epsilon} \rangle := \Psi_{\epsilon}(0),$ it follows that $\langle \delta, \Psi_{\epsilon} \rangle \neq 0,$ so $\delta$ does not vanish on any neighborhood of $0$.
Hence $ \text{supp}\delta = \{0\}.$
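(A side note, not part of the proof: the classical bump function $e^{-1/(1-(x/\epsilon)^2)}$ gives an explicit such $\Psi_\epsilon$ in one dimension. Here is a quick numerical sanity check of its two defining properties; the function and the test values are my own illustration.)

```python
import math

def bump(x, eps=1.0):
    """Classical bump: smooth, supported in (-eps, eps), nonzero at 0."""
    if abs(x) >= eps:
        return 0.0
    t = x / eps
    return math.exp(-1.0 / (1.0 - t * t))

eps = 0.5
assert bump(0.0, eps) == math.exp(-1.0)   # Psi_eps(0) = e^{-1} != 0
assert bump(eps, eps) == 0.0              # vanishes on the boundary of B_eps(0)...
assert bump(0.75, eps) == 0.0             # ...and outside it
```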
Attempt proof of (2): Let $f$ denote the function and $F$ the distribution. Fix an $x \in \text{supp}f,$ and assume for a contradiction that $x \notin \text{supp}F.$
Then $ \exists \epsilon > 0$ such that $F=0$ on $B_{\epsilon}(x)$; in other words, $\forall \Psi \in C_{c}^{\infty}(B_{\epsilon}(x))$ we have $\int_{\mathbb{R}^{n}}f \Psi \,dx = 0$, which (since $f$ is continuous) forces $f = 0$ in $B_{\epsilon}(x)$. This is a contradiction, since by assumption $x \in \text{supp}\, f. $ Therefore $\text{supp}\,f \subseteq \text{supp}\,F.$
I'm not sure how to do the converse.
Thanks.
First, we need some proper definitions: $\DeclareMathOperator{\supp}{supp}$
There is actually quite a lot to unpack here. First off, the inputs to an ordinary distribution are elements of $C_c^\infty(\mathbb{R}^n)$. What do these guys look like? Well, your typical element of $C_c^{\infty}(\mathbb{R}^n)$ is a smooth function (so it has as many derivatives as you like) which has compact support. Because the support of a functional is one of the issues at hand, let's make sure that we have the definition of the support of a function nailed down, too: the support of a function $\varphi$ is the closure of the set on which it is nonzero, that is, $$ \supp(\varphi) := \overline{\{x \in \mathbb{R}^n : \varphi(x) \neq 0\}}. $$
So a typical element of $C_c^\infty(\mathbb{R}^n)$ is a smooth function with compact support. Functionals eat these guys for breakfast. That is, a functional (on $C_c^{\infty}(\mathbb{R}^n)$) is a function that takes smooth compactly supported functions as input, then spits out numbers.
Next, what does it mean for a functional to be linear? This one isn't so hard: it has to play nice with addition and scaling. That is, if $f$ is a functional, then we say that $f$ is linear if for any $\varphi,\psi\in C_{c}^{\infty}(\mathbb{R}^n)$ and $\alpha,\beta\in\mathbb{R}$, we have $$ f(\alpha\varphi + \beta\psi) = \alpha f(\varphi) + \beta f(\psi). $$
Finally, we get to that last adjective, bounded. This is a little technical, please excuse me if I skip some details. There is a norm on $C_c^{\infty}(\mathbb{R}^n)$ which tells us how "big" a function is (well, actually, there are lots of norms, but there is one that is useful here). Basically, we say that the "size" of a smooth function is its maximum magnitude. That is, we define $\|\cdot\|_{u}$ by $$ \|\varphi\|_{u} := \max_{x\in\mathbb{R}^n} | \varphi(x) |. $$ The norm of a functional, then, is defined to be $$ \|f\| := \sup_{\|\varphi\|_u = 1} | f(\varphi) |, $$ where that is an honest-to-goodness absolute value at the end. Basically, this says: find all of the functions that max out at 1, feed them to the functional, and figure out what the biggest value you get is. We say that a functional is bounded if this norm is finite. That is, $f$ is bounded if $$ \| f \| = \sup_{\|\varphi\|_u = 1} | f(\varphi) | < \infty. $$ Okay, got that? An ordinary distribution is a function that eats other functions, poops out real numbers and does it in a well-behaved manner.
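(If you like numerical sanity checks, here is a grid-based sketch of the sup norm, my own illustration: for the bump $e^{-1/(1-x^2)}$, the maximum is attained at $x = 0$ and equals $e^{-1}$, and a grid maximum recovers exactly that value.)

```python
import math

def phi(x):
    # a smooth bump supported in (-1, 1), maximized at x = 0
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# grid approximation of ||phi||_u = max_x |phi(x)|; the grid contains x = 0,
# where this particular bump attains its maximum e^{-1}
grid = [i / 1000.0 for i in range(-999, 1000)]
sup_norm = max(abs(phi(x)) for x in grid)
assert abs(sup_norm - math.exp(-1.0)) < 1e-12
```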
Right... now, to answer the first question, we need to know what the support of a distribution is (we defined the support of a function above, but not a distribution). So: we say that an open set $U \subseteq \mathbb{R}^n$ is a vanishing set for a distribution $f$ if $f(\varphi) = 0$ for every test function $\varphi$ with $\supp(\varphi) \subseteq U$.
And one other definition: the support of a distribution $f$ is the complement of the union of all of its vanishing sets.
Basically, the support of a distribution is the smallest closed set outside of which it vanishes, meaning that near any point of the support of $f$ there are functions, supported in arbitrarily small neighborhoods of that point, such that when $f$ eats those functions, it spits out something nonzero.
We are now ready to answer the first question!
We typically define the Dirac distribution to be the distribution which evaluates a test function at zero. Hence $\delta$ is defined by the formula $$ \delta(\varphi) := \varphi(0). $$ Alternatively, if you don't like that notation, we can write $$ \langle \delta, \varphi \rangle := \delta(\varphi) = \varphi(0) $$ to mean the same thing. Note that if $0 \not\in \supp(\varphi)$, then $\delta(\varphi) = 0$. That is, any open set which does not contain zero is a vanishing set for $\delta$. This implies that $$ \supp(\delta) \subseteq \{0\}. $$ On the other hand, let $U$ be any open set containing zero. Then we can choose a smooth bump function $\varphi$ with $\supp(\varphi) \subseteq U$ and $\varphi(0) \ne 0$, so that $\delta(\varphi) = \varphi(0) \ne 0$. (Careful: it is not enough to note that $0 \in \supp(\varphi)$, since a test function can vanish at a point of its support.) Hence no open set containing zero is a vanishing set for $\delta$, so $$ \supp(\delta) \supseteq \{0\}, $$ which finishes the argument.
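(One way to make $\delta(\varphi) = \varphi(0)$ tangible, purely as an illustration and not needed for the proof: normalized bumps $\eta_\epsilon$ supported in $B_\epsilon(0)$ satisfy $\int \eta_\epsilon \varphi \to \varphi(0)$ as $\epsilon \to 0$. A crude Riemann-sum check, with the bump, step sizes, and tolerance my own choices:)

```python
import math

def bump(x, eps):
    # smooth bump supported in B_eps(0)
    if abs(x) >= eps:
        return 0.0
    t = x / eps
    return math.exp(-1.0 / (1.0 - t * t))

def delta_pairing(phi, eps, n=20001):
    """Riemann sum of the pairing of phi with eta_eps, the bump
    normalized to have unit integral."""
    h = 2.0 * eps / (n - 1)
    xs = [-eps + i * h for i in range(n)]
    mass = sum(bump(x, eps) for x in xs) * h   # normalizing constant
    return sum(bump(x, eps) * phi(x) for x in xs) * h / mass

approx = delta_pairing(math.cos, eps=1e-2)
assert abs(approx - 1.0) < 1e-3   # close to cos(0) = 1
```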
For the second question, suppose that $f$ is a continuous function (with compact support). Then we can define a functional $F_f$ by integration: $$ F_f(\varphi) := \int \varphi(x) f(x) \,\mathrm{d}x. $$ Note that $F_f$ eats a test function and poops out a real number. To check linearity of this functional, let $\varphi$, $\psi$ be test functions, and let $\alpha,\beta\in\mathbb{R}$. Then \begin{align} F_f(\alpha\varphi + \beta\psi) &= \int (\alpha \varphi(x) + \beta \psi(x)) f(x)\, \mathrm{d}x \\ &= \alpha \int\varphi(x) f(x)\, \mathrm{d}x + \beta \int \psi(x)f(x)\,\mathrm{d}x && (\text{by linearity of integration}) \\ &= \alpha F_f(\varphi) + \beta F_f(\psi). \end{align} Hence $F_f$ is linear! Boundedness is a little tricky, but not too hard. Let $\varphi$ be a test function such that $\|\varphi\|_u = 1$. Then \begin{align} |F_f(\varphi)| &= \left| \int \varphi(x) f(x) \,\mathrm{d}x \right| \\ &\le \int |\varphi(x)| |f(x)|\, \mathrm{d}x && (\text{properties of integrals})\\ &\le \int |f(x)|\, \mathrm{d}x && (\text{since $\|\varphi\|_u = 1$}) \\ &\le \|f\|_u \int_{\supp(f)} \,\mathrm{d}x && (\text{since $f = 0$ off its support}) \\ &= \|f\|_u \mu(\supp(f)) \\ &< \infty, \end{align} where $\mu(\supp(f))$ denotes the measure of the (compact, hence finite-measure) support of $f$. Note that this is a uniform bound on $|F_f(\varphi)|$ which does not depend on $\varphi$ (other than the hypothesis that $\|\varphi\|_u=1$), so we have boundedness. There is a little hitch here: I have had to assume that $f$ has compact support. The original question is about continuous functions in general. I'm not sure that I see a way to go that far—you might check the statement of the problem (or, perhaps, $C(\mathbb{R}^n)$ means something different to you?).
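(Again just as an illustration, not part of the argument: both linearity and the bound $|F_f(\varphi)| \le \|f\|_u \,\mu(\supp(f))$ can be sanity-checked numerically. The tent function below, the test functions, and the tolerances are my own choices.)

```python
import math

def f(x):
    # continuous, compactly supported in [-1, 1], with ||f||_u = 1
    return max(0.0, 1.0 - abs(x))

def pair(g, phi, a=-2.0, b=2.0, n=4001):
    """Riemann-sum approximation of F_g(phi), the integral of phi * g."""
    h = (b - a) / (n - 1)
    return sum(phi(a + i * h) * g(a + i * h) for i in range(n)) * h

phi, psi = math.cos, math.sin
alpha, beta = 2.0, -3.0

# linearity: F_f(alpha*phi + beta*psi) = alpha*F_f(phi) + beta*F_f(psi)
lhs = pair(f, lambda x: alpha * phi(x) + beta * psi(x))
rhs = alpha * pair(f, phi) + beta * pair(f, psi)
assert abs(lhs - rhs) < 1e-9

# bound: |F_f(phi)| <= ||f||_u * mu(supp f) whenever ||phi||_u <= 1
sup_f, measure_supp = 1.0, 2.0   # ||f||_u = 1, supp f = [-1, 1]
assert abs(pair(f, phi)) <= sup_f * measure_supp + 1e-9
```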
In any event, given a continuous function $f$ (with compact support), we can associate to it a (bounded) linear functional $F_f$ via integration. Since it doesn't really help to keep extra notation running around, we can think of $f$ as a distribution, and write $$ \langle f, \varphi\rangle = f(\varphi) := \int \varphi(x) f(x) \,\mathrm{d}x. $$
Now, for the very last bit: let $\varphi$ be a test function with $\supp(\varphi) \cap \supp(f) = \emptyset$ (where we are thinking of $f$ as a function here). Then \begin{align} f(\varphi) &= \int \varphi(x)f(x)\,\mathrm{d}x \\ &= \int_{\supp(\varphi)} \varphi(x)f(x)\,\mathrm{d}x \\ &= 0, \end{align} since $\varphi(x) = 0$ for all $x \notin \supp(\varphi)$, and $f(x) = 0$ for all $x \in \supp(\varphi)$ (as $\supp(\varphi)$ lies in the complement of $\supp(f)$). Thus the complement of $\supp(f)$ is a vanishing set, which implies that the support of $f$ (as a distribution) is contained in the support of $f$ (as a function). The reverse inclusion can be a little tricky, as $f$ may integrate to zero over its support, so we must localize. Let $x \in \supp(f)$ (as a function), and let $U$ be any open set containing $x$. Since $x$ lies in the closure of $\{f \ne 0\}$, there is a point $y \in U$ with $f(y) \ne 0$; say $f(y) > 0$ (the case $f(y) < 0$ is handled the same way, with signs flipped). By continuity, there is a ball $B \subseteq U$ centered at $y$ on which $f > 0$. Take $\varphi$ to be a nonnegative smooth bump function with $\supp(\varphi) \subseteq B$ and $\varphi(y) > 0$. Then $$ f(\varphi) = \int f\varphi = \int_{B} f\varphi > 0, $$ since the integrand is continuous, nonnegative, and strictly positive near $y$. Hence $U$ is not a vanishing set, and since $U$ was an arbitrary open set containing $x$, the point $x$ lies in the support of $f$ (as a distribution). This gives $$ \supp(f) \supseteq \supp(f), $$ where we think of $f$ as a distribution on the left, and as a function on the right (maybe I shouldn't have dropped the $F_f$ notation? ah, well, too late now), which is what we wanted to show.
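(Here is a small numerical illustration of why the localization matters, with the specific odd function and bump my own choices: $f(x) = x(1-x^2)$ on $[-1,1]$ integrates to zero over its whole support, yet pairing it with a nonnegative bump concentrated where $f > 0$ gives something strictly positive.)

```python
import math

def f(x):
    # continuous, supported in [-1, 1]; odd, so its total integral is 0
    return x * (1.0 - x * x) if abs(x) <= 1 else 0.0

def bump_at(x, c, eps):
    """Nonnegative smooth bump centered at c, supported in B_eps(c)."""
    t = (x - c) / eps
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def integrate(g, a=-2.0, b=2.0, n=8001):
    h = (b - a) / (n - 1)
    return sum(g(a + i * h) for i in range(n)) * h

# f integrates to zero over its whole support...
assert abs(integrate(f)) < 1e-9
# ...but pairing with a bump localized in (0.25, 0.75), where f > 0,
# is strictly positive, so f is not the zero distribution there
pairing = integrate(lambda x: f(x) * bump_at(x, 0.5, 0.25))
assert pairing > 0.0
```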