So my question came out of considering functions involving the unit step function and their derivatives. I'm learning about Laplace transforms right now, if that helps provide context. Here is what I think so far; if I've made any mistakes, please let me know.
First of all, for any value other than $t=0$ we can easily say that $t\delta(t) = 0$, since $\delta(t) = 0$ there and $t$ is finite. However, at $t=0$ things are more interesting: $\delta(t)$ is infinite while $t=0$. I'm sure this could be treated as some sort of limit of a sequence of functions that approaches the Dirac delta function. However, my question arose in a different context.
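Here is a quick numeric sanity check of that limiting picture, a sketch of my own: I use narrowing Gaussians $\delta_\varepsilon$ as a nascent delta, and $\varphi(t) = e^t$ is just an arbitrary smooth test function I picked.

```python
import numpy as np

# Nascent delta: Gaussians that narrow toward the Dirac delta as eps -> 0.
def delta_eps(t, eps):
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

phi = np.exp                         # arbitrary smooth test function, phi(t) = e^t
t = np.linspace(-1.0, 1.0, 200001)   # fine grid around the spike at t = 0
dt = t[1] - t[0]

for eps in [0.1, 0.01, 0.001]:
    # Riemann-sum approximation of  integral of t * delta_eps(t) * phi(t) dt.
    # For these Gaussians the exact value is eps**2 * exp(eps**2 / 2),
    # which vanishes as eps -> 0 -- consistent with t*delta(t) = 0.
    val = np.sum(t * delta_eps(t, eps) * phi(t)) * dt
    print(eps, val)
```

The printed values shrink roughly like $\varepsilon^2$, which is what pairing $t\,\delta_\varepsilon(t)$ with a test function should do if $t\delta(t)=0$ in the limit.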
Let us take the function: $$f(t) = t[u(t)-u(t-1)] + u(t-1)$$
The "generalized" derivative (I'm not sure exactly how this generalization is made formal) of the unit step/Heaviside function is the Dirac delta function. So if we take the derivative of our function we should get: $$f'(t)=[u(t)-u(t-1)]+t[\delta(t)-\delta(t-1)]+\delta(t-1)$$ $$f'(t) = [u(t) - u(t-1)] + t\delta(t) + (1-t)\delta(t-1)$$
However, we also know from the piecewise definition that: $$f'(t) = u(t)-u(t-1)$$
If these derivatives are to be equal, then $t\delta(t)$ and $(1-t)\delta(t-1)$ (which are both instances of the same form, $(t-a)\delta(t-a)$, up to sign) must equal $0$ or be additive inverses of each other. Based on other experiences, I believe they should both be $0$. Is this correct? If so, how can it be formally proven?
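Since sympy implements exactly this generalized derivative (`Heaviside` differentiates to `DiracDelta`), here is a sketch checking my computation: I isolate the leftover delta terms and pair them with a test function (my choice of $e^{-t^2}$ is arbitrary).

```python
import sympy as sp

t = sp.symbols('t', real=True)
u = sp.Heaviside                      # unit step
f = t*(u(t) - u(t - 1)) + u(t - 1)

fp = sp.diff(f, t)                    # generalized derivative: Heaviside' = DiracDelta

# The terms beyond the piecewise derivative u(t) - u(t-1),
# i.e. t*DiracDelta(t) + (1 - t)*DiracDelta(t - 1) from the product rule:
extra = sp.expand(fp - (u(t) - u(t - 1)))
print(extra)

# Paired with a smooth test function, the extra terms contribute nothing:
phi = sp.exp(-t**2)
print(sp.integrate(extra * phi, (t, -sp.oo, sp.oo)))   # 0
```

The integral comes out $0$ for this (and any) test function, which supports the belief that both delta terms vanish as distributions.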
Just to add to Beta's answer, the statement
$$t\delta(t)=0$$
is valid in a distributional sense because of the following. We know that the Dirac delta has the sifting property
$$\int_{-\infty}^\infty f(t)\delta(t-a)dt=f(a)$$
for any test function $f$. If we also regard $t\delta(t)$ as a distribution, then selecting $a=0$ and evaluating as above gives
$$\int_{-\infty}^\infty f(t)t\delta(t)dt=f(t)t|_{t=0}=f(0)\cdot 0 = 0$$
for every test function $f$. Hence it makes sense to conclude that $t\delta(t)=0$ as a distribution.
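This computation can be reproduced in sympy, whose `DiracDelta` integration implements the sifting property; the test function $e^{-t^2}$ below is just one arbitrary concrete choice.

```python
import sympy as sp

t = sp.symbols('t', real=True)
phi = sp.exp(-t**2)                   # a concrete test function

# Sifting property: integrating phi against delta(t - 1) picks out phi(1).
print(sp.integrate(phi * sp.DiracDelta(t - 1), (t, -sp.oo, sp.oo)))   # exp(-1)

# The same rule applied to t*delta(t): the "picked out" value is phi(0)*0 = 0.
print(sp.integrate(phi * t * sp.DiracDelta(t), (t, -sp.oo, sp.oo)))   # 0
```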