So I was told by my instructor that $$L(\delta(t)) = 1 $$
And that $$\delta * f(t) = f(t)$$
for any $f(t)$. So $$\delta * 1 = 1.$$
But this is $$\int_0^{t}\delta(z)dz.$$
So $$(1)' = 0 = (\int_0^{t}\delta(z)dz)' = \delta(t).$$
But then wouldn't we have $$L(\delta) = 0\neq 1?$$
Am I misunderstanding something? How do we reconcile these facts?
I am using the following definition of the convolution: $$f*g = \int_0^t f(t-z)g(z)dz$$
OK, let me stitch all of the comments together into an answer. Pretty much every comment above touches on an important issue, and these points are not separate from each other.
First, my own comment. The "correct" definition of the convolution of two functions $f,g:\mathbb{R}\to \mathbb{R}$ is $$ (f*g)(t)=\int_{-\infty}^\infty f(x)g(t-x)dx $$ You explained in a comment that you are using a less mainstream definition in your class, and in doing so you are running into a problem, for good reason. I assume your definition is (I will denote your "convolution" by $f\star g$ to make the distinction) $$ (f\star g)(t)=\int_0^t f(x)g(t-x)dx $$ But under what circumstances are the two definitions equivalent? As @user1952009 points out, $f*g$ reduces to $f\star g$ only if $f(t)=g(t)=0$ for all $t\leq 0$.
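As a numerical sanity check (not part of the argument above), here is a sketch comparing the two definitions for a pair of causal functions. The example function $f(t)=e^{-t}\theta(t)$, the truncation window, and the midpoint-rule resolution are all my own choices; for this $f$, $(f*f)(t)=(f\star f)(t)=t e^{-t}$ in closed form.

```python
import math

def f(t):
    # a causal function: vanishes for t <= 0
    return math.exp(-t) if t > 0 else 0.0

def conv_full(f, g, t, lo=-20.0, hi=20.0, n=20000):
    # (f*g)(t) = integral over all x of f(x) g(t-x), truncated to [lo, hi], midpoint rule
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) * g(t - lo - (k + 0.5) * h) for k in range(n))

def conv_causal(f, g, t, n=20000):
    # (f star g)(t) = integral from 0 to t of f(x) g(t-x), midpoint rule
    h = t / n
    return h * sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n))

t = 2.0
exact = t * math.exp(-t)  # e^{-t} theta(t) convolved with itself is t e^{-t}
print(conv_full(f, f, t), conv_causal(f, f, t), exact)
```

Both integrals agree with the closed form here precisely because $f$ vanishes for $t\leq 0$; replace $f$ by the constant $1$ and the two definitions part ways.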
Now comes the issue you are encountering. It is true that under the true convolution $\delta * f=f$ for any function $f$; in fact, it is more appropriate to take this as the definition of $\delta$. But $\delta \star f=f$ only for functions $f$ such that $f(t)=0$ for all $t\leq 0$. As a result, if $f=1$, then $$ 1\neq \int_0^t \delta(x)dx $$ since $1$ does not vanish for $t\leq 0$. But what is this integral? As @paul mentions, $\frac{d}{dt}\int_0^t \delta(x)dx=\delta(t)$; the mistake in your argument is the claim that the integral itself equals $1$. This brings us to the point of @tst. By definition, $\int_0^t \delta(x)dx$ is the anti-derivative of $\delta$, as we just saw from the fundamental theorem of calculus (assuming it makes sense for the delta "function"). But what can this function be? Well, it needs to be pretty much exactly like the constant function $1$, except it must vanish for $t\leq 0$. This is exactly the step function $$ \theta(t) = \begin{cases} 1 & t>0\\ 0 & t\leq 0 \end{cases}=\int_0^t \delta(x) dx $$ So let me wrap up the first half of my answer: the definition $f\star g$ of the convolution is fine as long as you keep in mind that it is defined for functions which vanish for non-positive numbers.
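If you want to see the step function emerge numerically, here is a small sketch using a "nascent" delta: a unit-mass Gaussian spike whose width and (small, positive) center are arbitrary choices of mine, placed just to the right of $0$ so the whole spike is captured by the integral from $0$.

```python
import math

def delta_eps(x, eps=0.01, c=0.05):
    # nascent delta: narrow unit-mass Gaussian centered at c > 0,
    # so its whole spike lies inside (0, t) once t is well past c
    return math.exp(-((x - c) / eps) ** 2 / 2) / (eps * math.sqrt(2 * math.pi))

def antideriv(t, n=20000):
    # integral from 0 to t of delta_eps, midpoint rule
    if t <= 0:
        return 0.0
    h = t / n
    return h * sum(delta_eps((k + 0.5) * h) for k in range(n))

print(antideriv(1.0))    # close to 1: theta(1) = 1
print(antideriv(0.001))  # close to 0: the spike has not been reached yet
print(antideriv(-0.5))   # exactly 0 for t <= 0
```

As the spike narrows and moves toward $0$, this anti-derivative converges (pointwise, away from $t=0$) to $\theta(t)$, not to the constant $1$.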
In this part, we will explore a little what this $\delta$-"function" actually is. In doing so, I hope to clarify a few things. As I said, let us take as the definition $$ f(t)=\int_{-\infty}^\infty \delta(x)f(t-x)dx= \int_{-\infty}^\infty f(x) \delta(t-x)dx\qquad(1) $$ for all $f$.
Point I: Let $a<b$ and consider $\rho(x)=\theta(x-a)-\theta(x-b)$, which is zero on $(-\infty, a]\cup(b, \infty)$ and one on $(a,b]$. What you find then is $$ f(t)\rho(t)= \int_{a}^b f(x) \delta(t-x)dx\Longrightarrow \begin{cases} f(t)=\int_{a}^{b} \delta(t-x)f(x)dx & a<t\leq b\\ 0=\int_{a}^{b} \delta(t-x)f(x)dx & \text{otherwise} \end{cases} $$ By defining a function $\tilde{\theta}(x)$ such that $\tilde{\theta}(0)=1$ and $\tilde{\theta}(t)=\theta(t)$ for $t\neq 0$, one can actually prove $$ \begin{cases} f(t)=\int_{a}^{b} \delta(t-x)f(x)dx & a{\color{red}\leq } t\leq b\\ 0=\int_{a}^{b} \delta(t-x)f(x)dx & \text{otherwise} \end{cases} $$ recall $a{\color{red}<}b$ throughout.
Point II: Now consider an even function $f$. Then $$ f(t)=f(-t)=\int_{-\infty}^\infty \delta(x)f(-t-x)dx= \int_{-\infty}^\infty\delta(x)f(t+x)dx= \int_{-\infty}^\infty\delta({\color{red}-x})f(t-x)dx $$ Now suppose $f$ is instead odd. Then $$ f(t)=-f(-t)=\int_{-\infty}^\infty \delta(x)[-f(-t-x)]dx= \int_{-\infty}^\infty\delta(x)f(t+x)dx= \int_{-\infty}^\infty\delta({\color{red}-x})f(t-x)dx $$ Now consider a general function $f(x)$. Define $e(x)=[f(x)+f(-x)]/2$ and $o(x)=[f(x)-f(-x)]/2$. Then $f(x)=e(x)+o(x)$, and applying the two computations above to $e$ and $o$ separately, we have just found that $$ f(t)=\int_{-\infty}^\infty \delta(x)f(t+x)dx= \int_{-\infty}^\infty\delta({\color{red}-x})f(t-x)dx $$ So as far as $\delta(x)$ and $\delta(-x)$ interact with functions, we have $\delta(x)=\delta(-x)$. By abuse of language we say $\delta$ is an even "function".
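This evenness is a statement about limits, so a symmetric Gaussian spike would demonstrate nothing. Here is a sketch (my own construction) using a deliberately one-sided nascent delta, a unit-mass box on $[0,\varepsilon]$: even though the box and its reflection are different functions, both $\delta_\varepsilon(x)$ and $\delta_\varepsilon(-x)$ sift out the same value in the limit.

```python
import math

def delta_right(x, eps=1e-4):
    # deliberately asymmetric nascent delta: unit-mass box on [0, eps]
    return 1.0 / eps if 0.0 <= x <= eps else 0.0

def smear(d, f, t, lo=-1.0, hi=1.0, n=200000):
    # integral over [lo, hi] of d(x) f(t - x), midpoint rule
    h = (hi - lo) / n
    return h * sum(d(lo + (k + 0.5) * h) * f(t - lo - (k + 0.5) * h) for k in range(n))

f, t = math.sin, 0.7
print(smear(delta_right, f, t))               # delta(x):  close to f(0.7)
print(smear(lambda x: delta_right(-x), f, t)) # delta(-x): close to f(0.7) as well
```

Neither spike is even, yet both reproduce $f(t)$ as $\varepsilon\to 0$; that is what "$\delta(x)=\delta(-x)$ as distributions" is asserting.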
Point III: Combining the two points, $$ \begin{cases} f(t)=\int_{a}^{b} \delta(x-t)f(x)dx & a\leq t\leq b\\ 0=\int_{a}^{b} \delta(x-t)f(x)dx & \text{otherwise} \end{cases} $$ for all functions $f$ and all $a<b$. Specifically I am interested in the case $t=0$: $$ \begin{cases} f(0)=\int_{a}^{b} \delta(x)f(x)dx & a\leq 0\leq b\\ 0=\int_{a}^{b} \delta(x)f(x)dx & \text{otherwise} \end{cases}\qquad(2) $$ If you stare at the above equation long enough, trying to reconcile it with the ordinary intuition of a function, you will realize that the integrand $\delta(x)f(x)$ is somehow completely annihilating all the details of the function $f(x)$ except at the point $x=0$. It is as if this $\delta$ "function" is zero everywhere but at the origin. If you want to push the "function" agenda even further, you then ask: what is the value of $\delta(0)$? Well, we know that $1=\int_{-\infty}^\infty \delta(x) dx$. This would be impossible if $\delta(0)$ were any finite number, since then the integral would be zero! This immediately means $\delta(x)$ is NOT a function. But if one is really attached to functions, then one can say $\delta(0)=\infty$.
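Here is how the sifting property $(2)$ looks numerically with a nascent delta (a narrow unit-mass Gaussian; the width, test function, and intervals below are arbitrary choices of mine, just to illustrate both cases).

```python
import math

def delta_eps(x, eps=1e-3):
    # nascent delta: unit-mass Gaussian of width eps centered at 0
    return math.exp(-(x / eps) ** 2 / 2) / (eps * math.sqrt(2 * math.pi))

def sift(f, a, b, n=100000):
    # integral from a to b of delta_eps(x) f(x), midpoint rule
    h = (b - a) / n
    return h * sum(delta_eps(a + (k + 0.5) * h) * f(a + (k + 0.5) * h) for k in range(n))

print(sift(math.cos, -1.0, 1.0))  # a <= 0 <= b: close to f(0) = 1
print(sift(math.cos, 0.5, 1.0))   # 0 outside [a, b]: essentially 0
```

All the details of $\cos$ away from the origin are annihilated; only its value at $0$ survives, and only when the interval contains $0$.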
Exercise: Start from (2) and prove (1), i.e. one can equivalently take $(2)$ as the definition of the delta function.
So am I saying "essentially that I am being taught a totally contradictory framework for the Dirac delta function"? No, not really! I hope you never have to teach the Dirac delta function to people who are seeing it for the first time. Because, boy, it is challenging from an educational point of view! No matter what approach the teacher chooses, something goes wrong. If the teacher does everything completely rigorously, then the essential intuition of the Dirac delta function is lost in all of the integral manipulations, which the students are still not completely comfortable with. If, however, the teacher chooses the "zero everywhere except at the origin, where it is infinity" picture, then confusions like the one you just had (and a lot of others I have seen over the years) are born. My suggestion: learn both approaches at the same time, struggle to reconcile them, and figure out how far you can bend the misleading and wrong function-theoretic picture until it breaks.
Finally, the Laplace transform. By definition, the Laplace transform of a function is $$ L[f]=\int_0^\infty f(x)e^{-sx}dx $$ If one insists on doing the same thing to the delta "function" (which actually has a very precise meaning in distribution theory), then $$ L[\delta] = \int_0^\infty \delta(x) e^{-sx}dx = 1 $$ Now let us understand what $\delta'(x)$ means. One defines $\delta'(x)$ via $$ \int_{-\infty}^\infty \delta'(x) f(x)dx=-f'(0) $$ The function-theoretic motivation for this definition is integration by parts. A more rigorous derivation of this as the "derivative" of the delta function requires answering "what does the derivative of a not-really-a-function-thingy (distribution) $\delta(x)$ even mean?". For that you need to read up a bit, and I cannot possibly cover it here. Here the "wrong" function-theoretic picture and integration by parts let us be sloppy and avoid this delicate issue.
With that being said, the Laplace transform becomes $$ L[\delta']=\int_0^\infty \delta'(x) e^{-sx}dx =s $$ Note that quite generally, for a function $f:[0, \infty)\to \mathbb{R}$, one has $L[f']=sL[f]-f(0)$. In a sense, and do not take this too seriously, the failure of $\delta$ to be a function is captured in the fact that $L[\delta']=sL[\delta]$, which is off only by a "constant" (as a polynomial in $s$), although allegedly that constant is $\delta(0)=\infty$! Again, do not read too much into this last part.
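Both transforms can be checked numerically with the same nascent-delta trick. The sketch below (all parameters are my own choices) centers a narrow unit-mass Gaussian at a small positive point so the integral from $0$ captures the whole spike; its derivative is computed in closed form.

```python
import math

EPS, C = 1e-4, 7e-4  # spike width, and a small positive center

def delta_eps(x):
    # nascent delta centered at C > 0, so the Laplace integral from 0 sees all of it
    return math.exp(-((x - C) / EPS) ** 2 / 2) / (EPS * math.sqrt(2 * math.pi))

def ddelta_eps(x):
    # exact derivative of the nascent delta (chain rule on the Gaussian)
    return -(x - C) / EPS ** 2 * delta_eps(x)

def laplace(g, s, hi=0.01, n=20000):
    # integral from 0 to hi of g(x) e^{-s x}, midpoint rule
    # (the spike sits well inside [0, hi], so truncation is harmless)
    h = hi / n
    return h * sum(g((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n))

s = 3.0
print(laplace(delta_eps, s))   # close to 1: L[delta] = 1
print(laplace(ddelta_eps, s))  # close to s: L[delta'] = s
```

As $\varepsilon\to 0$ and $C\to 0^+$ the two values converge to $1$ and $s$; the small discrepancies you see come from the spike being centered at $C$ rather than exactly at $0$.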