$d\lambda$ in the convolution integral vs its counterpart the convolution sum


So for a bit of background: in my courses I have been presented with the convolution integral and then the convolution sum, with the following formulas:

Continuous case: $$x(t)*h(t) = \int_{-\infty}^{\infty} x(\lambda).h(t-\lambda)\ d\lambda $$ Discrete case:

$$x[n]*h[n] = \sum_{k = -\infty}^{\infty} x[k].h[n-k] $$

They differ by a "factor" of $d\lambda$, loosely speaking.
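(Just my own check, not anything from the course.) The discrete sum at least is easy to verify directly, and notably there is no width factor in it, since the index $k$ advances in unit steps:

```python
def conv_sum(x, h):
    """Discrete convolution of two finite sequences (zero outside their support).

    y[n] = sum over k of x[k] * h[n - k] -- each term is just a product,
    with no extra "width" factor, because k moves in steps of 1.
    """
    n_out = len(x) + len(h) - 1
    y = [0] * n_out
    for n in range(n_out):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

print(conv_sum([1, 2], [1, 1]))  # → [1, 3, 2]
```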

I put the quotes around the parts where teachers mention that this is "loosely speaking".

After doing convolution in both the discrete and continuous settings, I found the $d\lambda$ term to be noise in the equation, as I couldn't see intuitively what it was doing. That led me to question whether $d\lambda$ really is what it is taught to be in, for instance, coordinate-system contexts, where it is "a very small piece of $\lambda$".
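I did try to convince myself numerically (a sketch with signals of my own choosing, not from the course): take $x(t) = e^{-t}$ for $t \ge 0$ and $h$ the unit step, so that analytically $(x*h)(t) = 1 - e^{-t}$. The sum of $x(\lambda)h(t-\lambda)$ terms only matches the analytic answer if each term is weighted by the slice width $d\lambda$:

```python
import math

def x(t):
    # x(t) = exp(-t) for t >= 0, zero otherwise
    return math.exp(-t) if t >= 0 else 0.0

def h(t):
    # unit step
    return 1.0 if t >= 0 else 0.0

def conv(t, lam_min=-10.0, lam_max=10.0, dlam=1e-3):
    """Riemann-sum approximation of the convolution integral.

    Each term x(lam) * h(t - lam) is weighted by the slice width dlam;
    drop that factor and the sum blows up as dlam shrinks.
    """
    total = 0.0
    lam = lam_min
    while lam < lam_max:
        total += x(lam) * h(t - lam) * dlam
        lam += dlam
    return total

t = 2.0
print(conv(t))             # close to 1 - exp(-2) ≈ 0.8647
print(1 - math.exp(-t))
```

So numerically, at least, $d\lambda$ really does behave like "a very small piece of $\lambda$" — yet the usual intuition for convolution never seems to mention it.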

Am I correct to think of the convolution integral, loosely speaking, as:

$$x(t)*h(t) = \int_{\lambda = -\infty}^{\infty} x(\lambda).h(t-\lambda)\ $$

Because here it is clear exactly what is going on: we're not "multiplying by $\lambda$", which is supposed to be a dummy variable to begin with.

And what exactly are these $d\lambda$, $dt$, etc.?

In contexts such as describing the integral as the area,

$$Area = \int_{a}^{b} f(x) dx $$

It makes perfect sense there that we would multiply the value of the function by "small pieces of $x$".
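For example (my own toy check): the area under $f(x) = 3x^2$ on $[0, 1]$ is exactly $1$, and summing $f(x)\,dx$ over small slices of width $dx$ recovers it:

```python
def area(f, a, b, dx=1e-4):
    """Left Riemann sum: add up f(x) times a 'small piece of x'."""
    total, x = 0.0, a
    while x < b:
        total += f(x) * dx
        x += dx
    return total

print(area(lambda x: 3 * x * x, 0.0, 1.0))  # ≈ 1.0
```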

So what am I missing in all this ?