Suppose we have
$\int_{0}^{a} f(x)\delta (x-a)dx$
By definition,
$\int_{A}^{B} f(x)\delta (x-a)\,dx=\begin{cases} f(a), & \text{if } A<a<B \\ 0, & \text{otherwise}\end{cases}$
By this definition the interval of integration must not have $a$ at one of its endpoints, yet several times (in my engineering classes) I've seen people simply write
$\int_{0}^{a} f(x)\delta (x-a)dx=f(a)$
I wouldn't have noticed this if I weren't trying to solve
$\int_{0}^{a} f(x)\delta''(x-a)dx$
The result of the problem (I'm expanding $f(x)=x^4$ in a Fourier series) only comes out correct if I take
$\int_{0}^{a} f(x)\delta''(x-a)dx=f''(a)$
But that property of the Dirac delta is derived under the same assumption that $a$ is not an endpoint of the interval.
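(For reference, that property comes from integrating by parts twice, assuming $a$ lies strictly inside $(A,B)$ so that the boundary terms vanish:

$\int_{A}^{B} f(x)\delta''(x-a)\,dx = -\int_{A}^{B} f'(x)\delta'(x-a)\,dx = \int_{A}^{B} f''(x)\delta(x-a)\,dx = f''(a).$

With $a$ at an endpoint, the boundary terms are no longer obviously zero, which is exactly my concern.)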
So why do these still hold?
$\int_{0}^{a} f(x)\delta (x-a)dx=f(a)$
And
$\int_{0}^{a} f(x)\delta''(x-a)dx=f''(a)$
You are correct. The expressions are not well-defined. Therefore one often writes $\int_0^{a-}$ or $\int_0^{a+}$ to mark whether $a$ should be included or not.
Since you are talking about Fourier series, I assume that you have an $a$-periodic continuation of $\delta,$ i.e. $\sum_{k\in\mathbb Z} \delta(x-ka).$ Then $\delta$ is located at the end of the interval $[0,a],$ which is a problem. But you can fix this by translating it a small distance $\epsilon$ and letting $\epsilon\to0$ at the end. However, you will get the same result if you include $\delta$ at one end of the interval, but not at both.
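As a quick numerical sanity check (a sketch of the $\epsilon$-translation idea above, with all function names my own), one can approximate $\delta$ by a narrow Gaussian. With the spike centered exactly at the endpoint $a$, only half of its mass lies inside $[0,a]$, so the integral tends to $f(a)/2$; shifting the spike slightly into the interior recovers $f(a)$:

```python
import math

def delta_eps(x, eps):
    # Nascent delta: a normalized Gaussian of width eps,
    # which converges (as a distribution) to delta as eps -> 0.
    return math.exp(-(x / eps) ** 2 / 2) / (eps * math.sqrt(2 * math.pi))

def integrate(g, lo, hi, n=200_000):
    # Plain midpoint Riemann sum; accurate enough for this demo.
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

a = 1.0
f = lambda x: x ** 4  # f(a) = 1, f''(a) = 12

for eps in (0.1, 0.01, 0.001):
    # Spike centered exactly at the endpoint a:
    # only half the Gaussian's mass lies in [0, a].
    half = integrate(lambda x: f(x) * delta_eps(x - a, eps), 0.0, a)
    # Spike translated a few widths into the interior:
    # essentially all the mass is captured, giving ~f(a - 5*eps).
    inside = integrate(lambda x: f(x) * delta_eps(x - (a - 5 * eps), eps), 0.0, a)
    print(f"eps={eps}: endpoint -> {half:.4f}, interior -> {inside:.4f}")
```

For small `eps` the endpoint integral approaches $f(a)/2 = 0.5$ and the interior one approaches $f(a) = 1$, which is why the value of $\int_0^a$ genuinely depends on how you treat the endpoint.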