This is a more conceptual question than anything else, and I haven't found an answer to exactly my question in previous similar posts. In some sense, performing separation of variables on a linear differential equation turns it into an eigenvalue problem for some linear operator. If the spectrum of the operator is countable, then we can use the notion of an orthonormal basis in a Hilbert space and prove that the general solution can be written as a sum, over the different possible eigenvalues, of the solutions we obtained.
Suppose now that the spectrum were "continuous"; of course, I would then like to use an integral to sum over the different eigenvalues. Under suitable assumptions this approach is manifestly equivalent to taking the Fourier transform of the original equation and working down, but my question is the following: is there any rigorous way of justifying the backwards approach, i.e. solving the eigenvalue problem and then stating that a general solution should be written as an integral? Physicists use words like "continuous basis" and proceed by analogy with the discrete case (which works surprisingly well), but is there any way to justify this?
Start with the most basic case of a singular one-dimensional ordinary differential operator $$ Lf = -f''+qf,\;\;\;\; 0 \le x < \infty. $$ Assume that $q$ is continuous and real on $[0,\infty)$. You can always impose one linear endpoint condition at $0$ of the form $$ \Phi_{0}(f)=\cos\alpha\, f(0)+\sin\alpha\, f'(0) = 0. $$ You may or may not need an additional condition at $\infty$ in order to fully define a domain $\mathcal{D}(L)$ on which $L$ is selfadjoint. Physical problems should not require a condition at $\infty$ other than the natural restriction that $f \in L^{2}[0,\infty)$, but general mathematical problems may require a second condition $\Phi_{\infty}(f)=0$.
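For orientation, the two most familiar special cases of this endpoint condition are $$ \alpha = 0:\; f(0)=0 \;\text{(Dirichlet)},\qquad \alpha = \tfrac{\pi}{2}:\; f'(0)=0 \;\text{(Neumann)}. $$ With $q=0$ these lead to the Fourier sine and cosine transforms, respectively.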
For every $\lambda$, there is a unique solution $\varphi_{\lambda}$ of $-\varphi_{\lambda}''+q\varphi_{\lambda}=\lambda\varphi_{\lambda}$ such that $$ \varphi_{\lambda}(0)=\sin\alpha,\;\;\varphi_{\lambda}'(0)=-\cos\alpha, $$ which gives a normalized (generalized) eigenfunction satisfying $\Phi_{0}(\varphi_{\lambda})=0$. Then there is a unique Borel measure $\mu$ such that $$ f = \int_{-\infty}^{\infty}\left(\int_{0}^{\infty}f(y)\varphi_{\lambda}(y)dy\right) \varphi_{\lambda}(x)d\mu(\lambda), \\ Lf = \int_{-\infty}^{\infty}\left(\int_{0}^{\infty}f(y)\varphi_{\lambda}(y)dy\right)\lambda \varphi_{\lambda}(x)d\mu(\lambda). $$ The spectral measure $\mu$ is determined by the condition at $\infty$: there is a different measure for every possible condition at $\infty$, even though the eigenfunctions do not change. You also get Parseval's identity $$ \int_{-\infty}^{\infty}\left|\int_{0}^{\infty}f(y)\varphi_{\lambda}(y)dy\right|^{2}d\mu(\lambda) = \int_{0}^{\infty}|f(y)|^{2}dy. $$ The Fourier sine and cosine transforms, for example, can be derived from this general theory. In general, the measure $\mu$ may have discrete, singular continuous, and absolutely continuous components, so the integrals may reduce to discrete Fourier sums combined with Fourier integrals, and even allow for possible singular continuous spectrum (which really should not occur in physical problems). This gives rise to a unitary Fourier transform $\mathscr{F} : L^{2}[0,\infty)\rightarrow L^{2}_{\mu}(\sigma)$, where $\sigma$ is the spectrum of $L$, given by $$ (\mathscr{F}f)(\lambda) = \int_{0}^{\infty}f(t)\varphi_{\lambda}(t)dt, $$ and you end up with an explicit inversion integral as well. You can view the iterated integrals as an expansion along a continuous eigenfunction basis that has the appearance of orthogonality, in the same way that the ordinary Fourier integral expansion appears to be a continuous orthogonal expansion.
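To see concretely how a classical transform falls out of this, take the simplest case $q=0$ with the Dirichlet condition ($\alpha=0$), so $\varphi_{\lambda}(0)=0$, $\varphi_{\lambda}'(0)=-1$, and hence $$ \varphi_{\lambda}(x) = -\frac{\sin(\sqrt{\lambda}\,x)}{\sqrt{\lambda}}. $$ Substituting $\lambda = k^{2}$ in the expansion and comparing with the Fourier sine inversion formula $$ f(x) = \frac{2}{\pi}\int_{0}^{\infty}\left(\int_{0}^{\infty}f(y)\sin(ky)\,dy\right)\sin(kx)\,dk $$ shows that the spectral measure must be $$ d\mu(\lambda) = \frac{\sqrt{\lambda}}{\pi}\,d\lambda \quad\text{on } [0,\infty), $$ since $d\lambda = 2k\,dk$ and the two minus signs of $\varphi_{\lambda}$ cancel in the iterated integral. Here $\mu$ is purely absolutely continuous, and no condition at $\infty$ is needed beyond $f\in L^{2}$.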
Dirac almost surely modeled his notation on general Sturm-Liouville theory, which was known before Dirac's work in quantum mechanics. There's nothing like this theory for general selfadjoint operators, but this type of formalism works well for quantum problems where separation of variables reduces the operators to ordinary differential operators.
When working with two singular endpoints, you can break the problem at a regular point in between and then tie the two half-line problems together with a transition matrix.
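If you want to check the formalism numerically, here is a short sketch (my own illustration, not from the reference) for the simplest case $q=0$, $\alpha=0$, where $\varphi_{\lambda}(x)=-\sin(\sqrt{\lambda}\,x)/\sqrt{\lambda}$ and $d\mu = (\sqrt{\lambda}/\pi)\,d\lambda$, i.e. the Fourier sine transform in disguise. The test function $f(y)=y e^{-y}$ and the grid sizes are assumptions chosen for the illustration; the inner transform $\int_{0}^{\infty} y e^{-y}\sin(ky)\,dy = 2k/(1+k^{2})^{2}$ is a standard table integral.

```python
import numpy as np

def trapezoid(g, t, axis=-1):
    """Composite trapezoid rule along `axis` (avoids np.trapz, removed in NumPy 2)."""
    g = np.moveaxis(g, axis, -1)
    return np.sum((g[..., 1:] + g[..., :-1]) * np.diff(t) / 2.0, axis=-1)

# Spectral grid in k = sqrt(lambda); then d mu = (sqrt(lambda)/pi) d lambda = (2 k^2/pi) dk.
k = np.linspace(1e-6, 60.0, 6001)          # grid choices are assumptions for this demo
dmu_dk = 2.0 * k**2 / np.pi

# Test function f(y) = y e^{-y} (satisfies the Dirichlet condition f(0) = 0).
# Its coefficient int_0^inf f(y) phi_lambda(y) dy with phi_lambda(y) = -sin(k y)/k
# is -(1/k) * 2k/(1+k^2)^2 = -2/(1+k^2)^2  (standard table integral).
coeff = -2.0 / (1.0 + k**2) ** 2

# Reconstruct f from the expansion f(x) = int coeff(lambda) phi_lambda(x) d mu(lambda).
x = np.linspace(0.2, 5.0, 25)
phi_x = -np.sin(np.outer(k, x)) / k[:, None]
recon = trapezoid(coeff[:, None] * phi_x * dmu_dk[:, None], k, axis=0)
err = np.max(np.abs(recon - x * np.exp(-x)))

# Parseval: int |coeff|^2 d mu should equal int_0^inf f(y)^2 dy = 1/4.
lhs = trapezoid(coeff**2 * dmu_dk, k)
y = np.linspace(0.0, 40.0, 4001)
rhs = trapezoid((y * np.exp(-y)) ** 2, y)
print(err, lhs, rhs)   # err is small; lhs and rhs are both close to 1/4
```

The reconstruction error comes only from truncating and discretizing the spectral integral, and both sides of Parseval land on $1/4$, which is the content of the unitarity of $\mathscr{F}$ in this special case.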
Reference: Coddington and Levinson, Theory of Ordinary Differential Equations. Look at the chapters on Singular Sturm-Liouville problems.