Consider the following Sturm-Liouville boundary value problem:
Given parameters $c > 0$ and $\beta > 0$, let $y=y(x)$ for $0 \leq x \leq c$. We have
$$y''+ \lambda y=0$$
with boundary conditions
$$y'(0)=\beta y(0),$$
$$y'(c)=\beta y(c).$$
To start, I want to show that this problem has exactly one negative eigenvalue $\lambda_0$. Furthermore, this eigenvalue will be independent of the parameter $c > 0$.
Considering the negative eigenvalues $\lambda < 0$, let $\lambda = -\alpha^2$ with $\alpha > 0$.
This is where I run into trouble. I am not sure what shape we should assume the general solution $y(x)$ to have. I know of Sturm-Liouville cases where the general solution used for the negative eigenvalues is assumed to be $y(x)=A\cosh(\alpha x)+B\sinh(\alpha x)$. I also know of Sturm-Liouville problems where the general solution used for the negative eigenvalues is assumed to be $y(x)=Ae^{\alpha x}+Be^{-\alpha x}$ (I can provide specific details upon request). So it seems that, depending on the boundary conditions, we have to assume $y(x)$ to have a certain shape. (I know of more "shapes" than I listed here; I could provide more details if interested.)
So how can I figure out what I should assume $y(x)$ to be in this case? Once I have done so and computed $y(0)$, $y(c)$, $y'(0)$, $y'(c)$, where do I go after applying the boundary conditions? What exactly am I looking to do to show that there is exactly one negative eigenvalue $\lambda_0$, independent of $c>0$? After that, how can I go about finding an associated eigenfunction $y_0(x)$?
In a similar manner, I am interested in other eigenvalues. How can I determine whether or not $\lambda = 0$ is an eigenvalue? How do I determine its associated eigenfunction? The same goes for positive eigenvalues and associated eigenfunctions.
The boundary conditions in this problem are what make it particularly difficult; nothing really "cancels" out nicely with what I tried for $y(x)$. Do I have to assume a different form of $y(x)$ for each case $\lambda < 0$, $\lambda = 0$, and $\lambda > 0$? Why is this? Does this change the manner in which the associated eigenfunctions are found?
In general, for simple linear ordinary differential equations, a standard ansatz (together with the uniqueness theorem for ODEs) tells us that solutions of the form $$y(x)=e^{kx}$$ for complex $k$ (associated with the eigenvalues) span the space of solutions. So all you have to do is plug it in and see what that tells you about $k$. You get $$(k^2+\lambda)y=0,$$ or $k^2=-\lambda$.
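Incidentally, this is why the various "shapes" you listed are interchangeable: $\cosh$ and $\sinh$ are just linear combinations of $e^{\alpha x}$ and $e^{-\alpha x}$, so either pair spans the same solution space. A quick numerical sanity check of the change of basis, with made-up values for $\alpha$, $A$, $B$:

```python
import math

# Arbitrary illustrative values (not from the problem itself)
alpha, A, B = 1.3, 2.0, -0.7

# A*cosh(ax) + B*sinh(ax)  =  C*e^{ax} + D*e^{-ax}  with this change of basis:
C, D = (A + B) / 2, (A - B) / 2

for x in (0.0, 0.4, 1.1, 2.7):
    hyp = A * math.cosh(alpha * x) + B * math.sinh(alpha * x)
    exp = C * math.exp(alpha * x) + D * math.exp(-alpha * x)
    assert abs(hyp - exp) < 1e-9 * max(1.0, abs(hyp))
```

So the choice of form is a matter of which basis makes the boundary conditions easiest, not a mathematical necessity.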
Now apply the boundary conditions to $y(x)=e^{kx}$. Each of them yields the same equation, $$k=\beta,$$ and combined with $k^2=-\lambda$ you conclude that $\lambda_0=-\beta^2$ is the unique negative eigenvalue for this problem, with associated eigenfunction $y_0(x)=e^{\beta x}$. Notice that neither depends on $c$.
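If you want to double-check this without guessing, write the general negative-eigenvalue solution as $y(x)=Ae^{\alpha x}+Be^{-\alpha x}$ with $\lambda=-\alpha^2$ and impose both boundary conditions as a $2\times 2$ homogeneous linear system in $(A,B)$; a nontrivial solution exists exactly where its determinant vanishes, which for $\alpha>0$ happens only at $\alpha=\beta$. A small numerical sketch, with made-up sample values $\beta=0.8$, $c=2.5$:

```python
import math

def bc_determinant(alpha, beta, c):
    """Determinant of the 2x2 system obtained by imposing
    y'(0) = beta*y(0) and y'(c) = beta*y(c) on
    y(x) = A*e^{alpha x} + B*e^{-alpha x}  (so lambda = -alpha^2)."""
    # Row from y'(0) = beta*y(0):  (alpha - beta)*A + (-alpha - beta)*B = 0
    # Row from y'(c) = beta*y(c):  (alpha - beta)*e^{alpha c}*A
    #                              + (-alpha - beta)*e^{-alpha c}*B = 0
    return ((alpha - beta) * (-alpha - beta) * math.exp(-alpha * c)
            - (-alpha - beta) * (alpha - beta) * math.exp(alpha * c))

beta, c = 0.8, 2.5  # made-up sample parameters

# The determinant vanishes at alpha = beta (i.e. lambda_0 = -beta^2) ...
assert abs(bc_determinant(beta, beta, c)) < 1e-12
# ... and nowhere else on a sample of other alpha > 0:
assert all(abs(bc_determinant(a, beta, c)) > 1e-6 for a in (0.1, 0.5, 1.0, 2.0))

# The eigenfunction y0(x) = e^{beta x} satisfies both boundary conditions:
y0 = lambda x: math.exp(beta * x)
dy0 = lambda x: beta * math.exp(beta * x)
assert abs(dy0(0) - beta * y0(0)) < 1e-12
assert abs(dy0(c) - beta * y0(c)) < 1e-9
```

Algebraically the determinant factors as $(\alpha-\beta)(\alpha+\beta)\left(e^{\alpha c}-e^{-\alpha c}\right)$, which makes the claim transparent: for $\alpha, c>0$ the last two factors are strictly positive, so the only root is $\alpha=\beta$, independent of $c$.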
As for the other eigenvalues, the same ansatz works. For $\lambda>0$ the roots $k=\pm i\sqrt{\lambda}$ are purely imaginary, the exponentials combine into $y(x)=A\cos(\sqrt{\lambda}\,x)+B\sin(\sqrt{\lambda}\,x)$, and applying the boundary conditions reduces to $\sin(\sqrt{\lambda}\,c)=0$, giving the positive eigenvalues $\lambda_n=(n\pi/c)^2$ for $n=1,2,\dots$ (these, unlike $\lambda_0$, do depend on $c$). For the fun of it, investigate the zero eigenvalue: $$y''=0\implies y(x)=ax+b$$ for real $a$ and $b$. The first boundary condition gives us $a=\beta b$, and the second one $a=\beta(ac+b)=\beta b (\beta c+1)$. So unless $\beta c+1=1$, i.e. $\beta c=0$, there is no nontrivial solution; but $c$ and $\beta$ are both strictly positive, so zero is not an eigenvalue.
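The $\lambda=0$ case is just a $2\times 2$ homogeneous linear system in $(a,b)$ whose determinant is $-\beta^2 c\neq 0$, forcing the trivial solution. A tiny sketch, again with made-up sample values:

```python
beta, c = 0.8, 2.5  # arbitrary sample values, both strictly positive

# Boundary conditions applied to y = a*x + b:
#   y'(0) = beta*y(0):  a - beta*b = 0
#   y'(c) = beta*y(c):  a = beta*(a*c + b), i.e. (1 - beta*c)*a - beta*b = 0
# Determinant of the coefficient matrix [[1, -beta], [1 - beta*c, -beta]]:
det0 = 1 * (-beta) - (-beta) * (1 - beta * c)

assert abs(det0 - (-beta**2 * c)) < 1e-12  # det = -beta^2 * c
assert det0 != 0  # only a = b = 0 works, so 0 is not an eigenvalue
```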
In general, trying to guess the eigenfunctions and the eigenvalues is really hard. There is a robust branch of functional analysis (spectral theory) that dedicates itself, among other things, to that task.