I have a boundary value problem: \begin{equation*} \begin{cases} L(y) = \ddot{y} + A(t)\dot{y} + B(t)y = f(t) \\ \gamma(y) = y(t_0) = 0 \\ \Gamma(y) = y(t_1) = 0 \end{cases} \end{equation*}
And I need to prove the following theorem:

A necessary and sufficient condition for the solvability of the inhomogeneous boundary
value problem is the orthogonality of the right-hand side
of the equation to the eigenfunction $y_2(t)$:
\begin{equation}
\int_{t_0}^{t_1} f(t)\,y_2(t)\,dt = 0
\end{equation}
In this case, the solution is represented through the generalized Green's function in the form:
\begin{equation}
y(t) = \int_{t_0}^{t_1} G_0(t,\xi)f(\xi) \,d\xi
\end{equation}
and it is orthogonal to $y_2(t)$.
Can you tell me how to prove it?
Have you seen this kind of theorem before?
What are $y_1$ and $y_2$?
During the solution, the question arises of whether the homogeneous boundary value problem ($f(t) = 0$) has a solution. In the general case, the homogeneous equation has two linearly independent solutions
$y_1(t)$ and $y_2(t)$, which solve the Cauchy problems:
\begin{equation*}
\begin{cases}
L(y_1(t)) = 0, &\text{$t \in [t_0,t_1]$} \\
y_1(t_0) = 1,\\
y_1'(t_0) = 0,
\end{cases}
\qquad
\begin{cases}
L(y_2(t)) = 0, &\text{$t \in [t_0,t_1]$} \\
y_2(t_0) = 0,\\
y_2'(t_0) = 1,
\end{cases}
\end{equation*}
Then the general solution of the homogeneous equation will have the form $$y(t) = C_1y_1(t) + C_2y_2(t)$$
The constants $C_1$ and $C_2$ must be determined from the boundary conditions. Substituting the general solution into the boundary conditions gives $$\gamma(y) = C_1 = 0,$$ $$\Gamma(y) = C_2y_2(t_1) = 0.$$ The determinant of this system is $$D = y_2(t_1).$$
In my scientific work it is assumed that $D = 0$, i.e. $y_2(t_1) = 0$, so $y_2$ itself satisfies both boundary conditions. The homogeneous boundary value problem then has the nontrivial solution $$y(t) = C_2y_2(t)$$
The constant $C_2$ is usually fixed by the normalization $$\int_{t_0}^{t_1} y_2^2(t)\,dt = 1$$ or it is simply set to one, i.e. $C_2 = 1$.
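To make this concrete, here is a numerical sketch for an assumed example (my own choice, not part of the original problem): $\ddot y + y = 0$ on $[0,\pi]$, where $A(t)=0$, $B(t)=1$, the exact solutions are $y_1=\cos t$, $y_2=\sin t$, and indeed $D = y_2(t_1) = \sin\pi = 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: L(y) = y'' + y on [t0, t1] = [0, pi],
# i.e. A(t) = 0, B(t) = 1.  Exact: y1 = cos t, y2 = sin t.
t0, t1 = 0.0, np.pi

def rhs(t, z):
    y, yp = z
    return [yp, -y]          # y'' = -A(t) y' - B(t) y with A = 0, B = 1

# Cauchy problems: y1(t0)=1, y1'(t0)=0  and  y2(t0)=0, y2'(t0)=1
sol1 = solve_ivp(rhs, (t0, t1), [1.0, 0.0], rtol=1e-10, atol=1e-12)
sol2 = solve_ivp(rhs, (t0, t1), [0.0, 1.0], rtol=1e-10, atol=1e-12)

D = sol2.y[0, -1]            # D = y2(t1)
print(D)                     # ~ 0: the homogeneous BVP has the solution y2
```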
Your derivation stops making sense at the point where you use constants for the coefficients in the linear combination. You have to use variation of parameters, so $C_1,C_2$ are themselves functions that satisfy the system $$ C_1'y_1+C_2'y_2=0, \\ C_1'y_1'+C_2'y_2'=f. $$ So the first boundary condition only gives $C_1(t_0)=0$; the second boundary condition remains in full: $C_1(t_1)y_1(t_1)+C_2(t_1)y_2(t_1)=0$.
The general idea is that if you vary $C_2(t_0)$, then the pair $(C_1(t_1),C_2(t_1))$ traces out a line, and the same is then true for the pair $(y(t_1),y'(t_1))$. The case to avoid is that this second line is vertical but is not the coordinate axis itself, so that $y(t_1)=0$ can never be satisfied.
The variation-of-parameters result is that $$ W[y_1,y_2](t)\begin{pmatrix}C_1'(t)\\C_2'(t)\end{pmatrix}=\begin{pmatrix}-y_2(t)\\y_1(t)\end{pmatrix}f(t) $$ so that $$ C_1(t_1)=-\int_{t_0}^{t_1}\frac{y_2(t)f(t)}{W(t)}\,dt, \\ C_2(t_1)=C_2(t_0)+\int_{t_0}^{t_1}\frac{y_1(t)f(t)}{W(t)}\,dt, \\~\\ y(t_1)=C_2(t_0)y_2(t_1)+\int_{t_0}^{t_1}\frac{y_1(t)y_2(t_1)-y_1(t_1)y_2(t)}{y_1(t)y_2'(t)-y_1'(t)y_2(t)}f(t)\,dt $$ As is visible, the coefficient of $C_2(t_0)$ in the formula for $y(t_1)$ is $y_2(t_1)$. If that is non-zero, then a solution for $C_2(t_0)$ exists. If it is zero, then as usual the constant term also has to be zero, and $C_2(t_0)$ can take any value. If the constant term is zero, then the solution $C_2(t_0)=0$ exists independently of the value of $y_2(t_1)$.
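A quick numerical check of these formulas, under the same assumed example $\ddot y + y = f$ on $[0,\pi]$ (so $y_1=\cos t$, $y_2=\sin t$, $W\equiv 1$): the constant term in the formula for $y(t_1)$ vanishes exactly when $\int_{t_0}^{t_1} f\,y_2\,dt=0$.

```python
import numpy as np
from scipy.integrate import quad

# Assumed example: y'' + y = f on [0, pi]; y1 = cos, y2 = sin, W = 1.
t0, t1 = 0.0, np.pi
y1, y2 = np.cos, np.sin

def constant_term(f):
    # integral of (y1(t) y2(t1) - y1(t1) y2(t)) f(t) / W(t) dt, with W = 1
    g = lambda t: (y1(t) * y2(t1) - y1(t1) * y2(t)) * f(t)
    return quad(g, t0, t1)[0]

def orth(f):
    # the solvability integral: ∫ f y2 dt
    return quad(lambda t: f(t) * y2(t), t0, t1)[0]

f_good = lambda t: np.sin(2 * t)   # orthogonal to sin t on [0, pi]
f_bad  = lambda t: np.sin(t)       # not orthogonal

print(orth(f_good), constant_term(f_good))   # both ~ 0: solvable
print(orth(f_bad),  constant_term(f_bad))    # both nonzero: no solution
```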
The Wronskian can be pulled out of the integrals if it is constant, which by Abel's formula ($W' = -A\,W$) requires $A(t)=0$; with the given initial conditions $W(t_0)=1$, so then $W\equiv 1$.
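Putting it all together for the same assumed example ($A=0$, so $W\equiv 1$): with $f(t)=\sin 2t$, which is orthogonal to $y_2=\sin t$, variation of parameters with $C_1(t_0)=C_2(t_0)=0$ yields a solution satisfying both boundary conditions, and projecting out its $y_2$-component gives the solution orthogonal to $y_2$ (here it comes out as $-\tfrac13\sin 2t$), matching the claim about the generalized Green's function.

```python
import numpy as np
from scipy.integrate import quad

# Assumed example: y'' + y = sin(2t) on [0, pi]; y1 = cos, y2 = sin, W = 1,
# and f = sin(2t) is orthogonal to y2 = sin t, so the BVP is solvable.
t0, t1 = 0.0, np.pi
f = lambda t: np.sin(2 * t)

def C1(t):
    return -quad(lambda s: np.sin(s) * f(s), t0, t)[0]   # C1' = -y2 f / W

def C2(t):
    return quad(lambda s: np.cos(s) * f(s), t0, t)[0]    # C2' = y1 f / W, C2(t0) = 0

def y(t):
    return C1(t) * np.cos(t) + C2(t) * np.sin(t)

# Both boundary conditions hold because f is orthogonal to y2:
print(y(t0), y(t1))                                      # both ~ 0

# Projecting out the y2-component gives the solution orthogonal to y2;
# for this example it is -sin(2t)/3:
coef = (quad(lambda t: y(t) * np.sin(t), t0, t1)[0]
        / quad(lambda t: np.sin(t) ** 2, t0, t1)[0])
y_G = lambda t: y(t) - coef * np.sin(t)
print(y_G(np.pi / 4))                                    # ~ -1/3
```

Subtracting `coef * sin(t)` only removes a solution of the homogeneous problem, so `y_G` still solves the BVP while also satisfying $\int y_G\,y_2\,dt = 0$.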