I am considering the Green's function approach to solving linear second-order ODEs, although my question is, I think, applicable to solving ODEs in general.
In my classes, we considered the initial value and two-point boundary value problems separately. For the initial value problem, we had an example where the initial conditions given were $y(0)=0, y'(0)=0$. It was then stated that the Green's function $G(x, \eta)$ is defined for $x\geq 0$ and $\eta \geq 0$, so presumably we can only solve the differential equation for $y(x)$ with $x\geq 0$. Now I cannot motivate why we have this constraint. First of all, surely it is arbitrary whether we solve for $x\leq 0$ or $x \geq 0$, if a side must be chosen. (I understand that if the independent variable is time, we might take $t\geq 0$ simply because this is the only region we are interested in, but this doesn't explain at all why it might be the only region on which we can validly solve the equation. And surely if we can solve for $x\geq 0$ or for $x\leq 0$, then there is nothing stopping us from giving a solution for all of $x$!)
For the two-point boundary value problem, we only found a solution for $y$ on the given interval, and of course the same interval applied to the values of $x$ and $\eta$ over which $G$ was defined.
Now in the two-point boundary value problem case, I can partly motivate this from the uniqueness results for Dirichlet, Neumann, or Cauchy boundary conditions, although I would think that for some physical problems I might also be able to use infinity as a boundary point, owing to the physical constraints on the solution. In that case, it would seem that I could solve for all $x$, although I don't think I would be able to do that given conditions at $\pm \infty$ alone?
In any case, I am not at all sure why we have these constraints on the interval on which we find a solution to the ODE.
It's not that we cannot deal with both $x\le 0$ and $x\ge 0$; it's that we choose not to. A lot of statements in textbooks could be made in greater generality, but the author chooses the setting in which the ideas come across best.
The idea of Green's function $G(x, \eta)$ is that it measures the amount by which the source term (right-hand side of the equation, say $y''+y=f$) at $\eta$ affects the solution at $x$. So, to what degree does the value $f(3)$ affect the value $y(-5)$? It has no effect at all. The boundary conditions at $0$ effectively disconnect the real line into two components $(0,\infty)$ and $(-\infty, 0)$, and the ODE is solved separately on each component. There is no interaction between them.
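For concreteness, take the example equation $y''+y=f$ with the initial conditions $y(0)=y'(0)=0$ on $x\ge 0$. Variation of parameters (equivalently, Duhamel's principle) gives the causal Green's function explicitly:

```latex
% Green's function for y'' + y = f,  y(0) = y'(0) = 0,  on x >= 0
G(x,\eta) =
\begin{cases}
\sin(x-\eta), & 0 \le \eta \le x,\\[2pt]
0, & \eta > x,
\end{cases}
\qquad
y(x) = \int_0^\infty G(x,\eta)\, f(\eta)\, d\eta
     = \int_0^x \sin(x-\eta)\, f(\eta)\, d\eta .
```

Note that $G$ already vanishes for $\eta > x$ within the half-line: the source at $\eta$ only influences the solution at later points $x\ge\eta$. Extending the formula across $0$ to points of the opposite sign would add further vanishing regions of the same kind.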
We could express this lack of interaction by writing $G(x, \eta)=0$ when $x$ and $\eta$ have opposite signs. But what would be the point? Instead of one formula for $G(x, \eta)$ we would have to write a case-defined formula for $x\ge 0, \eta\ge 0$ and $x\ge 0, \eta\le 0$ and $x\le 0, \eta\ge 0$ and $x\le 0, \eta\le 0$ — achieving absolutely nothing new.
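As a numerical sanity check (a minimal sketch using NumPy/SciPy; the source term $f(\eta)=e^{-\eta}$ and the test point $x=2$ are arbitrary choices for illustration), one can write $G$ for $y''+y=f$, $y(0)=y'(0)=0$ with its vanishing region made explicit, and verify that the resulting integral satisfies the ODE:

```python
import numpy as np
from scipy.integrate import quad

def G(x, eta):
    """Causal Green's function for y'' + y = f with y(0) = y'(0) = 0.

    The vanishing region is written out explicitly: the source at eta
    influences the solution at x only when 0 <= eta <= x.
    """
    return np.sin(x - eta) if 0.0 <= eta <= x else 0.0

def f(eta):
    # an arbitrary smooth source term, chosen for illustration
    return np.exp(-eta)

def y(x):
    # y(x) = integral of G(x, eta) f(eta) over eta >= 0; the integrand
    # vanishes for eta > x, so integrating over [0, x] suffices
    val, _ = quad(lambda eta: G(x, eta) * f(eta), 0.0, max(x, 0.0),
                  epsabs=1e-12, epsrel=1e-12)
    return val

# G carries no interaction between opposite signs of x and eta
print(G(2.0, -1.0), G(-1.0, 2.0))   # 0.0 0.0

# y satisfies the ODE: check the residual y'' + y - f at x = 2
# with a central second difference
h, x0 = 1e-3, 2.0
ypp = (y(x0 + h) - 2.0 * y(x0) + y(x0 - h)) / h**2
print(abs(ypp + y(x0) - f(x0)) < 1e-4)   # True
```

The conditional in `G` is exactly the case-defined bookkeeping described above; writing it down changes nothing about the solution on $x\ge 0$.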
A lot of facts, like Stokes / divergence theorems, could be stated for general open sets (not necessarily connected) but we stick to connected sets because the added generality adds nothing new — it's just repeating what we already know, for each component.
And so, we study Green's functions for $x,\eta\in \Omega$, where $\Omega$ is a connected open set, such as $(0,\infty)$, with some boundary where boundary conditions are prescribed (the point $\{0\}$ qualifies as the boundary, although this is really an initial value problem).