I'm required to solve the following first-order ODE:
$$y' = \frac{2\sqrt{y}}{3}, \qquad y(0) = 0.$$
So I chose the zeroth approximation to be the constant zero function, i.e. $u_0(x) = 0$, which makes all successive approximations identically zero. This is not surprising, since $y = 0$ is a solution of the ODE above — but so is $y = \frac{x^2}{9}$, which is obtained by direct integration (the equation is separable).
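To make sure this wasn't an arithmetic slip on my part, here's a quick numerical sketch of the iteration (trapezoid-rule quadrature on a grid; the interval $[0,1]$ and the grid size are arbitrary choices of mine):

```python
# Sketch: Picard iteration u_{k+1}(x) = y0 + integral_0^x f(t, u_k(t)) dt
# for f(x, y) = 2*sqrt(y)/3, discretized with the trapezoid rule.
import math

def picard_step(u, xs, f):
    """One Picard iterate: cumulative trapezoid integral of f(t, u(t))."""
    vals = [f(x, ux) for x, ux in zip(xs, u)]
    out = [0.0]                                # y0 = y(0) = 0
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        out.append(out[-1] + 0.5 * h * (vals[i - 1] + vals[i]))
    return out

f = lambda x, y: 2.0 * math.sqrt(max(y, 0.0)) / 3.0
xs = [i / 1000 for i in range(1001)]           # grid on [0, 1]

u = [0.0] * len(xs)                            # u_0(x) = 0
for _ in range(10):
    u = picard_step(u, xs, f)
print(max(abs(v) for v in u))                  # prints 0.0: every iterate vanishes
```

Since $f(x, 0) = 0$, each integral vanishes and every iterate is identically zero, exactly as computed by hand.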
Why am I missing this solution? Does Picard's method not guarantee finding every solution of the ODE? Is it because of my choice of the zeroth approximation?
In several places, the constant function is used as the zeroth approximation, which motivated me to do the same here — but I ended up in a mess.
For a different choice of zeroth approximation, say $u_0(x) = x$, I end up with a different sequence of approximations (weird), which isn't the expected solution. This is probably fine, since $u_0(x) = x$ doesn't satisfy the constraint $y \geq 0$ imposed by the square root.
Moving to a better choice, say $u_0(x) = x^2$ — this actually works! The approximations converge to $y = \frac{x^2}{9}$, which is the desired result.
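One way to see this convergence: for $x \ge 0$, each iterate starting from $u_0(x) = x^2$ stays of the form $c_k x^2$, since $\int_0^x \frac{2}{3}\sqrt{c_k}\,t\,dt = \frac{\sqrt{c_k}}{3}x^2$. So the whole iteration collapses to the scalar recursion $c_{k+1} = \sqrt{c_k}/3$, whose positive fixed point is $c = 1/9$. A quick sketch:

```python
# Sketch: on x >= 0, the Picard iterates from u_0(x) = x^2 are c_k * x^2,
# and the coefficients obey the scalar map c_{k+1} = sqrt(c_k)/3.
import math

c = 1.0                       # u_0(x) = x^2  ->  c_0 = 1
for _ in range(60):
    c = math.sqrt(c) / 3.0    # c_{k+1} = sqrt(c_k) / 3
print(c)                      # -> approximately 1/9
```

The positive fixed point satisfies $c = \sqrt{c}/3$, i.e. $c = 1/9$, matching $y = \frac{x^2}{9}$.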
Why does this happen, and how does the choice of the zeroth approximation affect the algorithm? Do we have to pay attention to which approximation we start with every time, or are there certain cases where Picard's method might land us in a mess?
Thanks in advance!
P.S. This is the algorithm I'm using to compute the $k^{\text{th}}$ approximation $u_k(x)$, given $y' = f(x, y)$ and $y_0 = y(x_0)$:
$$u_k(x) = y_0 + \int_{x_0}^{x} f\bigl(t,\, u_{k-1}(t)\bigr)\,dt.$$
If the conditions of Picard's theorem are satisfied (in particular, $f$ Lipschitz continuous in $y$), then choosing $u_0(x) = y_0$ yields a sequence that converges to a solution of the initial value problem. This follows from Banach's fixed point theorem. Moreover, in that case the solution is unique on some sufficiently small interval around $x_0$.
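For contrast, here's a sketch of a well-behaved case (my own choice of example, not from the question): $f(x, y) = y$ is Lipschitz in $y$, so starting from the constant $u_0(x) = y_0 = 1$ the iterates converge to the unique solution $y = e^x$. The interval and grid below are arbitrary:

```python
# Sketch: Picard iteration for y' = y, y(0) = 1, where f(x, y) = y is
# Lipschitz in y.  The iterates are the partial sums of the exponential
# series and converge to e^x (trapezoid-rule quadrature on a grid).
import math

def picard_step(u, xs, f, y0):
    vals = [f(x, ux) for x, ux in zip(xs, u)]
    out = [y0]
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        out.append(out[-1] + 0.5 * h * (vals[i - 1] + vals[i]))
    return out

xs = [i / 1000 for i in range(1001)]        # grid on [0, 1]
u = [1.0] * len(xs)                         # u_0(x) = y_0 = 1
for _ in range(25):
    u = picard_step(u, xs, lambda x, y: y, 1.0)
print(u[-1], math.e)                        # u(1) is close to e
```

Here the contraction guaranteed by the Lipschitz condition makes the starting guess irrelevant: any $u_0$ is driven to the same unique solution.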
The function you're dealing with here does not satisfy those conditions: $f(x, y) = \frac{2\sqrt{y}}{3}$ is not Lipschitz in $y$ near $y = 0$, so there is no guarantee that the approximations converge to a unique solution.
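Concretely, the Lipschitz condition fails at $y = 0$: the difference quotient $\frac{|f(x,y) - f(x,0)|}{|y - 0|} = \frac{2}{3\sqrt{y}}$ is unbounded as $y \to 0^+$. A quick numerical check:

```python
# Sketch: the difference quotient |f(x,y) - f(x,0)| / |y - 0| = 2/(3*sqrt(y))
# blows up as y -> 0+, so f is not Lipschitz in y on any strip containing y = 0.
import math

f = lambda y: 2.0 * math.sqrt(y) / 3.0
for y in (1e-2, 1e-4, 1e-6, 1e-8):
    print(y, f(y) / y)    # the would-be Lipschitz constant grows without bound
```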
In more general terms, when you do fixed point iteration, $x_0 \to x_1 = f(x_0) \to x_2 = f(x_1) \to \cdots$, the iteration will converge to a fixed point of $f$ under certain conditions. If $f$ has multiple attractive fixed points, the initial point determines which fixed point the iteration converges to. (In general, it's not easy to figure out which starting points go where.)
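A standalone illustration of that dependence (my own example, not tied to this ODE): Newton's method for $x^2 - 1 = 0$ iterates $g(x) = \frac{1}{2}\left(x + \frac{1}{x}\right)$, which has two attracting fixed points, $+1$ and $-1$, and the sign of the starting point decides which one the iteration lands on:

```python
# Sketch: g(x) = (x + 1/x)/2 (Newton's map for x^2 - 1) has two attracting
# fixed points, +1 and -1.  The basin of attraction is simply the sign of x0.
def g(x):
    return (x + 1.0 / x) / 2.0

def iterate(x, n=50):
    for _ in range(n):
        x = g(x)
    return x

print(iterate(0.37))    # positive start -> converges to +1
print(iterate(-4.2))    # negative start -> converges to -1
```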