I have implemented an almost plain-vanilla Newton-Raphson algorithm (in R) to find the MLE estimates of 3 parameters in a log-likelihood function. When I test the algorithm with simulated data it does pretty well, PROVIDED that the starting values for all 3 parameters are below the actual values (the ones I used to simulate the data). As soon as even one of the starting values is larger than its actual value, the algorithm goes crazy and ends up giving totally wrong estimates.
My question is: where is the problem? Is it a mathematical issue, i.e. some known weakness of the Newton-Raphson method, or is it a coding/implementation problem?
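To illustrate the kind of behaviour I mean, here is a toy one-parameter example (a Cauchy location MLE, not my actual model; the data here are just points clustered around 5) where the same vanilla Newton-Raphson iteration converges or blows up depending only on the starting value:

```r
## Toy example, NOT my actual likelihood: MLE of the location parameter of a
## Cauchy model. Its log-likelihood is concave only near the bulk of the data;
## out in the tails it is convex, so a Newton step taken there moves AWAY
## from the maximum instead of toward it.
set.seed(1)
x <- rnorm(50, mean = 5)  # data clustered around 5 (normal, to keep points bounded)

## first and second derivatives of the Cauchy log-likelihood in theta
score <- function(theta) sum(2 * (x - theta) / (1 + (x - theta)^2))
hess  <- function(theta) sum(2 * ((x - theta)^2 - 1) / (1 + (x - theta)^2)^2)

## plain undamped Newton-Raphson, same idea as my implementation
newton <- function(theta, iter = 50) {
  for (i in seq_len(iter)) theta <- theta - score(theta) / hess(theta)
  theta
}

newton(4)   # starting near the true value: settles close to 5
newton(20)  # starting far above it: each step roughly doubles the distance
```

With the far-away start, every data point sits in the convex region of the log-likelihood, the second derivative is positive instead of negative, and the iteration runs off to a huge value rather than returning to 5.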