This question is mostly related to programming issues, but I want to understand what is going on underneath. Suppose I have a likelihood function in which the number of parameters is proportional to the number of observations. That is, to find the best parameters for my model I must maximize this function jointly with respect to all of them. However, once the number of observations is big enough (say, $30$, which is still very small from a statistical perspective), all the optimization algorithms I have tried (I work mostly in R) fail in this setting (see https://stats.stackexchange.com/questions/182179/measurement-errors for an example).
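To make the setting concrete, here is a minimal sketch of the kind of model I have in mind. This is a hypothetical Neyman–Scott-style example (one nuisance mean per pair of observations), written in Python with `scipy.optimize` rather than R purely for illustration; my actual model differs, but it has the same structure of one extra parameter per observation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 30                                    # number of groups -> n + 1 parameters
true_sigma = 1.0
mu = rng.normal(0.0, 2.0, size=n)         # one nuisance mean per group
y = rng.normal(mu[:, None], true_sigma, size=(n, 2))  # 2 replicates per group

def negloglik(theta):
    # theta = (mu_1, ..., mu_n, log_sigma): dimension grows with the data
    m, log_s = theta[:-1], theta[-1]
    s2 = np.exp(2.0 * log_s)              # optimize log-sigma to keep sigma > 0
    resid = y - m[:, None]
    return 0.5 * (y.size * np.log(2.0 * np.pi * s2) + np.sum(resid**2) / s2)

# start at the per-group sample means, a natural initial guess
theta0 = np.concatenate([y.mean(axis=1), [0.0]])
res = minimize(negloglik, theta0, method="BFGS")

mu_hat = res.x[:-1]
sigma2_hat = np.exp(2.0 * res.x[-1])
```

In this toy case the joint MLE is available in closed form ($\hat\mu_i$ is the per-group mean, $\hat\sigma^2$ the average squared residual), which also exposes the classic statistical pathology of such models: $\hat\sigma^2$ converges to $\sigma^2/2$ rather than $\sigma^2$ as $n$ grows, because the number of nuisance parameters grows with the sample.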
The question is: how does one optimize the likelihood function in this case, and is it even possible? Or should one use another approach that does not require such a high-dimensional optimization (for example, eliminating the nuisance parameters analytically before optimizing the rest)?