I have $n$ observations of a variable $X$ and I want to estimate the parameters $a$ and $\lambda$ of a gamma distribution ($a$ is better known as $\alpha$ and $\lambda$ as $\beta$, but this is how I was taught) that fits the given sample. I used two methods to find these parameters: the method of moments and the maximum likelihood method. The problem is that the estimates I get from MLM are about twice as large as the ones I get from MM. I would understand if they were not exactly the same, but they differ almost exactly by a factor of 2 (numerically it is 1.996...).
Also, if you compare the $a$'s and $\lambda$'s, you see that the two fits produce quite different PDF shapes. Red is the one from MM, green the one from MLM.
So here are my calculations, in case someone can spot an error; I could not.
Method of moments
For the gamma distribution we have $\text{E}(X) = \frac{a}{\lambda}$ and $\text{E}(X^2) = \frac{a(a+1)}{\lambda^2}$. On the other hand, the corresponding sample moments are $\frac{1}{n} \sum_{i=1}^n x_i = \bar{x}$ and $\frac{1}{n} \sum_{i=1}^n x_i^2$. Equating population and sample moments and solving, we get \begin{equation} \hat{a} = \hat{\lambda} \bar{x}~~~~\text{and}~~~~\hat{\lambda} = \frac{\bar{x}}{\frac{1}{n}\sum_{i=1}^n x_i^2 - \bar{x}^2}. \end{equation} From here I got $$\hat{a} \approx 0.799174075139 ~~~~\text{and}~~~~ \hat{\lambda} \approx 1.31877117345. $$
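For reference, the moment estimators above take only a few lines of plain Python. This is a sketch with a made-up sample `x` (not my data), just to show the computation:

```python
# Method-of-moments sketch for Gamma(a, lambda), matching
#   E(X) = a/lam  and  Var(X) = a/lam**2,
# so  lam_hat = xbar / (m2 - xbar**2)  and  a_hat = lam_hat * xbar.
def gamma_mm(x):
    n = len(x)
    xbar = sum(x) / n                    # first sample moment
    m2 = sum(v * v for v in x) / n       # second raw sample moment
    lam_hat = xbar / (m2 - xbar ** 2)    # mean / (biased) sample variance
    a_hat = lam_hat * xbar
    return a_hat, lam_hat

# made-up sample, for illustration only
x = [0.3, 1.2, 0.7, 2.5, 0.9, 1.8, 0.4, 1.1]
a_hat, lam_hat = gamma_mm(x)
print(a_hat, lam_hat)
```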
Maximum likelihood method
For gamma distribution we have
$$f_{X_i}(x_i) = \frac{\lambda^a}{\Gamma(a)} x_i^{a-1} e^{-\lambda x_i}.$$
Taking the product over $i = 1, \dots, n$ we get the likelihood function
$$L = \prod_{i=1}^n \frac{\lambda^a}{\Gamma(a)} x_i^{a-1} e^{-\lambda x_i}$$
and log-likelihood function
$$\ell = \log L = \sum_{i=1}^n \left(a \log\lambda - \log\Gamma(a) + (a-1)\log x_i -\lambda x_i\right) \\
= na \log\lambda - n \log\Gamma(a) + (a-1)\sum_{i=1}^n \log x_i - \lambda \sum_{i=1}^n x_i.$$
Calculating partial derivatives with respect to $a$ and $\lambda$ and equating with $0$ we obtain
\begin{align}
\frac{\partial \ell}{\partial a} &= n \log \lambda - n \psi(a) + \sum_{i=1}^n \log(x_i) = 0,\\
\frac{\partial \ell}{\partial \lambda} &= \frac{na}{\lambda} - n \bar{x} = 0,
\end{align}
where $\psi(a)$ is the digamma function ($\psi(a) = \frac{\Gamma'(a)}{\Gamma(a)}$). I solved this system by substituting the second equation, $\lambda = a/\bar{x}$, into the first, moving everything to one side, and finding the root $a$ with scipy.optimize.newton. Plugging $a$ back into the second equation I got
$$\hat{a} \approx 1.59540858049 ~~~~\text{and}~~~~ \hat{\lambda} \approx 2.63269156405.$$
Note: I used scipy.special.digamma(a) for $\psi(a)$, which I believe is correct according to the documentation.
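To make the numerical step concrete: substituting $\lambda = a/\bar{x}$ into the first equation and dividing by $n$ leaves the single equation $\log a - \psi(a) = \log\bar{x} - \frac{1}{n}\sum_i \log x_i$. A sketch of this, again with a made-up sample rather than my data:

```python
import numpy as np
from scipy.optimize import newton
from scipy.special import digamma

def gamma_mle(x):
    """ML estimates (a_hat, lam_hat) for a Gamma(a, lambda) sample."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    # Substituting lam = a/xbar from the second equation into the first
    # leaves one equation in a:  log(a) - digamma(a) = log(xbar) - mean(log x).
    c = np.log(xbar) - np.log(x).mean()
    # Initial guess from the large-a expansion log(a) - digamma(a) ~ 1/(2a).
    a_hat = newton(lambda a: np.log(a) - digamma(a) - c, x0=0.5 / c)
    lam_hat = a_hat / xbar
    return a_hat, lam_hat

x = [0.3, 1.2, 0.7, 2.5, 0.9, 1.8, 0.4, 1.1]  # made-up sample
a_hat, lam_hat = gamma_mle(x)
```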

Your derivations of the estimators are correct, but without the data it is not possible to verify your numeric results. That said, one should not expect these two estimators to perform similarly. For example, even the corresponding estimators for a $\operatorname{Uniform}(0,\theta)$ distribution behave quite differently: $$\hat\theta_{MM} = 2\bar X, \quad \hat\theta_{ML} = X_{(n)},$$ and although both are consistent, their small-sample behavior can be drastically different.
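The uniform example is easy to check by simulation. A minimal sketch comparing the two estimators' mean squared errors ($\theta = 10$, $n = 5$, and the seed are arbitrary choices):

```python
import random

# Small-sample comparison for Uniform(0, theta):
#   theta_MM = 2 * xbar   vs   theta_ML = max(x).
# Theoretical MSEs for n = 5, theta = 10:
#   MSE(MM) = theta**2 / (3n)             = 100/15 ~ 6.67
#   MSE(ML) = 2*theta**2 / ((n+1)*(n+2))  = 200/42 ~ 4.76
random.seed(0)
theta, n, reps = 10.0, 5, 2000
mm_se, ml_se = [], []
for _ in range(reps):
    x = [random.uniform(0, theta) for _ in range(n)]
    mm_se.append((2 * sum(x) / n - theta) ** 2)
    ml_se.append((max(x) - theta) ** 2)
print("MM MSE:", sum(mm_se) / reps)
print("ML MSE:", sum(ml_se) / reps)
```

With samples this small the ML estimator comes out noticeably better in mean squared error, even though it is biased.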
In your case, using the sample $$\boldsymbol x = (5,7,3,2,5,6,11,55,32,19),$$ I get $$(\hat a, \hat \lambda)_{MM} = (0.463719, 0.0319806), \\ (\hat a, \hat \lambda)_{ML} = (1.10246, 0.0760318).$$ One is more than twice the other. I would conjecture that the method-of-moments estimator is less robust; I am sure this has been investigated, but I have not searched the literature at this time.