Let $\gamma_n=\sum_{k=1}^{n}\frac{1}{k}-\ln(n)$ and $\gamma=\lim_{n\to\infty}\gamma_n$.
From the fact that $\frac{1}{2(n+1)}\leq\gamma_n-\gamma\leq\frac{1}{2n}$, we have $\gamma_n-\gamma \sim \frac{1}{2n}$.
Using a computer program, I am trying to find the constants $\alpha$ and $\beta$ such that $\gamma_n-\gamma \sim \frac{\alpha}{n^\beta}$. To that end I used the asymptotic relation $$\frac{\ln(\gamma_n-\gamma)}{\ln(n)} \sim \frac{\ln(\alpha)}{\ln(n)}-\beta\sim-\beta.$$
Since the term $\frac{\ln(\alpha)}{\ln(n)}$ converges to $0$ very slowly, I failed to get those constants despite using $n$ as big as $10^8$. $\beta$ should converge to $1$, but my computed values were $1.15, 1.05, 1.098 \cdots$.
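For reference, a minimal Python version of what I tried (I hardcode the known value of $\gamma$ rather than computing the limit; $n$ much smaller than $10^8$ already shows the problem):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant (hardcoded;
                            # the standard library has no built-in for it)

def gamma_n(n):
    """gamma_n = H_n - ln(n), with a compensated sum for accuracy."""
    return math.fsum(1.0 / k for k in range(1, n + 1)) - math.log(n)

# The naive estimate  beta ~ -ln(gamma_n - gamma) / ln(n)  creeps toward 1
# very slowly, because of the ln(alpha)/ln(n) term.
for n in (10**3, 10**4, 10**5, 10**6):
    beta_est = -math.log(gamma_n(n) - GAMMA) / math.log(n)
    print(n, beta_est)  # roughly 1.10 at n = 10^3, still 1.05 at n = 10^6
```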
Could anybody get $\alpha$ and $\beta$ with an 'elementary' numerical method?
Any help will be appreciated.
I think this is elementary enough: linear regression (well, power regression, but you have already transformed it into the linear case).
Take all the pairs $(\ln n, \ln (\gamma_n - \gamma)) = (x_n, y_n)$. Then perform linear regression to obtain $A,\ B$ such that $y_n \approx A + Bx_n$. With your notation, $B = -\beta$ and $A=\ln\alpha$.
Linear regression fits a linear function (as "closely" as possible) to the given data. If you suspect the relationship is a power function (as in your case), you take logarithms of the data, fit a linear function to them, and then transform it back to a power function for the original data as I wrote above – this is called power regression.
Linear regression is a standard statistical procedure and can be done in many programs: Excel or Calc, CASes like Matlab, Mathematica, Maxima, even some scientific calculators. However, you'd probably like to do it within your own program. I don't know much about programming, but I'm fairly sure there are regression libraries/packages for many languages. In any case, there are explicit formulas for it: on Wikipedia I found
\begin{gather} B = \frac{\sum_{n=1}^N (x_n - \bar{x})(y_n-\bar{y}) }{ \sum_{n=1}^N (x_n-\bar{x})^2 } = \frac{ \sum_{n=1}^N x_n y_n - \frac1N \big(\sum_{n=1}^N x_n\big) \cdot\big(\sum_{n=1}^N y_n\big) } {\sum_{n=1}^N x_n^2 - \frac1N \big(\sum_{n=1}^N x_n \big)^2 } \\ A = \bar{y} - B\bar{x} = \frac1N\big(\sum_{n=1}^N y_n\big) - \frac{B}{N}\big(\sum_{n=1}^N x_n \big) \end{gather} where $(x_n)_{n=1}^N$ and $(y_n)_{n=1}^N$ denote your series of data (as described above) and $\bar x$, $\bar y$ are the arithmetic means, i.e. $$ \bar x = \frac1N\sum_{n=1}^N x_n, \qquad \bar y = \frac1N\sum_{n=1}^N y_n $$ (I adjusted the formulas slightly to fit your notation).
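To illustrate, here is a minimal Python sketch of these formulas applied to the pairs $(\ln n, \ln(\gamma_n - \gamma))$. The value of $\gamma$ is hardcoded, and the cutoffs $n \ge 10$ and $n \le 10^5$ are arbitrary choices of mine, not part of the method:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, hardcoded

# Build the data: x = ln n, y = ln(gamma_n - gamma), with the harmonic
# sum H_n accumulated incrementally so the whole sweep is one O(N) pass.
xs, ys = [], []
h = 0.0
for n in range(1, 10**5 + 1):
    h += 1.0 / n
    if n >= 10:  # skip the smallest n, where the asymptotics are worst
        xs.append(math.log(n))
        ys.append(math.log(h - math.log(n) - GAMMA))

# Least-squares slope B and intercept A, per the formulas above.
N = len(xs)
xbar = sum(xs) / N
ybar = sum(ys) / N
Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
Sxx = sum((x - xbar) ** 2 for x in xs)
B = Sxy / Sxx
A = ybar - B * xbar

beta = -B
alpha = math.exp(A)
print(beta, alpha)  # should land close to beta = 1, alpha = 1/2
```

Changing the cutoffs shifts the estimates slightly, since the smallest $n$ deviate most from the pure power law.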
Two important points:
Also, the final result might be somewhat "skewed", because the distribution of the data is different after taking logarithms. But this should actually be an advantage, because in this procedure the data with greater logarithms (in absolute value, so very big or very small) count for more. In your case, both $n$ and $\gamma_n - \gamma$ have logarithms of larger absolute value for large $n$. On the other hand, this means that my suggestion to "even out" the values (i.e. to examine $n(\gamma_n - \gamma)$) is probably not such a good idea after all. But I'll try it anyway, if possible.
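Here is what that check might look like in Python (again with the known value of $\gamma$ hardcoded; the choice of sample points is arbitrary):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, hardcoded

# If gamma_n - gamma ~ alpha / n (i.e. beta = 1), then n * (gamma_n - gamma)
# should converge to alpha directly, with no logarithms involved.
vals = {}
h = 0.0
for n in range(1, 10**6 + 1):
    h += 1.0 / n
    if n in (10**2, 10**4, 10**6):
        vals[n] = n * (h - math.log(n) - GAMMA)
        print(n, vals[n])  # appears to settle toward 1/2
```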
Final word: it may be worth reading more about linear and power regression on your own.
PS. Only after writing all of this did I do what I should have begun with: checking math.SE for other answers on power regression. It turns out that there are standard methods for pursuing a better fit after doing linear regression on logarithmized data: a nonlinear least-squares algorithm such as Levenberg-Marquardt, which I unfortunately know nothing about. So I advise you to read this and this question. (In fact, part of my answer is already there, but the problem of asymptotics is nevertheless unique here.) In particular, Guess who it is. in his answer gives better provisional values for later polishing than linear regression of logarithms does (obtained by weighted linear regression), which you should definitely check out, and later a way to refine the approximation.