I have a data sequence of length $10^6$ which I know can be approximately modeled as $a(n) = n^{\alpha}+ n^{1-\alpha}+b$ ($b$ is an unknown constant, $0<\alpha <1$). How can I estimate $\alpha$ here?
Update:
Sorry, I made some mistakes above, for which I sincerely apologize. I have revised the problem and explain my concerns in more detail in this update.
The data sequence is generated from simulations of an algorithm. It should have order $O(n^{\max(\alpha, 1-\alpha)})$. I know it can be approximately modeled as $a(n) = b_1n^{\alpha}+ b_2n^{1-\alpha}+b_3$ ($b_1,b_2,b_3$ are unknown constants) when $n$ is sufficiently large.
I only care about the order $\max{(\alpha, 1-\alpha)}$ to justify the performance of the algorithm.
I have tried using $\frac{1}{t_1-t_0}\sum_{t=t_0+1}^{t_1}\log_2 \frac{a(2t)}{a(t)}$. The problem is that the graph I got does not attain its minimum at $\alpha = 0.5$. How can I eliminate the numerical errors?
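For concreteness, here is a minimal sketch (in Python, with purely hypothetical constants) of this doubling-ratio estimate applied to synthetic data; it shows how the lower-order terms bias the estimate below $\max(\alpha, 1-\alpha)$:

```python
import numpy as np

# Hypothetical illustration: apply the doubling-ratio estimate
#   (1/(t1-t0)) * sum_{t=t0+1}^{t1} log2(a(2t)/a(t))
# to synthetic data a(n) = b1*n^alpha + b2*n^(1-alpha) + b3.
alpha_true, b1, b2, b3 = 0.7, 1.0, 1.0, 5.0   # hypothetical constants

def a(n):
    return b1 * n**alpha_true + b2 * n**(1 - alpha_true) + b3

t = np.arange(1001, 5001)                  # t0 = 1000, t1 = 5000
est = np.mean(np.log2(a(2 * t) / a(t)))    # estimate of max(alpha, 1-alpha)
# est is close to, but biased below, max(alpha, 1-alpha) = 0.7, because the
# n^(1-alpha) and constant terms shrink each ratio a(2t)/a(t) below 2^0.7
```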
Here is a link to the new post of the problem.

If we look at the problem as a regression, you have $p$ data points $(n_i,a_i)$ and you want to fit the model $$a=n^{\alpha}+n^{1-\alpha}+b$$ In the least-squares sense, you need to minimize $$SSQ=\sum_{i=1}^p \left(n_i^{\alpha}+n_i^{1-\alpha}+b -a_i\right)^2$$ Computing the partial derivatives, we have $$\frac{\partial SSQ}{\partial b}=2\sum_{i=1}^p \left(n_i^{\alpha}+n_i^{1-\alpha}+b -a_i\right)=0\tag 1$$ $$\frac{\partial SSQ}{\partial \alpha}=2\sum_{i=1}^p \left(n_i^{\alpha}+n_i^{1-\alpha}+b -a_i\right)\left(n_i^{\alpha}-n_i^{1-\alpha}\right)\log(n_i)=0 \tag 2$$ a system which is hard to solve directly.
However, from $(1)$, you can get $b$ as a function of $\alpha$ $$b(\alpha)=-\frac 1p \sum_{i=1}^p \left(n_i^{\alpha}+n_i^{1-\alpha} -a_i\right)$$ and then $(2)$ is "just" an equation in $\alpha$.
The simplest approach would be to plot equation $(2)$ for $0 \leq \alpha \leq 0.5$ and see where it crosses zero. Zoom in more and more to get more accurate results; when your accuracy has been reached, recompute $b$.
All of that can be done using Excel.
Edit
For illustration purposes, let us use the following data set $$\left( \begin{array}{ccc} i & n_i & a_{n_i} \\ 1 & 10 & 10 \\ 2 & 20 & 14 \\ 3 & 30 & 16 \\ 4 & 40 & 19 \\ 5 & 50 & 21 \\ 6 & 60 & 23 \\ 7 & 70 & 25 \\ 8 & 80 & 26 \\ 9 & 90 & 28 \\ 10 & 100 & 30 \end{array} \right)$$ A first run, using $\Delta \alpha=0.05$, would give $$\left( \begin{array}{cc} \alpha & (2) \\ 0.05 & -36241.2 \\ 0.10 & -19913.1 \\ 0.15 & -10372.8 \\ 0.20 & -4943.28 \\ 0.25 & -1981.30 \\ 0.30 & -483.658 \\ 0.35 & 158.953 \end{array} \right)$$ So, the solution is between $0.30$ and $0.35$. Repeat using $\Delta \alpha=0.005$ to get $$\left( \begin{array}{cc} \alpha & (2) \\ 0.315 & -221.502 \\ 0.320 & -148.821 \\ 0.325 & -82.8318 \\ 0.330 & -23.1689 \\ 0.335 & 30.5170 \end{array} \right)$$ So, the solution is between $0.330$ and $0.335$. Repeat using $\Delta \alpha=0.0005$ to get $$\left( \begin{array}{cc} \alpha & (2) \\ 0.3305 & -17.537 \\ 0.3310 & -11.9644 \\ 0.3315 & -6.45098 \\ 0.3320 & -0.996267 \\ 0.3325 & 4.40004 \end{array} \right)$$ You will finish with $\alpha=0.332092$, to which corresponds $b=3.54869$.
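If you prefer code over a spreadsheet, the scan-and-zoom above amounts to a root search on equation $(2)$ with $b(\alpha)$ substituted in. Here is a minimal Python sketch on the same data set (numpy and bisection are my choices, not part of the original procedure); the table above shows a sign change between $0.30$ and $0.35$, which supplies the bracket:

```python
import numpy as np

# Example data set from above
n = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
a = np.array([10, 14, 16, 19, 21, 23, 25, 26, 28, 30], dtype=float)

def b_of_alpha(alpha):
    # b(alpha) obtained from equation (1)
    return -np.mean(n**alpha + n**(1 - alpha) - a)

def eq2(alpha):
    # left-hand side of equation (2) with b(alpha) substituted
    r = n**alpha + n**(1 - alpha) + b_of_alpha(alpha) - a
    return 2.0 * np.sum(r * (n**alpha - n**(1 - alpha)) * np.log(n))

# bisection on the bracket [0.30, 0.35], where the scan shows a sign change
lo, hi = 0.30, 0.35
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if eq2(lo) * eq2(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
alpha = 0.5 * (lo + hi)
b = b_of_alpha(alpha)
# alpha is about 0.332092 and b about 3.54869, as found above
```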
Update
If the model is instead $$a=b_1\,n^{\alpha}+b_2\,n^{1-\alpha}+b_3$$ consider that $\alpha$ is fixed at a given value. For this value, define two parameters $x_i=n_i^\alpha$, $y_i=n_i^{1-\alpha}$ and so, for a given $\alpha$, you face a linear regression $$a=b_1\,x+b_2\,y+b_3$$ which is easy to solve using matrix calculations or, more simply, the normal equations $$\sum_{i=1}^p a_i=b_1 \sum_{i=1}^p x_i+b_2 \sum_{i=1}^p y_i+b_3\,p$$ $$\sum_{i=1}^p a_ix_i=b_1 \sum_{i=1}^p x_i^2+b_2 \sum_{i=1}^p x_iy_i+b_3\sum_{i=1}^p x_i$$ $$\sum_{i=1}^p a_iy_i=b_1 \sum_{i=1}^p x_iy_i+b_2 \sum_{i=1}^p y_i^2+b_3\sum_{i=1}^p y_i$$ or just multilinear regression.
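As a sketch of this normal-equations step for one fixed $\alpha$ (the data points and the value of $\alpha$ below are hypothetical, chosen only for illustration), one can build and solve the $3\times 3$ system directly:

```python
import numpy as np

# Hypothetical data and a fixed, hypothetical alpha
n = np.array([10., 20., 30., 40., 50.])
a = np.array([12., 17., 21., 24., 27.])
alpha = 0.4

x, y = n**alpha, n**(1 - alpha)
p = len(n)
# 3x3 system mirroring the three normal equations above
A = np.array([[np.sum(x**2), np.sum(x*y),  np.sum(x)],
              [np.sum(x*y),  np.sum(y**2), np.sum(y)],
              [np.sum(x),    np.sum(y),    p]])
rhs = np.array([np.sum(a*x), np.sum(a*y), np.sum(a)])
b1, b2, b3 = np.linalg.solve(A, rhs)   # b1(alpha), b2(alpha), b3(alpha)
```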
The resulting parameters are $b_1(\alpha), b_2(\alpha), b_3(\alpha)$, and now consider $$SSQ(\alpha)=\sum_{i=1}^p \left(b_1(\alpha)\,n_i^{\alpha}+b_2(\alpha)\,n_i^{1-\alpha}+b_3(\alpha) -a_i\right)^2$$ which needs to be minimized with respect to $\alpha$. As before, plot it to roughly locate the minimum, then zoom in more and more until you reach the desired accuracy.
For example, considering the data set $$\left( \begin{array}{ccc} i & n_i & a_{n_i} \\ 1 & 10 & 254 \\ 2 & 20 & 357 \\ 3 & 30 & 442 \\ 4 & 40 & 516 \\ 5 & 50 & 584 \\ 6 & 60 & 646 \\ 7 & 70 & 705 \\ 8 & 80 & 761 \\ 9 & 90 & 814 \\ 10 & 100 & 865 \\ 11 & 110 & 914 \\ 12 & 120 & 961 \\ 13 & 130 & 1007 \\ 14 & 140 & 1052 \\ 15 & 150 & 1096 \end{array} \right)$$ we should have $$\left( \begin{array}{cc} \alpha & SSQ(\alpha) \\ 0.0 & 13251.5 \\ 0.1 & 126.579 \\ 0.2 & 33.3071 \\ 0.3 & 3.82528 \\ 0.4 & 1.38533 \\ 0.5 & 1745.69 \end{array} \right)$$ Continue zooming in the area of the minimum and finish with $$a=10.3547 \,n^{0.363391}+40.1755\, n^{0.636609}+55.9928$$ which will give as final results $$\left( \begin{array}{cccc} i & n_i & a_{n_i} & \text{predicted} \\ 1 & 10 & 254 & 253.908 \\ 2 & 20 & 357 & 357.274 \\ 3 & 30 & 442 & 441.825 \\ 4 & 40 & 516 & 516.136 \\ 5 & 50 & 584 & 583.675 \\ 6 & 60 & 646 & 646.276 \\ 7 & 70 & 705 & 705.055 \\ 8 & 80 & 761 & 760.751 \\ 9 & 90 & 814 & 813.890 \\ 10 & 100 & 865 & 864.856 \\ 11 & 110 & 914 & 913.946 \\ 12 & 120 & 961 & 961.392 \\ 13 & 130 & 1007 & 1007.38 \\ 14 & 140 & 1052 & 1052.06 \\ 15 & 150 & 1096 & 1095.57 \end{array} \right)$$
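The whole procedure (a linear solve for $b_1,b_2,b_3$ at each fixed $\alpha$, then a coarse-to-fine scan of $SSQ(\alpha)$) can be sketched in Python on the data set above; here `np.linalg.lstsq` stands in for the hand-written normal equations, and the repeated-grid refinement is my choice of how to implement the "zoom":

```python
import numpy as np

# Data set from above
n = np.arange(10, 151, 10, dtype=float)
a = np.array([254, 357, 442, 516, 584, 646, 705, 761, 814, 865,
              914, 961, 1007, 1052, 1096], dtype=float)

def fit(alpha):
    # linear least squares for b1, b2, b3 at fixed alpha
    X = np.column_stack([n**alpha, n**(1 - alpha), np.ones_like(n)])
    coef = np.linalg.lstsq(X, a, rcond=None)[0]
    resid = X @ coef - a
    return coef, float(resid @ resid)          # (b1, b2, b3), SSQ(alpha)

# coarse-to-fine scan of SSQ(alpha) on [0, 0.5], mirroring the zoom above
lo, hi = 0.0, 0.5
for _ in range(8):
    grid = np.linspace(lo, hi, 101)
    best = grid[int(np.argmin([fit(al)[1] for al in grid]))]
    step = grid[1] - grid[0]
    lo, hi = best - step, best + step          # shrink around the minimum
alpha = best
(b1, b2, b3), ssq = fit(alpha)
# alpha is about 0.363391 with b1, b2, b3 about 10.35, 40.18, 55.99, as above
```

Note that $SSQ(\alpha)=SSQ(1-\alpha)$ (swapping $b_1$ and $b_2$), so restricting the scan to $[0, 0.5]$ loses nothing.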