Consider $N$ particles with masses $m_i$ placed in the domain $\Omega = [0,1]^2$. The force acting on particle $i$ reads $$F_i = \sum_{j \ne i} C\frac{m_i m_j}{r_{ij}^2}$$ for some constant $C$. The problem is that computing all the forces directly takes $O(N^2)$ operations, since there are $N$ forces to calculate, each containing a sum over $N-1$ terms.
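For concreteness, here is a minimal sketch of the direct summation (the vector form of the force, the softening parameter `eps`, and the function name are my own illustrative choices, not part of the question):

```python
import numpy as np

def brute_force(pos, m, C=1.0, eps=1e-12):
    """Direct O(N^2) evaluation of F_i = sum_{j != i} C m_i m_j / r_ij^2,
    taken here as a vector directed along each pair separation.
    `eps` softens coincident points; all names are illustrative."""
    n = len(pos)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r2 = d @ d + eps
            # magnitude C m_i m_j / r^2, direction d / |d|
            F[i] += C * m[i] * m[j] * d / (r2 * np.sqrt(r2))
    return F
```

The double loop makes the $O(N^2)$ cost explicit: every particle visits every other particle.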
Therefore I would like to use an approximation that reduces the computational complexity. We subdivide the particles into a quadtree data structure in order to apply the Barnes-Hut algorithm to the problem. All the sources I have seen state the complexity to be $O(N \log N)$, but I am unable to find any kind of proof of this statement. It also seems wrong in general: Barnes-Hut has a free parameter $\theta$ which should appear in the analysis, because for $\theta = 0$ the complexity is $O(N^2)$.
I'd like a derivation (or at least a sketch of a derivation) of an expression for the complexity as a function of $N$ and $\theta$. A bound on the error of the approximation as a function of these two variables would also be nice to have, both for the case $m_i = 1$ for all $i$ and for the case $m_i = \pm 1$ distributed evenly.
Asymptotics often suppress parameters - it's wrong to assume by default that an asymptotic bound is uniform in all parameters.
For masses of consistent sign the multiplicative error is quadratic in $\theta$: if the estimate for $F_i$ is $F_i'$, then $F_i'/F_i$ lies in $[1-c\theta^2,\,1+c\theta^2]$ for some absolute constant $c$ (and $0<\theta<1/2$, say). By scaling, it suffices to look at the error of estimating the sum of $f(x)=1/|x|^2$ over particles with center of mass $(1,0,0)$ and within distance $O(\theta)$ of it. By Taylor expansion, each value is $f(1,0,0)+(x-(1,0,0))\cdot (\nabla f)(1,0,0)+O(|x-(1,0,0)|^2)$. The linear terms cancel when summed, because we expand about the center of mass (this is the monopole approximation). So the multiplicative error is bounded by $O(\theta^2)$. (I suspect there's a matching lower bound given by a uniform distribution.)
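A quick numerical sanity check of the quadratic scaling (my own sketch, not from the answer: unit masses in a ball of radius $\theta$ recentered so the center of mass sits exactly at $(1,0,0)$; halving $\theta$ should roughly quarter the relative error):

```python
import numpy as np

def monopole_relative_error(theta, n=2000, seed=0):
    """Relative error of the monopole estimate for the sum of f(x) = 1/|x|^2
    over unit masses whose center of mass is exactly (1, 0, 0) and which lie
    within distance ~theta of it. Setup is illustrative."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(-1, 1, size=(n, 3))
    d = d[np.linalg.norm(d, axis=1) <= 1][:n // 2] * theta  # points in a ball of radius theta
    d -= d.mean(axis=0)                 # force the center of mass to the cluster center
    x = d + np.array([1.0, 0.0, 0.0])   # cluster centered at (1, 0, 0)
    exact = np.sum(1.0 / np.sum(x * x, axis=1))
    approx = float(len(x))              # monopole: all mass at the center, f(1,0,0) = 1
    return abs(exact - approx) / exact

# Same underlying point cloud, scaled: the error ratio should be close to 4.
e1, e2 = monopole_relative_error(0.1), monopole_relative_error(0.05)
print(e1 / e2)
```

Because the same seeded point cloud is merely rescaled, the leading $c\theta^2$ term has the same constant in both calls and the ratio exposes the exponent directly.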
For mixed mass signs my instinct is to say there isn't a sensible a priori error bound - if the forces cancel, the error can be much larger than the actual value.
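A tiny example of this cancellation (my own illustration): two opposite unit masses close together have total mass zero, so the monopole estimate is identically zero, while the true sum of $m_j/r_j^2$ is small but nonzero - the relative error is unbounded.

```python
# Two mixed-sign unit masses near (1, 0), measured from the origin:
# the true sum of m_j / r_j^2 nearly cancels, while the monopole estimate
# (total mass / distance^2) is exactly 0.
pts = [(1.0 + 0.01, 0.0), (1.0 - 0.01, 0.0)]
mass = [1.0, -1.0]
exact = sum(m / (x * x + y * y) for m, (x, y) in zip(mass, pts))
approx = sum(mass)  # total mass is 0, so the estimate is 0 at any distance
print(exact, approx)  # exact is small but nonzero; relative error is infinite
```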
The $O(N\log N)$ in the Barnes-Hut paper comes from considering particles on a grid (at least as I read it - they just say uniformly distributed). Suppose the universe has size $S$ and any pair of particles is separated by at least $s$. Consider running Barnes-Hut to calculate the force at a particle at position $x$. Points at distance $r>0$ from $x$ will be put in a cube of side length at least $\tfrac{1}{100}\theta r$ (being sloppy about the constant). So we can upper bound the number of cubes by integrating the density $(\tfrac{1}{100}\theta r)^{-3}$ over the points at distance between $s$ and $S$ from $x$. Integrating in spherical coordinates gives a bound of $O(\theta^{-3})\int_s^S \frac{dr}{r}=O(\theta^{-3}\log(S/s))$. For a grid we can take $S=1$ and $s=1/N^{1/3}$, for example, giving a bound of $O(\theta^{-3}\log N)$ cubes for each of the $N$ particles.
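The logarithmic growth in $N$ for fixed $\theta$ can be observed directly. Here is a minimal 2D quadtree sketch (my own construction, in the question's 2D setting rather than the 3D one above; class and function names are illustrative) that counts how many cells a single Barnes-Hut walk accepts or opens under the standard criterion (cell size / distance) $< \theta$:

```python
import random

class Quad:
    """Minimal quadtree node over the square [x, x+w) x [y, y+w); a sketch."""
    def __init__(self, x, y, w):
        self.x, self.y, self.w = x, y, w
        self.pts = []        # points held while this node is a leaf
        self.kids = None     # four children once the node splits

    def insert(self, p, cap=1):
        if self.kids is None:
            self.pts.append(p)
            if len(self.pts) > cap and self.w > 1e-6:
                h = self.w / 2
                self.kids = [Quad(self.x + dx * h, self.y + dy * h, h)
                             for dx in (0, 1) for dy in (0, 1)]
                old, self.pts = self.pts, []
                for q in old:
                    self._child(q).insert(q, cap)
        else:
            self._child(p).insert(p, cap)

    def _child(self, p):
        h = self.w / 2
        return self.kids[2 * (p[0] >= self.x + h) + (p[1] >= self.y + h)]

def cells_visited(node, q, theta):
    """Count cells a Barnes-Hut walk from query point q accepts or descends
    into, using the opening criterion (cell size / distance) < theta."""
    if node.kids is None:
        return 1 if node.pts else 0
    cx, cy = node.x + node.w / 2, node.y + node.w / 2
    dist = ((cx - q[0]) ** 2 + (cy - q[1]) ** 2) ** 0.5
    if dist > 0 and node.w / dist < theta:
        return 1   # far enough: the whole cell acts as one pseudo-particle
    return sum(cells_visited(k, q, theta) for k in node.kids)

random.seed(1)
counts = []
for n in (256, 4096):
    root = Quad(0.0, 0.0, 1.0)
    for _ in range(n):
        root.insert((random.random(), random.random()))
    counts.append(cells_visited(root, (0.5, 0.5), theta=0.5))
print(counts)  # the count grows far slower than n itself
```

Multiplying $N$ by 16 should increase the per-particle cell count only by an additive term, consistent with the $\theta^{-d}\log N$ bound (here $d=2$).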