Most fractals can be seen as attractors of a given set of affine transformations $\{T_1,\cdots,T_N \}$. There are different ways one can generate a fractal from this information. The two most common methods are the Deterministic IFS algorithm and the Random IFS algorithm.
The Random IFS Algorithm is a generalization of the Chaos game and essentially works as follows:
Determine the affine transformations $T_1,\cdots,T_N$ that characterize the fractal, and, given an initial point $P_0$, iteratively compute $$ P_n= T_i(P_{n-1}), $$ where $i$ is chosen uniformly at random from the set $\{1,\cdots,N \}$.
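As a concrete illustration, here is a minimal sketch of the iteration just described. The function name `random_ifs` and the Sierpinski-triangle example maps are my own choices, not part of the question; the transformations are passed as arbitrary callables so any affine maps can be plugged in.

```python
import random

def random_ifs(transforms, n_points, p0=(0.0, 0.0), skip=20):
    """Chaos game: iterate P_n = T_i(P_{n-1}) with i chosen uniformly.

    The first `skip` iterates are discarded so the orbit has time to
    settle onto the attractor before points are recorded.
    """
    points = []
    p = p0
    for n in range(n_points + skip):
        t = random.choice(transforms)  # uniform choice among T_1, ..., T_N
        p = t(p)
        if n >= skip:
            points.append(p)
    return points

# Example: Sierpinski triangle via three midpoint contractions
# toward the triangle's vertices.
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
maps = [lambda p, v=v: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)
        for v in verts]
pts = random_ifs(maps, 5000)
```

Plotting `pts` as a scatter plot reveals the attractor.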
This is quite a slow process, but it can be improved by adjusting the probabilities $p(T_i)$ of choosing transformation $T_i$ at each iteration. More specifically, the following probabilities are known to speed up convergence considerably: $$ p(T_i) = \frac{|\det(M_{T_i})|}{\sum_{j=1}^N|\det(M_{T_j})|} \tag{1} $$
Here, $M_{T_i}$ denotes the matrix of transformation $T_i$, and $\det$ is the determinant of a matrix. Roughly speaking, this makes $p(T_i)$ approximately the fraction of the fractal $F$ occupied by its image $T_i(F)$.
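Formula $(1)$ can be sketched in a few lines; the helper name `det_probabilities` is mine, and absolute values are taken so the weights stay non-negative even for orientation-reversing maps.

```python
import numpy as np

def det_probabilities(matrices):
    """Probabilities proportional to |det M_i|, i.e. to the area
    scaling factor of each affine map, per formula (1)."""
    dets = np.array([abs(np.linalg.det(M)) for M in matrices])
    return dets / dets.sum()

# Three maps that each halve both coordinates (e.g. the Sierpinski
# triangle IFS) get equal weights 1/3.
half = np.array([[0.5, 0.0], [0.0, 0.5]])
probs = det_probabilities([half, half, half])
```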
My question(s):
Is there a better strategy? Is there an optimal one? Is this an open problem?
I don't think this determinant-based formula is really the best way to pick the probabilities. As a concrete example, let's consider the IFS with functions \begin{align} f_1(\vec{x}) &= 0.98\;R(137.5^{\circ})\;\vec{x} \\ f_2(\vec{x}) &= 0.1\,\vec{x}+\langle 1,0 \rangle. \end{align} Here $R(\theta)$ represents a rotation matrix through the angle $\theta$, so that $f_1$ is a rotation with a slight contraction centered at the origin, while $f_2$ is a strong contraction followed by a shift to the right. The result, as we'll see, is a nice self-similar spiral.
Now, the determinants are $0.98^2=0.9604$ and $0.1^2=0.01$ and their sum is $0.9704$. For the probabilities, this yields \begin{align} p_1 &= 0.9604/0.9704 \approx 0.989695 \\ p_2 &= 0.01/0.9704 \approx 0.010305 \end{align} If we apply the Random IFS Algorithm to generate 30000 points approximating the attractor using these choices of probabilities, we get the following image:
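The weighted iteration for this particular pair of maps can be sketched as follows; the function name `weighted_ifs` is my own, and the probabilities are the determinant-based ones just computed.

```python
import math
import random

theta = math.radians(137.5)
c, s = math.cos(theta), math.sin(theta)

def f1(p):  # 0.98 * rotation through 137.5 degrees about the origin
    x, y = p
    return (0.98 * (c * x - s * y), 0.98 * (s * x + c * y))

def f2(p):  # contraction by 0.1 plus a shift by <1, 0>
    x, y = p
    return (0.1 * x + 1.0, 0.1 * y)

# Determinant-based probabilities from formula (1).
p1 = 0.9604 / 0.9704  # ~0.989695
p2 = 0.01 / 0.9704    # ~0.010305

def weighted_ifs(n_points, skip=20):
    """Random IFS iteration choosing f1 with probability p1."""
    pts, p = [], (0.0, 0.0)
    for n in range(n_points + skip):
        f = f1 if random.random() < p1 else f2
        p = f(p)
        if n >= skip:
            pts.append(p)
    return pts

spiral = weighted_ifs(30000)
```

A scatter plot of `spiral` reproduces the image described.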
Definitely much better than choosing equal probabilities (which yields this image), but it now seems too heavily weighted toward the center and too light on the edges.
A better approach accounts for the fractal dimension of the object. If you have an IFS that satisfies the open set condition and consists of similarities with ratios $\{r_1,r_2,\ldots,r_m\}$, then the similarity dimension of the object is the unique number $s$ such that $$r_1^s + r_2^s +\cdots+ r_m^s = 1.$$ As a result, $\{r_1^s,r_2^s,\ldots,r_m^s\}$ is a good probability list and, in fact, is the correct choice of probabilities if you want a uniform distribution of points throughout the attractor. In the example above with $r_1=0.98$ and $r_2=0.1$, we get $s\approx 1.51953$ and \begin{align} p_1 &= r_1^s \approx 0.969768 \\ p_2 &= r_2^s \approx 0.0302322 \end{align} Note that this scheme weights $f_2$ about three times more heavily than the determinant-based scheme, so we expect the outer portion to be traced out more. The resulting picture looks much better:
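The dimension equation $r_1^s + \cdots + r_m^s = 1$ has no closed form in general, but since each $r_i < 1$ the left side is strictly decreasing in $s$, so a simple bisection suffices. This sketch (the name `similarity_dimension` and the bracket $[0, 10]$ are my choices) recovers the values quoted above.

```python
def similarity_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    """Solve sum(r_i**s) = 1 for s by bisection.

    Each ratio must lie in (0, 1), so g(s) = sum(r**s) - 1 is
    strictly decreasing; g(0) = m - 1 > 0, and `hi` is assumed
    large enough that g(hi) < 0.
    """
    g = lambda s: sum(r ** s for r in ratios) - 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid  # sum still too large: dimension is bigger
        else:
            hi = mid
    return (lo + hi) / 2

ratios = [0.98, 0.1]
s = similarity_dimension(ratios)
p1, p2 = (r ** s for r in ratios)  # the probability list r_i**s
```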
The reason this works is rooted in the proof that the similarity dimension agrees with the Hausdorff dimension. Central to that proof is the construction of a self-similar measure on the attractor that is uniformly distributed in terms of density, and it is exactly this choice of probability weights that works.