Analysis of Gaussian mixture probability densities and Gaussian mixture approximations


I am looking for advice on analyzing Gaussian mixture densities and on approximating non-parameterizable densities with Gaussian mixture models. Say I have a target Bayesian posterior density with no analytical parameterization (i.e., it does not belong to a known family of distributions). I then apply some tractable approximation, e.g., variational inference or moment matching, and the result is a Gaussian or Gaussian mixture pdf that minimizes some measure of dissimilarity to the intractable true posterior, typically the Kullback-Leibler (KL) divergence.

Is there a method for measuring the accuracy of such an approximation that carries a convenient statistical interpretation? For example, suppose I compare approximating the posterior with either a three- or a five-component Gaussian mixture, and the five-component mixture improves the KL divergence to the true posterior by 10% over the three-component one. Does this improvement have any useful interpretation, beyond the abstract decrease in "information loss"? Hypothetically, I could keep adding components and always improve the KL divergence (or any other divergence/distance).

Most papers I have seen simply put a cutoff on the improvement, e.g., stop adding components once the change in KL divergence falls below some threshold, but such convergence criteria always seem arbitrary. Is there a measure that gives a deeper understanding of the accuracy of these approximations?
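To make the comparison concrete, here is a minimal sketch of the experiment described above. The skew-normal target, the component counts, and the Monte Carlo KL estimator are all illustrative assumptions on my part (in practice the true posterior would only be available through samples or an unnormalized density); the fitting uses `sklearn.mixture.GaussianMixture`:

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for an intractable posterior: a skewed 1-D density
# that we can sample from and evaluate pointwise (here, a skew-normal).
target = stats.skewnorm(a=5.0)
samples = target.rvs(size=5000, random_state=0).reshape(-1, 1)

def mc_kl(target_dist, gmm, n=20000, seed=1):
    """Monte Carlo estimate of KL(target || gmm) from samples of the target."""
    x = target_dist.rvs(size=n, random_state=seed).reshape(-1, 1)
    log_p = target_dist.logpdf(x.ravel())     # log-density of the target
    log_q = gmm.score_samples(x)              # log-density of the fitted mixture
    return float(np.mean(log_p - log_q))

# Fit three- and five-component mixtures and compare their KL divergences.
kls = {}
for k in (3, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(samples)
    kls[k] = mc_kl(target, gmm)
    print(f"{k} components: estimated KL = {kls[k]:.4f}")
```

The KL numbers produced this way shrink as components are added, which is exactly the phenomenon in the question: the raw value (in nats) has a coding/information interpretation, but no obvious stopping rule falls out of it.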