When applying the Metropolis-Hastings algorithm, one natural way to compute the mean of the probability distribution in question is to average the samples produced at each iteration. Under mild assumptions (ergodicity of the chain, I believe), this converges to the actual mean of the distribution as the number of iterations goes to infinity.
My question is whether there is a better way to calculate the mean, in the sense that fewer iterations yield a better estimate. My intuition is that we should not weight the first few steps as heavily as later steps, since the chain has not yet settled into the target distribution early on. Is there a way to estimate the mean more quickly and more precisely?
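For concreteness, here is a minimal sketch of the plain estimator being described, together with the standard remedy for the "early steps" problem: discarding a burn-in period rather than down-weighting. All names and defaults here are my own illustration, not anything from the question:

```python
import numpy as np

def metropolis_hastings_mean(log_target, n_iter=50_000, burn_in=5_000,
                             step=1.0, x0=0.0, rng=None):
    """Estimate the mean of a 1-D distribution, given its log-density
    up to an additive constant, with a random-walk Metropolis sampler."""
    rng = np.random.default_rng(rng)
    x, log_p = x0, log_target(x0)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        proposal = x + step * rng.normal()      # symmetric proposal
        log_p_new = log_target(proposal)
        if np.log(rng.uniform()) < log_p_new - log_p:
            x, log_p = proposal, log_p_new      # accept the move
        samples[i] = x                          # on reject, repeat x
    # Discard burn-in instead of down-weighting early samples:
    return samples[burn_in:].mean()

# Example target: N(3, 1), whose log-density is -(x-3)^2/2 + const,
# so the true mean is 3.
est = metropolis_hastings_mean(lambda x: -0.5 * (x - 3.0) ** 2, rng=0)
```

Discarding burn-in is a crude step-function weighting (weight 0, then weight 1); the question is essentially whether a smoother weighting can do better.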
The key to efficiency is the step size, and there are many papers describing adaptive step-size algorithms. "Umbrella sampling" methods, which simultaneously change the weighting and direct the search, can increase efficiency and are often used when something is known about the function of interest in advance. I would guess that early iterations could be used to define an umbrella adaptively, thereby changing both the sampling pattern and the weighting at later stages.
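To illustrate the step-size point (this is a simple Robbins-Monro-style tuning scheme of my own choosing, not umbrella sampling, and the 0.44 target acceptance rate is the commonly cited optimum for 1-D random-walk Metropolis):

```python
import numpy as np

def adaptive_mh_mean(log_target, n_iter=50_000, burn_in=5_000,
                     target_accept=0.44, x0=0.0, rng=None):
    """Random-walk Metropolis whose step size is tuned during burn-in
    toward a target acceptance rate. Adaptation stops after burn-in so
    the remaining chain is an ordinary Markov chain."""
    rng = np.random.default_rng(rng)
    x, log_p, step = x0, log_target(x0), 1.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        proposal = x + step * rng.normal()
        log_p_new = log_target(proposal)
        accepted = np.log(rng.uniform()) < log_p_new - log_p
        if accepted:
            x, log_p = proposal, log_p_new
        if i < burn_in:
            # Grow the step after accepts, shrink it after rejects,
            # with a decaying adaptation rate so it settles down.
            step *= np.exp((accepted - target_accept) / (i + 1) ** 0.6)
        samples[i] = x
    return samples[burn_in:].mean(), step

# Example target: N(-2, 0.25); true mean is -2.
est, tuned_step = adaptive_mh_mean(
    lambda x: -0.5 * ((x + 2.0) / 0.5) ** 2, rng=0)
```

Here the step size ends up roughly matched to the target's scale (0.5), which is the kind of automatic calibration the adaptive-step-size literature formalizes.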