What is the intuition or motivation behind translation-invariant priors?


While reading a book about machine learning, I came across the following statement, which confused me:

A location-scale family is a family of probability distributions parameterized by a location µ and scale σ. If x is an rv in this family, then y = a + bx is also an rv in the same family. When inferring the location parameter µ, it is intuitively reasonable to want to use a translation-invariant prior, which satisfies the property that the probability mass assigned to any interval [A, B] is the same as that assigned to any other shifted interval of the same width, such as [A−c, B−c].

For example, if the likelihood belongs to a location family, it has the form $$p(x|\theta) = p(x-\theta),$$ which is obviously translation-invariant. Why should we expect the prior distribution to have the translation-invariance property as well?

I don't see the intuition or motivation here. Can you explain, with an example or some mathematical formulae, why I should require this invariance property of the prior just because the likelihood $p(x|\theta)$ has it?

Thanks for your help in advance.


Best answer:

If we have absolutely no knowledge of the location of the distribution, the principle of indifference states that we should assign equal probability to any location. This is equivalent to translation-invariance of our prior.

Put another way: if our prior were not translation-invariant, that would be equivalent to expressing some amount of knowledge about the location of the distribution (in particular, we'd know that the distribution is more likely to be located in some region than in some other identically-sized region).
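This can be checked numerically. Below is a minimal sketch (my own illustration, not from the original answer; the grid and the `posterior`/`mass` helpers are assumptions): with a flat prior and a Gaussian location likelihood, shifting the data by c shifts the posterior mass of every interval by exactly c, so no region of θ is favored a priori.

```python
import numpy as np

# Theta grid with step 0.01; wide enough that the Gaussian tails
# are negligible at the edges.
grid = np.linspace(-20.0, 20.0, 4001)
dx = grid[1] - grid[0]

def posterior(x, sigma=1.0):
    """Posterior over theta for one observation x ~ N(theta, sigma^2).
    With a flat (translation-invariant) prior, the posterior is
    proportional to the likelihood p(x - theta)."""
    lik = np.exp(-0.5 * ((x - grid) / sigma) ** 2)
    return lik / (lik.sum() * dx)  # normalize on the grid

def mass(post, a, b):
    """Approximate posterior probability of the interval [a, b]."""
    sel = (grid >= a) & (grid <= b)
    return post[sel].sum() * dx

c = 3.0
# Posterior mass on [-1, 1] given x = 0 equals the mass on
# [-1 + c, 1 + c] given the shifted data x = c.
print(mass(posterior(0.0), -1.0, 1.0))
print(mass(posterior(c), -1.0 + c, 1.0 + c))
```

Both printed masses agree (about 0.68 here, the usual one-sigma probability), which is exactly the translation-invariance the book describes: shifting the data merely relabels the location axis.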

Note that a translation-invariant prior over an unbounded region is necessarily improper.
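A quick sketch of why (my own elaboration, not part of the original answer): on an unbounded range, translation-invariance forces the prior density to be constant, $$p(\theta) \propto 1,$$ and a constant does not integrate to a finite value over the real line: $$\int_{-\infty}^{\infty} c \, d\theta = \infty.$$ The prior is therefore improper. The posterior it produces can still be perfectly proper, however, whenever the likelihood is integrable in $\theta$: $$p(\theta|x) = \frac{p(x-\theta)}{\int_{-\infty}^{\infty} p(x-\theta')\,d\theta'}.$$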