Does deriving probability distributions using the Principle of Maximum Entropy involve the Euler–Lagrange equation?


I am reading the article Deriving probability distributions using the Principle of Maximum Entropy, https://sgfin.github.io/2017/03/16/Deriving-probability-distributions-using-the-Principle-of-Maximum-Entropy/

I don't understand this part,

1. Derivation of maximum entropy probability distribution with no other constraints (uniform distribution)

First, we solve for the case where the only constraint is that the distribution is a pdf, which we will see is the uniform distribution. To maximize entropy, we want to minimize the following functional: $$ J(p)=\int_a^b p(x) \ln p(x) \, dx-\lambda_0\left(\int_a^b p(x) \, dx-1\right). $$ Taking the derivative with respect to $p(x)$ and setting it to zero, $$ \frac{\delta J}{\delta p(x)}=1+\ln p(x)-\lambda_0=0. $$

How is the second equation derived from the first? Is it using the Euler–Lagrange equation from the calculus of variations, or just the fundamental theorem of calculus for a single variable?

Best answer

Community wiki answer so the question can be marked as answered:

As noted in the comments, this is the Euler–Lagrange equation for the given variational problem. Because the integrand of the functional doesn't contain the derivative of $p$, the Euler–Lagrange equation reduces to setting the pointwise partial derivative of the integrand with respect to $p$ equal to $0$.
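To make the reduction explicit, here is a sketch of the computation. Writing the integrand as $F(x, p, p')$, the Euler–Lagrange equation has a term in $p'$ that vanishes identically here:

$$
F(x, p, p') = p \ln p - \lambda_0 p, \qquad
\frac{\partial F}{\partial p} - \frac{d}{dx}\frac{\partial F}{\partial p'} = \left(\ln p + 1 - \lambda_0\right) - \frac{d}{dx}(0) = 0,
$$

which is exactly the second equation in the question (the constant $-\lambda_0 \cdot 1$ from the $+\lambda_0$ in the constraint bracket is absorbed into $\lambda_0$). Solving $1 + \ln p(x) - \lambda_0 = 0$ gives $p(x) = e^{\lambda_0 - 1}$, a constant; imposing the normalization constraint $\int_a^b p(x)\,dx = 1$ then yields $p(x) = \tfrac{1}{b-a}$, the uniform distribution on $[a, b]$.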