Given a compact set $K\subset \mathbb{R}^3$, we consider $f:K^3\subset\mathbb{R}^9\to \mathbb{R}_0^+$ such that $f(x_1,x_2,x_3)=f(x_{\tau(1)},x_{\tau(2)},x_{\tau(3)})$ for every permutation $\tau$ and $\int_{K^3} f=1$, and we define $g:K^3\to \mathbb{R}_0^+$ as the product of the one-variable marginals, $g(x_1,x_2,x_3)=\int_{K^2}f(x_1,y,z)\,dy\,dz\,\int_{K^2}f(y,x_2,z)\,dy\,dz\,\int_{K^2}f(y,z,x_3)\,dy\,dz$ (by the symmetry of $f$, the three marginal factors are the same function).
Today during my statistical physics class the teacher said that it is easy to show that $\int_{K^3}f(x_1,x_2,x_3)\ln\frac{f(x_1,x_2,x_3)}{g(x_1,x_2,x_3)}\,dx_1dx_2dx_3\geq0$. He said that we just need to add $g-f$ inside the integral. I know that this doesn't change the value of the integral, because $\int g= \int f=1$, but I don't know how to see that $\int_{K^3}\left[f(x_1,x_2,x_3)\ln\frac{f(x_1,x_2,x_3)}{g(x_1,x_2,x_3)}+g(x_1,x_2,x_3)-f(x_1,x_2,x_3)\right]dx_1dx_2dx_3\geq0$.
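Before tackling the proof, it can help to see the claim numerically. Below is a toy discrete analogue of my own (not part of the problem): a symmetric pmf $p$ on $\{0,1,2\}^3$ plays the role of $f$, the product of its one-variable marginals plays the role of $g$, and we check both that $\sum p\ln\frac pq\ge0$ and that adding $q-p$ to the summand leaves the value unchanged.

```python
from itertools import permutations

import numpy as np

# Toy discrete analogue (my own sanity check, not from the problem):
# a symmetric pmf p on {0,1,2}^3 stands in for f, and the product of
# its one-variable marginals q stands in for g.
rng = np.random.default_rng(0)
p = rng.random((3, 3, 3))
p = sum(p.transpose(ax) for ax in permutations(range(3)))  # symmetrize
p /= p.sum()                                               # normalize

m = p.sum(axis=(1, 2))  # one-variable marginal; same in every slot by symmetry
q = m[:, None, None] * m[None, :, None] * m[None, None, :]

kl = np.sum(p * np.log(p / q))                  # sum p log(p/q)
kl_shifted = np.sum(p * np.log(p / q) + q - p)  # teacher's modified summand
print(kl >= 0, np.isclose(kl, kl_shifted))
```

Since $\sum q = \sum p = 1$, the two sums agree exactly, mirroring the "adding $g-f$ changes nothing" step in the question.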
Due to the normalization $\int_{K^3}f=1$, the map $f$ defines a probability measure $P$ via $P(\mathcal E)=\int_{\mathcal E}f(x)\,\mathrm dx$ for an event $\mathcal E\subseteq K^3$; similarly, $g$ defines a probability measure $Q$. In this context, the integral $D(P\|Q)=\int_{K^3} f\ln\frac{f}{g}$ is the relative entropy (Kullback–Leibler divergence). In terms of probability theory, $f$ is the Radon–Nikodym derivative of $P$, and $g$ that of $Q$, with respect to the Lebesgue measure on $K^3$, so $r=\frac{f}{g}$ is the derivative of $P$ with respect to $Q$.

Let $X=(X_1,X_2,X_3)$ have the distribution $P$ and $Y=(Y_1,Y_2,Y_3)$ have the distribution $Q$. Then we can rewrite $D(P\|Q)=\mathbb E[\ln r(X)]=\mathbb E[r(Y)\ln r(Y)]=\int_{K^3} g\,r\ln r$; these are equivalent (general) definitions of the relative entropy, valid as long as $P$ is absolutely continuous with respect to $Q$, which holds here by the definition of $g$: if one of the marginal factors vanishes at some $x_1$, then $f(x_1,\cdot,\cdot)=0$ almost everywhere as well.

Since the map $x\mapsto x\ln x$ is convex, Jensen's inequality yields $D(P\|Q)=\mathbb E[r(Y)\ln r(Y)]\ge\mathbb E[r(Y)]\ln\mathbb E[r(Y)]=1\ln 1=0$, using that $\mathbb E[r(Y)]=\int_{K^3} g\frac{f}{g}=1$ (or, in general, because $r$ is a Radon–Nikodym derivative).

As for your teacher's hint: it amounts to the pointwise inequality $a\ln\frac ab+b-a\ge0$ for $a,b\ge0$, which is just $t\ln t\ge t-1$ applied with $t=a/b$. Taking $a=f(x)$ and $b=g(x)$ shows that the modified integrand in your question is nonnegative at every point, which gives the same conclusion without Jensen.
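As a quick numerical check of the elementary bound $a\ln\frac ab+b-a\ge0$ for $a,b>0$ (equivalently $t\ln t\ge t-1$ with $t=a/b$), which is what the teacher's hint boils down to, here is a small sketch of my own:

```python
import numpy as np

# Spot-check of a*log(a/b) + b - a >= 0 for a, b > 0, i.e. the pointwise
# form of the teacher's hint (my own check, not part of the proof).
rng = np.random.default_rng(1)
a = rng.uniform(1e-6, 10.0, size=1_000_000)
b = rng.uniform(1e-6, 10.0, size=1_000_000)
vals = a * np.log(a / b) + b - a
print(vals.min() >= -1e-12)  # nonnegative up to rounding; zero only at a == b
```

Integrating this pointwise bound over $K^3$ with $a=f$, $b=g$ recovers the inequality from the question directly.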