The PDHG algorithm solves a saddle-point problem of the following form. Let $X,Y$ be finite-dimensional Hilbert spaces, let $G:X\rightarrow (-\infty,\infty]$ and $F:Y\rightarrow (-\infty,\infty]$ be proper, convex and l.s.c. functions, and let $K:X\rightarrow Y$ be a linear operator. The saddle-point problem is $$ \min_{x\in X}\max_{y\in Y} \langle Kx,y\rangle + G(x)- F^*(y). $$ Is it a convention to assume that $F^*$ is the conjugate of some $F$ that is proper, convex and l.s.c.? For example, in the saddle-point formulation of this paper (Theorem 1), the $F$ whose conjugate gives this $F^*$ is (as far as I understand) not given; the authors only use a function that is convex, proper and l.s.c. They then write the saddle-point problem, for $n\in \mathbb{N}$, with $F$ in place of $F^*$ as $$ \begin{aligned} L:{}& \mathbb{R}^{n,n,2}\times \mathbb{R}^{n,n}\rightarrow \mathbb{R},\\ L(m,\phi) &= \|m\|_{1,2} + \langle\phi, \nabla \cdot m + \rho^1 - \rho^0\rangle \\ &=\|m\|_{1,2} + \langle\phi, \nabla \cdot m\rangle - \langle \phi , \rho^0 - \rho^1\rangle\\ &=G(m) + \phi^TKm - F(\phi), \end{aligned} $$ where $\|\cdot\|_{1,2}$ is a finite-dimensional vector norm, $F(\phi) = \sum_{ij}\phi_{ij}(\rho^0_{ij} - \rho^1_{ij})$ with $\rho^0,\rho^1$ non-negative square matrices, and the linear operator $K:\mathbb{R}^{n,n,2}\rightarrow \mathbb{R}^{n,n}$ is the discrete divergence.
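To make the generic saddle-point setup above concrete, here is a minimal PDHG (Chambolle–Pock) sketch. It is *not* the paper's problem: I picked a toy instance with $G(x)=\tfrac12\|x-b\|^2$ and $F(y)=\|y\|_1$, so that $F^*$ is the indicator of the unit $\ell^\infty$-ball and both proximal maps are explicit; $K$ is a forward-difference operator standing in for the discrete divergence.

```python
import numpy as np

# PDHG / Chambolle-Pock for  min_x max_y <Kx, y> + G(x) - F*(y)
# Toy instance (my own choice, not from the paper):
#   G(x) = 0.5*||x - b||^2   ->  prox_{tau G}(v)   = (v + tau*b) / (1 + tau)
#   F(y) = ||y||_1           ->  F* = indicator of {||y||_inf <= 1},
#                                prox_{sigma F*}   = clip to [-1, 1]
rng = np.random.default_rng(0)
n = 50
b = np.cumsum(rng.standard_normal(n))   # a noisy 1-D signal

# K: forward differences, a stand-in for the discrete divergence operator
K = np.diff(np.eye(n), axis=0)          # shape (n-1, n), (Kx)_i = x_{i+1} - x_i

L = np.linalg.norm(K, 2)                # operator norm ||K||
tau = sigma = 0.9 / L                   # step sizes with tau*sigma*||K||^2 < 1

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(n - 1)
for _ in range(500):
    y = np.clip(y + sigma * K @ x_bar, -1.0, 1.0)        # prox of sigma*F*
    x_new = (x - tau * K.T @ y + tau * b) / (1.0 + tau)  # prox of tau*G
    x_bar = 2.0 * x_new - x                              # over-relaxation
    x = x_new

# Primal objective min_x G(x) + F(Kx) = 0.5||x-b||^2 + ||Kx||_1
obj = lambda z: 0.5 * np.sum((z - b) ** 2) + np.sum(np.abs(K @ z))
print(obj(x), obj(b))
```

After a few hundred iterations the iterate should achieve a lower primal objective than the data $b$ itself, which is a cheap sanity check that the scheme is converging.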
My question is: how does this saddle-point formulation match the one given above? The function $F(\phi)$ seems pulled out of thin air to match the assumptions of PDHG. Does this function have to be the conjugate of some other function that satisfies the assumptions of PDHG, or is it enough for $F(\phi)$ to be l.s.c., convex and proper?
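As a sanity check of my own (not from the paper): for the linear $F$ above one can at least write down *a* function whose conjugate it is. Writing $c := \rho^0 - \rho^1$, so that $F(\phi) = \langle \phi, c\rangle$:

```latex
% Conjugate of the linear function F(\phi) = <\phi, c>:
F^*(\psi) = \sup_{\phi}\,\bigl(\langle \phi, \psi\rangle - \langle \phi, c\rangle\bigr)
          = \begin{cases} 0 & \text{if } \psi = c,\\ +\infty & \text{otherwise} \end{cases}
          = \delta_{\{c\}}(\psi).

% Conjugating back recovers F (consistent with Fenchel--Moreau):
\bigl(\delta_{\{c\}}\bigr)^*(\phi) = \sup_{\psi = c}\,\langle \phi, \psi\rangle
                                   = \langle \phi, c\rangle = F(\phi).
```

So the linear $F$ is itself the conjugate of the indicator $\delta_{\{c\}}$, which is proper, convex and l.s.c.; whether that is the intended reading of the paper is exactly what I am unsure about.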