There is a similar question on MSE but I don't think this is a duplicate, as my question is not addressed there.
Suppose $\varphi:\mathbb R^2\to \mathbb R$ is convex. Then, assuming the integrals exist, it seems reasonable that the following would be true:
$$\int^1_0 \varphi (f(t),g(t))\,dt \ge \varphi\left ( \int^1_0 f(t)\,dt,\int^1_0g(t)\,dt \right ).$$
Let $\alpha= \int^1_0 f(t)dt$ and $\beta= \int^1_0 g(t)dt.$ Then $h(x,y)=a(x-\alpha)+b(y-\beta)+\varphi (\alpha,\beta)$, for a suitable choice of constants $a$ and $b$, is a supporting plane, so $\varphi (f(t),g(t))\ge a(f(t)-\alpha)+b(g(t)-\beta)+\varphi (\alpha,\beta)$, and integrating across the inequality gives the result, since the terms $a(f(t)-\alpha)$ and $b(g(t)-\beta)$ integrate to zero.
My question is how to show formally that $h$ is indeed a supporting plane. Is it enough to observe that the slope $m_u$ of the line on the plane through the point $(\alpha, \beta)$ in the direction of a vector $u$ lies between the left and right directional derivatives $D\varphi_u((\alpha,\beta)^-)$ and $D\varphi_u((\alpha,\beta)^+)$; that is, $D\varphi_u((\alpha,\beta)^-)\le m_u\le D\varphi_u((\alpha,\beta)^+)?$
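For what it's worth, here is a quick numerical sanity check of the conjectured inequality, with an illustrative choice of convex $\varphi$ and of the integrands $f$ and $g$ (my own choices, not part of the argument); averaging over a fine uniform grid on $[0,1]$ stands in for the integrals:

```python
import numpy as np

# Illustrative convex test function phi(x, y) = x^2 + y^2.
def phi(x, y):
    return x**2 + y**2

# Sample integrands f(t) = t and g(t) = e^t on a fine uniform grid
# over [0, 1]; the grid average approximates the integral.
t = np.linspace(0.0, 1.0, 200001)
f, g = t, np.exp(t)

lhs = phi(f, g).mean()          # approximates int_0^1 phi(f(t), g(t)) dt
rhs = phi(f.mean(), g.mean())   # approximates phi(int f, int g)
print(lhs >= rhs)               # Jensen predicts True
```

Here $\int_0^1 (t^2+e^{2t})\,dt \approx 3.53$ while $\varphi(1/2,\,e-1) \approx 3.20$, consistent with the inequality.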
Let the probability measure $\mu$ on $\mathbb R^2$ be the image of Lebesgue measure on $[0,1]$ induced by the map $t\mapsto (f(t),g(t))$, so $\mu(A)=\lambda(\{t: (f(t),g(t))\in A\}).$ Then the barycenter of $\mu$ is $\left( \int_0^1 f, \int_0^1 g\right)=\int z\, \mu(dz)$ and the ordinary Jensen's inequality gives $$ \int \varphi(z)\,\mu(dz) \ge \varphi\left ( \int z\, \mu(dz)\right).\tag{*}$$ But the left-hand side is $\int_0^1\varphi(f(t),g(t))\,dt$, so the desired result holds.
Here by ordinary Jensen's inequality I mean the finite dimensional vector version of what the Wikipedia article calls the "general inequality in a probabilistic setting". Here, one has a probability measure $\mu$ on a finite dimensional vector space $T$ supported in a closed convex set $C\subset T$ for which the vectorial expectation (aka center of gravity or barycenter) $\beta=\int z\, \mu(dz)$ exists. One observes that $\beta\in C$, at least when $T$ is finite dimensional. (This is the delicate point in the theory, and requires a Pettis integral formulation in infinite dimensional cases. Clearly $\beta\in H$ for each closed half-space $H$ containing $C$, and hence $\beta$ lies in the intersection of all such $H$, which is $C$ itself.) If $\varphi$ is convex on $C$, the inequality (*) displayed above holds. Here convex means that for all $x_1, x_2\in C$ and all $t\in[0,1]$, one has $$\varphi(tx_1+(1-t)x_2)\le t\varphi(x_1)+(1-t)\varphi(x_2).$$
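As a concrete (if trivial) illustration of the barycenter claim, here is a discretized check, with $C$ the closed unit disk and $\mu$ approximated by equal weights on points of its boundary circle (my own toy example):

```python
import numpy as np

# mu: (discretized) uniform measure on the unit circle, which is the
# boundary of the closed convex set C = closed unit disk.
theta = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta)])  # support of mu

# Barycenter beta = int z mu(dz), approximated by the sample mean.
beta = points.mean(axis=0)

# beta lies in C even though no point of the support is interior to C.
print(np.linalg.norm(beta) <= 1.0)  # True
```

By symmetry $\beta$ is (numerically) the origin, well inside $C$, even though $\mu$ puts no mass in the interior.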
One way to see this form of Jensen's inequality is to note that the epigraph $$E_\varphi=\{(x,y):x\in C, y\ge\varphi(x)\}\subset T\times\mathbb R$$ is convex. To see this, suppose $v_i=(x_i,y_i)\in E_\varphi$, for $i=1,2$, and $t\in[0,1]$. We have $y_i\ge \varphi(x_i)$, so $t y_1+(1-t)y_2\ge t \varphi(x_1) + (1-t)\varphi(x_2)\ge \varphi(tx_1+(1-t)x_2)$, where the first inequality is trivial and the second follows from convexity of $\varphi$. But this already implies $tv_1+(1-t)v_2 \in E_\varphi,$ verifying the convexity of $E_\varphi.$ Finally, note that (*) follows from $\gamma=\int_C (z,\varphi(z))\,\mu(dz)\in E_\varphi$: the first two coordinates of $\gamma$ form the barycenter of $\mu$, and membership in $E_\varphi$ says the last coordinate, $\int\varphi\,d\mu$, dominates $\varphi$ at that barycenter. This last holds because $\gamma$ is the barycenter of the probability measure $\nu$ on $E_\varphi$ that is the image of $\mu$ under the map $z\mapsto (z,\varphi(z))$.
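The last step can also be checked numerically for the sample choices $\varphi(x,y)=x^2+y^2$, $f(t)=t$, $g(t)=e^t$ (illustrative only): approximate $\gamma$, the barycenter of $\nu$, on a grid and verify that it lies in $E_\varphi$:

```python
import numpy as np

def phi(x, y):
    return x**2 + y**2  # illustrative convex function

# Discretize t on [0, 1]; the rows of z are (approximate) samples from
# the pushforward measure mu of Lebesgue measure under t -> (f(t), g(t)).
t = np.linspace(0.0, 1.0, 200001)
z = np.column_stack([t, np.exp(t)])                 # (f(t), g(t))

# nu is the image of mu under z -> (z, phi(z)); its barycenter is gamma.
lifted = np.column_stack([z, phi(z[:, 0], z[:, 1])])
gamma = lifted.mean(axis=0)

# gamma in E_phi means its last coordinate dominates phi of its first
# two coordinates -- which is exactly the inequality (*).
print(gamma[2] >= phi(gamma[0], gamma[1]))          # True
```

The first two coordinates of `gamma` approximate $(\int f, \int g)$ and the third approximates $\int\varphi(f,g)$, so the printed comparison is precisely the discretized form of (*).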