This question comes from the article Deforming convex hypersurfaces by the $n$-th root of the Gaussian curvature, where the authors study the $K^\beta$-curvature flow $$ \frac{\partial F}{\partial t} =- K^{\beta}\,\nu$$ for a family of compact hypersurfaces $F_t$ in Euclidean space. Here $\nu$ is the outward unit normal of $F_t$ and $\beta>0$.
Under the above evolution equation, the Gaussian curvature $K$ satisfies $$\tag{1} \partial_{t}{K}=\beta{K^{\beta}}\Big(\Box{K}+\frac{\beta-1}{K}|\nabla{K}|^{2}_{h}+\frac{1}{\beta}HK\Big),$$ where $\Box=(h^{-1})^{ij}\nabla_{i}\nabla_{j}$, $h_{ij}$ is the second fundamental form, $H$ is the mean curvature, $\nabla$ is the covariant derivative, and $|\cdot|_h$ denotes the norm taken with respect to $h$.
I want to know how to use the maximum principle to show that if $K\geq{C}$ at $t=0$, then $K\geq{C}$ for $t>0$.
The authors claim (in Corollary 4.1) that convexity of $F_t$ follows from applying the maximum principle to (1). It is interesting to note that Hamilton's usual tensor maximum principle is not applicable here: the gradient term in the evolution equation of $h_{ij}$ does not have the right sign.
My idea: if we let $U(t)=\min_{x\in{M}}{K(x,t)}$, then (1) gives $$\partial_{t}U\geq{H}U^{\beta+1}.$$ So if $H\geq{0}$ we are done, but I don't know how to obtain this, since we do not yet know that $F_t$ is convex for all $t$, so I am not sure this approach is feasible.
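For reference, here is a sketch of how the asserted inequality for $U$ is obtained (via Hamilton's trick, so $d/dt$ should be read as a lower Dini derivative; note it is valid only while $F_t$ is strictly convex, which is exactly the point at issue): at a point $x_t$ where the spatial minimum of $K(\cdot,t)$ is attained, $\nabla K=0$ and $\nabla^2 K\geq 0$, so $\Box K\geq 0$ provided $h_{ij}>0$. Then (1) yields

$$\frac{d}{dt}U(t)\ \geq\ \partial_t K\big|_{(x_t,t)}\ =\ \beta K^{\beta}\Big(\underbrace{\Box K}_{\geq 0}+\frac{\beta-1}{K}\underbrace{|\nabla K|^2_h}_{=0}+\frac{1}{\beta}HK\Big)\ \geq\ HK^{\beta+1}\Big|_{(x_t,t)}\ =\ H(x_t,t)\,U^{\beta+1}.$$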
Let $F_t$ be a solution to the flow, defined on $[0, T)$ for some $T>0$. Assuming that $F_0$ is strictly convex, there is $s\in (0,T]$ such that $F_t$ is also strictly convex for all $t\in [0,s)$. Let $$T_1 = \sup\{ s\in [0,T): F_t \text{ is strictly convex for all }t\in [0, s)\}.$$ We want to show that $T_1 = T$.
Assume on the contrary that $T_1 <T$. Then $F_t$ is strictly convex for all $t\in [0,T_1)$, while $F_{T_1}$ is not strictly convex. Hence in $[0,T_1)$ we have $H,U>0$, and thus $$ \partial_{t}U\geq{H}U^{\beta+1} >0$$ in $[0,T_1)$. Hence $K \ge C$ for all $t\in [0,T_1)$. But this contradicts $T_1<T$, since $K(x, T_1) = 0$ for some $x$.
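To spell out the contradiction: since $\partial_t U>0$ on $[0,T_1)$, the function $U$ is increasing, so

$$U(t)\ \geq\ U(0)\ \geq\ C \quad \text{for all } t\in[0,T_1), \qquad\text{hence}\qquad K(x,T_1)\ =\ \lim_{t\to T_1^{-}}K(x,t)\ \geq\ C\ >\ 0 \ \ \text{for every }x.$$

On the other hand, at $T_1$ all principal curvatures are $\geq 0$ by continuity, so failure of strict convexity at $T_1$ forces some principal curvature, and hence $K$, to vanish somewhere.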
As a result, $T_1 = T$. Hence $H>0$ in $[0,T)$ and the same argument implies that $K \ge C$ for all $t\in [0,T)$.
Remark: instead of using $U=\min K$, one can also argue directly that $K \ge C$ in $[0, T_1)$, which looks more like the maximum principle. For each $s<T_1$, the function $K$ attains its minimum over $M \times [0,s]$ at some $(x, t)$. We show that $t =0$: if not, then $t>0$, and at $(x, t)$ we have $$ \partial _t K \le 0, \quad \Box K \ge 0, \quad \nabla K = 0, $$ which by (1) implies $$ 0 \ \ge\ \partial_t K \ \ge\ H K^{\beta+1} \ \ \text{ at } (x, t). $$ But this is impossible since $H, K>0$ there (as $F_t$ is strictly convex on $[0,T_1)$). Thus the minimum must be attained at $t=0$, so $K \ge \min_M K(\cdot, 0) \ge C$ on $M\times[0,s]$.