I am wondering if a "reverse necessary condition" implies sufficiency. Suppose your objective is to solve $$ \min_{x\in X} f(x) $$
where $X$ is a convex and compact subset of $\mathbb{R}^n$ and $f(\cdot)$ is a continuously differentiable function. The necessary condition for $x^*\in X$ to be optimal is that $$ (x-x^*)f'(x^*)\geq 0 $$ for all $x\in X$. However, suppose that $f$ is not convex, so we don't know whether this necessary condition is also sufficient. Instead, consider the following condition: $$ \tag A f'(x)(x^*-x)\leq 0 $$ for all $x\in X$. Essentially, (A) says that starting from any $x \in X$ and moving in the "direction" of $x^*$ is always (weakly) beneficial, at least to first order. More formally, since $f$ is differentiable, (A) is implied by $f(\epsilon x^*+(1-\epsilon)x)\leq f(x)$ holding for all small enough $\epsilon >0$.
Does (A) imply that $x^*$ is optimal?
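To make (A) concrete, here is a small numerical sketch. The instance is my own assumption, not from the question: $X=[-2,2]$ and $f(x)=1-e^{-x^2}$, which is nonconvex (concave for $|x|>1/\sqrt{2}$) but satisfies (A) at $x^*=0$, since $f'(x)(x^*-x) = -2x^2e^{-x^2}\leq 0$.

```python
import numpy as np

# Hypothetical 1-D instance (an assumption for illustration, not from the post):
# X = [-2, 2], f(x) = 1 - exp(-x^2) is nonconvex but satisfies (A) at x* = 0.
def f(x):
    return 1.0 - np.exp(-x**2)

def fprime(x):
    return 2.0 * x * np.exp(-x**2)

x_star = 0.0
grid = np.linspace(-2.0, 2.0, 2001)

# Condition (A): f'(x) * (x* - x) <= 0 for every x in X.
condition_A = fprime(grid) * (x_star - grid)
print(condition_A.max() <= 1e-12)            # (A) holds on the grid

# And x* is indeed a global minimizer over the grid.
print(f(x_star) <= f(grid).min() + 1e-12)
```

On this example, (A) holds everywhere on the grid and $x^*$ is the global minimizer, which is consistent with the answer below.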
Yes, it does, and nothing beyond the stated assumptions ($X$ convex and $f$ continuously differentiable) is needed. Suppose, for contradiction, that there exists a point $y \in X$ with $f(y) < f(x^*)$. Define $$g(t) = f((1-t)x^* + ty) = f(x^* + t(y-x^*)).$$ Since $X$ is convex, the point $x_t := x^* + t(y-x^*)$ lies in $X$ for every $t\in[0,1]$, so (A) applies along the whole segment. Moreover $g(0) = f(x^*)$ and $g(1) = f(y) < g(0)$. For $t>0$, we have $$\begin{aligned} g'(t) &= \langle \nabla f(x_t), y-x^*\rangle \\ &= -t^{-1} \langle \nabla f(x_t), x^* - x_t \rangle \geq 0, \end{aligned}$$ where the inequality is exactly (A) applied at $x_t$ (and the continuity of the derivative ensures that $g'(0)\geq 0$ as well). Therefore, $$0 \leq \int_0^1 g'(t)\, dt = g(1) - g(0) < 0,$$ which is a contradiction.
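The path argument can also be checked numerically on a toy instance (again an assumption for illustration: $f(x)=1-e^{-\|x\|^2}$ in two dimensions, $x^*=0$, and an arbitrary candidate point $y$): $g'(t)$ stays nonnegative along the segment, so $g(1)-g(0)=\int_0^1 g'(t)\,dt \geq 0$ and no strictly better point can exist.

```python
import numpy as np

# Toy 2-D instance (an assumption for illustration, not from the post):
# f(x) = 1 - exp(-||x||^2) on a ball around the origin, with x* = 0.
def f(x):
    return 1.0 - np.exp(-np.dot(x, x))

def grad_f(x):
    return 2.0 * x * np.exp(-np.dot(x, x))

x_star = np.zeros(2)
y = np.array([0.8, -0.5])          # any candidate "better" point in X

ts = np.linspace(0.0, 1.0, 1001)
# g'(t) = <grad f(x* + t(y - x*)), y - x*>; condition (A) forces g'(t) >= 0.
gprime = np.array([grad_f(x_star + t * (y - x_star)) @ (y - x_star) for t in ts])
print((gprime >= -1e-12).all())    # g' is nonnegative along the segment

# Fundamental theorem of calculus: g(1) - g(0) equals the integral of g',
# which is >= 0, so f(y) >= f(x*).
g0, g1 = f(x_star), f(y)
dt = ts[1] - ts[0]
integral = ((gprime[:-1] + gprime[1:]) / 2.0).sum() * dt   # trapezoid rule
print(np.isclose(g1 - g0, integral, atol=1e-4))
```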