Convexity at a point

This is mainly a fact-finding question. Consider a function $f:[a,b]\rightarrow\mathbb{R}$ and $c\in[a,b]$. Is there any notion of convexity at the point $c$?

There are 2 best solutions below

There are a few notions of convexity at a point that I am aware of:

Let $I$ be an open interval and $f$ be a real-valued function defined on $I$. Then, with respect to $I$:

  1. a point $x_0 \in I$ is a point of convexity of $f$ if \begin{equation} f(x_0) \le tf(x_1) + (1-t)f(x_2) \end{equation} for any $x_1,x_2 \in I$ and $0 < t < 1$ such that $x_0=tx_1 + (1-t)x_2$.

  2. $f$ is convex at $x_0\in I$ if for any $x_1 \in I$ other than $x_0$, and any $0 < t < 1$, \begin{equation} f(tx_0+(1-t)x_1) \le tf(x_0) + (1-t)f(x_1). \end{equation}

  3. $f$ is punctually convex (or p-convex, for short) at $x_0\in I$ if \begin{equation} f(x_0) + f(x_1+x_2-x_0) \le f(x_1) + f(x_2) \end{equation} whenever $x_0$ is strictly between $x_1,x_2 \in I$.

  4. $f$ is totally convex at $x_0\in I$ if $\varphi(x,x_0):=\dfrac{f(x)-f(x_0)}{x-x_0}$ is an increasing function of $x$ on $I\setminus \{x_0\}$. Equivalently, $\Psi(x_0,x_1,x_2):=\dfrac{\varphi(x_2,x_0)-\varphi(x_1,x_0)}{x_2-x_1} \ge 0$ for any distinct $x_1,x_2$ in $I \setminus\{x_0\}$.
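As a quick illustration of how weak the first notion is: any global minimizer $x_0$ of $f$ is automatically a point of convexity in sense 1, since $f(x_0) \le \min\{f(x_1),f(x_2)\} \le tf(x_1)+(1-t)f(x_2)$. The sketch below (my own example, not taken from the cited papers) checks this numerically for $f(x) = -\cos x$ at $x_0 = 0$, even though $f$ is not convex on $(-\pi,\pi)$.

```python
import math
import random

def is_point_of_convexity(f, x0, a, b, trials=10000, tol=1e-12):
    """Numerically test definition 1: f(x0) <= t*f(x1) + (1-t)*f(x2)
    whenever x0 = t*x1 + (1-t)*x2 with x1, x2 in (a, b) and 0 < t < 1."""
    rng = random.Random(0)
    for _ in range(trials):
        x1 = rng.uniform(a, b)
        x2 = rng.uniform(a, b)
        # x0 must lie strictly between x1 and x2 so that
        # t = (x0 - x2)/(x1 - x2) lands in (0, 1)
        if not (min(x1, x2) < x0 < max(x1, x2)):
            continue
        t = (x0 - x2) / (x1 - x2)
        if f(x0) > t * f(x1) + (1 - t) * f(x2) + tol:
            return False
    return True

f = lambda x: -math.cos(x)   # concave near the endpoints of (-pi, pi), so not convex
print(is_point_of_convexity(f, 0.0, -math.pi, math.pi))   # True: 0 is a global minimum
```

By contrast, $f$ fails ordinary convexity: the chord from $(-3, f(-3))$ to $(-2.5, f(-2.5))$ lies below the graph.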

The first one appears in "The Mechanics and Thermodynamics of Continuous Media" by M. Šilhavý. It also appears in "Relative convexity and its applications", Aequat. Math. 89, 1389–1400 (2015), by C. P. Niculescu and I. Roventa.

The second one, as pointed out in another answer, appears in "Nonlinear Programming: Theory and Algorithms" by M. S. Bazaraa and C. M. Shetty.

The third one appears in "On a class of punctual convex functions", Math. Inequal. Appl. 17, 389–399 (2014), by A. Florea and E. Paltanea.

The fourth one appears in Bull. Math. Soc. Sci. Math. Roumanie (N.S.) 66(114) (2023), no. 2, 235–250, by W. Pong and S. Raianu. The relations between these notions are discussed in that article.

I have encountered it in 'Nonlinear Programming: Theory and Algorithms' by Bazaraa, where it is defined as follows:

$$ f \textrm{ is convex at } x' \iff \forall \lambda \in [0,1],\ \forall x:\ f(\lambda x' + (1-\lambda)x) \leq \lambda f(x') + (1-\lambda)f(x). $$ So it is essentially the same definition as ordinary convexity, except that one endpoint of the line segment is fixed at $x'$.

In the book mentioned above, the concept is used in some KKT-related optimality conditions.
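To see that this notion is strictly weaker than ordinary convexity, here is a small numerical sketch with a piecewise-linear function of my own construction (not taken from the book). Since $f(0)=0$, Bazaraa's condition at $x'=0$ reduces to $f(sx) \le s\,f(x)$ for $0 < s < 1$, i.e. the ray slopes $f(x)/x$ are nondecreasing; the secant slopes of this $f$, however, dip from $1.5$ to $1.4$, so $f$ is not convex.

```python
import random

# Hypothetical example: piecewise-linear f on [0, 4] through the knots below.
# Ray slopes f(x)/x at the knots are 1, 1.25, 1.3, 1.325 (nondecreasing),
# but secant slopes are 1, 1.5, 1.4, 1.4 (they dip, so f is not convex).
XS = [0.0, 1.0, 2.0, 3.0, 4.0]
YS = [0.0, 1.0, 2.5, 3.9, 5.3]

def f(x):
    """Piecewise-linear interpolation through (XS[i], YS[i])."""
    for i in range(len(XS) - 1):
        if XS[i] <= x <= XS[i + 1]:
            t = (x - XS[i]) / (XS[i + 1] - XS[i])
            return (1 - t) * YS[i] + t * YS[i + 1]
    raise ValueError("x outside [0, 4]")

def convex_at(f, x0, a, b, trials=20000, tol=1e-9):
    """Check Bazaraa's condition f(t*x0 + (1-t)*x) <= t*f(x0) + (1-t)*f(x)
    for randomly sampled x in [a, b] and t in [0, 1]."""
    rng = random.Random(0)
    for _ in range(trials):
        x = rng.uniform(a, b)
        t = rng.uniform(0.0, 1.0)
        if f(t * x0 + (1 - t) * x) > t * f(x0) + (1 - t) * f(x) + tol:
            return False
    return True

print(convex_at(f, 0.0, 0.0, 4.0))   # True: convex at 0 in Bazaraa's sense
# But f is not convex: the chord from (1, f(1)) to (3, f(3)) lies below f(2).
print((f(1) + f(3)) / 2 < f(2))      # True
```

The same checker reports a violation if the anchor is moved to a point where the ray-slope condition fails, which is a cheap way to probe where a given function is convex in this sense.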