$\epsilon$-normals to convex sets

I am reading the book Variational Analysis and Generalized Differentiation I by B. Mordukhovich. On page 6 it is stated that the inclusion $$ \hat{N}_{\varepsilon }\left( \bar{x};\Omega \right) \supset \hat{N}\left( \bar{x};\Omega \right) +\varepsilon \mathbb{B}^{\ast } $$ holds, where $\mathbb{B}^{\ast }$ denotes the closed unit ball in the dual space $X^{\ast }$, and that if $\Omega $ is convex, then for any $\varepsilon \geq 0$ we have $$ \hat{N}_{\varepsilon }\left( \bar{x};\Omega \right) =\{x^{\ast }\in X^{\ast }\mid \langle x^{\ast },x-\bar{x}\rangle \leq \varepsilon \Vert x-\bar{x} \Vert \text{ whenever }x\in \Omega \}. $$ Furthermore, $\hat{N}\left( \bar{x};\Omega \right) :=\hat{N}_{0}\left( \bar{x};\Omega \right) $. Mordukhovich says that for a convex set $\Omega $ the above inclusion holds as an equality. Unfortunately, I cannot see why the reverse inclusion holds. I would be very grateful for any advice.
1 Answer
Proposition 1.
(Boris S. Mordukhovich, Variational Analysis and Generalized Differentiation I, page 5)
Let $\Omega$ be a nonempty convex set in a real Banach space $X$, and let $\bar{x}\in \Omega$ and $\varepsilon\geq 0$ be given. Then $$ \widehat{N}_\varepsilon(\bar{x}; \Omega):= \{x^*\in X^*| \limsup_{x\overset{\Omega}{\rightarrow}\bar{x}}\frac{\langle x^*, x-\bar{x}\rangle}{\|x-\bar{x}\|}\leq \varepsilon\} $$ is convex and closed in the norm topology of $X^*$. Moreover, if $X$ is reflexive, then it is weak$^*$ closed in $X^*$.
Proposition 2.
(Boris S. Mordukhovich, Variational Analysis and Generalized Differentiation I, Proposition 1.3)
Let $\Omega$ be a nonempty convex set in a real Banach space $X$. Then $$ \widehat{N}_\varepsilon(\bar{x}; \Omega)=\{x^*\in X^*| \langle x^*, x-\bar{x}\rangle\leq \varepsilon\|x-\bar{x}\| \; \text{whenever}\; x\in\Omega\} $$ for any $\varepsilon\geq 0$ and $\bar{x}\in \Omega$. In particular, $\widehat{N}(\bar{x}; \Omega)$ agrees with the normal cone of convex analysis, i.e. $$ \widehat{N}(\bar{x}; \Omega)=\{x^*\in X^*| \langle x^*, x-\bar{x}\rangle\leq 0 \; \text{whenever}\; x\in\Omega\}. $$
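To get a feel for Proposition 2 in the simplest possible setting (this example is mine, not taken from the book), take $X=\mathbb{R}$, $\Omega=(-\infty,0]$ and $\bar{x}=0$. Then $$ \widehat{N}_\varepsilon(0;\Omega)=\{x^*\in\mathbb{R}\mid x^*x\leq\varepsilon|x|\ \text{for all } x\leq 0\}=[-\varepsilon,+\infty), $$ since for $x<0$ the condition reads $x^*\geq -\varepsilon$. In particular $\widehat{N}(0;\Omega)=[0,+\infty)$, and the equality asked about in the question can already be seen here: $$ \widehat{N}(0;\Omega)+\varepsilon\mathbb{B}^*=[0,+\infty)+[-\varepsilon,\varepsilon]=[-\varepsilon,+\infty)=\widehat{N}_\varepsilon(0;\Omega). $$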
Using Propositions 1 and 2, we obtain the following result.
Proposition 3.
If $\Omega$ is a nonempty convex subset of a real reflexive Banach space $X$, then $$ \widehat{N}_\varepsilon(\bar{x}; \Omega)= \widehat{N}(\bar{x}; \Omega)+\varepsilon \mathbb{B}^*. $$ Proof. $(\supset)$ Suppose that $x_0^*\in \widehat{N}(\bar{x}; \Omega)$ and $u^*\in \varepsilon \mathbb{B}^*$. Then it follows from Proposition 2 that $$ \langle x_0^*, x-\bar{x}\rangle\leq 0 \quad \forall x\in \Omega. $$ Hence \begin{equation*} \begin{array}{lll} \langle x_0^*+u^*, x-\bar{x}\rangle&=&\langle x_0^*, x-\bar{x}\rangle+\langle u^*, x-\bar{x}\rangle\\ &\leq&0+\|u^*\|\|x-\bar{x}\|\\ &\leq& \varepsilon\|x-\bar{x}\| \end{array} \end{equation*} for all $x\in \Omega$. This implies that $x_0^*+u^*\in \widehat{N}_\varepsilon(\bar{x}; \Omega)$. Therefore $\widehat{N}_\varepsilon(\bar{x}; \Omega)\supset\widehat{N}(\bar{x}; \Omega)+\varepsilon \mathbb{B}^*$.
$(\subset)$ Let $\widehat{N}^*:=\widehat{N}(\bar{x}; \Omega)+\varepsilon \mathbb{B}^*$. Since $X$ is reflexive, it follows from Proposition 1 that $\widehat{N}(\bar{x}; \Omega)$ is convex and weak$^*$ closed in $X^*$. Moreover, $\varepsilon \mathbb{B}^*$ is convex and, by the Banach-Alaoglu theorem, weak$^*$ compact in $X^*$. Hence $\widehat{N}^*$, being the sum of a weak$^*$ closed set and a weak$^*$ compact set, is nonempty ($0\in \widehat{N}^*$), weak$^*$ closed and convex in $X^*$.
Suppose that there exists $x^*\in X^*$ such that $$ x^*\in \widehat{N}_\varepsilon(\bar{x}; \Omega) \quad \text{and} \quad x^*\notin \widehat{N}^*. $$ By the separation theorem (see W. Rudin, Functional Analysis, Theorem 3.4(b)), applied in $X^*$ equipped with its weak$^*$ topology, whose dual is $X$, there exists $x\in X$ such that $$ \langle x^*, x\rangle>\sup_{f^*\in \widehat{N}^*}\langle f^*, x\rangle. $$ Since $0\in \widehat{N}(\bar{x}; \Omega)$ and $0\in \varepsilon\mathbb{B}^*$, both $\widehat{N}(\bar{x}; \Omega)$ and $\varepsilon\mathbb{B}^*$ are contained in $\widehat{N}^*$, so the above inequality gives $$ \begin{cases} \langle x^*, x\rangle>\langle f_0^*, x\rangle \quad \forall f_0^*\in \widehat{N}(\bar{x}; \Omega),&\\ \langle x^*, x\rangle>\sup_{f_1^*\in \varepsilon\mathbb{B}^*}\langle f_1^*, x\rangle=\varepsilon\|x\|.& \end{cases} $$ Since $\widehat{N}(\bar{x}; \Omega)$ is a cone, the first inequality forces (by scaling $f_0^*$) $$ \begin{cases} 0\geq\langle f_0^*, x\rangle \quad \forall f_0^*\in \widehat{N}(\bar{x}; \Omega),&\\ \langle x^*, x\rangle>\varepsilon\|x\|.& \end{cases} $$ By Proposition 2, $\widehat{N}(\bar{x}; \Omega)$ agrees with the normal cone of convex analysis, and so $$ \begin{cases} x\in (\widehat{N}(\bar{x}; \Omega))^*=T(\bar{x}; \Omega)=\overline{\text{cone}(\Omega-\bar{x})},\\ \langle x^*, x\rangle>\varepsilon\|x\|,& \end{cases} $$ where $(\widehat{N}(\bar{x}; \Omega))^*$ denotes the (negative) polar cone of $\widehat{N}(\bar{x}; \Omega)$ and $T(\bar{x};\Omega)$ is the tangent cone of $\Omega$ at $\bar{x}$; the first equality holds by the bipolar theorem, since $\Omega$ is convex. Then there exist $\{t_k\}\subset\mathbb{R}^+$ and $\{x_k\}\subset\Omega$ such that $t_k(x_k-\bar{x})\rightarrow x$. Since the function $y\mapsto\langle x^*,y \rangle-\varepsilon\|y\|$ is continuous and positive at $x$, for sufficiently large $k$ we have $$ \langle x^*,t_k(x_k-\bar{x}) \rangle>\varepsilon\|t_k(x_k-\bar{x})\|, $$ or equivalently, dividing by $t_k>0$, $$ \langle x^*,x_k-\bar{x} \rangle>\varepsilon\|x_k-\bar{x}\|. $$ By Proposition 2 this implies that $x^*\notin \widehat{N}_\varepsilon(\bar{x}; \Omega)$, which is a contradiction. $\square$
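As a quick sanity check of Proposition 3 (my own example, not taken from the book or the proof above), take $X=\mathbb{R}^2$ with the Euclidean norm, $\Omega=\mathbb{R}\times\{0\}$ and $\bar{x}=(0,0)$. By Proposition 2, $$ \widehat{N}_\varepsilon(\bar{x};\Omega)=\{(a,b)\in\mathbb{R}^2\mid at\leq\varepsilon|t|\ \text{for all } t\in\mathbb{R}\}=\{(a,b)\mid |a|\leq\varepsilon\}, $$ while $\widehat{N}(\bar{x};\Omega)=\{0\}\times\mathbb{R}$ and $\varepsilon\mathbb{B}^*$ is the Euclidean disc of radius $\varepsilon$. Their sum is again the strip $\{(a,b)\mid |a|\leq\varepsilon\}$, so the equality $\widehat{N}_\varepsilon(\bar{x};\Omega)=\widehat{N}(\bar{x};\Omega)+\varepsilon\mathbb{B}^*$ of Proposition 3 is confirmed in this case: adding $\varepsilon\mathbb{B}^*$ exactly "thickens" the normal cone into the set of $\varepsilon$-normals.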