I'm reading a book by Ulbrich on Nonlinear Optimization. In the chapter about convex optimization he says that, given an NLP $$\min f(x) \quad \text{s.t.} \quad g(x) \leq 0,\ h(x) = 0,$$ the NLP is convex if $f, g_i$ are convex functions and $h$ is affinely linear.
After that he shows that the feasible set $X := \{x \in \mathbb{R}^n \mid g(x) \leq 0, h(x) = 0\}$ is then convex: for all $x, y \in X$ and $\lambda \in [0,1]$ it holds that $$g(x + \lambda (y-x)) \leq g(x) + \lambda (g(y) - g(x)) \leq 0 \quad \text{(clear!)}$$
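To convince myself of the inequality chain for $g$, I tried a quick numerical check. The constraint $g$ below (unit disc, $g(x) = x_1^2 + x_2^2 - 1$) and the points $x, y$ are made up for illustration; they are not from the book:

```python
import numpy as np

# Hypothetical convex constraint g(x) = x1^2 + x2^2 - 1 (unit disc),
# chosen only to illustrate the inequality chain.
def g(x):
    return x[0]**2 + x[1]**2 - 1.0

x = np.array([0.5, 0.0])   # feasible: g(x) <= 0
y = np.array([0.0, -0.8])  # feasible: g(y) <= 0

for lam in np.linspace(0.0, 1.0, 11):
    z = x + lam * (y - x)
    lhs = g(z)
    rhs = g(x) + lam * (g(y) - g(x))
    # convexity gives lhs <= rhs, and rhs <= 0 since g(x), g(y) <= 0
    assert lhs <= rhs + 1e-12 and rhs <= 1e-12
print("convexity chain verified on this example")
```

So for the inequality part I agree with the "clear!".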
and
$$h(x + \lambda (y-x)) \overset{?}{=} h(x) + \lambda (h(y) - h(x)) = 0.$$
That equality cannot simply hold, can it? I mean, affinely linear does not mean that $h$ behaves linearly.
But we can say that its first derivative does, and that it is additionally constant, right?
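To test my doubt numerically, I tried a small check with a made-up affine $h(x) = Ax + b$ ($A$, $b$ chosen arbitrarily, just for illustration). The offset $b$ cancels in $h(y) - h(x)$, and the check seems to satisfy the equality exactly, which adds to my confusion:

```python
import numpy as np

# Hypothetical affine h(x) = A x + b; A and b are arbitrary illustrative values.
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([0.5, -2.0])

def h(x):
    return A @ x + b

x = np.array([1.0, -1.0])
y = np.array([0.3, 2.0])

for lam in np.linspace(0.0, 1.0, 11):
    z = x + lam * (y - x)
    # b cancels in h(y) - h(x), so the identity appears to hold exactly
    assert np.allclose(h(z), h(x) + lam * (h(y) - h(x)))
print("affine interpolation identity verified on this example")
```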
Greetings,
Dom