Edit: as pointed out by dcolazin in the comments, this is indeed a mistake in the 2nd edition of the book. The example of the piecewise-linear function was removed entirely in the 3rd edition.
I'm reading Simon Haykin's Neural Networks book. In Chapter 1, there is a section on three standard activation functions, one of which is the piecewise-linear function, defined as follows:
$$\phi(v) = \begin{cases} 1, & \text{if $v\ge1/2$} \\ v, & \text{if $1/2\gt v\gt-1/2$} \\ 0, & \text{if $v\le-1/2$} \end{cases}$$
The following graph of this function is also presented in the book:
[figure: piecewise-linear function graph taken from the aforementioned textbook]
The first and third pieces of the function are represented correctly in the graph, but the second one, where $\phi(v)=v$, is not. For example, the graph shows $\phi=0.6$ at $v=0.1$ and $\phi=0.5$ at $v=0$. The function actually plotted is therefore the one whose second piece is $\phi(v)=v+1/2$.
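To make the discrepancy concrete, here is a minimal sketch (the helper names are my own) comparing the function as printed in the book with the one the graph appears to show. Note that the printed version is discontinuous at $v=\pm 1/2$, while the graphed version is continuous and equals $\min(\max(v+1/2,\,0),\,1)$:

```python
def phi_as_defined(v):
    """Piecewise-linear activation as printed in the book: middle piece is v.
    Discontinuous at v = +-1/2 (e.g. jumps from -0.5 to 0 at v = -1/2)."""
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v

def phi_as_graphed(v):
    """Middle piece v + 1/2, which matches the figure and is continuous.
    Equivalent to clipping v + 1/2 to the interval [0, 1]."""
    return max(0.0, min(1.0, v + 0.5))

for v in (0.0, 0.1):
    print(v, phi_as_defined(v), phi_as_graphed(v))
```

Running this reproduces the values read off the graph: at $v=0$ the printed formula gives $0$ but the graph shows $0.5$, and at $v=0.1$ the formula gives $0.1$ but the graph shows $0.6$.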
This mistake surprises me, since it appears in a very popular textbook, on a very well-defined topic (basic activation functions in neural networks). While researching the problem, I came across a paper showing the exact same graph while referring to the same function (by name only; no definition is given).
I haven't been able to find a reasonable explanation online, not even in this duplicate StackOverflow question. I can't simply ignore this (apparent) error, since it is so fundamental.