While reading this paper, on page 4 I encountered an upper bound called the Hellman-Raviv bound. I understand the lower bound in the picture, but could anyone please tell me how to obtain the upper bound, which is half of the conditional entropy?
And why can the author obtain the lower bound using only the conditional entropy?

Let me try to explain how the upper bound is derived.
First we prove a simpler version. Note that in $P(g(X)\neq Y)$, $g$ is the Bayes decision rule, which predicts $Y$ from the observation $X$ as: $$ g(x)=\arg\max_{y\in\mathcal Y} P(Y=y|X=x). $$
Next we condition the error probability on $X=x$ and find an upper bound: $$ P(Y\neq g(x)|X=x)= 1-P(Y=g(x)|X=x) \leq -\log P(Y=g(x)|X=x), $$ where we used $1-t\leq -\log(t)$ for $t\in(0,1]$. But $P(Y=g(x)|X=x)\geq P(Y=y|X=x)$ for all $y\in\mathcal Y$. Therefore: $$ P(Y\neq g(x)|X=x)\leq -\log P(Y=y|X=x). $$ In particular this holds for $y=Y$, so taking the expectation of both sides with respect to the joint distribution of $X$ and $Y$ results in $$ P(Y\neq g(X))\leq \mathbb E[-\log P(Y|X)] = H(Y|X) \quad \text{nats}. $$
The entropy in the above inequality is in base $e$ (nats); changing the base to 2 gives: $$ P(Y\neq g(X))\leq \ln(2)\, H(Y|X), $$ with $H(Y|X)$ now measured in bits.
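As a quick numerical sanity check of this weaker bound (a sketch of my own, not from the paper; the joint distribution is an arbitrary random example):

```python
import math
import random

random.seed(1)
# A random joint distribution over X in {0,1,2} and Y in {0,1,2}.
pXY = [[random.random() for _ in range(3)] for _ in range(3)]
s = sum(sum(row) for row in pXY)
pXY = [[v / s for v in row] for row in pXY]

# Bayes error: P(Y != g(X)) = 1 - sum_x max_y P(X=x, Y=y).
p_err = 1 - sum(max(row) for row in pXY)

# Conditional entropy H(Y|X) in bits: sum_{x,y} P(x,y) log2(P(x)/P(x,y)).
pX = [sum(row) for row in pXY]
H = sum(pXY[x][y] * math.log2(pX[x] / pXY[x][y])
        for x in range(3) for y in range(3))

# The bound P(Y != g(X)) <= ln(2) * H(Y|X) [bits] should hold.
assert p_err <= math.log(2) * H
```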
But still $\ln(2)\approx 0.69$, and the factor $\frac 12$ is missing compared to the result in the paper. The Hellman-Raviv bound is derived for the case where $\mathcal Y=\{0,1\}$, and we prove it for that case. The trick is to see that in the binary case: $$ P(Y=g(x)|X=x)=\max_{y} P(Y=y|X=x)\geq \frac 12, $$ and therefore $P(Y\neq g(x)|X=x)\leq \frac 12$. For this case we can use the following inequality: $$ \min(p,1-p)\leq\frac 12 h_b(p)=\frac 12[-p\log_2(p)-(1-p)\log_2(1-p)]. $$ The inequality follows from concavity of $h_b$: for $p\in[0,\frac 12]$, $$ h_b(p)\geq (1-2p)\,h_b(0)+2p\,h_b(\tfrac 12)=2p=2\min(p,1-p), $$ and the case $p\in[\frac 12,1]$ follows by the symmetry $h_b(p)=h_b(1-p)$.
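The inequality $\min(p,1-p)\leq\frac 12 h_b(p)$ is easy to check numerically on a grid (a small Python sketch of my own):

```python
import math

def h_b(p):
    """Binary entropy in bits; h_b(0) = h_b(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Verify min(p, 1-p) <= (1/2) h_b(p) on a fine grid over [0, 1].
for i in range(1001):
    p = i / 1000
    assert min(p, 1 - p) <= 0.5 * h_b(p) + 1e-12
```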
Using this inequality with $p=P(Y=1|X=x)$, so that $\min(p,1-p)=P(Y\neq g(x)|X=x)$ and $h_b(p)=H(Y|X=x)$, we have: $$ P(Y\neq g(x)|X=x)\leq \frac 12 h_b(p)= \frac 12 H(Y|X=x). $$ Taking the expectation with respect to $X$ yields the desired inequality.
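The final binary bound $P(Y\neq g(X))\leq \frac 12 H(Y|X)$ can also be checked numerically (again a sketch of my own; the marginal of $X$ and the posteriors $P(Y=1|X=x)$ are arbitrary random choices):

```python
import math
import random

def h_b(p):
    """Binary entropy in bits; h_b(0) = h_b(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(0)
# Random marginal of X over {0, 1, 2, 3} and random posteriors P(Y=1|X=x).
pX = [random.random() for _ in range(4)]
s = sum(pX)
pX = [q / s for q in pX]
post = [random.random() for _ in range(4)]

# Bayes error = E_X[min(p, 1-p)]; conditional entropy = E_X[h_b(p)].
bayes_err = sum(px * min(p, 1 - p) for px, p in zip(pX, post))
H_Y_given_X = sum(px * h_b(p) for px, p in zip(pX, post))

# Hellman-Raviv bound for the binary case.
assert bayes_err <= 0.5 * H_Y_given_X
```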
For the non-binary case, the proof is more demanding. Check page 5 of this paper.