I have a question about Lemma 26.6 of the book *Understanding Machine Learning* by Shalev-Shwartz and Ben-David (online version: https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/copy.html).
Lemma 26.6 $\quad$ For any $A\subset \mathbb R^m$, $c\in \mathbb R$, and $\mathbf a_0\in \mathbb R^m$, we have $$R(\{c\mathbf a+\mathbf a_0:\mathbf a\in A\})\le \vert c\vert R(A),$$
where $R(\cdot)$ denotes the Rademacher complexity of a set, i.e. $$R(A)=\frac{1}{m}\underset{\pmb \sigma\sim \{\pm1\}^m }{\mathbb E}\sup_{\mathbf a\in A} \sum_{i=1}^m \sigma_ia_i,$$
where $\sigma_i$'s (components of $\pmb \sigma$) are i.i.d. uniformly distributed on $\{-1, +1\}$. My question: Isn't the inequality above in fact an equality, i.e.
$$R(\{c\mathbf a+\mathbf a_0:\mathbf a\in A\})= \vert c\vert R(A)?$$
For simplicity, let's consider $R(\{c\mathbf a:\mathbf a\in A\})$ and denote $\{c\mathbf a:\mathbf a\in A\}$ by $cA$; the translation by $\mathbf a_0$ shouldn't matter, since the extra term $\sum_{i=1}^m \sigma_i a_{0,i}$ does not depend on $\mathbf a$, so it pulls out of the supremum and has zero expectation. I think $R(cA)=\vert c\vert R(A)$. I argue as follows:
First of all, if $c>0$, clearly $R(cA)=cR(A)$, since $$R(cA)=\frac{1}{m}\underset{\pmb \sigma\sim \{\pm1\}^m }{\mathbb E}\sup_{\mathbf a\in A} \sum_{i=1}^m \sigma_i(ca_i)=\frac{1}{m}\underset{\pmb \sigma\sim \{\pm1\}^m }{\mathbb E}c\left (\sup_{\mathbf a\in A} \sum_{i=1}^m \sigma_i a_i\right)=cR(A).$$
(The case $c=0$ is trivial: both sides are $0$.) Next, if $c<0$, write $c\mathbf a=\vert c\vert(-\mathbf a)$, so $cA=\vert c\vert(-A)$ and, by the previous step, $R(cA)=\vert c\vert R(-A)$. So the claim holds, provided $R(-A)=R(A)$. But this is true, because
$$\begin{align*}R(-A)&=\frac{1}{m}\underset{\pmb \sigma\sim \{\pm1\}^m }{\mathbb E}\sup_{\mathbf a\in A} \sum_{i=1}^m \sigma_i(-a_i) \\ &=\frac{1}{m}\underset{\pmb \sigma\sim \{\pm1\}^m }{\mathbb E}\sup_{\mathbf a\in A} \sum_{i=1}^m (-\sigma_i)a_i \\ &= R(A),\end{align*}$$
where the last equality holds because $-\pmb\sigma$ has the same distribution as $\pmb\sigma$ (the $\sigma_i$'s are i.i.d. uniform on $\{-1, +1\}$). Am I mistaken somewhere?
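For what it's worth, here is a quick numerical sanity check (my own sketch, not from the book): for a small finite $A\subset\mathbb R^m$ we can compute $R(\cdot)$ exactly by enumerating all $2^m$ sign vectors, and then compare $R(cA+\mathbf a_0)$ with $\vert c\vert R(A)$ for an arbitrary $c<0$ and shift $\mathbf a_0$.

```python
import itertools
import random

def rademacher(A):
    """Exact Rademacher complexity of a finite set A of vectors in R^m:
    (1/m) * E_sigma [ sup_{a in A} sum_i sigma_i * a_i ],
    computed by averaging over all 2^m sign vectors sigma."""
    m = len(A[0])
    total = 0.0
    for sigma in itertools.product((-1, 1), repeat=m):
        total += max(sum(s * x for s, x in zip(sigma, a)) for a in A)
    return total / (2 ** m * m)

# Random small example (5 vectors in R^4), arbitrary c < 0 and shift a0.
random.seed(0)
m = 4
A = [[random.gauss(0, 1) for _ in range(m)] for _ in range(5)]
c = -2.5
a0 = [random.gauss(0, 1) for _ in range(m)]

shifted = [[c * x + y for x, y in zip(a, a0)] for a in A]
print(abs(rademacher(shifted) - abs(c) * rademacher(A)) < 1e-9)  # True
```

On random instances like this one, the two sides agree up to floating-point error, which is consistent with equality rather than a strict inequality.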