I have the basis $\{x, 1\}$ of $\mathbb{R}_1[x]$. Using this basis, I define the following linear transformation:
$$S: \mathbb{R}_1[x] \rightarrow \mathbb{R}_2[x]$$ $$S(x) = x^2, \qquad S(1) = 0$$
I have a theorem which says that if a linear transformation is specified on each element of a basis, then it extends to a well-defined linear transformation on the whole space. However:
$$S(-x) = x^2 \neq -x^2 = -S(x)$$
This contradicts the theorem, because the transformation as I have defined it does not allow for $S(-x) = -S(x)$. Why?
Unfortunately, the notation here strongly suggests the wrong thing: that $S$ is an operator which takes in an arbitrary input and outputs the square of that input.
By changing notation it's easier to see what's going on. We have two basis elements $v_1, v_2$, and a function defined on this basis by $f(v_1) = w_1$ and $f(v_2) = w_2$. (In your example, take $v_2 = x$ and $w_2 = x^2$.)
Now we ask what $f(-v_2)$ is equal to. Or rather, what $\hat{f}(-v_2)$ is, where $\hat{f}$ is the extension of $f$ to the whole space. (Note that your question contains a common minor abuse of notation: $S$ denotes both the function defined only on the basis elements and the extension of that function to the whole space.)

We need to write $-v_2$ as a linear combination of basis elements, and we do this as $$-v_2 = 0\cdot v_1 + (-1)\cdot v_2.$$ We now apply $f$ to each of the basis vectors in this expression and look at what we get: $$\hat{f}(-v_2) = 0\cdot f(v_1) + (-1)\cdot f(v_2).$$ This is just $-f(v_2)$. Shifting back to our original context, this gives the desired result that $S(-x)$ does indeed equal $-S(x)$.
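The computation above can be checked mechanically. Here is a small sketch (my own illustration, not from the question) that represents elements of $\mathbb{R}_1[x]$ as coefficient pairs $(a, b)$ meaning $ax + b$, and elements of $\mathbb{R}_2[x]$ as triples $(c, d, e)$ meaning $cx^2 + dx + e$; the function names are hypothetical:

```python
# S is defined only on the basis: S(x) = x^2, S(1) = 0.
# Images are coefficient triples (c, d, e) for c*x^2 + d*x + e.
S_of_x = (1, 0, 0)   # x^2
S_of_1 = (0, 0, 0)   # the zero polynomial

def S_hat(p):
    """Extension of S by linearity: write p = a*x + b*1,
    then map S_hat(p) = a*S(x) + b*S(1)."""
    a, b = p
    return tuple(a * u + b * w for u, w in zip(S_of_x, S_of_1))

# S_hat(-x): decompose -x as (-1)*x + 0*1, then apply S termwise.
print(S_hat((-1, 0)))   # (-1, 0, 0), i.e. -x^2
print(S_hat((1, 0)))    # (1, 0, 0),  i.e.  x^2
```

The first output is exactly the negative of the second, confirming $\hat{S}(-x) = -\hat{S}(x)$: the extension squares only the *basis element* $x$, not whatever expression it is handed.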