The full question asks whether $s_A > s_B$, $s_A < s_B$, $s_A = s_B$, or whether it is impossible to determine.
By doing hand calculations, I found that $s_A\approx 13.8$ and $s_B\approx 16.45$. Is there a way to reason that the standard deviation of B should be bigger than A without doing many calculations?

Intuitively, the standard deviation measures how much the data points deviate from the mean (and therefore from each other): if the data points are all clustered together, the standard deviation is small; if they are spread out, it is large.
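This intuition follows directly from the formula: the standard deviation is built from the *squared* distances to the mean, so values far from the mean contribute disproportionately to the sum.
$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}$$
(Dividing by $n$ instead of $n-1$ gives the population version; either way, larger squared deviations mean a larger $s$.)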
In Distribution A, more of the data points are clustered near the mean (the middle value, 30) than spread out at the edges. Distribution B is exactly the opposite: most of the data sits at the edges, with very little in the middle. So Distribution B is more spread out and should have the larger standard deviation.
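A quick numerical sanity check makes the same point. The data below are made up (the actual tables for A and B aren't reproduced here), but they mimic the shapes described: both sets have mean 30, one clustered near the mean and one piled up at the edges.

```python
import statistics

# Hypothetical data, chosen only to mimic the described shapes:
# A has most values near the shared mean of 30, B has them at the edges.
A = [20, 25, 28, 30, 30, 30, 32, 35, 40]
B = [10, 10, 15, 30, 30, 30, 45, 50, 50]

print(statistics.mean(A), statistics.mean(B))    # 30 and 30 -> same mean
print(statistics.stdev(A), statistics.stdev(B))  # ~5.7 vs ~16.0 -> B is larger
```

The precise numbers don't matter; the point is that moving mass from the middle to the edges while keeping the mean fixed can only increase the squared deviations, and hence the standard deviation.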