I recently read Flaxman et al.'s paper 'Online convex optimization in the bandit setting: gradient descent without a gradient'. They propose a smoothed version of the function $f$, defined by: $$\hat{f}(x) = E_{v\in B}[f(x+\delta v)]$$
where $\delta > 0$ is a fixed number and $v$ is a vector drawn uniformly at random from the unit ball $B$.
My question: if $f(x)$ is a convex function, how can one prove that the smoothed version $\hat{f}(x)$ is also convex? Any help would be appreciated!
Fix any $v$ and let $\lambda \in [0,1]$. Since $\lambda x + (1-\lambda)y + \delta v = \lambda (x+\delta v) + (1-\lambda)(y + \delta v)$, convexity of $f$ gives $$f(\lambda x + (1-\lambda)y + \delta v ) = f\big(\lambda (x+\delta v) + (1-\lambda)(y + \delta v )\big) \le \lambda f (x + \delta v) + (1-\lambda) f(y + \delta v).$$
Now take expectations over $v$ on both sides; since expectation is linear and preserves inequalities, this yields $\hat{f}(\lambda x + (1-\lambda)y) \le \lambda \hat{f}(x) + (1-\lambda)\hat{f}(y)$, i.e. $\hat{f}$ is convex.
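As a numerical sanity check (not a proof), here is a short NumPy sketch that Monte Carlo estimates $\hat{f}$ and verifies the convexity inequality for a sample point. The function `f` is an arbitrary convex example of my own choosing, and the helper names (`sample_unit_ball`, `f_hat`) are illustrative, not from the paper. Note that using a fixed seed reuses the same draws of $v$ for every evaluation point, so the pointwise inequality above holds sample-by-sample and survives the averaging exactly, mirroring the proof:

```python
import numpy as np

def f(X):
    # Example convex function (chosen for illustration),
    # applied row-wise: f(x) = ||x||_1 + ||x||_2^2
    X = np.atleast_2d(X)
    return np.abs(X).sum(axis=1) + (X ** 2).sum(axis=1)

def sample_unit_ball(d, n, seed=0):
    # Uniform samples from the d-dimensional unit ball:
    # uniform direction on the sphere, radius distributed as U^(1/d)
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.random(n) ** (1.0 / d)
    return g * r[:, None]

def f_hat(x, delta=0.1, n=100_000, seed=0):
    # Monte Carlo estimate of E_{v in B}[f(x + delta * v)].
    # The fixed seed means the same v's are used for every x
    # (common random numbers), so the convexity inequality
    # holds for each sample and hence for the average.
    v = sample_unit_ball(x.size, n, seed)
    return f(x + delta * v).mean()

x = np.array([1.0, -2.0])
y = np.array([0.5, 3.0])
lam = 0.3
lhs = f_hat(lam * x + (1 - lam) * y)
rhs = lam * f_hat(x) + (1 - lam) * f_hat(y)
print(lhs <= rhs + 1e-9)  # prints True
```

Of course this only checks one triple $(x, y, \lambda)$; the proof above is what covers all of them.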