I have the following projected online gradient descent algorithm for online linear optimization, which has been well studied, e.g. in www.cs.cmu.edu/~maz/publications/techconvex.pdf:
$\mathbf{y}_{t+1}=\mathbf{w}_t - \eta \mathbf{l}_t $
$\mathbf{w}_{t+1}=\operatorname{argmin}_{\mathbf{w} \in \Omega} \|\mathbf{w}-\mathbf{y}_{t+1}\|_2 $
where $\|\cdot\|_2$ denotes the $\ell_2$ norm, $\mathbf{l}_t$ is the loss vector (the gradient of the cost function) incurred at round $t$, and $\Omega$ is a given convex set in which we want our weights to lie.
Now I want to add a normalization step to this approach as follows:
$\mathbf{D}_{t+1} = \frac{\mathbf{w}_{t+1}}{\|\mathbf{w}_{t+1}\|_1}$
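The update plus normalization can be sketched numerically. This is a minimal illustration, not taken from the linked paper: the step size, the choice $\Omega = \{\mathbf{w} : \mathbf{w} \ge 0\}$ (whose Euclidean projection is a coordinate-wise clip at zero), and the random losses are all assumptions for the sake of a runnable example.

```python
import numpy as np

def projected_ogd_step(w, l, eta, project):
    """One round: y_{t+1} = w_t - eta * l_t, then Euclidean projection onto Omega."""
    y = w - eta * l
    return project(y)

def project_nonneg(y):
    # Hypothetical choice of Omega: the nonnegative orthant; the l2
    # projection onto it is just clipping negative entries to zero.
    return np.maximum(y, 0.0)

def normalize_l1(w, eps=1e-12):
    """The extra step in question: D_{t+1} = w_{t+1} / ||w_{t+1}||_1."""
    return w / max(np.abs(w).sum(), eps)

rng = np.random.default_rng(0)
w = np.ones(4)
for t in range(10):
    l = rng.normal(size=4)  # linear loss vector l_t (made up here)
    w = projected_ogd_step(w, l, eta=0.1, project=project_nonneg)
D = normalize_l1(w)
```

Because the projection keeps $\mathbf{w}$ nonnegative, the normalized $\mathbf{D}$ lands on the probability simplex.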
The question is: is it now possible to prove any non-trivial regret bound for this approach, i.e. a bound on
$R_T = \sum_{t=1}^T \langle \mathbf{l}_t, \mathbf{D}_t \rangle - \min_{\mathbf{D}} \sum_{t=1}^T \langle \mathbf{D}, \mathbf{l}_t \rangle,$
where $R_T$ is the regret of this algorithm?
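For intuition on what this quantity measures: assuming the comparator $\mathbf{D}$ ranges over the probability simplex (natural here, given the $\ell_1$ normalization), the losses are linear, so the comparator's minimum is attained at a vertex, i.e. at the smallest coordinate of the cumulative loss vector. A small sketch with made-up losses and uniform plays:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 50, 3
losses = rng.normal(size=(T, n))     # loss vectors l_t, t = 1..T (made up)
# Hypothetical plays D_t: the uniform distribution every round.
plays = np.full((T, n), 1.0 / n)

alg_loss = np.sum(plays * losses)    # sum_t <l_t, D_t>
# For linear losses, min over the simplex of <D, sum_t l_t> is the
# smallest coordinate of the cumulative loss vector.
best_fixed = losses.sum(axis=0).min()
regret = alg_loss - best_fixed
```

With uniform plays the algorithm's loss equals the mean coordinate of the cumulative loss, so the regret here is the gap between that mean and the best single coordinate in hindsight.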
OK, three years later: it turns out that under certain conditions it is in fact possible to prove a regret bound for the above-mentioned online convex program. See here for more details:
http://papers.nips.cc/paper/5512-a-boosting-framework-on-grounds-of-online-learning