I want to use scale-based preconditioning in a quadratic optimization problem: minimize $x^T Q x + p^T x$ subject to $Ax + b = 0$ and $Dx + E \leq 0$, where $x$ is a vector of size $n$. I want to speed up finding the optimal $x$ by scaling the coefficient matrices. I have studied the articles "The ADMM algorithm for distributed quadratic problems: parameter selection and constraint preconditioning" and "Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems", as well as many other articles on this issue, but all of them recommend algorithms that are costly in time and computation, so they are not applicable in my case. I simply need to scale the coefficient matrices so as to decrease the number of iterations needed to reach the optimal point. Can you suggest an article or something else that gives the best scaling rules, or some hints on how I should choose the scaling matrix?
Edit: I've used @littleO's approach, and I also tried diagonalizing $Q$; both methods gave a small speedup. (I guess it can get better with more sophisticated scaling.)
This is too long for a comment. It's not meant to be a definitive answer, just something to consider.
My impression is that scaling each row $q_i^T$ of $Q$ by $1/\|q_i\|_2$ can be a simple and effective preconditioning strategy.
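As a rough sketch of this idea (my own illustration in NumPy, with made-up data): scale by $S = \operatorname{diag}(1/\|q_i\|_2)$ and, to keep the quadratic form symmetric, apply the scaling through a change of variables $x = Sy$ rather than by multiplying $Q$ on the left only.

```python
import numpy as np

# Hypothetical, badly scaled objective matrix (not from the question's data).
Q = np.array([[1.0e4, 1.0],
              [1.0,   1.0]])

# Row-norm scaling: s_i = 1 / ||q_i||_2 for each row q_i of Q.
s = 1.0 / np.linalg.norm(Q, axis=1)
S = np.diag(s)

# The change of variables x = S y keeps the scaled objective symmetric:
#   min_y  y^T (S Q S) y + (S p)^T y,
# with constraints  (A S) y + b = 0  and  (D S) y + E <= 0.
Qs = S @ Q @ S

print("cond(Q)   =", np.linalg.cond(Q))
print("cond(SQS) =", np.linalg.cond(Qs))
```

On this toy matrix the condition number drops by about a factor of two; how much it helps in general depends on the structure of $Q$.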
If you look at the POGS implementation of ADMM, which works very well in my experience testing the code on radiotherapy problems, there is a step in the file pogs.m where they normalize $A$ by calling a function named `equil`.
So what is this `equil` function that is being called there? When I read the paper this code is based on, I thought that `equil` must be some fancy iterative algorithm, such as the equilibration algorithm 2 in section 4.4.2 here. Yet the code for the `equil` function turns out to be just a few lines; look how simple it is. So, perhaps something like this is worth trying as a preconditioning strategy.
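To make the idea concrete, here is a sketch of what such an equilibration routine typically does (this is my own illustration in NumPy, not the POGS source, and the matrix data is made up): alternately rescale the rows and columns of $A$ to unit 2-norm, Sinkhorn-style.

```python
import numpy as np

def equil(A, iters=10):
    """Alternately rescale rows and columns of A toward unit 2-norm.

    A simple Sinkhorn-style equilibration sketch. Returns diagonal
    scalings d, e so that diag(d) @ A @ diag(e) has balanced row
    norms and unit column norms.
    """
    m, n = A.shape
    d = np.ones(m)
    e = np.ones(n)
    for _ in range(iters):
        B = d[:, None] * A * e[None, :]
        d /= np.maximum(np.linalg.norm(B, axis=1), 1e-12)  # unit rows
        B = d[:, None] * A * e[None, :]
        e /= np.maximum(np.linalg.norm(B, axis=0), 1e-12)  # unit columns
    return d, e

# Badly scaled constraint matrix (hypothetical data).
A = np.array([[1.0e3, 1.0e-2, 5.0],
              [2.0,   1.0e-3, 1.0e2]])
d, e = equil(A)
As = d[:, None] * A * e[None, :]

print("cond(A)          =", np.linalg.cond(A))
print("cond(equilibrated) =", np.linalg.cond(As))
```

The scalings $d$ and $e$ can then be folded into the problem data the same way as any diagonal preconditioner; the point of the answer stands, namely that even this very simple scheme is cheap and often effective.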