If one wants to solve many linear systems with the same matrix, $$\mathbf A\mathbf x_1 = \mathbf b_1, \quad \mathbf A\mathbf x_2 = \mathbf b_2, \quad \ldots, \quad \mathbf A\mathbf x_k = \mathbf b_k,$$ it is preferable to first precompute a factorization of $\mathbf A$, so that each subsequent system can be solved efficiently regardless of its right-hand side. In my case, $\mathbf A$ is symmetric and positive definite, so the Cholesky factorization works well.
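A minimal dense sketch of this pattern with SciPy (the matrix and right-hand sides are made up for illustration; for a sparse $\mathbf A$, a sparse Cholesky such as CHOLMOD via scikit-sparse plays the same role):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Small SPD matrix standing in for A (dense here for simplicity).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

factor = cho_factor(A)  # O(n^3) factorization, done once

# Each new right-hand side then costs only two triangular solves, O(n^2).
for b in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 1.0])]:
    x = cho_solve(factor, b)
    assert np.allclose(A @ x, b)
```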
Of course, solving $\mathbf A\mathbf x_i = \mathbf b_i$ is equivalent to solving the unconstrained quadratic minimization $\min_{\mathbf x_i} \tfrac12\mathbf x_i^T\mathbf A\mathbf x_i - \mathbf b_i^T\mathbf x_i$. I am interested in solving many such problems, but with additional inequality constraints, $$\begin{aligned} \min_{\mathbf x_i}\ & \tfrac12\mathbf x_i^T\mathbf A\mathbf x_i - \mathbf b_i^T\mathbf x_i \\ \text{s.t. }\ & \mathbf C_i\mathbf x_i \succeq \mathbf d_i, \end{aligned}$$ where $\succeq$ denotes componentwise inequality. The constraints and the linear term of the objective both vary between problems, but the quadratic term remains constant. Is there an analogous method for speeding up such problems by exploiting a factorization of $\mathbf A$? For equality constraints ($\mathbf C_i\mathbf x_i = \mathbf d_i$) we have tried the Uzawa method, but it does not support inequalities.
More details: The matrix $\mathbf A$ is sparse, symmetric, and positive definite. Its sparse Cholesky factorization is available, but I can also precompute other factorizations if necessary. If a general solution is not possible, I am also interested in the special case where each constraint acts on only one degree of freedom, i.e. each row of $\mathbf C_i$ has only one nonzero entry.
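For concreteness, one such instance can be set up and solved generically with SciPy's `trust-constr` solver; the problem data below are invented, with a single-nonzero constraint row as in the special case mentioned above:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Invented data for one problem instance.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
C = np.array([[1.0, 0.0]])   # single-dof constraint: x[0] >= 0.5
d = np.array([0.5])

res = minimize(
    fun=lambda x: 0.5 * x @ A @ x - b @ x,
    x0=np.array([1.0, 1.0]),
    jac=lambda x: A @ x - b,
    hess=lambda x: A,
    method="trust-constr",
    constraints=[LinearConstraint(C, d, np.inf)],
)
```

Here the unconstrained minimizer $A^{-1}b$ violates the constraint, so the solver returns a point on the constraint boundary. A generic solver like this does not, however, reuse any factorization of $A$ across instances, which is exactly what the question asks about.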
I won't give full details, only sketch some ideas.
Let $f(x)=\tfrac12 x^TAx-b^Tx$, $g(x)=Cx-d=[g_1,\dots,g_n]^T$, and let $L_i$ denote the rows of $C$.
Then the first-order conditions in the $2n$ real unknowns $x,\lambda_1,\dots,\lambda_n$ are $Df(x)-\sum_i\lambda_i Dg_i(x)=0$ and $\lambda_i g_i(x)=0$, that is to say, $x^TA-b^T-\sum_i\lambda_i L_i=0$ and $\lambda_i g_i(x)=0$. Finally, if $\Lambda=[\lambda_1,\dots,\lambda_n]^T$, then these $2n$ equations read $$Ax=b+C^T\Lambda,\qquad \lambda_i g_i(x)=0.$$ The Karush–Kuhn–Tucker conditions give more precise requirements, in particular $\lambda_i\geq 0$ and $g_i(x)\geq 0$.
Of course, there are $2^n$ choices for which entries of the column $\Lambda$ vanish (i.e., for the active set).
For example, assume that $n=4$ and $\lambda_3=\lambda_4=0$. Then the system in $x_1,x_2,x_3,x_4,\lambda_1,\lambda_2$ is governed by the $6\times 6$ symmetric matrix $$\begin{pmatrix}A&L_1^T&L_2^T\\L_1&0&0\\L_2&0&0\end{pmatrix}.$$ Unfortunately, this matrix is not necessarily positive definite. Yet, we can proceed as follows:
$x=A^{-1}(b+\lambda_1L_1^T+\lambda_2L_2^T)$, where $\lambda_1,\lambda_2$ satisfy the system obtained from the active constraints $L_1x=d_1$, $L_2x=d_2$: $$\lambda_1L_1A^{-1}L_1^T+\lambda_2L_1A^{-1}L_2^T+L_1A^{-1}b-d_1=0,$$
$$\lambda_1L_2A^{-1}L_1^T+\lambda_2L_2A^{-1}L_2^T+L_2A^{-1}b-d_2=0.$$
Finally, the key is to calculate $A^{-1}L_1^T$, $A^{-1}L_2^T$, and $A^{-1}b$, which is easy using the Cholesky decomposition of $A$.
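This step can be sketched in Python for a fixed active set $\{1,2\}$ (all problem data below are illustrative, and the two triangular solves reuse the precomputed Cholesky factor):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative data: A is SPD, L holds the active rows L1, L2 of C.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
d = np.array([0.5, 0.25])

factor = cho_factor(A)                 # computed once, reused for every solve

Y = cho_solve(factor, L.T)             # columns are A^{-1} L_i^T
z = cho_solve(factor, b)               # A^{-1} b
S = L @ Y                              # small matrix of entries L_i A^{-1} L_j^T
lam = np.linalg.solve(S, d - L @ z)    # enforce the active constraints L x = d
x = z + Y @ lam                        # x = A^{-1}(b + lam_1 L_1^T + lam_2 L_2^T)
```

The expensive work ($A^{-1}L_i^T$ and $A^{-1}b$) is just back-substitution with the stored factor; only the tiny system for $\lambda$ changes between problems.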
EDIT. I did not read the hypothesis "$A$ symmetric positive definite"... Then $f$ is strictly convex and the set $S$ defined by $Cx\geq d$ is convex. Thus, if you obtain a local minimum on $S$, it is the unique global minimum on $S$. Moreover, if $x_0=A^{-1}b\notin S$, then the minimum is attained on the boundary of $S$, that is, some of the $\lambda_i$ are nonzero. Since $f$ is quadratic, you can use QPSolve from Maple's Optimization package, or its Matlab equivalent quadprog. In any case, all the available methods are iterative; in particular, choose as the initial point a point on the boundary of $S$ that is close to $x_0$.
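For tiny $n$, the $2^n$ active-set enumeration described above can even be carried out exhaustively, reusing the factorization throughout; this is a sketch for illustration only (function name and data are my own), not a practical solver:

```python
import itertools
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def qp_solve_bruteforce(factor, C, d, b, tol=1e-10):
    """Solve min 0.5 x^T A x - b^T x  s.t.  C x >= d, given factor = cho_factor(A),
    by enumerating all 2^n active sets (practical only for very small n).
    Assumes the active constraint rows are linearly independent."""
    n = C.shape[0]
    z = cho_solve(factor, b)                     # A^{-1} b, reused for all sets
    for mask in itertools.product([False, True], repeat=n):
        act = [i for i in range(n) if mask[i]]
        if act:
            La = C[act]
            Y = cho_solve(factor, La.T)          # columns A^{-1} L_i^T
            lam = np.linalg.solve(La @ Y, d[act] - La @ z)
            if np.any(lam < -tol):               # KKT requires lambda >= 0
                continue
            x = z + Y @ lam
        else:
            x = z                                # unconstrained minimizer
        if np.all(C @ x >= d - tol):             # primal feasibility
            return x  # any KKT point of a convex QP is the global minimum
    return None

# Tiny example: one constraint x[0] >= 0.5, active at the solution.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = qp_solve_bruteforce(cho_factor(A),
                        np.array([[1.0, 0.0]]), np.array([0.5]),
                        np.array([1.0, 2.0]))
```

Each candidate active set costs only triangular solves plus one small dense solve, which is exactly the structure exploited by proper active-set QP methods.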