Disclaimer: I discovered that the FAQ suggests posting research-level questions to MathOverflow instead of math.stackexchange. I "moved" the question accordingly, cf. the post at mathoverflow. Sorry for the confusion!
Question: Given a function $q: \mathbb R^{N\times N}\to \mathbb R$,
$$ q(\Omega) = \inf_{\vec r}\,\,\vec r^T\vec r - \vec w^T\vec r\quad\quad \text{s.t.}\quad\quad (\Omega + \lambda I)\vec r = \vec w\quad\quad \text{and}\quad\quad \vec r > 0. $$
where $\Omega\in\mathbb R^{N\times N}$, $\vec r, \vec w\in\mathbb R^{N}$, $\lambda$ is a fixed scalar, and $I\in \mathbb R^{N\times N}$ is the identity matrix. What is a (sub)gradient $\partial q$ with respect to $\Omega$ that can be used to minimize $q(\Omega)$?
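For concreteness, here is a minimal numerical sketch of how $q(\Omega)$ can be evaluated (the names and the use of `numpy` are my own choices; it assumes $\Omega + \lambda I$ is invertible, in which case the equality constraint fixes $\vec r$ uniquely and the infimum reduces to a single evaluation, with $q = +\infty$ when $\vec r > 0$ fails):

```python
import numpy as np

def q(Omega, w, lam):
    """Evaluate q(Omega): when Omega + lam*I is invertible, the equality
    constraint (Omega + lam*I) r = w determines r uniquely, so the infimum
    is just the objective evaluated at that r (if r > 0 componentwise)."""
    N = Omega.shape[0]
    r = np.linalg.solve(Omega + lam * np.eye(N), w)
    if np.any(r <= 0):           # constraint r > 0 violated -> infeasible
        return np.inf
    return r @ r - w @ r         # r^T r - w^T r

# tiny example: Omega = 0, lam = 1 gives r = w, so the objective is 0
w = np.ones(3)
print(q(np.zeros((3, 3)), w, 1.0))
```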
First attempt: My current hope is to minimize $q$ using subgradient calculus. In a slightly related problem, the function
$$ q(\lambda) = \inf_x \left(f(x) + \lambda g(x)\right) $$
can be shown to have subgradient $\partial q = g(x^*)$, where $x^*$ attains the infimum (and convergence of the resulting subgradient method can be proved). However, in my problem the argument of $q$ appears in the constraint. For parameters in the constraint, an approach like multi-parametric quadratic constrained optimization might go in the right direction. However, such methods rely on branch-and-cut algorithms, whereas I am looking for a distributed gradient approach.
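To sanity-check that intuition, here is a tiny numerical experiment with my own toy choices $f(x)=x^2$, $g(x)=x-1$ (closed-form minimizer $x^* = -\lambda/2$), comparing the claimed subgradient $g(x^*)$ against a finite-difference derivative of $q(\lambda)$:

```python
def f(x):
    return x ** 2

def g(x):
    return x - 1.0

def q_of_lam(lam):
    # inf_x x^2 + lam*(x - 1) is attained at x* = -lam/2
    # (set the derivative 2x + lam to zero)
    x_star = -lam / 2.0
    return f(x_star) + lam * g(x_star), x_star

lam = 0.7
val, x_star = q_of_lam(lam)
sub = g(x_star)                    # claimed subgradient: g at the minimizer
h = 1e-6                           # central finite difference of q
fd = (q_of_lam(lam + h)[0] - q_of_lam(lam - h)[0]) / (2 * h)
print(sub, fd)                     # both should be approximately -1.35
```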
Thanks!