Scaling and Adding Mathematical Programs


I understand the notion of Linearity typically applied to define Linear Programs (here I will capitalize "Linear" when and only when I use it in this sense). In contrast, this question is focused on the notion of linearity applied in math more broadly, with an eye toward scaling and adding together multiple Mathematical Programs.

For example, if each stock trader in a group is independently solving her own Mathematical Program, then it seems like it should be possible to scale each of them by a constant weight related to the proportion of the group's capital allocated to her, and add all of these into an overall Mathematical Program which the group is solving (not the program it wishes it could solve by ignoring individual incentives, although the two could be related via the Price of Anarchy). Similarly, there should be some expected Mathematical Program for optimizing under $20\%$ certainty that one is solving Mathematical Program A and $80\%$ certainty that one is solving Mathematical Program B.
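To make the $20\%/80\%$ case concrete: the natural "expected program" optimizes the weighted sum $0.2 f_A + 0.8 f_B$ over the shared feasible set. A minimal sketch in Python, where the objectives, the box feasible set, and the brute-force-over-vertices solver are all made-up illustration data, not anything from the question:

```python
# Hedged sketch: the "expected program" for 20% certainty of objective f_A
# and 80% certainty of objective f_B, over a shared feasible box.
# All numbers here are invented for illustration.
from itertools import product

# Vertices of the shared feasible set 0 <= x1 <= 4, 0 <= x2 <= 2.
corners = list(product([0.0, 4.0], [0.0, 2.0]))

def f_A(x):
    return 3 * x[0] + x[1]

def f_B(x):
    return x[0] + 5 * x[1]

def expected(x):
    # Weighted (expected) objective: 0.2 f_A + 0.8 f_B.
    return 0.2 * f_A(x) + 0.8 * f_B(x)

# A linear objective over a bounded polyhedron attains its extrema at a
# vertex, so brute force over the corners suffices here.
best = max(corners, key=expected)
print(best)  # the maximizing corner is (4.0, 2.0)
```

Brute force over the corners is only valid because the objectives are Linear and the feasible set is a bounded polyhedron; for the non-Linear programs the question allows, a real solver would be needed.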

From reasoning about some simple cases, I believe that for any not-necessarily-Linear Mathematical Programs, all with identical constraints, it is possible to scale any of their objective functions by constants and/or add any number of their objective functions together, in the senses ordinarily permitted by linearity, leaving the constraints unchanged, with a single modification. Namely, scaling by a negative constant toggles between $\min$ and $\max$, i.e., $\left(\min_{\vec x}\ -3 f(\vec x),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right) = \left(-3 \max_{\vec x}\ f(\vec x),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right)$.
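A quick numerical sanity check of the toggle, with made-up data ($f(\vec x) = x_1 + 2x_2$ over a box), sketched in plain Python rather than any LP library:

```python
# Check: min_x (-3 f(x)) equals -3 * max_x f(x) over the same feasible set.
# The data (f(x) = x1 + 2*x2 over 0 <= x1 <= 5, 0 <= x2 <= 3) is invented.
from itertools import product

corners = list(product([0.0, 5.0], [0.0, 3.0]))  # vertices of the box

def f(x):
    return x[0] + 2 * x[1]

# A linear objective attains its extrema at vertices, so brute force suffices.
left = min(-3 * f(x) for x in corners)    # min_x  -3 f(x)
right = -3 * max(f(x) for x in corners)   # -3 * max_x  f(x)

print(left, right)  # both are -33.0: scaling by -3 toggles min into max
```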

Perhaps a similar toggling move is required when adding Mathematical Programs, since, despite typically being listed as an independent condition of linearity, additivity actually entails degree-$1$ homogeneity (at least for continuous functions). I'm tentatively calling this "vague linearity": $\left(\operatorname{extr}_{\vec x}\ k f(\vec x),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right) = \left(k \operatorname{extr}_{\vec x}\ f(\vec x),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right)$ and $\left(\operatorname{extr}_{\vec x}\ g(\vec x),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right) + \left(\operatorname{extr}_{\vec x}\ h(\vec x),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right) = \left(\operatorname{extr}_{\vec x}\ (g(\vec x) + h(\vec x)),\ A\vec x \leq \vec b,\ \vec x \geq \vec 0\right)$, where $\operatorname{extr}$ magically picks out whichever of $\min$ or $\max$ is needed to make the linearity conditions hold. Note that $f$, $g$, $h$, and the constraints here could be non-Linear, and matrix-vector form is used only as a convenient example.
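The step from additivity to degree-$1$ homogeneity is the standard Cauchy functional-equation argument; a sketch for additive, continuous $f$ (negative scalars then follow from $f(\vec 0) = \vec 0$ and $f(-\vec x) = -f(\vec x)$, which additivity also gives):

$$\begin{align}
f(n\vec x) &= n f(\vec x) && \text{induction on } n \in \mathbb{N}, \\
f(\vec x) &= f\!\left(m \cdot \tfrac{\vec x}{m}\right) = m\, f\!\left(\tfrac{\vec x}{m}\right) && \text{so } f\!\left(\tfrac{\vec x}{m}\right) = \tfrac{1}{m} f(\vec x), \\
f(q\vec x) &= q\, f(\vec x) && \text{for all } q = \tfrac{n}{m} \in \mathbb{Q}_{\geq 0}, \\
f(k\vec x) &= \lim_{q \to k} f(q\vec x) = \lim_{q \to k} q\, f(\vec x) = k\, f(\vec x) && \text{by continuity, } k \in \mathbb{R}_{\geq 0}.
\end{align}$$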

Secondly, I believe that, specifically in the case of a Linear objective function (I'm not sure whether I should require Linearity or some weaker condition on the constraints), it should be possible to "scale" the entire set of constraints by a single constant, leaving the objective function unchanged, and translate from one solution to the other. This appears to follow from the fact that all partial derivatives of a Linear objective surface are constant: it is easy to tell what the height of a flat sheet of paper held in the air would be above any spot on the ground if the sheet were extended in the appropriate directions. However, this "scaling" operation may involve slightly more basic algebra than just pulling the constant out, and I'm not sure whether there should be some similar notion of an "additivity" operation for the constraints of multiple Mathematical Programs.
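The constraint-"scaling" intuition can be checked numerically for a Linear objective. If $A\vec x \leq \vec b$ is feasible, then $A(t\vec x) \leq t\vec b$ for $t > 0$, so scaling the right-hand side $\vec b$ by $t$ scales every feasible point, and hence the optimal vertex and optimal value, by $t$. A sketch with made-up box constraints (the solver and data are mine, not from the question):

```python
# Hedged sketch: scaling the right-hand side b of A x <= b, x >= 0 by t > 0
# scales both the optimal point and the optimal value of a Linear objective
# by t. Box constraints and objective 2*x1 + x2 are invented for illustration.
from itertools import product

def solve_box(b1, b2):
    """Maximize the Linear objective 2*x1 + x2 over 0 <= x1 <= b1, 0 <= x2 <= b2."""
    corners = list(product([0.0, b1], [0.0, b2]))
    best = max(corners, key=lambda x: 2 * x[0] + x[1])
    return best, 2 * best[0] + best[1]

x1, v1 = solve_box(5.0, 3.0)    # original right-hand side b
x2, v2 = solve_box(10.0, 6.0)   # right-hand side scaled by t = 2

print(x1, v1)  # (5.0, 3.0) 13.0
print(x2, v2)  # (10.0, 6.0) 26.0 -- optimal point and value both doubled
```

Note this is a statement about scaling $\vec b$ alone; scaling $A$ as well, or scaling individual constraints by different constants, changes the feasible region in ways this argument does not cover.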

I was surprised not to find any properties like this listed on Wikipedia. Does "vague linearity" have a real name? If not, what implications does the $\min$/$\max$ toggle have for the relevant linear algebra? And what are the precise rules for which operations one is allowed to perform on Mathematical Programs (in general, and for Linear Programs) in this vein?