Defining a Non-Linear System of Equations Involving max and min as a Contraction


Suppose I have a vector of strictly positive variables $X \in \mathbb{R}_{++}^{J}$ and a vector of strictly positive variables $Y \in \mathbb{R}_{++}^{I}$ and I want to find the values for which a function $F(X,Y):\mathbb{R}^{J+I}\rightarrow \mathbb{R}^{J+I}$ is equal to $0$, $F(X^{*},Y^{*})=0$, where the function F is defined as

\begin{align} F(X,Y) &= \begin{bmatrix} X_{1} - a_{1} \left[ \min_{i} \left\{ c_{i1} Y_{i}^{\frac{1-\eta}{\eta}} \right\}\right]^{-\delta} \\ \vdots \\ X_{J} - a_{J} \left[ \min_{i} \left\{ c_{iJ} Y_{i}^{\frac{1-\eta}{\eta}} \right\}\right]^{-\delta} \\ Y_{1} - b_{1} \left[ \max_{j} \left\{ d_{1j} X_{j}^{-\frac{1}{\delta}} \right\}\right]^{\frac{\eta}{1-\eta}} \\ \vdots \\ Y_{I} - b_{I} \left[ \max_{j} \left\{ d_{Ij} X_{j}^{-\frac{1}{\delta}} \right\}\right]^{\frac{\eta}{1-\eta}} \end{bmatrix} \end{align}

where $a_{j},b_{i},c_{ij}$ and $d_{ij}$ are positive constants and $\eta\in (0,1) $ and $\delta>1$ are parameters. As you can see, all the values of $X$ depend on all the values of $Y$ and vice-versa.

Is there a way to solve this system of nonlinear equations analytically and obtain a closed-form expression, or is the only option to solve it on a computer? If it must be solved numerically, are there any useful properties one can exploit?

My first impression is that it is not possible to solve this analytically. Computationally, my first hunch was that one could show that this map is a contraction, in which case an iterative procedure would yield the unique solution. However, I have not been able to show this.
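For what it is worth, the iterative procedure is easy to prototype even before any contraction property is established. Below is a minimal sketch in Python (the question specifies no language) with made-up parameter values; `step` rearranges $F(X,Y)=0$ into the fixed-point update $(X,Y) = T(X,Y)$, and whether the iterates converge is exactly what is in question.

```python
import numpy as np

# Toy instance with made-up parameters (J = I = 2); eta in (0,1), delta > 1.
eta, delta = 0.5, 2.0
a = np.array([1.0, 1.0])                # a_j, shape (J,)
b = np.array([1.0, 1.0])                # b_i, shape (I,)
c = np.array([[1.0, 2.0], [3.0, 4.0]])  # c_{ij}, shape (I, J)
d = np.array([[1.0, 2.0], [3.0, 4.0]])  # d_{ij}, shape (I, J)

def step(X, Y):
    """One fixed-point update: F(X, Y) = 0 rearranged as (X, Y) = T(X, Y)."""
    # X_j = a_j * [ min_i c_{ij} Y_i^{(1-eta)/eta} ]^{-delta}
    X_new = a * np.min(c * (Y ** ((1 - eta) / eta))[:, None], axis=0) ** (-delta)
    # Y_i = b_i * [ max_j d_{ij} X_j^{-1/delta} ]^{eta/(1-eta)}
    Y_new = b * np.max(d * (X ** (-1 / delta))[None, :], axis=1) ** (eta / (1 - eta))
    return X_new, Y_new

X, Y = np.ones(2), np.ones(2)
X, Y = step(X, Y)  # first iterate; convergence is not guaranteed
```

Note that the composed update is homogeneous of degree one in $X$ (the exponent $-1/\delta$ inside and $-\delta$ outside cancel), so unnormalized iterates can drift geometrically rather than settle at a fixed point.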


Solution Attempt #1. This solution is clearly wrong; I am leaving it here to show my attempt.

Notice that we can write the whole system of equations in terms of just one of the vectors. I am going to pick vector $X$.

Then,

\begin{align} \tilde{F}(X) &= \begin{bmatrix} X_{1} - a_{1} \left[ \min_{i} \left\{ c_{i1} b_{i}^{\frac{1-\eta}{\eta}} \max_{\ell} \left\{ d_{i\ell} X_{\ell}^{-\frac{1}{\delta}} \right\} \right\}\right]^{-\delta} \\ \vdots \\ X_{J} - a_{J} \left[ \min_{i} \left\{ c_{iJ} b_{i}^{\frac{1-\eta}{\eta}} \max_{\ell} \left\{ d_{i\ell} X_{\ell}^{-\frac{1}{\delta}} \right\} \right\}\right]^{-\delta} \\ \end{bmatrix} \end{align}

For this system of equations to have a fixed point, a restriction on the parameters must hold: if $i(j)$ denotes the minimizing index in the $j$th equation and the inner max there is attained at $\ell = j$, then $a_{j}\left(c_{i(j)j}b_{i(j)}^{\frac{1-\eta}{\eta}}d_{i(j)j}\right)^{-\delta}=1$, since the exponent on $X_{j}$ in that equation works out to exactly one. I know it seems weird to assume this ex post, but I am collapsing a more complicated function of the parameters into these constants. Under this condition, the function $\tilde{F}$ indeed has a fixed point.

The next step would be to show that this mapping is a contraction, which would let me solve the system iteratively on the computer. However, this is where I am getting stuck.

Still, I wonder whether there is a more efficient way to solve this, a way to obtain a closed-form solution, or at least a sharper characterization of the solution.


Solution Attempt #2. This attempt is incomplete.

Another approach worth mentioning is that the system can be written linearly (still with the max and the min) as follows. Let $\hat{Y}_{i} = Y_{i}^{\frac{1-\eta}{\eta}}$ and $\hat{X}_{j} = X_{j}^{-\frac{1}{\delta}}$, and, with a slight abuse of notation, let the constants below absorb the corresponding exponents ($a_{j}^{-\frac{1}{\delta}}$ in place of $a_{j}$ and $b_{i}^{\frac{1-\eta}{\eta}}$ in place of $b_{i}$). Then the system of equations becomes

\begin{align} \hat{F}(\hat{X},\hat{Y}) &= \begin{bmatrix} \hat{X}_{1} - a_{1} \left[ \min_{i} \left\{ c_{i1} \hat{Y}_{i} \right\}\right] \\ \vdots \\ \hat{X}_{J} - a_{J} \left[ \min_{i} \left\{ c_{iJ} \hat{Y}_{i} \right\}\right] \\ \hat{Y}_{1} - b_{1} \left[ \max_{j} \left\{ d_{1j} \hat{X}_{j}\right\}\right] \\ \vdots \\ \hat{Y}_{I} - b_{I} \left[ \max_{j} \left\{ d_{Ij} \hat{X}_{j}\right\}\right] \end{bmatrix} \end{align}
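One structural property of this hatted system that may be worth exploiting: substituting the $\hat{Y}$ block into the $\hat{X}$ block gives a map $\hat{X} \mapsto T(\hat{X})$ built only from positive scalings, $\min$, and $\max$, so $T$ is monotone and homogeneous of degree one. That is the setting of nonlinear Perron–Frobenius theory, where one looks for an eigenvector $T(x) = \lambda x$ by iterating with renormalization, as in the power method, rather than for a plain fixed point. A sketch with made-up parameters (the names mirror the hatted constants):

```python
import numpy as np

# Made-up hatted constants (J = 2, I = 3); a_hat and b_hat are assumed
# to already absorb the exponents from the change of variables.
a_hat = np.array([1.0, 1.0])
b_hat = np.array([1.0, 1.0, 1.0])
c = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])  # c_{ij}, shape (I, J)
d = np.array([[1.0, 0.5], [0.5, 1.0], [1.0, 1.0]])  # d_{ij}, shape (I, J)

def T(x_hat):
    """Composed hatted map: substitute the Y-hat block into the X-hat block."""
    y_hat = b_hat * np.max(d * x_hat[None, :], axis=1)  # Y-hat update (max)
    return a_hat * np.min(c * y_hat[:, None], axis=0)   # X-hat update (min)

# Power-method-style iteration: renormalize each step, because T is
# homogeneous of degree one and at best has an eigenvector, not a fixed point.
x = np.ones(2)
for _ in range(100):
    x = T(x)
    x = x / x.max()
```

Monotonicity ($x \le y \Rightarrow T(x) \le T(y)$ componentwise) and homogeneity ($T(tx) = tT(x)$ for $t > 0$) are both easy to check numerically on any instance.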


Any suggestions are very much appreciated!

Thanks!

Best Answer

Numerical aspects of your second approach

For notational simplicity, here $X_1$ is your $\hat{X}_1$, etc.

Consider a simple example: \begin{align*} X_1 &= a_1 \min(c_{11}Y_1, c_{21}Y_2, c_{31}Y_3), \\ X_2 &= a_2 \min(c_{12}Y_1, c_{22}Y_2, c_{32}Y_3), \\ Y_1 &= b_1 \max(d_{11}X_1, d_{12}X_2), \\ Y_2 &= b_2 \max(d_{21}X_1, d_{22}X_2),\\ Y_3 &= b_3 \max(d_{31}X_1, d_{32}X_2). \qquad (S) \end{align*}

Consider the following optimization problem: \begin{align*} &\min_{X_1, X_2, Y_1, Y_2, Y_3} ~ Y_1 + Y_2 + Y_3 - X_1 - X_2 \qquad (P)\\ &\mathrm{subject ~ to}\quad X_1 \le a_1 c_{i1}Y_i, \quad i = 1, 2, 3;\\ &\qquad\qquad\quad\ X_2 \le a_2 c_{i2}Y_i, \quad i = 1, 2, 3; \\ &\qquad\qquad\quad\ Y_1 \ge b_1 d_{1j}X_j, \quad j = 1, 2;\\ &\qquad\qquad\quad\ Y_2 \ge b_2 d_{2j}X_j, \quad j = 1, 2;\\ &\qquad\qquad\quad\ Y_3 \ge b_3 d_{3j}X_j, \quad j = 1, 2;\\ &\qquad\qquad\quad\ X_1, X_2, Y_1, Y_2, Y_3 \ge 0; \\ &\qquad\qquad\quad\ X_1 + X_2 \ge 1. \end{align*} The additional constraint $X_1 + X_2 \ge 1$ is to avoid the zero solution (since the system is homogeneous).

If the system (S) has a solution, then the feasible region of problem (P) is non-empty. Conversely, if $(X_1, X_2, Y_1, Y_2, Y_3)$ is an optimal solution to problem (P), then it is a solution to the system (S).
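A sketch of this LP in Python using `scipy.optimize.linprog` (the answer below used Matlab; all parameter values here are illustrative, not from the question). With decision vector $z = (X_1, X_2, Y_1, Y_2, Y_3)$, every constraint of (P) is linear:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative parameters for the 2x3 example (not from the question).
a = np.array([1.0, 1.0])
b = np.array([1.0, 1.0, 1.0])
c = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])  # c_{ij}, shape (3, 2)
d = np.ones((3, 2))                                  # d_{ij}, shape (3, 2)

# Decision vector z = (X_1, X_2, Y_1, Y_2, Y_3); objective: sum(Y) - sum(X).
c_obj = np.array([-1.0, -1.0, 1.0, 1.0, 1.0])

A_ub, b_ub = [], []
for i in range(3):
    for j in range(2):
        # X_j <= a_j c_{ij} Y_i   <=>   X_j - a_j c_{ij} Y_i <= 0
        row = np.zeros(5); row[j] = 1.0; row[2 + i] = -a[j] * c[i, j]
        A_ub.append(row); b_ub.append(0.0)
        # Y_i >= b_i d_{ij} X_j   <=>   b_i d_{ij} X_j - Y_i <= 0
        row = np.zeros(5); row[j] = b[i] * d[i, j]; row[2 + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)
# Normalization X_1 + X_2 >= 1   <=>   -X_1 - X_2 <= -1
A_ub.append(np.array([-1.0, -1.0, 0.0, 0.0, 0.0])); b_ub.append(-1.0)

res = linprog(c_obj, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * 5)
X, Y = res.x[:2], res.x[2:]
```

Once `res` is obtained, one can verify numerically that the LP optimum satisfies (S), i.e. that `X` equals `a * np.min(c * Y[:, None], axis=0)` and `Y` equals `b * np.max(d * X[None, :], axis=1)` up to solver tolerance.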

I used Matlab to run some numerical experiments.