Calculating $\min(x_1,x_2)$ and $\max(x_1,x_2)$ using a two-layer neural network


Suppose that $\vec{x}\in\mathbb{R}^2$ is a vector and we want to find the minimum and the maximum of its components using a two-layer neural network: $$\vec{y} = f_2(W_2f_1(W_1\vec{x}+b_1)+b_2), \quad W_1,W_2 \in \mathbb{R}^{2\times 2},\ b_1,b_2\in \mathbb{R}^2 \tag{1}\\ y_1 = \min(\vec{x}) = \min(x_1 , x_2) \\ y_2 = \max(\vec{x}) = \max(x_1 , x_2)$$ It is well known that $$\max(x_1 , x_2) = \frac12(x_1+x_2)+\frac12|x_1-x_2| \\ \min(x_1 , x_2) = \frac12(x_1+x_2)-\frac12|x_1-x_2|,$$ but these formulas cannot be realized by $(1)$ due to its architecture: $f_1(\cdot)$ and $f_2(\cdot)$ act element-wise, so the same scalar function must be applied to both components. The same restriction prevents us from using $$\max(x_1,x_2) = \max(x_1-x_2,0) + x_2 = \text{ReLU}(x_1-x_2) + x_2 \\ \min(x_1,x_2) = -\max(x_1-x_2,0) + x_1 = -\text{ReLU}(x_1-x_2) + x_1.$$ I have tried other activation functions, such as the sign and step functions, but they did not work. So the question is: what are appropriate values for $W_1,W_2,b_1,b_2$ and functions $f_1(\cdot),f_2(\cdot)$ such that $y_1=\min(x_1,x_2)$ and $y_2=\max(x_1,x_2)$?
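As a quick sanity check, both pairs of identities above can be verified numerically (a minimal NumPy sketch; the helper name `relu` is mine):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)  # ReLU(z) = max(z, 0)

rng = np.random.default_rng(0)
for x1, x2 in rng.normal(size=(5, 2)):
    # Absolute-value identities
    assert np.isclose(max(x1, x2), 0.5 * (x1 + x2) + 0.5 * abs(x1 - x2))
    assert np.isclose(min(x1, x2), 0.5 * (x1 + x2) - 0.5 * abs(x1 - x2))
    # ReLU identities
    assert np.isclose(max(x1, x2),  relu(x1 - x2) + x2)
    assert np.isclose(min(x1, x2), -relu(x1 - x2) + x1)
print("all identities hold")
```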

Edit: The instructor corrected the question: using different activation functions within a single layer of the network is allowed. With this correction the question can be answered easily with the help of the $\max$/$\min$ formulas above, but I think it is interesting to show that $(1)$ cannot be used to construct $\max$ and $\min$, so I am keeping the question open.
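Under the corrected rules, one concrete choice (my own construction, not given by the instructor) is $W_1 = \begin{pmatrix}1&-1\\1&1\end{pmatrix}$, $b_1=b_2=0$, $f_1$ applying $|\cdot|$ to the first hidden unit and the identity to the second, $W_2 = \begin{pmatrix}-1/2&1/2\\1/2&1/2\end{pmatrix}$, and $f_2$ the identity. The hidden layer then computes $h_1=|x_1-x_2|$, $h_2=x_1+x_2$, and the output layer recovers $\min$ and $\max$ from the absolute-value formulas:

```python
import numpy as np

# Weights for the corrected problem (mixed activations in layer 1)
W1 = np.array([[1.0, -1.0],   # pre-activation: x1 - x2
               [1.0,  1.0]])  # pre-activation: x1 + x2
b1 = np.zeros(2)
W2 = np.array([[-0.5, 0.5],   # y1 = (h2 - h1) / 2 = min(x1, x2)
               [ 0.5, 0.5]])  # y2 = (h1 + h2) / 2 = max(x1, x2)
b2 = np.zeros(2)

def f1(z):
    # Mixed activations: |.| on the first component, identity on the second
    return np.array([abs(z[0]), z[1]])

def network(x):
    return W2 @ f1(W1 @ x + b1) + b2  # f2 is the identity

rng = np.random.default_rng(1)
for x in rng.normal(size=(5, 2)):
    y = network(x)
    assert np.isclose(y[0], min(x)) and np.isclose(y[1], max(x))
print("y = (min, max) on all test points")
```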