Given the over-determined linear system $A\cdot x = b$, where $A$ has full column rank, the least-squares solution (minimising $||A\cdot x - b||_2$) can be obtained by matrix multiplication: $$x_\text{ls} = A^\dagger \cdot b $$ where $A^\dagger = (A^T A)^{-1}A^T$ is the Moore–Penrose pseudoinverse.
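For concreteness, here is a small numerical sketch of this (the matrix and right-hand side are made-up illustrative data, not from the question):

```python
import numpy as np

# A small over-determined system: 10 equations, 3 unknowns.
# The data are illustrative only.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

# Least-squares solution via the Moore-Penrose pseudoinverse.
# (np.linalg.pinv computes it via the SVD, which also handles
# rank-deficient A, unlike the normal-equations formula.)
x_ls = np.linalg.pinv(A) @ b

# Sanity check: agrees with NumPy's dedicated least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_ls, x_lstsq))
```

Note that $x_\text{ls}$ is a *linear* function of $b$: once $A^\dagger$ is computed, any right-hand side can be solved by a single matrix-vector product.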
The solution that minimises the sum of absolute residuals (i.e. the $L^1$-norm $||A\cdot x - b||_1$) can be formulated as a linear programming problem [1]: $$ \begin{array}{ll} \operatorname{minimise} & \mathbf{1}^T t \\ \text{subject to} & -t \preceq A x-b \preceq t \end{array} $$ where the minimisation is over both $x$ and the auxiliary variable $t$.
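This LP can be sketched with `scipy.optimize.linprog` by stacking the variables as $z = (x, t)$ and writing the two-sided constraint as $Ax - t \preceq b$ and $-Ax - t \preceq -b$ (again with illustrative data):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 10, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Variables z = (x, t): minimise 1^T t subject to
#   A x - b <= t   and   -(A x - b) <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
# x is free; t >= 0 is implied by the constraints anyway.
bounds = [(None, None)] * n + [(0, None)] * m

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_l1 = res.x[:n]

# At the optimum, each t_i equals |a_i^T x - b_i|, so the
# objective value is exactly the L1 residual norm.
print(np.isclose(res.fun, np.abs(A @ x_l1 - b).sum(), atol=1e-6))
```

Unlike the $L^2$ case, the map $b \mapsto x_{L^1}$ produced by this program is not a matrix-vector product: each $b$ requires solving a fresh LP.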
Why is it not possible to define a "pseudoinverse" matrix that gives the minimum $L^1$-norm solution, as $A^\dagger$ does for the minimum $L^2$-norm solution?
[1] Boyd, Stephen; Vandenberghe, Lieven. *Convex Optimization*. Cambridge University Press, 2004. ISBN 0-521-83378-7. Zbl 1058.90049.