A common activation function in machine learning is ReLU, defined as: $$ \operatorname{ReLU}\colon \mathbb{R} \rightarrow \mathbb{R}\\ \operatorname{ReLU}(x) := \begin{cases} 0 & \text{if } x < 0\\ x & \text{otherwise} \end{cases} $$
However, in ML it is common to apply such functions componentwise to column matrices of real numbers. For example, one writes:
$\operatorname{ReLU}\left(\begin{bmatrix}1\\ -1\end{bmatrix}\right) = \begin{bmatrix}1\\ 0\end{bmatrix}$
Is there a way to formalize this? For example, by defining a matrix as some sort of container object and allowing functions to operate on the elements of containers? Or is the standard mathematical definition of a function too strict for this, so that it will forever remain an abuse of notation? (That is, do people really mean that $\operatorname{ReLU}$ here denotes a different function, one from column matrices to column matrices?)
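To make the "container object" idea concrete, here is a minimal sketch in Python (all names are illustrative, not standard): a column matrix is modeled as a container of real entries, and a `map` method lifts any scalar function to act componentwise. This is essentially the "functor map" pattern from programming-language theory, which is one way the abuse of notation gets formalized.

```python
def relu(x: float) -> float:
    """Scalar ReLU: 0 for negative inputs, the identity otherwise."""
    return x if x >= 0 else 0.0

class ColumnMatrix:
    """A column matrix viewed as a container of real entries.

    Hypothetical illustration class, not a standard library type.
    """
    def __init__(self, *entries: float):
        self.entries = list(entries)

    def map(self, f):
        """Lift a scalar function f to act componentwise (a 'functor map')."""
        return ColumnMatrix(*(f(x) for x in self.entries))

    def __eq__(self, other):
        return self.entries == other.entries

    def __repr__(self):
        return f"ColumnMatrix({self.entries})"

# The example from above: ReLU applied componentwise to (1, -1) gives (1, 0).
v = ColumnMatrix(1.0, -1.0)
print(v.map(relu))  # ColumnMatrix([1.0, 0.0])
```

Under this reading, the overloaded $\operatorname{ReLU}$ on column matrices is `v.map(relu)`: a new function built canonically from the scalar one, rather than the scalar function itself.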