Note: I'm using the terms "covariant" and "contravariant" a bit loosely in this question.
The standard function notation seems to be naturally codomain-covariant and domain-contravariant, and we have seen examples of this since our very first algebra class: to translate a function's graph upward, we add to the result; to translate it to the right, we subtract from the argument.
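A minimal numeric sketch of this (my own choice of $f(x)=x^2$ and translation amounts, purely for illustration):

```python
# Translating the graph of f(x) = x**2.
def f(x):
    return x ** 2

k, c = 3.0, 2.0  # translate up by k, right by c

up = lambda x: f(x) + k      # move the graph up: add to the result (covariant)
right = lambda x: f(x - c)   # move the graph right: subtract from the argument (contravariant)

# The point (a, f(a)) on the original graph maps to (a, f(a) + k) on the
# upward-shifted graph and to (a + c, f(a)) on the rightward-shifted graph.
a = 1.5
assert up(a) == f(a) + k
assert right(a + c) == f(a)
```

Note the asymmetry: the $y$-direction shift acts directly on the output, while the $x$-direction shift must act on the input with the *opposite* sign.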
This generalizes to arbitrary compositions of functions. To transform the $y$ axis of the graph of some function $y=f(x)$ by a function $g(\cdot)$, we can just compose: $y_{[y]}=g(f(x))=(g\circ f)(x)$. However, to transform the $x$ axis of the graph of $y=f(x)$ by a function $h(\cdot)$, we need to compose the argument with the *inverse* of that function: $y_{[x]}=f(h^{-1}(x))=(f\circ h^{-1})(x)$. Note that, strictly speaking, the transformation is first and foremost done on the graph rather than on the functions, and only then does it translate into functional identities through domain contravariance and codomain covariance. This resembles a change of basis and just screams "Jacobian" to me, although I don't know where exactly it's hiding and how the covariance/contravariance factors in.
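The general pattern can be checked numerically. Here is a sketch, with $f$, $g$, and $h$ chosen by me as concrete invertible examples (any invertible $h$ would do):

```python
def f(x):
    return x ** 2

def g(y):          # codomain (y-axis) transformation: plain composition
    return 3 * y - 4

def h(x):          # domain (x-axis) transformation, invertible
    return 2 * x + 1

def h_inv(x):      # inverse of h, needed to transform the x axis
    return (x - 1) / 2

# y-axis transform: the point (a, f(a)) moves to (a, g(f(a))),
# so the new curve is the graph of g o f.
f_y = lambda x: g(f(x))

# x-axis transform: the point (a, f(a)) moves to (h(a), f(a)),
# so the new curve is the graph of f o h^{-1}.
f_x = lambda x: f(h_inv(x))

a = 0.75
assert f_y(a) == g(f(a))    # covariant in the codomain
assert f_x(h(a)) == f(a)    # contravariant in the domain
```

The second assertion makes the contravariance explicit: the transformed curve passes through the *moved* point $(h(a), f(a))$, which forces the inverse of $h$ into the formula.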
The question is: what exactly is it about function notation, in its essence as a representation of an abstract mathematical object, that makes it naturally act this way? And is it possible to construct a function notation that is covariant with respect to both the domain and the codomain?